
AWS Automation Part 4: Using Terraform for AWS Automation


This is part 4 of a blog post series, in which we explore how to automate Amazon Web Services (AWS) using the Terraform open source software by HashiCorp. Similar to Cloudify, Terraform is a versatile way to codify any type of infrastructure and to spin up a production-like demo or staging environment on any IaaS cloud like AWS, Azure or Google Cloud within minutes.

In this blog post, we will compare Terraform with Vagrant (providing links to other comparisons like Cloudify along the way), before we use Terraform to spin up a single Ubuntu virtual machine instance on AWS. In the Appendix, we will also show how to access the instance using SSH.

The series is divided into four parts:

  • Part 1: AWS EC2 Introduction introduces Amazon Web Services EC2 (AWS EC2) and shows how to sign up for a free trial of Amazon and how to create, start, shut down and terminate a virtual machine on the AWS EC2 console.
  • Part 2: Automate AWS using Vagrant leads you through the process of using Vagrant to perform the same tasks you have performed in part 1, but this time with local Vagrantfiles in order to automate the process. Please be sure to check out part 4, which shows a much simpler way to perform the same tasks using Terraform.
  • Part 3: Deploy Docker Host on AWS using Vagrant shows how Vagrant helps you to go beyond simple creation, startup, shutdown and termination of a virtual machine. In less than 10 minutes, you will be able to install a Docker host on AWS. With a few additional clicks on the AWS EC2 console, you are ready to start your first Docker container in the AWS cloud.
  • Part 4: Automate AWS using Terraform (this post) shows that spinning up a virtual machine instance on AWS using Terraform is even simpler than using the Vagrant AWS plugin we have used in parts 2 and 3. Additionally, Terraform opens up the possibility to use the same tool to provision our resources on other clouds like Azure and Google Cloud.

Document Versions

  • 2016-09-22: initial published version

Contents

In this blog post we will explore how to get started with Amazon Web Services (AWS). After signing in to a free trial of Amazon, we will show how to create, spin up and terminate virtual machines in the cloud using Amazon’s web based AWS EC2 console. After that, a step by step guide will lead us through the process of performing the same tasks in an automated way using Terraform.

While the shown tasks could also be performed with AWS CLI commands, Terraform potentially allows for more sophisticated provisioning tasks like software installation and the upload and execution of arbitrary shell scripts.

Why Terraform?

In part 2 of this series, we had chosen the Vagrant AWS provider plugin to automatically spin up a virtual machine instance on Amazon AWS. Developers are used to Vagrant, since Vagrant offers a great way to locally spin up the virtual machines they need for their development tasks. Even though Vagrant is most often used with the local VirtualBox provider, the AWS plugin allows Vagrant to spin up virtual machine instances on AWS EC2, as we have demonstrated in that blog post.

One of the differences between Vagrant and Terraform is the language used: while Vagrant requires Chef-like Ruby programming (developers like it), Terraform uses a language called HashiCorp Configuration Language (HCL). Here is an example:

# A variable definition:
variable "ami" {
  description = "the AMI to use"
}

# A resource definition:
resource "aws_instance" "web" {
  ami               = "${var.ami}"
  count             = 2
  source_dest_check = false

  # (AWS credential management skipped here)

  connection {
    user = "myuser"
  }
}

The same code would look similar to the following in a Vagrantfile:

AMI = "ami-1234567"
$instance_name_prefix = "myinstance"
Vagrant.configure("2") do |config|
  (1..2).each do |i|
    config.vm.provider :aws do |aws, override|
      config.vm.define vm_name = "%s-%02d" % [$instance_name_prefix, i] do |config|
      aws.ami = "#{AMI}"
      ...(AWS credential management skipped here)
      override.vm.box = "dummy"
      override.ssh.username = "myuser"

    end
  end
end

We can see that a Vagrantfile is a Ruby program, while the Terraform language reads more like a status description file. It is a matter of taste whether you prefer one over the other. I assume that Ruby programming gives you more fine-grained possibilities to adapt the environments to your needs, while Terraform potentially offers a better overview of the desired state.

In my opinion, the biggest difference between Vagrant and Terraform is the scope of those tools: according to HashiCorp, Vagrant is not designed for production-like environments. HashiCorp’s Terraform intro points out the following:

Modern software is increasingly networked and distributed. Although tools like Vagrant exist to build virtualized environments for demos, it is still very challenging to demo software on real infrastructure which more closely matches production environments.

Software writers can provide a Terraform configuration to create, provision and bootstrap a demo on cloud providers like AWS. This allows end users to easily demo the software on their own infrastructure, and even enables tweaking parameters like cluster size to more rigorously test tools at any scale.

List of supported Terraform Providers

We could argue that all of that can also be done with Vagrant and its AWS plugin. However, the big difference is that Terraform comes with a long, long list of supported providers, as seen on the right-hand side of this page. We find all major IaaS providers like AWS, MS Azure, Google Compute Engine, DigitalOcean and SoftLayer, but also an important PaaS provider like Heroku. Moreover, we find support for local virtual infrastructure providers like OpenStack and some “initial support” for VMware tools like vSphere and vCloud. Unfortunately, VirtualBox is missing from the official list, so developers either keep working with Vagrant locally, or they could try using a third party Terraform VirtualBox provider. Docker support is also classified as “initial support”, and Docker Cloud as well as Kubernetes or OpenShift Origin are missing altogether.

Terraform tries to codify any type of resource, so we even find interfaces to DNS providers, databases, mail providers and many more. With that, it can spin up a whole environment including virtual machine instances, DNS services, networking, content delivery network services and more. HashiCorp’s Terraform introductory web page about use cases tells us that Terraform can spin up a sophisticated, distributed demo or staging environment in less than 30 seconds.
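As an illustration only (this is my own hedged sketch, not part of the walkthrough below; the zone ID and domain are placeholders), a DNS record pointing at the first “web” instance from the HCL example above could live in the same template:

# Hedged sketch: an A record for the first of the two "web" instances above.
resource "aws_route53_record" "www" {
  zone_id = "ZONE_ID_HERE"     # placeholder: your Route 53 hosted zone ID
  name    = "www.example.com"  # placeholder domain
  type    = "A"
  ttl     = "300"
  records = ["${aws_instance.web.0.public_ip}"]
}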

Further reading about Terraform vs. XYZ: you may also want to check out Terraform’s comparison page, an informative slide set by Nati Shalom of GigaSpaces, or this CloudFormation vs. Terraform comparison.

Why offer yet another ‘Hello World’ for Amazon Web Services Automation via Terraform?

The Terraform web portal already provides an AWS hello world example. The reason I am offering yet another ‘Hello World’ example is that the other guides assume you have already created an AWS user with the appropriate rights. In this guide, we will describe how this is done. Moreover, we will show in an Appendix which steps are necessary to access the created virtual machine instance via SSH.

Why Amazon Web Services?

According to Gartner’s 2015 report, Amazon Web Services is the leader in the IaaS space, followed by Microsoft Azure. See below Gartner’s Magic Quadrant on IaaS:

Gartner 2015 MQ

Source: Gartner (May 2015)

There are many articles out there that compare AWS with Microsoft Azure. From reading those articles, the following over-simplified summary has burnt itself into my brain:

Amazon Web Services vs. Microsoft Azure is like Open Source Linux world vs. the commercial Microsoft Software world. For a long time, we will need both sides of the world.

Now that we have decided to begin with the open source side of the world, let us get started.

Signing into Amazon Web Services

In order to get started, you need to sign into Amazon Web Services, if you have not already done so. For that, visit https://aws.amazon.com/, scroll down and push the Get Started for Free button. This starts a free tier trial account for up to 12 months, with up to two times 750 hours of computing time (Linux and Windows 2012 Server on a small virtual machine).

Note that you will be offered options that are free along with other services that are not free, so you need to be a little bit careful. Terraform, with its easy automation, will help us minimize the resources needed.

2016-03-27_231950_capture_008

I had signed into AWS long ago, but as far as I remember, you need to choose “I am a new User”, add your email address and desired password and a set of personal data (I am not sure whether I had to add my credit card, since I am an Amazon customer anyway).

2016.03.31-19_50_22-hc_001

If you are interested in creating, launching, stopping and terminating virtual machine instances using the Amazon EC2 console (a web portal), you might want to have a look at part 1 of this series:

2016.04.03-21_24_41-hc_001

In this part of the series, we will concentrate on automating the tasks.

AWS Automation using Terraform

Now we will use Terraform to automate launching a virtual machine instance on AWS from an existing image (AMI). Let us start:

Step 0: Set HTTP proxy, if needed

The tests within this Hello World blog post have been performed without an HTTP proxy. If you are located behind a proxy, this should be supported, as pointed out here. Try the following commands:

On Mac/*nix systems:

export http_proxy='http://myproxy.dns.name:8080'
export https_proxy='http://myproxy.dns.name:8080'

On Windows:

set http_proxy=http://myproxy.dns.name:8080
set https_proxy=http://myproxy.dns.name:8080

Replace myproxy.dns.name and 8080 with the DNS name or IP address and the port of the HTTP proxy in your environment.

Step 1a Native Installation

This is the best option if you have direct Internet access (behind a firewall, but without any HTTP proxy). Install Terraform on your local machine. The installation procedure depends on your operating system and is described here. I have taken the Docker alternative in Step 1b instead, though.

Step 1b Docker Alternative

If you have access to a Docker host, you can also run any terraform command by creating a function like the following:

terraform() {
  docker run -it --rm -v `pwd`:/currentdir --workdir=/currentdir hashicorp/terraform:light "$@";
}

For a permanent definition, write those three lines into the ~/.bashrc file of your Docker host.

After that, terraform commands can be issued on the Docker host as if Terraform was installed there. The first time the command is performed, a ~20 MB Terraform “light” image will be downloaded automatically from Docker Hub:

$ terraform --version
Unable to find image 'hashicorp/terraform:light' locally
light: Pulling from hashicorp/terraform

ece78a7c791f: Downloading [=====> ] 2.162 MB/18.03 MB
...
Terraform v0.7.4

Once the image is downloaded, the next time you issue the command, the output will look the same as if the software was installed locally:

$ terraform --version
Terraform v0.7.4

Step 2: Create a Terraform Plan

We create a file named aws_example.tf as follows:

provider "aws" {
  access_key = "ACCESS_KEY_HERE"
  secret_key = "SECRET_KEY_HERE"
  region     = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0d729a60"
  instance_type = "t2.micro"
}

You can get the access key from the AWS IAM Users page (click the user), if it exists already. However, the secret key is secret and is only provided at the time the access key is created. If the secret key is unavailable, try creating a new access key on the AWS IAM Users page (click the user).
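If you prefer not to store the credentials in the .tf file, the AWS provider can also read them from environment variables; a minimal sketch of that variant (same region as above):

# Variant (hedged sketch): access_key/secret_key omitted on purpose; the AWS
# provider then falls back to the environment variables AWS_ACCESS_KEY_ID and
# AWS_SECRET_ACCESS_KEY.
provider "aws" {
  region = "us-east-1"
}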

The AMI ID of the desired image can be retrieved from the AWS console after logging in. Simulate installing an instance by clicking “Launch Instance” and browse through the main images. The image ID starting with “ami-” is displayed there:

2016-09-20-20_57_46-ec2-management-console

We have copied the ami number of the Ubuntu Server this time. Then you can cancel the instance creation.

Your region is displayed after the question mark as part of the AWS console URL, once you are logged in:

2016-09-20-21_00_43-ec2-management-console

If you are a “free tier” user of AWS, only use “t1.micro” or “t2.micro” as instance_type. None of the other types are free tier eligible, not even the smaller “t2.nano”; see this comment.
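If you want to keep the instance type configurable while defaulting to a free tier eligible value, it can be turned into a variable; a hedged sketch (the variable name is my own choice):

# Hedged sketch: instance_type as a variable with a free-tier-eligible default.
variable "instance_type" {
  description = "EC2 instance type; keep t2.micro to stay within the free tier"
  default     = "t2.micro"
}

resource "aws_instance" "example" {
  ami           = "ami-0d729a60"
  instance_type = "${var.instance_type}"
}

The default can then be overridden on the command line, e.g. with terraform plan -var 'instance_type=t2.small'.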

Step 3: Simulate the Terraform Plan

To see what will happen when you execute a Terraform template, just issue the following command:

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.


The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ aws_instance.example
    ami:                      "ami-26c43149"
    availability_zone:        "<computed>"
    ebs_block_device.#:       "<computed>"
    ephemeral_block_device.#: "<computed>"
    instance_state:           "<computed>"
    instance_type:            "t2.micro"
    key_name:                 "<computed>"
    network_interface_id:     "<computed>"
    placement_group:          "<computed>"
    private_dns:              "<computed>"
    private_ip:               "<computed>"
    public_dns:               "<computed>"
    public_ip:                "<computed>"
    root_block_device.#:      "<computed>"
    security_groups.#:        "<computed>"
    source_dest_check:        "true"
    subnet_id:                "<computed>"
    tenancy:                  "<computed>"
    vpc_security_group_ids.#: "<computed>"


Plan: 1 to add, 0 to change, 0 to destroy.

Step 4: Set permissions of the AWS User

This step is not described in the Quick Start guides I have come across. You can try to skip this step and proceed with the next step. However, if the user owning the AWS credentials you have specified above lacks the required permissions, you may encounter the following error:

Error launching source instance: UnauthorizedOperation: You are not authorized to perform this operation.

In this case, following steps will fix the issue:

Step 4.1: Create a new user on the AWS IAM Users page, if not already done.

Step 4.2: Assign the needed access rights to the user like follows:

Adapt and go to the AWS IAM link https://console.aws.amazon.com/iam/home?region=eu-central-1#policies. The link needs to be adapted to your region; e.g. replace eu-central-1 with the right entry from the region list that applies to your account.

Click the “Get Started” button, if the list of policies is not visible already. After that, you should see the list of policies and a filter field:

2016.05.06-10_42_28-hc_001

In the Filter field, search for the term AmazonEC2FullAccess. 

Click on the AmazonEC2FullAccess Policy Name and then choose the tab Attached Identities.

2016.05.06-10_50_14-hc_001

Click the Attach button and attach the main user (in the screenshot above, my main user “oveits” is already attached; in your case, the list will be empty before you click the Attach button, most likely).

Step 5: Apply the Terraform Plan

Note: this step will launch AWS EC2 virtual machine instances. Depending on your pay plan, this might incur some cost.

To apply the Terraform plan, issue the following command:

$ terraform apply
aws_instance.example: Creating...
  ami:                      "" => "ami-26c43149"
  availability_zone:        "" => "<computed>"
  ebs_block_device.#:       "" => "<computed>"
  ephemeral_block_device.#: "" => "<computed>"
  instance_state:           "" => "<computed>"
  instance_type:            "" => "t2.micro"
  key_name:                 "" => "<computed>"
  network_interface_id:     "" => "<computed>"
  placement_group:          "" => "<computed>"
  private_dns:              "" => "<computed>"
  private_ip:               "" => "<computed>"
  public_dns:               "" => "<computed>"
  public_ip:                "" => "<computed>"
  root_block_device.#:      "" => "<computed>"
  security_groups.#:        "" => "<computed>"
  source_dest_check:        "" => "true"
  subnet_id:                "" => "<computed>"
  tenancy:                  "" => "<computed>"
  vpc_security_group_ids.#: "" => "<computed>"
aws_instance.example: Still creating... (10s elapsed)
aws_instance.example: Still creating... (20s elapsed)
aws_instance.example: Creation complete

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Note also the information about the terraform.tfstate file, which should not be lost. This shows us that Terraform is not stateless: it records the resources it manages in this file, and if the tfstate file and the real world get out of sync, problems can arise.

In the AWS console, we can indeed see that an Ubuntu instance has been launched:

2016-09-20-16_05_10-ec2-management-console

2016-09-20-16_05_55-ec2-management-console

I had not expected it to be that easy, because:

  • Unlike the Vagrant example, I was not forced to specify the SSH key
  • Unlike the Vagrant example, I was not forced to adapt the security rule to allow SSH traffic to the instance.

Unlike Vagrant, Terraform does not need SSH access to the virtual machine instance in order to spin it up.

Step 6: Destroy the Instance

Now let us destroy the instance again:

Step 6.1: Check the Plan for Destruction
$ terraform plan -destroy
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.

aws_instance.example: Refreshing state... (ID: i-8e3f1832)

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

- aws_instance.example
Step 6.2 Apply the Plan with “Destroy” Option

And now let us apply the destruction:

$ terraform destroy
Do you really want to destroy?
  Terraform will delete all your managed infrastructure.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

aws_instance.example: Refreshing state... (ID: i-8e3f1832)
aws_instance.example: Destroying...
aws_instance.example: Still destroying... (10s elapsed)
aws_instance.example: Still destroying... (20s elapsed)
aws_instance.example: Still destroying... (30s elapsed)
aws_instance.example: Still destroying... (40s elapsed)
aws_instance.example: Still destroying... (50s elapsed)
aws_instance.example: Still destroying... (1m0s elapsed)
aws_instance.example: Destruction complete

Checking with AWS console:

2016-09-20-16_21_36-ec2-management-console

And yes, indeed, the instance was terminated. Note that AWS will keep the instance in terminated status for some time before automatically removing it.

Note also that a created instance will be charged as if it had been up for a minimum of 15 minutes. Therefore, it is not a good idea to run such examples in a loop, or with a large number of instances.

 

DONE!


Appendix A: Access the virtual machine via SSH

Step A.1: Check, whether you already have SSH access

Try to connect to the virtual machine instance via SSH (for information on SSH clients, check out Appendix C). If you are prompted to accept the SSH fingerprint, the security rule does not need to be updated and you can go to the next step. If there is a timeout instead, perform the steps in Appendix B: Adapt Security Rule manually.

Step A.2: Provision the SSH Key

Step A.2.1 Create or find your SSH key pair

You can follow this guide and let AWS create it for you on this page, or you can use a local OpenSSH installation to create the key pair. I have gone the AWS way this time.

Step A.2.2 Retrieve public Key Data

The key pair you have created contains public key data that looks similar to the following:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC22rLY2IreKhTqhdiYMIo+8p+LkuRroHZdm4OPagVuxmT1iz9o4NsPP3MIgwXjGFNtC0Sb6puuFilat+sswXEFLca2G2dWVtITpGfb4EJt72i2AmSi8TL/0UudhO9bkfZUPtlxJNrMKsxLQ62ukIC3b927CMgBMBFrLcAIh/WWQsB/KInOAID8GN+MssR7RwpAxEDXb1ZFtaaAzR2p3B3QdTzazUCZgzEMY6c3K4I4eaIzzONRV7rUUH3UC61GwXAORQLXsOBzHW0uOgIhlOTIMG0zkQtwJfLBoQKz/zQFFYX9gEoA/ElVNTrwWwX9gsJzpz6hdL/koD3tionbE6vJ

with or without an email address appended. We need the whole string (including the email address, if present) in the next step:

Step A.2.3 Specify SSH Key as Terraform Resource

The public key data is now written into a .tf file (I have used the name aws_keyfile.tf) as described here.

resource "aws_key_pair" "deployer" {
  key_name = "deployer-key" 
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41 email@example.com"
}

As you will see below, this resource will be added to the list of available SSH keys in the AWS console -> EC2 Dashboard -> Key Pairs.

Step A.2.4 Assign Key to the AWS Instance

We now use the key named “deployer-key” in the instance definition. For that, we edit aws_example.tf and add the key_name:

provider "aws" {
  access_key = "MYKEY"
  secret_key = "MYSECRETKEY"
  region     = "eu-central-1"
}

resource "aws_instance" "example" {
  ami           = "ami-26c43149"
  instance_type = "t2.micro"
  key_name = "deployer_key"
}

As you will see below, the key_name will be applied to the new instance, allowing us to SSH into the virtual machine instance.
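As a side note, instead of repeating the literal key name, the instance can also reference the key pair resource directly; a hedged variant of the same resource:

resource "aws_instance" "example" {
  ami           = "ami-26c43149"
  instance_type = "t2.micro"
  # reference the aws_key_pair resource instead of the hard-coded string
  key_name      = "${aws_key_pair.deployer.key_name}"
}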

Step A.2.5 Review Terraform Plan

After that, the plan looks as follows (in the shown output, the instance is already running, but this is irrelevant, since a new instance will be created anyway):

vagrant@localhost /mnt/nfs/terraform $ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.


The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ aws_instance.example
    ami:                      "ami-26c43149"
    availability_zone:        "<computed>"
    ebs_block_device.#:       "<computed>"
    ephemeral_block_device.#: "<computed>"
    instance_state:           "<computed>"
    instance_type:            "t2.micro"
    key_name:                 "deployer-key"
    network_interface_id:     "<computed>"
    placement_group:          "<computed>"
    private_dns:              "<computed>"
    private_ip:               "<computed>"
    public_dns:               "<computed>"
    public_ip:                "<computed>"
    root_block_device.#:      "<computed>"
    security_groups.#:        "<computed>"
    source_dest_check:        "true"
    subnet_id:                "<computed>"
    tenancy:                  "<computed>"
    vpc_security_group_ids.#: "<computed>"

+ aws_key_pair.deployer
    fingerprint: "<computed>"
    key_name:    "deployer-key"
    public_key:  "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC22rLY2IreKhTqhdiYMIo+8p+LkuRroHZdm4OPagVuxmT1iz9o4NsPP3MIgwXjGFNtC0Sb6puuFilat+sswXEFLca2G2dWVtITpGfb4EJt72i2AmSi8TL/0UudhO9bkfZUPtlxJNrMKsxLQ62ukIC3b927CMgBMBFrLcAIh/WWQsB/KInOAID8GN+MssR7RwpAxEDXb1ZFtaaAzR2p3B3QdTzazUCZgzEMY6c3K4I4eaIzzONRV7rUUH3UC61GwXAORQLXsOBzHW0uOgIhlOTIMG0zkQtwJfLBoQKz/zQFFYX9gEoA/ElVNTrwWwX9gsJzpz6hdL/koD3tionbE6vJ"


Plan: 2 to add, 0 to change, 0 to destroy.

The public key will be provisioned to AWS and an instance will be created with the appropriate SSH key. Let us try:

Step A.2.6 Apply the Terraform Plan

vagrant@localhost /mnt/nfs/terraform $ terraform apply
aws_key_pair.deployer: Creating...
  fingerprint: "" => "<computed>"
  key_name:    "" => "deployer-key"
  public_key:  "" => "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC22rLY2IreKhTqhdiYMIo+8p+LkuRroHZdm4OPagVuxmT1iz9o4NsPP3MIgwXjGFNtC0Sb6puuFilat+sswXEFLca2G2dWVtITpGfb4EJt72i2AmSi8TL/0UudhO9bkfZUPtlxJNrMKsxLQ62ukIC3b927CMgBMBFrLcAIh/WWQsB/KInOAID8GN+MssR7RwpAxEDXb1ZFtaaAzR2p3B3QdTzazUCZgzEMY6c3K4I4eaIzzONRV7rUUH3UC61GwXAORQLXsOBzHW0uOgIhlOTIMG0zkQtwJfLBoQKz/zQFFYX9gEoA/ElVNTrwWwX9gsJzpz6hdL/koD3tionbE6vJ"
aws_instance.example: Creating...
  ami:                      "" => "ami-26c43149"
  availability_zone:        "" => "<computed>"
  ebs_block_device.#:       "" => "<computed>"
  ephemeral_block_device.#: "" => "<computed>"
  instance_state:           "" => "<computed>"
  instance_type:            "" => "t2.micro"
  key_name:                 "" => "deployer-key"
  network_interface_id:     "" => "<computed>"
  placement_group:          "" => "<computed>"
  private_dns:              "" => "<computed>"
  private_ip:               "" => "<computed>"
  public_dns:               "" => "<computed>"
  public_ip:                "" => "<computed>"
  root_block_device.#:      "" => "<computed>"
  security_groups.#:        "" => "<computed>"
  source_dest_check:        "" => "true"
  subnet_id:                "" => "<computed>"
  tenancy:                  "" => "<computed>"
  vpc_security_group_ids.#: "" => "<computed>"
aws_key_pair.deployer: Creation complete
aws_instance.example: Still creating... (10s elapsed)
aws_instance.example: Still creating... (20s elapsed)
aws_instance.example: Creation complete

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Now the new key is visible in the AWS console -> EC2 (Dashboard) -> click “Key Pairs”:

2016-09-21-21_17_11-ec2-management-console

The second key, named “deployer-key”, is the one we have just created.

A new instance has been launched and the correct key is assigned (AWS console -> EC2 (Dashboard) -> click “Running Instances”):

2016-09-22-11_21_19-ec2-management-console

Now I should be able to connect to the system using the key. But which user? I first tried “root”, but during login, I was informed that I should use “ubuntu” instead of “root”.
It has worked with the following data:

2016-09-22-11_27_52-putty-configuration

The public DNS name or public IP address to be used can be retrieved either from the AWS console

2016-09-22-11_24_17-ec2-management-console

or better from a terraform show command:

$ terraform show
aws_instance.example:
  id = i-fc755340
  ami = ami-26c43149
  availability_zone = eu-central-1b
  disable_api_termination = false
  ebs_block_device.# = 0
  ebs_optimized = false
  ephemeral_block_device.# = 0
  iam_instance_profile =
  instance_state = running
  instance_type = t2.micro
  key_name = deployer-key
  monitoring = false
  network_interface_id = eni-b84a0cc4
  private_dns = ip-172-31-17-159.eu-central-1.compute.internal
  private_ip = 172.31.17.159
  public_dns = ec2-52-29-3-233.eu-central-1.compute.amazonaws.com
  public_ip = 52.29.3.233
  root_block_device.# = 1
  root_block_device.0.delete_on_termination = true
  root_block_device.0.iops = 100
  root_block_device.0.volume_size = 8
  root_block_device.0.volume_type = gp2
  security_groups.# = 0
  source_dest_check = true
  subnet_id = subnet-f373b088
  tags.% = 0
  tenancy = default
  vpc_security_group_ids.# = 1
  vpc_security_group_ids.611464124 = sg-0433846d
aws_key_pair.deployer:
  id = deployer-key
  key_name = deployer-key
  public_key = ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC22rLY2IreKhTqhdiYMIo+8p+LkuRroHZdm4OPagVuxmT1iz9o4NsPP3MIgwXjGFNtC0Sb6puuFilat+sswXEFLca2G2dWVtITpGfb4EJt72i2AmSi8TL/0UudhO9bkfZUPtlxJNrMKsxLQ62ukIC3b927CMgBMBFrLcAIh/WWQsB/KInOAID8GN+MssR7RwpAxEDXb1ZFtaaAzR2p3B3QdTzazUCZgzEMY6c3K4I4eaIzzONRV7rUUH3UC61GwXAORQLXsOBzHW0uOgIhlOTIMG0zkQtwJfLBoQKz/zQFFYX9gEoA/ElVNTrwWwX9gsJzpz6hdL/koD3tionbE6vJ
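Instead of scanning the full terraform show output, you can also declare outputs for the values you need; a minimal sketch (the output names are my own choice):

# Hedged sketch: expose the instance's public addresses as Terraform outputs.
output "public_dns" {
  value = "${aws_instance.example.public_dns}"
}

output "public_ip" {
  value = "${aws_instance.example.public_ip}"
}

After the next terraform apply, the values are printed at the end of the run and can be queried individually with terraform output public_dns.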

2016-09-21-21_32_06-putty-configuration

We need to specify the private key, which is named AWS_SSH_Key.ppk in my case (even though we have chosen to call the key pair “deployer-key” in the resource). In case of putty, the key needs to be in .ppk format. See Appendix C for how a .pem file (as you get it from AWS) can be converted to .ppk format.
2016-09-22-11_34_09-ubuntuip-172-31-17-159_

With this information, we can log in via SSH (assuming that you have performed step A.1 and Appendix B, if needed; otherwise, you may get a timeout).

Appendix B: Adapt Security Rule manually

Step B.1: Check, whether you have SSH access

Try to connect to the virtual machine instance. If you are prompted to accept the SSH fingerprint, the security rule does not need to be updated and you can stop here. If there is a timeout, go to the next step.

Step B.2: Updating the security group

In this step, we will adapt the security group manually in order to allow SSH access to the instance. Note that in Appendix B of part 2 of this series, we show how this step can be automated with a shell script. But now, let us perform the step manually.

2016.04.01-13_00_29-hc_001

In the EC2 console, under Network & Security -> Security Groups (in my case in EU Central 1: https://eu-central-1.console.aws.amazon.com/ec2/v2/home?region=eu-central-1#SecurityGroups:sort=groupId), we can find the default security group. We need to edit the inbound rule to allow the current source IP address. For that, select the security group, click on the “Inbound” tab at the bottom, specify “My IP” as source and save the rule:

2016.04.01-13_05_18-hc_001

Now, if you try to connect to the virtual machine instance, you should be asked by your SSH client whether or not you want to permanently add the SSH key fingerprint locally.

DONE

Note: if your IP address changes frequently, you might want to automate the update of the security rule. Check out Appendix B of part 2 of this blog series for this.
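Alternatively, the security rule itself can be codified in Terraform instead of being edited manually. Below is a hedged sketch, not part of the walkthrough above; the resource name is my own choice and the CIDR is a placeholder for your own public IP:

# Hedged sketch: a security group that allows inbound SSH from a single address,
# attached to the example instance.
resource "aws_security_group" "allow_ssh" {
  name        = "allow_ssh"
  description = "Allow inbound SSH"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.10/32"]   # placeholder: replace with your IP/32
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "example" {
  ami                    = "ami-26c43149"
  instance_type          = "t2.micro"
  key_name               = "deployer-key"
  vpc_security_group_ids = ["${aws_security_group.allow_ssh.id}"]
}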

Appendix C: SSH Connection Client Alternatives

C.1. SSH Connection via a *nix Operating System Client (or bash on Windows)

On a *nix machine or in a bash shell on Windows, you can connect via the built-in SSH client. The following command line connection worked for me in a bash shell on my Windows machine. Replace the path to the private PEM file and the public DNS name so that it works for you as well:

$ ssh ubuntu@ec2-52-29-14-175.eu-central-1.compute.amazonaws.com -i /g/veits/PC/PKI/AWS/AWS_SSH_Key.pem
The authenticity of host 'ec2-52-29-14-175.eu-central-1.compute.amazonaws.com (52.29.14.175)' can't be established.
ECDSA key fingerprint is e2:34:6c:92:e6:5d:73:b0:95:cc:1f:b7:43:bb:54:39.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-52-29-14-175.eu-central-1.compute.amazonaws.com,52.29.14.175' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-74-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Fri Apr  1 20:38:25 UTC 2016

  System load:  0.08              Processes:           98
  Usage of /:   10.0% of 7.74GB   Users logged in:     0
  Memory usage: 6%                IP address for eth0: 172.31.21.237
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.
ubuntu@ip-172-31-21-237:~$

DONE

C.2 SSH Connection via putty on Windows

Since I am using a Windows machine and the formatting of an SSH session in a CMD console does not work well (especially if you try to use vim), I prefer to use putty on Windows.

In putty, add the host ubuntu@<public DNS>:

2016-04-01_224807_capture_004

and add the path to the private key file on Connection->SSH->Auth->Private key file for authentication:

2016-04-01_131935_capture_003

Note that the pem file needs to be converted to a ppk format putty understands. For that, import the pem file using Putty Key Generator (puttygen) via Conversions->Import Key->choose pem file -> Save private key with ppk extension.

2016.04.01-13_23_46-hc_001

2016.04.01-13_26_46-hc_001

Now add the path to the ppk file under Connection->SSH->Auth->Private key file for authentication in the putty client, press the “Yes” button when asked to accept the host key, and we are logged in:

2016-04-01_224815_capture_005

DONE

Appendix D: Sharing Files between Windows Host and Docker Host using NFS (temporary Solution)

Locally, I am running Windows 10 and I am using a Docker host created by Vagrant as a virtual machine on VirtualBox. I have not (yet) configured/installed Vagrant synced folders as described on this web page. Instead, I have set up an NFS server on Windows and mount it within the Docker host as follows:

Step D.1: Install winnfsd

Step D.2: On the Windows machine, create shared folder and start NFS daemon

In a DOS window (run “CMD”), run the following commands (adapt the path to one that is appropriate for you):

mkdir D:\NFS
winnfsd.exe D:\NFS

On the Linux host, mount the folder:

$ sudo mkdir /mnt/nfs; sudo mount -t nfs -o 'vers=3,nolock,udp' LAPTOP-P5GHOHB7:/D/NFS /mnt/nfs

where LAPTOP-P5GHOHB7 is my Windows machine’s name and I have found the required options here (without the -o options, I had received a message mount.nfs: mount system call failed.).

Note that this solution is not permanent, since

  • winnfsd.exe is running in foreground within the DOS Window
  • after reboot of the Docker host, the mount command needs to be issued again

TODO: describe the permanent solution, which survives the reboot of the Windows host (need to evaluate different NFS servers, or run winnfsd.exe as a daemon) and the Docker host (via fstab file)

Summary

It is much simpler to spin up and shut down AWS virtual machine instances using Terraform than using Vagrant’s AWS plugin. The reason is that Terraform (unlike Vagrant) talks only to the AWS API and does not require SSH access to the instance. Therefore, we were not forced to adapt any SSH related security settings. It just works with Terraform.

To be fair: if the AWS instances are part of a development or staging environment, you will need SSH access to the virtual machine instances in most cases anyway. In those cases, a few additional steps are necessary, as shown in Appendices A and B. However, in the end, you only need to add a few lines to the Terraform template.

Adapting the default security rule in order to allow SSH access is the same as with Vagrant. Here, we have shown how this is done manually, and in part 2 we offer an automated way of performing this based on the AWS CLI.

The actual SSH access is more convenient with Vagrant and its nice vagrant ssh command.

All in all, automating AWS using Terraform requires equal or less effort than using Vagrant. And we gain a more clearly laid out description of the infrastructure resources and more flexibility to apply the same set of resources to a mixed hybrid environment of OpenStack and IaaS clouds like AWS, Azure and Google Cloud, among others.
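For instance, additional providers can coexist in the same configuration, so a single terraform apply can span several clouds; a hedged sketch with placeholder credentials:

# Hedged sketch: provider blocks for Azure and Google Cloud next to the AWS
# provider used above; all IDs, secrets, project names and file names are placeholders.
provider "azurerm" {
  subscription_id = "SUBSCRIPTION_ID_HERE"
  client_id       = "CLIENT_ID_HERE"
  client_secret   = "CLIENT_SECRET_HERE"
  tenant_id       = "TENANT_ID_HERE"
}

provider "google" {
  credentials = "${file("account.json")}"
  project     = "my-gcp-project"
  region      = "us-central1"
}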

Possible Next Steps

  • Provisioning and assignment of your own security rules based on Terraform, as described here
  • Test remote file upload and remote command execution (are SSH keys needed for this to be successful? see the hedged sketch after this list)
  • Upload/Synchronization of local images with AWS images
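For the remote upload and execution item, here is a hedged sketch of how it could look with Terraform provisioners (the script name and key path are placeholders; it reuses the SSH key pair and security group from the appendices, and yes, provisioners do require working SSH access):

# Hedged sketch: upload a script to the instance and run it after creation.
resource "aws_instance" "example" {
  ami           = "ami-26c43149"
  instance_type = "t2.micro"
  key_name      = "deployer-key"

  connection {
    user        = "ubuntu"
    private_key = "${file("AWS_SSH_Key.pem")}"
  }

  provisioner "file" {
    source      = "bootstrap.sh"
    destination = "/tmp/bootstrap.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/bootstrap.sh",
      "/tmp/bootstrap.sh"
    ]
  }
}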

 

 


<< Part 1 | Part 2 | Part 3 | Part 4 >>


LXD vs. Docker — or: getting started with LXD Containers


Container technology is not new: it had existed long before the Docker hype around container technology started in 2013. Now, with Docker containers having reached mainstream usage, there is a high potential for confusion about the available container types like Docker, LXC, LXD and CoreOS rkt. In this blog post we will explain why LXD is not competing with Docker.

We will show in a few steps how to install and run LXC containers using LXD container management functions. For that, we will make use of an automated installation process based on Vagrant and VirtualBox.

What is LXD, and why not use it as a Docker Replacement?

After working with Docker for quite some time, I stumbled over another container technology: Ubuntu’s LXD (say “lex-dee”). What is the difference from Docker, and do they really compete with each other, as an article in the German “Linux Magazin” (May 2015) states?

As the developers of LXD point out, a main difference between Docker and LXD is that Docker focuses on application delivery from development to production, while LXD’s focus is system containers. This is why LXD is more likely to compete with classical hypervisors like XEN and KVM than with Docker.

Ubuntu’s web page points out that LXD’s main goal is to provide a user experience that’s similar to that of virtual machines but using Linux containers rather than hardware virtualization.

To provide a user experience that is similar to that of virtual machines, Ubuntu integrates LXD with OpenStack through its REST API. Although there are attempts to integrate Docker with OpenStack (project Magnum), Ubuntu comes much closer to feature parity with real hypervisors like XEN and KVM by offering features like snapshots and live migration. Like any container technology, LXD offers a much lower resource footprint than virtual machines: this is why LXD is sometimes called a “lightervisor”.

One of the main remaining concerns of IT operations teams against the usage of container technology is that containers leave a larger attack surface than virtual machines. Canonical, the creator of Ubuntu and LXD, is tackling security concerns by making LXD-based containers secure by default. Still, any low level security feature developed for LXC is potentially available to both Docker and LXD, since they are based on LXC technology.

What does this mean for us?

We have learned that Docker offers a great way to deliver applications, while LXD offers a great way to reduce the footprint of virtual-machine-like containers. What if you want to leverage the best of both worlds? One way is to run Docker containers within LXD containers. This, and its current restrictions, is described in this blog post by Stéphane Graber.

Okay; one step after the other: let us postpone the Docker in LXD discussion and let us get started with LXD now.

LXD Getting Started: a Step by Step Guide

This chapter largely follows this getting started web page. However, instead of trying to be complete, we will go through a simple end to end example. Moreover, we will add some important commands found on this nice LXD cheat sheet. In addition, we will explicitly record the example output of the commands.

Prerequisites:

  • Administration rights on your computer.
  • I have performed my tests with direct access to the Internet: via a Firewall, but without HTTP proxy. However, if you cannot get rid of your HTTP proxy, read this blog post.

Step 1: Install VirtualBox

If not already done, you need to install VirtualBox found here. See appendix A, if you encounter installation problems on Windows with the error message “Setup Wizard ended prematurely”. For my tests, I am using the already installed VirtualBox 5.0.20 r106931 on Windows 10.

Step 2: Install Vagrant

If not already done, you need to install Vagrant found here. For my tests, I am using an already installed Vagrant version 1.8.1 on my Windows 10 machine.

Step 3: Initialize and download an Ubuntu 16.04 Vagrant Box

In a future blog post, we want to test Docker in LXD containers. This is supported in Ubuntu 16.04 and higher. Therefore, we download the latest daily build of the corresponding Vagrant box. As a preparation, we create a Vagrantfile in a separate directory by issuing the following command:

vagrant init ubuntu/xenial64

You can skip the next command and directly run the vagrant up command, if you wish, since the box will be downloaded automatically if no current version of the Vagrant box is found locally. However, I prefer to download the box first and run it later, since it is easier to observe what happens during the boot.

vagrant box add ubuntu/xenial64

Depending on the speed of your Internet connection, you can take a break here.

Step 4: Boot the Vagrant Box as VirtualBox Image and connect to it

Then, we will boot the box with:

vagrant up

Note: if you encounter an error message like “VT-x is not available”, this may be caused by booting Windows 10 with Hyper-V enabled or by nested virtualization. According to this stackoverflow Q&A, running VirtualBox without VT-x is possible if you make sure that the number of CPUs is one. For that, try to set vb.cpus = 1 in the Vagrantfile and remove any statement like vb.customize ["modifyvm", :id, "--cpus", "2"]. If you prefer to use VT-x on your Windows 10 machine, you need to disable Hyper-V. Appendix B: “Vagrant VirtualBox Error message: ‘VT-x is not available’” describes how to add a boot menu entry that allows booting without Hyper-V enabled.

Now let us connect to the machine:

vagrant ssh

Step 5: Install and initialize LXD

Now we need to install LXD on the Vagrant image by issuing the commands

sudo apt-get update
sudo apt-get install -y lxd
newgrp lxd

Now we need to initialize LXD with the lxd init interactive command:

ubuntu@ubuntu-xenial:~$ sudo lxd init
sudo: unable to resolve host ubuntu-xenial
Name of the storage backend to use (dir or zfs) [default=zfs]: dir
Would you like LXD to be available over the network (yes/no) [default=no]? yes
Address to bind LXD to (not including port) [default=0.0.0.0]:
Port to bind LXD to [default=8443]:
Trust password for new clients:
Again:
Do you want to configure the LXD bridge (yes/no) [default=yes]? no
LXD has been successfully configured.

I have decided to use dir as the storage backend (since zfs was not enabled), have configured the LXD server to be available via the default network port 8443, and have chosen to start without the LXD bridge, since this article pointed out that the LXD bridge does not allow SSH connections by default.

The configuration is written to a key-value store, which can be read with the lxc config get command, e.g.

ubuntu@ubuntu-xenial:~$ lxc config get core.https_address
0.0.0.0:8443

The list of available system config keys can be found in this Git-hosted document. However, I have not found the storage backend type “dir” that I have configured. I guess the system assumes that “dir” is used as long as the zfs and lvm variables are not set. Also, it is a little bit confusing that we configure LXD, but the config is read out via lxc commands.

Step 6: Download and start an LXC Image

Step 6.1 (optional): List remote LXC Repository Servers:

The images are stored on image repositories. Apart from the local repository, the default repositories have aliases images, ubuntu and ubuntu-daily:

ubuntu@ubuntu-xenial:~$ sudo lxc remote list
sudo: unable to resolve host ubuntu-xenial
+-----------------+------------------------------------------+---------------+--------+--------+
|      NAME       |                   URL                    |   PROTOCOL    | PUBLIC | STATIC |
+-----------------+------------------------------------------+---------------+--------+--------+
| images          | https://images.linuxcontainers.org       | simplestreams | YES    | NO     |
+-----------------+------------------------------------------+---------------+--------+--------+
| local (default) | unix://                                  | lxd           | NO     | YES    |
+-----------------+------------------------------------------+---------------+--------+--------+
| ubuntu          | https://cloud-images.ubuntu.com/releases | simplestreams | YES    | YES    |
+-----------------+------------------------------------------+---------------+--------+--------+
| ubuntu-daily    | https://cloud-images.ubuntu.com/daily    | simplestreams | YES    | YES    |
+-----------------+------------------------------------------+---------------+--------+--------+
Step 6.2 (optional): List remote LXC Images:

List all available ubuntu images for amd64 systems on the images repository:

ubuntu@ubuntu-xenial:~$ sudo lxc image list images: amd64 ubuntu
sudo: unable to resolve host ubuntu-xenial
+-------------------------+--------------+--------+---------------------------------------+--------+---------+------------------------------+
|          ALIAS          | FINGERPRINT  | PUBLIC |              DESCRIPTION              |  ARCH  |  SIZE   |         UPLOAD DATE          |
+-------------------------+--------------+--------+---------------------------------------+--------+---------+------------------------------+
| ubuntu/precise (3 more) | adb92b46d8fc | yes    | Ubuntu precise amd64 (20160906_03:49) | x86_64 | 77.47MB | Sep 6, 2016 at 12:00am (UTC) |
+-------------------------+--------------+--------+---------------------------------------+--------+---------+------------------------------+
| ubuntu/trusty (3 more)  | 844bbb45f440 | yes    | Ubuntu trusty amd64 (20160906_03:49)  | x86_64 | 77.29MB | Sep 6, 2016 at 12:00am (UTC) |
+-------------------------+--------------+--------+---------------------------------------+--------+---------+------------------------------+
| ubuntu/wily (3 more)    | 478624089403 | yes    | Ubuntu wily amd64 (20160906_03:49)    | x86_64 | 85.37MB | Sep 6, 2016 at 12:00am (UTC) |
+-------------------------+--------------+--------+---------------------------------------+--------+---------+------------------------------+
| ubuntu/xenial (3 more)  | c4804e00842e | yes    | Ubuntu xenial amd64 (20160906_03:49)  | x86_64 | 80.93MB | Sep 6, 2016 at 12:00am (UTC) |
+-------------------------+--------------+--------+---------------------------------------+--------+---------+------------------------------+
| ubuntu/yakkety (3 more) | c8155713ecdf | yes    | Ubuntu yakkety amd64 (20160906_03:49) | x86_64 | 79.16MB | Sep 6, 2016 at 12:00am (UTC) |
+-------------------------+--------------+--------+---------------------------------------+--------+---------+------------------------------+

Instead of the “ubuntu” filter keyword in the image list command above, you can use any filter expression. E.g. sudo lxc image list images: amd64 suse will find OpenSuse images available for x86_64.

Step 6.3 (optional): Copy remote LXC Image to local Repository:

This command is optional, since the download will be done automatically with the lxc launch command below, if the image is not found on the local repository already.

lxc image copy images:ubuntu/trusty local:
Step 6.4 (optional): List local LXC Images:

We can list the locally stored images with the following image list command. If you have not skipped the last step, you will find the following output:

ubuntu@ubuntu-xenial:~$ sudo lxc image list
sudo: unable to resolve host ubuntu-xenial
+-------+--------------+--------+-------------------------------------------+--------+----------+------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |                DESCRIPTION                |  ARCH  |   SIZE   |         UPLOAD DATE          |
+-------+--------------+--------+-------------------------------------------+--------+----------+------------------------------+
|       | 844bbb45f440 | no     | Ubuntu trusty amd64 (20160906_03:49)      | x86_64 | 77.29MB  | Sep 6, 2016 at 5:04pm (UTC)  |
+-------+--------------+--------+-------------------------------------------+--------+----------+------------------------------+
Step 6.5 (mandatory): Launch LXC Container from Image

With the lxc launch command, a container is created from the image. Moreover, if the image is not available in the local repository, it will automatically download the image.

ubuntu@ubuntu-xenial:~$ lxc launch images:ubuntu/trusty myTrustyContainer
Creating myTrustyContainer
Retrieving image: 100%
Starting myTrustyContainer

If the image is already in the local repository, the “Retrieving image” line is missing and the container can start within seconds (~6-7 sec in my case).

Step 7 (optional): List running Containers

We can list the running containers with the lxc list command, similar to a docker ps -a for those who know Docker:

ubuntu@ubuntu-xenial:~$ lxc list
+-------------------+---------+------+------+------------+-----------+
|       NAME        |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+-------------------+---------+------+------+------------+-----------+
| myTrustyContainer | RUNNING |      |      | PERSISTENT | 0         |
+-------------------+---------+------+------+------------+-----------+

Step 8: Run a Command on the LXC Container

Now we are ready to run our first command on the container:

ubuntu@ubuntu-xenial:~$ sudo lxc exec myTrustyContainer ls /
sudo: unable to resolve host ubuntu-xenial
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

Step 9: Log into and exit the LXC Container

We can log into the container just by running the shell with the lxc exec command:

ubuntu@ubuntu-xenial:~$ sudo lxc exec myTrustyContainer bash
sudo: unable to resolve host ubuntu-xenial
root@myTrustyContainer:~# exit
exit
ubuntu@ubuntu-xenial:~$

The container can be exited just by issuing the “exit” command. Unlike with Docker containers, this will not stop the container.

Step 10: Stop the LXC Container

The container can be stopped with the sudo lxc stop command:

ubuntu@ubuntu-xenial:~$ sudo lxc stop myTrustyContainer
sudo: unable to resolve host ubuntu-xenial
ubuntu@ubuntu-xenial:~$ lxc list
+-------------------+---------+------+------+------------+-----------+
|       NAME        |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+-------------------+---------+------+------+------------+-----------+
| myTrustyContainer | STOPPED |      |      | PERSISTENT | 0         |
+-------------------+---------+------+------+------------+-----------+

Summary

We have discussed the differences between Docker and LXD. We have found that Docker focuses on application delivery, while LXD seeks to offer lightweight Linux system containers with a user experience similar to virtual machines.

We have provided the steps needed to get started with LXD by showing how to

  • install the software,
  • download images,
  • start containers from the images and
  • run simple Linux commands in the containers.

Next steps:

Here is a list of possible next steps on the path to Docker in LXC:

  • Networking
  • Docker in LXC container
  • LXD: Integration into OpenStack
  • Put it all together

Appendix A: VirtualBox Installation Problems: “Setup Wizard ended prematurely”

  • Download the VirtualBox installer
  • When I start the installer, everything seems to be on track until I see “rolling back action” and I finally get this:
    “Oracle VM Virtualbox x.x.x Setup Wizard ended prematurely”

Resolution of the “Setup Wizard ended prematurely” Problem

Let us try to resolve the problem: the installer of VirtualBox downloaded from Oracle shows the exact same error: “…ended prematurely”. This is not a Docker bug. Playing with conversion tools from VirtualBox to VMware did not lead to the desired results.

The Solution: Google is your friend: the winner is: https://forums.virtualbox.org/viewtopic.php?f=6&t=61785. After backing up the registry and changing the registry entry

HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Network -> MaxFilters from 8 to 20 (decimal)

and a reboot of the Laptop, the installation of Virtualbox is successful.

Note: while this workaround worked on my Windows 7 notebook, it did not work on my new Windows 10 machine. However, I have managed to install VirtualBox on Windows 10 by de-selecting the USB support module during the VirtualBox installation process. I remember having seen a forum post pointing to that workaround, with the additional information that the USB drivers were installed automatically the first time a USB device was attached to the host (not yet tested on my side).

Appendix B: Vagrant VirtualBox Error message: “VT-x is not available”

Error:

If you get an error message during vagrant up telling you that VT-x is not available, a reason may be that you have enabled Hyper-V on your Windows 10 machine: VirtualBox and Hyper-V cannot share the VT-x CPU:

$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'thesteve0/openshift-origin' is up to date...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
 default: Adapter 1: nat
 default: Adapter 2: hostonly
==> default: Forwarding ports...
 default: 8443 (guest) => 8443 (host) (adapter 1)
 default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["startvm", "8ec20c4c-d017-4dcf-8224-6cf530ee530e", "--type", "headless"]

Stderr: VBoxManage.exe: error: VT-x is not available (VERR_VMX_NO_VMX)
VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component ConsoleWrap, interface IConsole

Resolution:

Step 1: prepare your Windows machine for dual boot with and without Hyper-V

As Administrator, open a CMD and issue the commands

bcdedit /copy "{current}" /d "Hyper-V" 
bcdedit /set "{current}" hypervisorlaunchtype off
bcdedit /set "{current}" description "non Hyper-V"

Step 2: Reboot the machine and choose the “non Hyper-V” option.

Now, the vagrant up command should not show the “VT-x is not available” error message anymore.