
Automating Network Provisioning with Cisco APIC — Exploring the REST API


How can we automate the configuration of network devices? Cisco APIC-EM (Application Policy Infrastructure Controller Enterprise Module) is a controller that can help us with that task. In this blog post, we will explore its modern REST API for accomplishing basic tasks like creating, reading, updating and deleting (CRUD) objects such as

  • APIC users (e.g. administrators)
  • network discovery tasks
  • network devices and their configurations.

We will use the Chrome Postman plugin for exploring the API.

TL;DR

Click here to have a look at the summary section.

What is Cisco APIC?

Here is a nice 4-minute Cisco YouTube commercial about APIC, and here you can find a 40-minute Cisco Live! session that gives a short overview. I will summarize:

  • APIC is a controller with a Web Interface and a REST API, which can be used to Create, Read, Update and Delete (CRUD) the following kinds of objects (and more):
    • Network Devices
    • Locations
    • Interfaces
    • Links
    • Hosts
    • Policies
    • Tasks
  • Anything you can do in the Web interface can also be done via the REST API (similar to Ruby on Rails, RoR)
  • The video shows how a REST test tool like Chrome Postman or simple Python scripts can be used to interface with the REST API. Some examples are:
    • displaying a list of all network devices
    • displaying the network path from an arbitrary address A to an arbitrary address B
    • finding an ACL on a network path that is blocking a certain application
  • The REST API is self-documenting using Swagger
    hc_001
    Note: “self-documenting” means

    • that the documentation can be explored through the same interface (but on a different path) as the API itself
    • that the documentation is generated from source code. Note that the developer needs to add the content of the Swagger documentation to the source code, similar to javadoc. Swagger takes care of converting the annotated source code into interactive web pages.

Why should you install Cisco APIC?

Quick answer: you should not. The installation on VMware ESXi 5.1 or 5.5 would require 6 vCPUs, 64 GB of RAM, a 500 GB disk and 200 MB/s disk I/O speed. A monster of an application. Wow.

To be more specific: you can install Cisco APIC, but you do not need to if you just want to explore the Cisco APIC interfaces: there is a more clever, cloud-based possibility: the DevNet sandboxes offered by Cisco.

If you still want to install it, the SW can be found via this DevNet page (use IE or Firefox, since Chrome is not supported; click on “3”). The SW is also available in Cisco’s SW repository. However, authorization is required there, while on DevNet, access is less restrictive.

A clever Alternative: DevNet Sandboxes

Instead of installing this monster application, we can also connect to one of the DevNet Learning Labs found in the DevNet Sandbox catalog.

2016.03.08-23_56_12-hc_001

Note that you need to register with Cisco, if you have not done so already, either as a Customer or a Partner. In both cases, they want you to enter a contract number. I am a CCIE, so I already had an account.

See below for more information on the pre-installed labs.

 

2016.03.09-00_42_31-hc_001

Most of the labs are APIC labs. When clicking on the “Develop in the Sandbox” button on the upper right of the page, I reach the lab catalog.

2016.03.09-00_49_05-hc_001

Let us keep it simple and try to enter the APIC-EM DB Only Always-On lab: this seems to be a kind of guided lab. Cool. 😉

2016.03.09-00_50_40-hc_001

scrolled down on the left pane:

2016.03.09-15_31_28-hc_001_credentials_removed

we find a link to the APIC-EM portal: https://sandboxapic.cisco.com:9443. There, I could log in with the specified default credentials (removed).

2016.03.09-00_57_14-hc_001

With that, I have reached the APIC-EM portal:

2016.03.09-15_28_14-hc_001

Manipulating Users

Note: In this example, we will show how to create, read and delete users using Postman, a Chrome app for sending RESTful HTTP commands (most importantly POST, GET, PUT and DELETE for the Create, Read, Update and Delete of an object).

Let us go back to the guided tour on the “APIC-EM DB Only Always-On lab” we had reached from the lab catalog: on the left pane, we can find a link to the sandbox:

2016.03.09-15_31_28-hc_001_credentials_removed

Furthermore, we can find a link to a hello world guided tour in order to explore the northbound REST interface.

There, the user is asked to install Postman. This is the tool I often use to test my ProvisioningEngine and its target systems. Within Postman, I have created a new collection called “Cisco APIC – hello world sandbox”:

2016.03.14-01_22_40-hc_001

Before sending my first REST commands to the Cisco APIC, I would like to explore the self-documented interface. From what I have seen in the video, I just need to enter the link https://sandboxapic.cisco.com:9443/swagger in a browser. Yes, here we are:

2016.03.09-15_54_23-hc_001

In the hello world example, we will start with the creation of a ticket. Let us explore the ticket documentation:

2016.03.09-16_00_49-hc_001

Init: Creation of an authentication Token

Now let us perform our first real command: the creation of a ticket. In Postman, I enter:

2016.03.09-16_17_17-hc_001

and the body:

2016.03.09-16_11_19-hc_001_password_removed

After pressing the “Send” button we get:

2016.03.09-16_15_40-hc_001
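By the way: the same request can also be sent with a few lines of Python instead of Postman. The following is a minimal sketch only; the base path /api/v1 and the response structure are assumptions based on the sandbox lab guide and the response shown in the screenshot, and the credentials are placeholders:

import requests

BASE = "https://sandboxapic.cisco.com:9443/api/v1"  # assumed API base path

def get_ticket(username, password):
    # POST /ticket and return the serviceTicket from the response body
    r = requests.post(BASE + "/ticket",
                      json={"username": username, "password": password},
                      verify=False)  # the sandbox uses a self-signed certificate
    r.raise_for_status()
    return r.json()["response"]["serviceTicket"]

admin_ticket = get_ticket("admin", "********")  # placeholder credentials
print(admin_ticket)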

Good. So, what do we do with that?

Read Users: GET /user

The serviceTicket above is then used as an X-Auth-Token for all subsequent requests. Let us use it to show all users:

2016.03.09-16_30_03-hc_001

Do not forget to switch from POST to GET before you press the Send button.

Oops, we get: “Not Found”.

That was because I had a double slash instead of a single slash in the URL. After correcting the URL and pressing the Send button again, we get a meaningful answer:

{
  "response": [
    {
      "username": "greg",
      "authorization": [
        {
          "scope": "ALL",
          "role": "ROLE_ADMIN"
        }
      ]
    },
    ...
    {
      "username": "admin",
      "authorization": [
        {
          "scope": "ALL",
          "role": "ROLE_ADMIN"
        }
      ]
    },
    ...
  ],
  "version": "1.0"
}
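The same read in script form, continuing the ticket sketch above (the X-Auth-Token header name is the one we just used in Postman):

# continuation of the ticket sketch above (requests, BASE, admin_ticket)
r = requests.get(BASE + "/user",
                 headers={"X-Auth-Token": admin_ticket},
                 verify=False)
for user in r.json()["response"]:
    print(user["username"], user["authorization"])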

Note: The API developers have chosen to use singular nouns like GET /user. This is a little awkward, since a GET /user returns a collection of users, not a single user. GET /users sounds more natural, especially if you are used to Ruby on Rails. However, in the end, it is a matter of convention. Unfortunately, there is no agreement on the convention either way, and we find popular API examples for both conventions. Fortunately, they have chosen not to mix plural and singular nouns in the URL. Good.

Create a User: POST /user

With a little bit of trial and error, I have also been successful in creating a new user entry:

2016.03.09-17_04_53-hc_001

And yes: the user can be seen with a GET /user request as above:

2016.03.09-17_11_14-hc_001

The password seems to be saved more securely elsewhere.
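In script form, the creation could look like the following sketch (continuing the snippets above). The body fields are inferred from the GET /user output and from my trial and error, not verified against the Swagger documentation:

# continuation of the sketches above (requests, BASE, admin_ticket)
new_user = {
    "username": "olli",
    "password": "********",  # placeholder; not returned by GET /user
    "authorization": [{"scope": "ALL", "role": "ROLE_ADMIN"}],
}
r = requests.post(BASE + "/user", json=new_user,
                  headers={"X-Auth-Token": admin_ticket},
                  verify=False)
print(r.status_code)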

Now, I can log into the APIC portal as user “olli”:

2016.03.10-16_20_32-hc_001

Read single User: GET /user/{name}

Now let us read a single user. This is not straightforward, since the API developers have chosen not to auto-generate and display user IDs. Maybe the username is the ID? Then it needs to be unique. Let us test this first by trying to create a user with the same name:

2016.03.09-17_16_16-hc_001

Yep: the username is the unique ID. Following REST semantics, we should be able to read a single user with GET /user/olli:

2016.03.09-17_18_30-hc_001

Works fine.

Update User: PUT /user

The tricky part is updating the user. From Swagger we see that the more obvious syntax

PUT /user/{id}

is not supported. We are required to issue the command

PUT /user

without an ID; instead, the ID is specified in the body. This does not follow REST API best practices and will be discussed after the Summary.

2016.03.09-17_55_44-hc_001

Even if I use the right syntax PUT /user, I am not allowed to change the password of user “olli” when using an authentication token (ticket) that has been created by the admin:

2016.03.10-16_35_47-hc_001

Password change via the REST API is supported only if you are logged in as the user in question. Therefore, I need to get a new ticket (token) for user olli instead:

2016.03.10-16_37_10-hc_001

And now we can use the serviceTicket as an authentication token in the update user command:

2016.03.10-16_38_38-hc_001

Note: you need to remove the /olli part from the URL above!

Now the update works:

2016.03.10-16_41_04-hc_001

Yesss! That was not easy. To simplify the post and not confuse you, I have removed my first unsuccessful attempts, where I had tried the URL /user/olli.
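The complete password-change flow as a script sketch, continuing the snippets above (the body fields are, again, assumptions):

# continuation: request a ticket as "olli", then update without an ID in the URL
olli_ticket = get_ticket("olli", "old_password")  # placeholder credentials
r = requests.put(BASE + "/user",  # note: no /olli at the end!
                 json={"username": "olli", "password": "new_password"},
                 headers={"X-Auth-Token": olli_ticket},
                 verify=False)
print(r.status_code)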

Now I can log in to the APIC-EM portal using the updated password:

2016.03.10-16_43_11-hc_001

Delete user: DELETE /user/{name}

With the ticket (token) of the admin (again), it is easy to delete the entry via DELETE /user/olli:

2016.03.09-17_21_12-hc_001

Note: we have got a 200 OK upon the DELETE request, which indicates that the user has been deleted immediately and that the response body is not empty. For asynchronous processing, 202 would have been the right HTTP answer code, and for a successful deletion without a response body, code 204 is used. See Section 9 of RFC 2616.
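A script can use those status codes to distinguish the cases; here is a short sketch, continuing the snippets above:

# continuation: delete the user with the admin ticket and inspect the status code
r = requests.delete(BASE + "/user/olli",
                    headers={"X-Auth-Token": admin_ticket},
                    verify=False)
if r.status_code == 200:    # deleted synchronously, response body present
    print(r.json())
elif r.status_code == 202:  # accepted, deletion is processed asynchronously
    print("deletion pending")
elif r.status_code == 204:  # deleted synchronously, no response body
    print("deleted")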

As expected, after the deletion of user “olli”, a GET /user/olli leads to a “404 Not Found” error:

2016.03.09-17_39_25-hc_001

Discovering the Network

We did not start playing around with Cisco APIC in order to manipulate users, did we? Let us get into the real stuff: discovering and manipulating networks.

2016.03.09-18_11_42-hc_001

Read network discoveries: GET /discovery/{startIndex}/{recordsToReturn}

If I understand it correctly, I can manipulate network discovery tasks via /discovery. Two network discovery tasks have already been conducted:

2016.03.09-18_13_23-hc_001

What is missing in the interface is a GET /discovery that returns a list of all discoveries. The only request that seems to come close is

GET /discovery/{startIndex}/{recordsToReturn}

2016.03.09-18_17_53-hc_001

Yes, GET /discovery/1/2 was what I was looking for. GET /discovery/1/500 also works fine and comes close to the GET /discovery I was looking for. Note that 500 seems to be the largest possible value: GET /discovery/1/501 fails with the following error message:

2016.03.10-14_23_42-hc_001

Unlike user objects, discovery objects have explicit IDs. Those are the IDs you need to use if you want to manipulate a single discovery.
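Since GET /discovery is not defined, a script has to emulate it by paging. Here is a sketch, continuing the snippets above, that collects all discoveries in chunks of 500 (the maximum found above):

# continuation: emulate the missing GET /discovery by paging in chunks of 500
discoveries = []
start = 1
while True:
    r = requests.get(BASE + "/discovery/%d/500" % start,
                     headers={"X-Auth-Token": admin_ticket},
                     verify=False)
    page = r.json()["response"]
    discoveries.extend(page)
    if len(page) < 500:  # last (partial) page reached
        break
    start += 500
print(len(discoveries), "discoveries found")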

Read a single network discovery: GET /discovery/{id}

Let us read a single discovery with ID=1: GET /discovery/1

2016.03.10-14_45_11-hc_001

I guess you need to run a discovery first, which will populate the database with all network devices, links, hosts, policies, etc. After that, the network devices can be manipulated.

TODO: clear the network database, run a discovery and observe what happens.

But for now, let us review the existing data in the network topology database.

Collecting Network Device Info

Read network device info: GET /network-device

Let us look for network-devices: with GET /network-device, we get a list of devices:

2016.03.10-15_50_25-hc_001

Read running-config of network devices: GET /network-device/config

That is interesting as well: we can retrieve the configuration of all known network-devices with a GET /network-device/config:

2016.03.10-15_55_27-hc_001

The id of the network device can be found if we scroll down:

2016.03.10-15_56_30-hc_001

However, the config cannot be changed: there is no POST or PUT command defined for /network-device/config:

2016.03.10-15_58_37-hc_001
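To wrap up, here is a sketch, continuing the snippets above, that collects the devices and their (read-only) running-configs in one go; the response field names id and hostname are assumptions, since they are only visible in the screenshots:

# continuation: list the devices, then fetch all running-configs (read-only)
devices = requests.get(BASE + "/network-device",
                       headers={"X-Auth-Token": admin_ticket},
                       verify=False).json()["response"]
for dev in devices:
    print(dev.get("id"), dev.get("hostname"))  # field names are assumptions

configs = requests.get(BASE + "/network-device/config",
                       headers={"X-Auth-Token": admin_ticket},
                       verify=False).json()["response"]
print(len(configs), "running-configs retrieved")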

Summary

The Cisco Application Policy Infrastructure Controller is a software-based centralized provisioning system for networks, running on Ubuntu Linux, that provides administrators with the possibility to manage network policies (security ACLs, QoS) from a central point using a modern RESTful interface. The software can run either on physical hardware or on a VMware VM (ESXi V5.1 or 5.5). The resource requirements are too challenging for a developer’s notebook: 6 cores and 64 GB RAM (see the release notes). However, Cisco offers cloud-based development sandboxes on DevNet. See here for a catalog of APIC labs.

Using such a sandbox, we have demonstrated the usage of the northbound REST API of Cisco APIC (Application Policy Infrastructure Controller), a controller for the automatic provisioning of Cisco-based networks. For that, we have used a DevNet sandbox (registration required), a Cisco-hosted lab for developers. We have used the simplest species of those labs, which is based on a pre-populated database with no real target network behind it (southbound). We have demonstrated how to

  • browse the API documentation
  • get an authentication token
  • show or manipulate
    • APIC users
    • network discovery tasks
    • network devices
    • and network device running-configs (read only)

The Cisco APIC offers many more features that were not explored here. Among others, policies can be manipulated.

If you need to download the SW, it can be found via this DevNet page (use IE or Firefox, since Chrome is not supported; click on “3”).

Appendix: Does the Cisco APIC REST API follow REST best practice?

During the hands-on lab, we have observed that the Cisco APIC REST API does not follow REST best practices in several places (another article on REST best practices can be found here):

  • the API is using singular nouns like GET /user for getting a collection of users
  • the API is not very consistent in its assignment of unique IDs:
    • for users, the name is used as ID
    • for network devices, UUIDs are used as IDs
    • for discoveries, incremented numbers 1,2, … are used as IDs
  • the syntax of the API for reading a collection depends on the object type:
    • for users: GET /user will display all defined users.
    • for discoveries: GET /discovery is not defined. Instead, only a paginated version of the command is supported: GET /discovery/{offset}/{number} will return up to 500 entries, starting with entry #{offset}.
      • I rate it as inconvenient that GET /discovery is not defined. I assume the developers had their reasons; maybe this is a measure to avoid overly long answers or to limit the load caused by the requests?
      • pagination is a nice feature, especially if used for GUI display. However, I personally prefer a more verbose syntax, e.g. GET /discovery/pagination/offset/{offset}/limit/{limit} or GET /discovery?pagination=yes&offset={offset}&limit={limit}.
      • If an API offers pagination, it should do so for all objects. However, GET /discovery/1/500 is defined, while GET /user/1/500 leads to an error. On the other hand, GET /user is defined, but GET /discovery leads to an error.
  • The syntax for updating an entry is counter-intuitive and does not follow REST API best practices (IMO):
    • a single user can be read with GET /user/{id}, where the user name is used as the ID. Following REST best practice, the same resource should be updatable with a PUT /user/{id}. However, the developers have chosen the syntax PUT /user without an ID, while the user ID is specified in the body. This makes sense only for a bulk update of all users; however, the body contains only a single entry instead of a list of entries.

Those are little inconveniences we can live with, though.

 


Choosing the right IaaS Provider for a custom Appliance or: how hard is it to install from ISO in the cloud?


Which cloud infrastructure provider allows you to install custom appliances via ISO? The short answer is: none of the IaaS market leaders Amazon Web Services (AWS) and Microsoft Azure offers the requested functionality, but they offer the workaround of installing the virtual machine (VM) locally and uploading the VM to the cloud. The cheaper alternative DigitalOcean offers neither of those possibilities.

In the end, I thought I had found the perfect solution to my problem: Ravello Systems, a.k.a. Oracle Ravello, is a meta cloud infrastructure provider (they call it a “nested virtualization provider”) that re-sells infrastructure from other IaaS providers like Amazon AWS and Google Compute Engine. They offer a portal that supports the installation of a VM from an ISO in the cloud. Details below. They write:

2016.03.30-13_57_04-hc_001

However, Ravello ignored my request for a trial for more than two months.

Ravello’s trial seems to be open to companies only? I even told them that I was about to found my own company; this did not help.

2016.03.30-13_08_51-hc_001

If you represent a large company and offer them the prospect of earning a lot of money, they might react differently in your case, though. Good luck.

I am back at installing the image locally and uploading it to Amazon AWS. Maybe this is the cheaper alternative anyway. I am back on the bright side of life…

2016.03.30-13_06_53-hc_001

In the end, after more than two months, I got the activation link. The ISO upload tool has some challenges with HTTP proxies, but it seems to work now.

Document Versions

v1.0 (2016-03-14): initially published version
v1.1 (2016-03-21): added a note on ravello’s nested virtualization solution, which makes the solution suitable for VMware testing on public non-VMware clouds
v1.2 (2016-03-23): added a note of my problems of getting a trial account; I have created a service ticket.
v1.3 (2016-03-30): Ravello has chosen to close my ticket without helping me. I am looking for alternatives.
v1.4 (2016-04-09): After I have complained about the closed ticket, they wrote a note that they are sorry on 2016-03-30. However, I have still not got an account. I have sent a new email asking for the status today.
v1.5 (2016-05-25): I have received an activation link on May 11th. It has taken more than two months to get it. I am not sure if I am still interested…

The Use Case

Integrating high-tech systems with legacy systems is fun. At least, it is fun if you have easy access from your development machine to the legacy systems. In my case, I was lucky enough: the legacy systems I am dealing with are modern communication systems that can run on VMware. Using a two-year-old software version of the system, I have run the legacy system on my development machine. With that, I could run my integration software against the real target system.

But why have I used a two-year-old software version of the legacy system? Here is why: the most recent versions of that system have such a high demand on virtual resources (vCPU, DRAM) that they have outgrown my development machine: it was quite a bit…

2016.03.14-19_48_01-hc_001

…overloaded.

Possible Solutions

How to deal with this? Some thoughts of mine are:

  • I could buy a new notebook with, say, 64 GB RAM.
    2016.03.14-20_25_24-hc_001
    • this is an expensive option. Moreover, I am a road-warrior type of developer and do a lot of coding on the train. Most notebooks with 64 GB RAM are bulky and heavy, and you need to take a power plant with you if you do not want to run out of energy during your trip.
  • I could develop a lightweight simulator that is mocking the behavior of the legacy system.
    • In the long run, I need to do something along those lines anyway: I want to come closer to a Continuous Integration/Deployment process, and for the automated tests in the CI/CD system, it is much simpler to run a simulator as part of the software than to run the tests against bulky legacy systems.
  • I could develop and test (including integration tests) in the IaaS cloud.

2016.03.14-18_57_14-hc_001

The Cloud Way of Providing a Test Environment

Yes, the IaaS cloud option is a particularly interesting one, especially if development is done as a side job, because:

  • I need to pay only for resources I use.
  • For most functional tests, I do not need full performance. I can go with cheaper, shared resources.
  • I can pimp up the legacy system and reserve resources for performance tests, while freeing up the resources again after the test has finished.
  • Last but not least, I am a cloud evangelist and therefore I should eat my own dog food (or drink my own champagne, I hope).

However: which are the potential challenges?

2016.03.14-19_01_14-hc_001

  1. Installation challenges of the legacy system in the cloud.
  2. How much do you pay for the VM if it is shut down? Open topic, but it will not (yet) be investigated in this blog post.
  3. How long does it take from opening the lid of the development notebook until I can access the legacy system? Open topic, but it will not (yet) be investigated in this blog post.

First things first: in this post, I will concentrate on challenge 1.

The Cloud Way of installing (a custom appliance from ISO)

2016.03.14-19_04_03-hc_001

In my case, the legacy system must be installed from an ISO. From my first investigation, it seems that this is a challenge with many IaaS providers. Let us have a closer look:

Comparison of IaaS Providers

2016.03.14-19_05_07-hc_001

  • DigitalOcean: they do not support installation from ISO. See this forum post.
    • there is no workaround like local installation and upload of the image either; see here. Shoot. 😦

2016.03.14-19_06_58-hc_001

  • AWS: same thing: no ISO installation support.
    1. For AWS, the workaround is to install the system locally and to upload and convert the VM. See this Stack Overflow post.
      One moment: didn’t I say the legacy system is too large for my notebook? Not a good option. 😦
    2. Another workaround for AWS is to use a nested virtualization provider like Ravello Systems: they claim here that the installation of an AWS image from ISO is no problem.
      Note: Ravello’s nested virtualization solution places an additional hypervisor on top of AWS’ XEN hypervisor, in order to run VMware VMs on public clouds that do not support VMware VMs natively. This will not increase performance, though, and it is intended for test environments only. However, this is exactly what I am aiming at (for now).

Ravello claims: “With Ravello, uploading an ISO file is as simple as uploading your files to dropbox. Once the file is in your library in Ravello simply add the CD-ROM device to your VM and select your customer ISO file from your library.”

2016.03.14-19_38_13-hc_001

2016.03.14-19_08_58-hc_001

  • Microsoft Azure: not fully clear…
    • I have found here the information that an ISO can be attached to an existing VM. I do not know, though, whether the VM can be installed from the ISO by booting from it.
    • you can create a local image in VHD format and upload it to the cloud. However, the only (convenient) way to create this image is to install the VM on Hyper-V. I do not have access to Hyper-V, and I do not want to spend any time on this for now. 😦

Among those options, it seems like only AWS and Ravello are feasible for me.

Even so, I need to accept the risk caused by the fact that my legacy systems are supported on VMware only. However, this is a risk I have to take if I want to go with a low-cost mainstream IaaS provider. A private cloud on dedicated VMware infrastructure is prohibitive with respect to effort and price.

Decision

I have a more powerful notebook at home, and I could install the image locally. However, I will give the meta IaaS provider Ravello Systems a try and install the legacy system via their overlay cloud. Within Ravello Systems, I will choose AWS as the backend IaaS provider, because AWS is the number one IaaS provider (see this article pointing to the Gartner report) and I therefore want to gain some experience with AWS.

A note about the pricing comparison between AWS and Ravello: I believe that Ravello comes at higher rates (estimated 30-50% more). But please do not take this for granted; calculate for yourself, using the AWS monthly calculator and the Ravello pricing page.

HowTo

More than two months after my application, I finally got an activation link. Looking for how to import the ISO, I found this Ravello link. However, the documentation is not good. They write:

To download the tool, click Download VM Import Tool on the Library page.

However, there is no Download VM Import Tool button on the Library page. Instead, you can choose Library -> Disk Images -> Import Disk Image in order to reach the import tool download page (or click this direct link).

After installing the GUI tool on Windows using the exe file, I am redirected to the browser login page of the tool:

2016.05.24-14_53_30-hc_001

If you are behind a proxy, you will receive the following connectivity problem error:

2016.05.24-14_59_19-hc_001

The link leads here. The process is to create a config.properties file in a folder named .ravello in the user’s home directory (%HOME%\.ravello).

Note: be sure to use %HOME%\.ravello and not %USERPROFILE%\.ravello, if those two paths differ in your case (in my case they do: %HOME% is my local Git directory on F:\veits\git).
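To check which directory the tool will use, you can compare the two variables in a command prompt (example output from my machine; if %HOME% is not set, cmd prints the variable name literally):

C:\> echo %HOME%
F:\veits\git
C:\> echo %USERPROFILE%
C:\Users\myusername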

The file config.properties needs to have the following content:

[upload]
proxy_address = <ip address of proxy server>
proxy_port = <port on which the proxy server accepts connections>

The nasty thing is that you need to kill the RavelloImageImportServer.exe task in the case of Windows, or the ravello-vm-upload process in the case of Linux, so that the tool picks up the new configuration.

The problem is that

  1. they do not tell you how to restart the process. In my case, I have found RavelloImageImportServer.exe in C:\Program Files (x86)\Ravello Systems\Ravello import utility and restarted it.
  2. Even though I have created the properties file, the import tool does not find the proxy configuration in %USERPROFILE%\.ravello. Crap! I have found out that the import tool looks for %HOME%\.ravello instead, which has been set by my local Git installation to F:\veits\git. I was close to giving up…

Finally, I have managed to upload the ISO:

2016.05.24-16_01_49-hc_001

From there, it should be possible to create an empty VM, attach the ISO to it and boot the VM from the ISO…

No luck: after some time, the upload stops for no apparent reason:

2016.05.24-20_48_42-hc_001

The pause button as well as the resume button are greyed out. There is no way to resume the upload. Well thought out, but not so well implemented. Okay, the service is quite new. Let us see how Ravello works if we give them a few additional months…

After connecting to the Internet without an HTTP proxy (my notebook had been in standby for a while), I saw that I could not log into the local GUI upload tool anymore. The process was consuming a constant 25% of my dual-core CPU. Workaround: renaming the config.properties file (or maybe removing/commenting out its content), then killing and restarting the process brought the GUI upload tool back to normal.

Summary

I have briefly investigated which options I have to run a legacy system on an IaaS provider’s cloud.

Before I found out that Ravello’s service times are sub-optimal, I initially thought that the meta IaaS provider Ravello Systems was the winner of this investigation:

2016.03.14-19_38_13-hc_001

However, I see the following problems:

  • it has taken Ravello more than two (!) months to provide me with an activation link.
  • An ISO or VM upload requires the installation of a local tool
  • the GUI tool has problems handling HTTP proxies. I have followed their instructions, but initially I could not get it to work. In the end, I found out that the tool is not looking in %USERPROFILE%\.ravello, but in %HOME%\.ravello, which is a Git home directory and does not match C:\Users\myusername in my case.
  • another problem might be that Ravello runs the VMware VMs on top of an additional hypervisor layer, which in turn translates the VMs to the underlying infrastructure. There is a high risk that this will work only for test labs with low CPU consumption. This is yet to be tested.

In the short time I have invested in this investigation, I have found that

  1. Ravello had seemed to be the best alternative, since the system can be installed in the cloud directly from an ISO.
  2. A reader of my blog suggested checking out Vultr. Since Ravello has its own drawbacks (service: a long time to respond, a longer time to help; the GUI import tool seems to have weaknesses: I could not get it to work from behind an HTTP proxy, even when following the instructions), Vultr might be a really good alternative with low pricing.
  3. Amazon AWS is an alternative, if it is OK for you not to install from ISO, but to install locally and upload the created custom VM.

The following alternatives have major drawbacks:

  • Microsoft Azure requires the local installation of the VM using Hyper-V, and I do not have such a system. I have not found a statement on whether it is possible to boot a Microsoft Azure VM from an ISO (do you know?).
  • DigitalOcean neither supports an installation from ISO, nor does it support the upload of custom VMs.


Next steps:

  • once the ISO is uploaded, create a VM and try to boot the VM from ISO.
  • Try out Vultr.

Update 2016-03-21: I have applied for a trial with Ravello on March 17th, but there has been no reaction so far, apart from the automatic email reply. I opened a ticket yesterday and got an email saying that they will come back to me…

Danger_No Entry

Update 2016-03-23: still waiting…

Update 2016-03-30: instead of helping me, Ravello’s support has sent an email saying that they did not get any response from me (a response about what?) and has closed the ticket, along with a link offering the possibility to give feedback. My feedback was “not satisfied”. Let us see how they react.

Update 2016-05-11: I have received the activation link, more than two months after my application. I have signed in, although I do not know if I am still interested. I have added the HowTo chapter, but I have failed to upload the ISO via an HTTP proxy, even though I followed the instructions closely.

Meanwhile, I have signed up for a native AWS account. The intent of this blog post was to find a provider that makes it easier to install an image from an ISO: I did not want to install locally and then upload and convert the image, because my SSD disk is notoriously full. Ravello was the only alternative I had found in a quick Internet search. However, Ravello failed to provide me with a valid registration within two months.