
Java Build Automation Part 2: Create executable jar using Gradle


Original title: How to build a lean JAR File with Gradle


In this step-by-step guide, we will show that Gradle is a good alternative to Maven for packaging java code into executable jar files. In order to keep the executable jar files “lean”, we will keep the dependent jar files outside of the jar in a separate folder.

Tools Used

  1. Maven 3.3.9
  2. JDK 1.8.0_101
  3. log4j 1.2.17 (downloaded automatically)
  4. Joda-time 2.5 (downloaded automatically)
  5. Git-2.8.4 with GNU bash 4.3.42(5)

Why use Gradle for a Maven Project?

In this blog post, we will show how Gradle can be used to create an executable/runnable jar. The same task has been accomplished with Maven in this popular Mkyong blog post. Why would we want to do the same task using Gradle?

Working with both Maven and Gradle, I have found that:

  • Gradle allows me to move any resource file outside of the jar without the need for any additional Linux script or the like;
  • Gradle allows me to easily create an executable/runnable jar for the JUnit tests, even if those are not split out into a separate project.

Moreover, while Maven is declarative, Gradle is procedural in nature. With Maven, you describe the goal and rely on Maven and its plugins to perform the steps you have in mind, whereas with Gradle, you have explicit control over each step of the build process. Gradle is easy for programmers to understand, and it gives them fine-grained control over the build process.
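
As a small illustration of this procedural style (a sketch only, not needed for the project below; the task name printCompileJars is made up for this example), an ad-hoc build step is just a task appended to build.gradle:

cat << END >> build.gradle

// illustration only: an ad-hoc task that prints the file names
// of all jars in the compile configuration
task printCompileJars {
    doLast {
        configurations.compile.each { jar -> println jar.name }
    }
}
END

It can then be executed with gradle printCompileJars.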

The Goal: a lean, executable JAR File

In the following step-by-step guide, we will create a lean executable jar file, with the dependent libraries and the log configuration kept outside of the jar.

Step 1 (required): Download the Hello World Maven Project from Mkyong

Download the hello world Maven project that you can find on this popular HowTo page from Mkyong:

curl -OJ http://www.mkyong.com/wp-content/uploads/2012/11/maven-create-a-jar.zip
unzip maven-create-a-jar.zip
cd dateUtils

Logs:

$ curl -OJ http://www.mkyong.com/wp-content/uploads/2012/11/maven-create-a-jar.zip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7439  100  7439    0     0  23722      0 --:--:-- --:--:-- --:--:-- 24963

olive@LAPTOP-P5GHOHB7  /d/veits/eclipseWorkspaceRecent/MkYong/ttt
$ unzip maven-create-a-jar.zip
Archive:  maven-create-a-jar.zip
   creating: dateUtils/
  inflating: dateUtils/.classpath
  inflating: dateUtils/.DS_Store
   creating: __MACOSX/
   creating: __MACOSX/dateUtils/
  inflating: __MACOSX/dateUtils/._.DS_Store
  inflating: dateUtils/.project
   creating: dateUtils/.settings/
  inflating: dateUtils/.settings/org.eclipse.jdt.core.prefs
  inflating: dateUtils/log4j.properties
  inflating: dateUtils/pom.xml
   creating: dateUtils/src/
   creating: dateUtils/src/main/
   creating: dateUtils/src/main/java/
   creating: dateUtils/src/main/java/com/
   creating: dateUtils/src/main/java/com/mkyong/
   creating: dateUtils/src/main/java/com/mkyong/core/
   creating: dateUtils/src/main/java/com/mkyong/core/utils/
  inflating: dateUtils/src/main/java/com/mkyong/core/utils/App.java
   creating: dateUtils/src/main/resources/
  inflating: dateUtils/src/main/resources/log4j.properties
   creating: dateUtils/src/test/
   creating: dateUtils/src/test/java/
   creating: dateUtils/src/test/java/com/
   creating: dateUtils/src/test/java/com/mkyong/
   creating: dateUtils/src/test/java/com/mkyong/core/
   creating: dateUtils/src/test/java/com/mkyong/core/utils/
  inflating: dateUtils/src/test/java/com/mkyong/core/utils/AppTest.java
olive@LAPTOP-P5GHOHB7  /d/veits/eclipseWorkspaceRecent/MkYong/ttt
$ cd dateUtils/

olive@LAPTOP-P5GHOHB7  /d/veits/eclipseWorkspaceRecent/MkYong/ttt/dateUtils
$ 

Step 2 (optional): Create a Git Repository

In order to see which files have been changed by which step, we can create a local Git repository as follows:

git init
# echo "Converting Maven to Gradle" > Readme.txt
git add .
git commit -m "first commit"

After each step, you can then repeat the last two commands with a different message, so you can always go back to a previous step if you need to. If you have made changes in a step that you have not committed yet, you can easily go back to the last clean commit state by issuing the command

# go back to status of last commit:
git stash -u

Warning: this will remove from the working directory any new files you have created since the last commit (they are kept in the stash and can be restored with git stash pop).
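
For example, after completing step 3 below, you could record the state like this (the commit message is just an example):

git add .
git commit -m "after step 3: gradle init"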

Step 3 (required): Initialize Gradle

gradle init

This will automatically create a build.gradle file from the Maven POM file with the following content:

apply plugin: 'java'
apply plugin: 'maven'

group = 'com.mkyong.core.utils'
version = '1.0-SNAPSHOT'

description = """dateUtils"""

sourceCompatibility = 1.7
targetCompatibility = 1.7

repositories {

     maven { url "http://repo.maven.apache.org/maven2" }
}
dependencies {
    compile group: 'joda-time', name: 'joda-time', version:'2.5'
    compile group: 'log4j', name: 'log4j', version:'1.2.17'
    testCompile group: 'junit', name: 'junit', version:'4.11'
}
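
Note: this post uses the Gradle version that was current at the time of writing, where the compile and testCompile configurations are valid. On newer Gradle versions (these configurations were deprecated in later 4.x/5.x releases and removed in Gradle 7), a rough equivalent of the dependencies block would be the following sketch; the configurations.compile references in the later snippets would then become configurations.runtimeClasspath:

dependencies {
    implementation 'joda-time:joda-time:2.5'
    implementation 'log4j:log4j:1.2.17'
    testImplementation 'junit:junit:4.11'
}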

Step 4 (required): Gather Data

Since we are starting from a Maven project that is already prepared to create a runnable JAR via Maven, we can extract the needed data from the pom.xml file:

MAINCLASS=`grep '<mainClass' pom.xml | cut -f2 -d">" | cut -f1 -d"<"`

Note: if the pom.xml does not configure a mainClass via the Maven plugin, you need to set MAINCLASS manually, e.g.

MAINCLASS=com.mkyong.core.utils.App

We can also define where the dependency jars will be copied to later:

DEPENDENCY_JARS=dependency-jars

Logs:

$ MAINCLASS=`grep '<mainClass' pom.xml | cut -f2 -d">" | cut -f1 -d"<"`
$ echo $MAINCLASS
com.mkyong.core.utils.App
$ DEPENDENCY_JARS=dependency-jars
$ echo $DEPENDENCY_JARS
dependency-jars

Step 5 (required): Prepare to copy dependent Jars

Here, we add instructions to the build.gradle file specifying which dependency JAR files are to be copied into a directory accessible by the executable jar.

We need to copy the jars we depend on to a folder the runnable jar will access later on. See e.g. this StackOverflow question on the topic.

cat << END >> build.gradle

// copy dependency jars to build/libs/$DEPENDENCY_JARS 
task copyJarsToLib (type: Copy) {
    def toDir = "build/libs/$DEPENDENCY_JARS"

    // create directories, if not already done:
    file(toDir).mkdirs()

    // copy jars to lib folder:
    from configurations.compile
    into toDir
}
END

Step 6 (required): Prepare the Creation of an executable JAR File

In this step, we define in the build.gradle file how to create an executable jar file.

cat << END >> build.gradle
jar {
    // exclude log properties (recommended)
    exclude ("log4j.properties")

    // make jar executable: see http://stackoverflow.com/questions/21721119/creating-runnable-jar-with-gradle
    manifest {
        attributes (
            'Main-Class': '$MAINCLASS',
            // add classpath to Manifest; see http://stackoverflow.com/questions/30087427/add-classpath-in-manifest-file-of-jar-in-gradle
            "Class-Path": '. dependency-jars/' + configurations.compile.collect { it.getName() }.join(' dependency-jars/')
            )
    }
}
END

Step 7 (required): Define build Dependencies

Up to now, a task copyJarsToLib has been defined, but it will not be executed unless we tell Gradle to do so. In this step, we specify that each time a jar is created, the copyJarsToLib task is to be performed beforehand. This can be done by telling Gradle that the jar task depends on the copyJarsToLib task as follows:

cat << END >> build.gradle

// always call copyJarsToLib when building jars:
jar.dependsOn copyJarsToLib
END
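
To verify this wiring without running a full build, Gradle's dry-run mode can help: it prints the task execution plan, in which copyJarsToLib should appear before jar:

# dry run: show which tasks would be executed for the jar task
gradle jar --dry-run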

Step 8 (required): Build Project

By now, the build.gradle file should have the following content:

apply plugin: 'java'
apply plugin: 'maven'

group = 'com.mkyong.core.utils'
version = '1.0-SNAPSHOT'

description = """dateUtils"""

sourceCompatibility = 1.7
targetCompatibility = 1.7

repositories {

     maven { url "http://repo.maven.apache.org/maven2" }
}
dependencies {
    compile group: 'joda-time', name: 'joda-time', version:'2.5'
    compile group: 'log4j', name: 'log4j', version:'1.2.17'
    testCompile group: 'junit', name: 'junit', version:'4.11'
}

// copy dependency jars to build/libs/dependency-jars
task copyJarsToLib (type: Copy) {
    def toDir = "build/libs/dependency-jars"

    // create directories, if not already done:
    file(toDir).mkdirs()

    // copy jars to lib folder:
    from configurations.compile
    into toDir
}

jar {
    // exclude log properties (recommended)
    exclude ("log4j.properties")

    // make jar executable: see http://stackoverflow.com/questions/21721119/creating-runnable-jar-with-gradle
    manifest {
        attributes (
            'Main-Class': 'com.mkyong.core.utils.App',
            // add classpath to Manifest; see http://stackoverflow.com/questions/30087427/add-classpath-in-manifest-file-of-jar-in-gradle
            "Class-Path": '. dependency-jars/' + configurations.compile.collect { it.getName() }.join(' dependency-jars/')
            )
    }
}

// always call copyJarsToLib when building jars:
jar.dependsOn copyJarsToLib

Now is the time to create the runnable jar file:

gradle build

Note: be patient at this step: the first run can appear to hang for several minutes while Gradle downloads dependencies and works in the background.

This will create the runnable jar as build/libs/dateUtils-1.0-SNAPSHOT.jar and copy the dependency jars to build/libs/dependency-jars/.

Logs:

$ gradle build
:compileJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning
:processResources
:classes
:copyJarsToLib
:jar
:assemble
:compileTestJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning
:processTestResources UP-TO-DATE
:testClasses
:test
:check
:build

BUILD SUCCESSFUL

Total time: 3.183 secs

$ ls build/libs/
dateUtils-1.0-SNAPSHOT.jar dependency-jars

$ ls build/libs/dependency-jars/
joda-time-2.5.jar log4j-1.2.17.jar
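
To double-check the manifest that Gradle has written into the jar, you can print it straight from the archive (a quick check with unzip, which we have already used above); Main-Class and Class-Path should match what we configured in step 6:

# print the manifest of the runnable jar to stdout:
unzip -p build/libs/dateUtils-1.0-SNAPSHOT.jar META-INF/MANIFEST.MF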

Step 9: Execute the JAR file

It is best practice to exclude the log4j.properties file from the runnable jar file and place it outside of the jar file, since we want to be able to change logging levels at runtime. This is why we excluded the properties file in step 6. In order to avoid the error “No appenders could be found for logger”, we need to specify the location of the log4j.properties file properly on the command line.

Step 9.1 Execute JAR file on Linux

On a Linux system, we run the command as follows:

java -jar -Dlog4j.configuration=file:full_path_to_log4j.properties build/libs/dateUtils-1.0-SNAPSHOT.jar

Example:

$ java -jar -Dlog4j.configuration=file:/usr/home/me/dateUtils/log4j.properties build/libs/dateUtils-1.0-SNAPSHOT.jar
11:47:33,018 DEBUG App:18 - getLocalCurrentDate() is executed!
2016-11-14

Note: if the log4j.properties file is in the current directory on a Linux machine, we can also create a shell script run.sh with the content

#!/usr/bin/env bash
java -jar -Dlog4j.configuration=file:`pwd`/log4j.properties build/libs/dateUtils-1.0-SNAPSHOT.jar

and run it via bash run.sh

Step 9.2 Execute JAR file on Windows

On Windows in a CMD shell, all paths need to be in Windows style:

java -jar -Dlog4j.configuration=file:D:\veits\eclipseWorkspaceRecent\MkYong\dateUtils\log4j.properties build\libs\dateUtils-1.0-SNAPSHOT.jar
11:45:30,007 DEBUG App:18 - getLocalCurrentDate() is executed!
2016-11-14

If we run the command in a GNU bash shell on Windows, the syntax is somewhat mixed: the path to the jar file is in Linux style, while the path to the log properties file needs to be in Windows style (this is how the Windows java.exe expects the input of this option):

$ java -jar -Dlog4j.configuration=file:'D:\veits\eclipseWorkspaceRecent\MkYong\dateUtils\log4j.properties' build/libs/dateUtils-1.0-SNAPSHOT.jar
11:45:30,007 DEBUG App:18 - getLocalCurrentDate() is executed!
2016-11-14

Single quotes have been used in order to avoid the escaped backslashes (D:\\veits\\eclipseWorkspaceRecent\\…) that would otherwise be needed on a Windows system.

Note: if the log4j.properties file is in the current directory on a Windows machine, we can also create a batch file run.bat with the content

java -jar -Dlog4j.configuration=file:%cd%\log4j.properties build\libs\dateUtils-1.0-SNAPSHOT.jar

To run the bat file from GNU bash on Windows, just type ./run.bat

Yepp, that is it: the hello world executable jar prints the date to the console, just as it did in Mkyong’s blog post, where the executable file was created using Maven.


Download the source code from GIT.

Note: in the source code, you will also find a file named prepare_build.gradle.sh, which can be run in a bash shell and replaces the manual steps 4 to 7.

Next Steps

  • create an even leaner jar with resource files kept outside of the executable jar. This opens up the possibility of changing resource files at runtime.
  • create an executable jar file that will run the JUnit tests.

 


Docker Java Performance Tests


In this blog we will show the surprising result that Docker on Ubuntu seems to outperform Docker on CoreOS by ~30% when tested with a java templating web service.

But first let us discuss the reason why I started the tests in the first place: in my blog post Docker Web Performance Tests, we had found the surprising result that a Rails web application on Docker on a CoreOS VM has about 50% better performance than Rails directly on Windows hardware. Will we find the same result for java?

java on Windows Hardware vs. Ubuntu VirtualBox VM (beware: apples vs. bananas!)

The answer is no: the chosen java application, an Apache Camel templating web service, performs ~4 times better on Windows hardware than on an Ubuntu VirtualBox machine (beware: apples vs. bananas!).

Now comparing apples with apples (all on VirtualBox VMs):

java on Ubuntu vs. Docker/Ubuntu vs. Docker/CoreOS vs. Docker/boot2docker (apples vs. apples)

The java performance of the chosen Vagrant-deployable Ubuntu image is almost the same on Docker as on native Ubuntu (only 5% performance degradation). With that good result, the Ubuntu Docker alternative has outperformed the other Docker alternatives by more than 30% and has impressed with java startup times more than 3 times as fast as on CoreOS.

Note: At the time of writing the blog post, this dockerization vs. virtualization vs. native comparison paper was unknown to me: it follows a more comprehensive scientific approach by testing many different performance aspects separately. I like it a lot: check it out!

This is only a small proof of concept (POC) by a hobby developer, and the POC can only give hints about which road to take. Even though the more scientific paper is now available, I am still fine with having invested the time in testing and publishing, since my test results are focused on the applications I develop (Rails and java Apache Camel) and the infrastructure alternatives I have on my laptop: native Windows vs. Ubuntu VirtualBox VM vs. CoreOS VirtualBox VM vs. boot2docker VirtualBox VM. The scientific paper tested only Ubuntu 13.10 and not the CoreOS I was interested in.

Document Versions

v1: original post
v2: updated CoreOS Vagrant image from v713.3.0 to v766.4.0 with no performance improvement
v3: giving CoreOS a last chance: updated to alpha v845.0.0 with no performance improvement
v4: changed the introduction, re-organized the Tldr; section, and added a link to a more scientific paper

Tldr;

When comparing the java performance on a native Ubuntu VM with several Docker host VM alternatives (Ubuntu, CoreOS, and boot2docker), we can see that the chosen Vagrant-deployable Ubuntu image has almost the same performance on Docker as on native Ubuntu. With that, the Ubuntu Docker alternative has outperformed the other Docker alternatives by more than 30% and has impressed with java startup times more than 3 times as fast as those of the second best candidate: CoreOS.

As expected, java performance has shown to be substantially (~4 times) higher on Windows hardware than on a VirtualBox Ubuntu VM with the same number of cores. Note that this last comparison is like comparing apples with bananas: Windows on hardware vs. Ubuntu Linux on a VirtualBox virtual machine with less DRAM.

Test Scenarios

In the current blog, we perform performance measurements of a java Apache Camel application for the following scenarios (a sketch of a typical container start follows the list):

  1. application run natively on Windows 7 SP1 on the host system (8 GB DRAM, with ~2.2 GB free before start of the application; 2 core CPU i5-2520M@2.5 GHz)
  2. application run within a VirtualBox Ubuntu 14.04 LTS (64 bit) VM with 1.5 GB DRAM and 1 vCPU on the same Windows host system
  3. application run within a docker container on the above Ubuntu VM (vagrant box version 1.8.1)
  4. application run within a docker container on a CoreOS (Vagrant CoreOS stable 717.3.0 and v766.4.0) Virtualbox VM with same resources as the Ubuntu VM (1.5 GB DRAM and 1 vCPU) run on the Windows system
  5. application run within a docker container on a boot2docker Virtualbox VM with same resources as in the old blog, i.e. (2 GB DRAM and 2 vCPU), run on the same Windows host.
    Note that boot2docker is deprecated by now, but since we have performed the Rails Docker Web Performance Tests on boot2docker, we keep it as a reference.
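
As an illustration of scenarios 3 to 5, a container might be started along the following lines (a sketch only; the image name camel-webservice and port 8080 are hypothetical placeholders, not the exact values used in the tests):

# sketch: run the java Apache Camel web service in a container
# and expose its HTTP port to the Docker host:
docker run -d -p 8080:8080 --name camel-app camel-webservice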

Considering the results we have seen with the Rails web application, we expect that 3. and 4. have higher performance than 1.

With 2. and 3., we can easily extract the difference that comes from the additional docker layer. For option 4., we expect the same or a higher performance than in 3., since CoreOS is optimized for running Docker containers.

As in the other performance test blog, we use Apache Bench as the measurement tool.

Test Results

Java Apache Camel Startup Time

  1. application run natively on Windows on the host system
    1. 2 cores: Apache Camel 2.12.2 (CamelContext: camel-1) started in 3.5 +-0.6 seconds (to do: test again with stdout redirected into a file)
  2. application run within a VirtualBox Ubuntu VM with 1.5 GB DRAM and 1 vCPU
    1. 1 vCPU: Apache Camel 2.12.2 (CamelContext: camel-1) started in 5.0 seconds
    2. 2 vCPU: Apache Camel 2.12.2 (CamelContext: camel-1) started in 3.7 seconds
  3. application run within a docker container on the above Ubuntu VM
    1. 1 vCPU: Apache Camel 2.12.2 (CamelContext: camel-1) started in 5.2 seconds
    2. 2 vCPU: Apache Camel 2.12.2 (CamelContext: camel-1) started in 4.0 seconds
  4. application run within a docker container on a CoreOS VM with same resources as the Ubuntu VM
    1. 1 vCPU, v713.3.0: Apache Camel 2.12.2 (CamelContext: camel-1) started in 15.2 seconds
    2. 2 vCPU
      1. v713.3.0: Apache Camel 2.12.2 (CamelContext: camel-1) started in 13.4 seconds
      2. v766.4.0: Apache Camel 2.12.2 (CamelContext: camel-1) started in 14.5 +- 1.5 seconds
      3. v845.0.0: Apache Camel 2.12.2 (CamelContext: camel-1) started in [15.0, 13.2, 19.2, 13.2] seconds (4 tests)
  5. application run within a docker container on a boot2docker VM with same resources as in the old blog, i.e. (2 GB DRAM and 2 vCPU)
    1. 2 vCPU: Apache Camel 2.12.2 (CamelContext: camel-1) started in 24.1 +- 0.5 seconds (4 tests)

Windows: the quickest Apache Camel startup (95 routes), at ~3.5 +- 0.6 sec.

Ubuntu with or without Docker: with 2 vCPUs, the startup is only slightly slower. With only 1 vCPU, the startup takes ~40% more time on Ubuntu (both native and in Docker).

CoreOS: Apache Camel has a 3.8 and 3.4 times longer (!) startup time on CoreOS Docker than on Windows and on Ubuntu, respectively. CoreOS is more lightweight than the Ubuntu image and is supposed to be optimized for Docker. Therefore, this is a surprising result.

boot2docker: has the worst Apache Camel bootup time: it takes almost 7 times longer (!!) than the same process on Windows. boot2docker is deprecated by now; this performance result is another reason not to use it anymore.

Java Apache Camel Throughput

Test script:

# test:
./ab -n 10000 -c 100 "http://<IP>:<Port>/ProvisioningEngine?action=Add%20Customer&CustomerName=ttt&offlineMode=offlineMode"

Shortly before running the actual test, we train (warm up) the application by issuing the same command with -n 100 and -c 100, as shown below.
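
The warm-up call is the same as the measured call, only with fewer total requests:

# warm-up ("training") run shortly before the measured test:
./ab -n 100 -c 100 "http://<IP>:<Port>/ProvisioningEngine?action=Add%20Customer&CustomerName=ttt&offlineMode=offlineMode"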

  1. application run natively on Windows
    1. 2 i5-2520M CPU cores: Requests per second:    46.3 [#/sec] (mean)
  2. application run within a VirtualBox Ubuntu VM with 1.5 GB DRAM and 1 vCPU (on an i5 host)
    1. 1 vCPU: Requests per second:    14.3 [#/sec] (mean)
    2. 2 vCPU: Requests per second:    17.9 [#/sec] (mean) (i.e. ~25% higher than 1 vCPU)
  3. application run within a docker container on the above Ubuntu VM
    1. 1 vCPU: Requests per second:    12.5 [#/sec] (mean)
    2. 2 vCPU: Requests per second:    17.0 [#/sec] (mean) (i.e. ~35% higher than 1 vCPU)
  4. application run within a docker container on a CoreOS VM with same resources as the Ubuntu VM (1.5 GB DRAM and 1 or 2 vCPU)
    1. 1 vCPU, v713.3.0: Requests per second:    8.82 [#/sec] (mean)
    2. 2 vCPU
      1. v713.3.0: Requests per second:    13.0 [#/sec] (mean) (i.e. ~48% higher than 1 vCPU)
      2. v766.4.0: Requests per second:    11.8 [#/sec] (mean)
      3. v845.0.0: Requests per second:    13.3 [#/sec] (mean)
  5. application run within a docker container on a boot2docker VM with same resources as in the old blog, i.e. (2 GB DRAM and 2 vCPU)
    1. 1 vCPU: not tested
    2. 2 vCPU: We get the error “Test aborted after 10 failures”, a memory allocation failure, even though we have 2 GB DRAM instead of 1.5 GB!

Note that each request creates tons of output on the terminal (deep tracing). However, in all cases, I have redirected the stream into a log file (using the “>” operator), as sketched below.
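
Such a start command might look like the following sketch (camel-app.jar is a hypothetical placeholder for the application's jar):

# redirect the deep-tracing output of the application into a log file:
java -jar camel-app.jar > app.log 2>&1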

Note also that the comparison between Windows and the Linux variants is not fair, since Windows was tested on hardware with 2 real CPU cores, while the Linux variants were tested on VirtualBox VMs with one or two vCPUs (Ubuntu, Docker on Ubuntu, and CoreOS) or 2 vCPUs (boot2docker). However, if you have a Windows notebook and want to decide whether to perform your development and tests on native Windows vs. a Linux VM vs. Docker, this is what you get:

  • Windows: the host system with 2 real CPU cores has ~4 times higher throughput performance than any of the Linux VM variants on 1 vCPU (comparing apples with bananas, but still relevant for my decision)
  • Ubuntu vs Docker on Ubuntu: Native Linux has a ~5 to 15% higher throughput performance than Docker on Linux
  • CoreOS: Interestingly, the Ubuntu Docker VM image has a ~40% higher throughput performance than the optimized CoreOS image. Because of this negative result for CoreOS, I have updated the v713.3.0 image to the latest versions available: stable-v766.4.0 and alpha-v845.0.0. This had no substantial impact on the (still bad) performance.
  • boot2docker: The boot2docker image has memory allocation problems. Since boot2docker is deprecated, I did not investigate this further.

Summary

We have compared the performance of a java Apache Camel application on the following platforms: on a Windows 7 laptop, on an Ubuntu VirtualBox image, and in a Docker container, where Docker was tested on Ubuntu, CoreOS, and the official, but deprecated, boot2docker image.

Note that Windows has been tested on hardware, while the Linux variants have been tested on VirtualBox VMs. This is not a fair test between Windows and Linux, but it still helps me decide whether I should keep performing my java development directly on my Windows laptop, or whether I can move all java development to a VirtualBox Linux VM on the Windows laptop. For Rails, I had found the surprising result that the performance was better on the VMs than directly on the hardware. Not for java, though.


We have seen that the performance on Windows 7 without virtualization is ~4 times higher than on any of the VirtualBox VMs, which we attribute to the expected performance degradation effect of software virtualization. This is no comparison between java on Windows and java on Ubuntu; it is just a comparison of the options I have on a Windows laptop: work directly on Windows, or work on the VMs. The CoreOS image has a ~30% lower performance than the Ubuntu image, which is surprising, since it is much more lightweight than the Ubuntu image.

The application’s startup times are quite good for Ubuntu and for Docker on Ubuntu. Surprisingly, both CoreOS and boot2docker fall back substantially with respect to startup times (factors of 3 and 6 compared to Ubuntu).

The deprecated boot2docker image has major memory allocation problems and should not be used.

Recommendation

All in all, development is best performed directly on the Windows laptop, with Ubuntu or Docker on Ubuntu being a good alternative because of their low application startup times. CoreOS does not seem to be a good alternative, since in the development phase I often need to restart the application, which takes ~13 sec on CoreOS instead of ~3.5 to 4 sec on Windows hardware or in Docker on the Ubuntu VM.

Performance tests are best done on Windows hardware (or on hardware-virtualized VMs, which I have not performance-tested yet). However, because of the deployment advantages, I would like to deliver my application as a Docker image, so I need to perform tests on Docker as well. For those Docker tests, William Yeh’s ubuntu-trusty64-docker VirtualBox image has shown the best performance results.

If I need to test the behavior of the application on clustered Docker hosts, CoreOS and Kubernetes are the only cluster alternatives I have experience with, and in the case of Kubernetes, I have not done more than a small installation POC. However, considering the low CoreOS performance, I guess I will need to investigate the alternatives: e.g. Docker Swarm, Kubernetes, or others (ping me if you have suggestions).

My Path towards Continuous Integration of my Java Application

I am planning to do the following:

  • I will continue to develop my java application on native Windows and I will continue to push the code to Github often.
  • I have linked Github with TravisCI, so the code is automatically tested in the cloud. This way, I do not need to wait for test results. The test results are sent via email, and I can react quickly to any newly introduced bugs.
  • I will link TravisCI with Docker Hub, so a Docker image will be created automatically.
  • Locally, I will use Docker on Ubuntu if I need to troubleshoot the Docker build process, or if I need to manually test whether the Docker image really works.
  • If it comes to productive deployment including Docker host clustering, I have had good experiences with CoreOS, even though clustering behind an HTTP proxy is a challenge; see here. However, the performance is sub-optimal, and I need to evaluate which alternatives I have (e.g. test new CoreOS versions? Docker Swarm? Google Kubernetes?).

Watch out for my next blog post(s)…

;-))