One of the newer “hypes” in development and operations is Docker, a tool that can be used to “containerize” your processes or programs so you can manage, deploy and secure them more easily. In this post we’ll go over what Docker can do if you combine it with your ESB. If you’d like to know more about Docker, take a look at their site here. This post won’t be a complete breakdown of how to use Docker with an ESB; rather, we’ll discuss why using it is a good idea for any company that needs an ESB and wants more flexibility in the deployment, upgrading or general maintenance of its platform.

By: Michiel van der Winden

The purpose of Docker

If you’ve followed the link above, you’ll have read that Docker is a platform used to “containerize” services or processes. The idea is that, instead of running a full VM, you have a thin layer, managed by Docker, between your server and your services. This lowers the overhead of each service compared to running each one in its own VM. Services are isolated so they don’t impact each other, and they are easier to manage: when a restart is needed (e.g. after you’ve changed a configuration file, or when an updated version of the service is available), you restart a specific Docker container instead of the entire server.
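As a small illustration (the container name esb-service is hypothetical, not from an actual setup), restarting just one service after a configuration change could look like this:

```bash
# Restart only the ESB container; the host and any other containers keep running.
docker restart esb-service

# Verify that the service came back up and inspect its recent output.
docker ps --filter "name=esb-service"
docker logs --tail 20 esb-service
```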

A real pro of Docker is that it containerizes an app or service in such a way that it should run anywhere Docker itself can run. This takes “works on my machine” out of the equation: if your image runs on your development system, it should run on any other similar system as well. A real upside is that development and deployment become a breeze; if it’s tested locally and on the test server, it is near-certain to work in acceptance and production as well.

Why use Docker with an ESB?

Using Docker, we have the option of creating an image from an existing installation (e.g. an Integration Server with a few packages) and deploying this image to another server. Doing so makes it possible to start a container based on the image we created from the installation, meaning that all packages, configuration files and other metadata will be identical on the new server.
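A minimal sketch of what this could look like on the command line; the image name my-esb and port 5555 (the default Integration Server port) are assumptions for illustration:

```bash
# On the build machine: turn the prepared installation into an image
# (the Dockerfile is covered in the next section) and export it to a file.
docker build -t my-esb:1.0 .
docker save my-esb:1.0 | gzip > my-esb-1.0.tar.gz

# On the target server: load the image and start an identical container.
gunzip -c my-esb-1.0.tar.gz | docker load
docker run -d --name esb -p 5555:5555 my-esb:1.0
```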

With this capability, we can more easily start doing continuous integration/deployment, as we know that whatever we deploy will run identically to the version we tested. It also means we can upgrade on our own workstations first, and roll the upgraded images out to the other environments as soon as everything is tested and found to be working. Being able to do this minimizes the downtime between two versions, makes testing far more effective and lowers the overhead of installations and upgrades for the operational specialists to boot!
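As a sketch of such a promotion flow (the registry URL and tags are hypothetical), the image that passed testing is pushed once and then pulled unchanged in the next environment:

```bash
# Tag and push the exact image that was tested.
docker tag my-esb:1.1 registry.example.com/esb/my-esb:1.1
docker push registry.example.com/esb/my-esb:1.1

# On acceptance/production: pull the same image and swap the running container.
docker pull registry.example.com/esb/my-esb:1.1
docker stop esb && docker rm esb
docker run -d --name esb -p 5555:5555 registry.example.com/esb/my-esb:1.1
```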

Basic usage of Docker with an ESB

Everything starts with an existing (basic) installation of the service you want to containerize. For this example, I’ll use a simple installation of an Integration Server, plus some packages we want to deploy alongside it. Every installation of an Integration Server (starting with version 9.12) comes with a basic Dockerfile (a build recipe describing how the image should be created) which can be used to quickly start containerizing the service.


Figure 1: Overview of creating a base image. You can see that the Dockerfile will use the installation’s (meta)data to create the actual image containing your service.

As Figure 1 shows, creating a base image is easy out of the box: after installing the service, we let Docker create an image using the Dockerfile attached to that installation, and we’re ready to go!
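Assuming you run this from the directory containing the Dockerfile that ships with the installation (the image name is-base is our own choice), the build is a one-liner:

```bash
# Build the base image from the Dockerfile provided with the installation.
docker build -f Dockerfile -t is-base:9.12 .

# Confirm the image exists locally.
docker images is-base
```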

Of course, a clean installation of an Integration Server doesn’t do much for test or production purposes. There are multiple ways to make our image more useful by adding the packages and configuration it needs to act as a real replacement for your currently installed services. One is to add all necessary (or at least your company’s default) packages to the installation before letting Docker create the image. Another, more “Docker-like” way is to create a new image that uses your base image as its starting point:


Figure 2: Using the base image as the starting point of a new image, so we can add the (meta)data needed to use the image in a production environment.

This is a bit more hands-on, but it can be used to create multiple images for multiple platforms or purposes (say, one image for synchronous traffic and another for asynchronous processes). As seen in Figure 2, the new Dockerfile uses the base image as its starting point and copies the necessary packages and configuration files into the new image, so each image gets its own specific setup; a sketch of this follows below. This way we can easily create separate “Test”, “Acceptance” and “Production” images, as each environment will normally use different connection strings to external services, databases and so on.
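A minimal sketch of such a derived Dockerfile, written here from the shell; the package and configuration paths are assumptions and depend on your installation layout:

```bash
# Generate a Dockerfile that builds on the base image created earlier.
cat > Dockerfile.test <<'EOF'
FROM is-base:9.12
# Add the default packages and the test-specific configuration files.
COPY packages/ /opt/softwareag/IntegrationServer/packages/
COPY config/test/ /opt/softwareag/IntegrationServer/config/
EOF

# Build the environment-specific image from the generated Dockerfile.
docker build -f Dockerfile.test -t is-test:1.0 .
```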

Conclusion

We can safely say that Docker will be one of the leading forces in making our DevOps lives easier. It makes for easier development, testing, deployment and management, while costing nothing except the time needed to set it up for the first time. We’re very excited about this technology and will continue researching and using it. Keep an eye out for the next post on Docker, in which we’ll go into the actual mechanics of how it works, including examples and a guide for taking your first steps with Docker and an ESB.

How can we help you?

Would you like to know more about our system integration services? We work with integration every day and would love to help you with your questions regarding IT. Call Peter Perebooms on 06-45 34 40 46 or send an email to info@inQdo.com.

iso 27001 & isae 3402 inQdo BV

simplifying cloud and integration together

inQdo B.V.

Coltbaan 1-19

3439 NG Nieuwegein

info@inqdo.com

+31 85 2011161
