Ever since our last blog post we’ve been working on getting (most of) Software AG’s products to run in Docker containers. The best starting point is the Integration Server, as it is the brains of the operation, so that is where we began. In this blog post we’ll discuss the do’s and don’ts, our progress and the challenges we ran into along the way.

By: Michiel van der Winden

Getting the Integration Server containerized

Software AG has been making big strides in the container world: newer versions ship better tooling to get your Integration Server to follow suit. During every new installation we can choose to install Docker support as well, which adds a script that generates a properly formatted Dockerfile. That Dockerfile can be used to build an image, which in turn can be used to run containers with your Integration Server instance.
The steps are fairly easy:

  1. Locate the script in the installation directory
    • e.g. {installation_dir}/is/IntegrationServer/docker/is_container.sh
  2. Run this shell script with the parameter createDockerFile
  3. In a terminal, browse to the generated Dockerfile and build an image from it using the Docker command line
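For reference, a minimal sketch of these steps on the command line. The installation directory, image name and tag are placeholders, and the exact script arguments can differ per webMethods version:

```bash
# Go to the Docker support script shipped with the installation
cd "$INSTALL_DIR/is/IntegrationServer/docker"

# Let the script generate a Dockerfile for this Integration Server instance
./is_container.sh createDockerFile

# Build an image from the generated Dockerfile
# (assuming the generated file is named Dockerfile; pass -f <file> otherwise)
docker build -t inqdo/is-base:10.3 .

# Run a container from the image, mapping the admin port to localhost:5555
docker run -d --name is-dev -p 5555:5555 inqdo/is-base:10.3
```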

The image we’ve created above can be started with docker run, which makes Docker create a container from it. After doing so, you’ll see the following (I use Kitematic, which is just a GUI on top of the Docker command line; it’s convenient but not necessary):


Figure 1: Integration Server running in a Docker container.
To the left we have Safari showing the running Integration Server.
To the right is Kitematic, showing a running Docker container with the internal port mapping to localhost:5555.

After booting the container, the Integration Server can be used for local development (Designer can connect through the usual ports, or any ports you’ve mapped yourself), as a deployment target for WmDeployer, and even for production purposes. Do note that the container doesn’t do much yet, as it is built from a (probably clean) Integration Server installation: no packages (apart from the Wm* packages) and no configuration files (except the defaults from the original installation) have been added to it.

Adding packages is as simple as creating a new Dockerfile that uses the base image as its starting point and copies in the packages we’d like to add. An overview of the process can be found in our previous blog post (see Figure 2).
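As an illustration only (the package name, instance path and image tags below are made up), the layering described above could look roughly like this: a small Dockerfile that starts from the base image and copies in the extra packages, followed by a normal docker build:

```bash
# Hypothetical example: layer a custom package on top of the base IS image
cat > Dockerfile.packages <<'EOF'
FROM inqdo/is-base:10.3
# Copy the custom package into the instance's packages directory
COPY packages/OrderServices /opt/softwareag/IntegrationServer/instances/default/packages/OrderServices
EOF

docker build -t inqdo/is-orders:1.0.0 -f Dockerfile.packages .
```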

Challenges along the way

While researching this topic we ran into a few challenges. As we’re trying to establish a generic process in which we could potentially take all our customers’ installations and put them into separate containers for better management and separation of services, we found that the biggest hurdle is configuration.
To work properly, the Integration Server needs a multitude of configuration files: for the actual services (e.g. adapters, global variables), for connections to other systems (e.g. messaging, proxy servers, SAML), for server-wide settings (number of CPUs, allocated memory) and for licenses. Some of these can be found in the config folder of the instance, others are deployed in packages, and some are hidden (or even encrypted!) in yet another folder.

To handle this issue, we’ve started looking into automating the creation of adapters based on external parameters (e.g. environment variables in Docker, or AWS Parameter Store for simpler parameters), as well as automatically (re)generating things like global variables and UM connection parameters. This way it becomes easier to create environment-specific containers from the same image, removing the need for separate images per environment.
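A rough sketch of that idea, with variable names we made up for illustration: the same image is started per environment with different environment variables (or a pointer to something like AWS Parameter Store), and a startup script inside the container uses those values to (re)generate global variables and connection settings before the server starts:

```bash
# Same image, different environment: all environment-specific values come in
# from the outside. The variable names below are invented for illustration.
docker run -d --name is-test \
  -p 5555:5555 \
  -e ENVIRONMENT=test \
  -e UM_REALM_URL=nsp://um-test:9000 \
  -e JDBC_PASSWORD_PARAM=/test/is/jdbc-password \
  inqdo/is-orders:1.0.0
```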

Another challenge is deploying the correct packages to the correct environment. This can be handled by scripts that add the new or changed packages to a new image, which in turn is tagged with a specific version number. To make it easy to differentiate between versions and to know which image version contains which version of our code, we’d recommend using a VCS like Git or SVN. Support for the former has improved immensely, meaning we can now use it more easily with Designer, and it has the added benefit of being easy to script (e.g. when building images with something like Jenkins or GitLab’s build tooling).
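A small sketch of such a script (the image and registry names are our own placeholders), where the image tag is derived from the Git commit it was built from, so every image version can be traced back to a revision:

```bash
# Tag the image with the short Git commit hash of the packages it contains
GIT_REV=$(git rev-parse --short HEAD)
docker build -t inqdo/is-orders:"$GIT_REV" -f Dockerfile.packages .
docker push inqdo/is-orders:"$GIT_REV"   # push to your registry of choice
```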

Do’s and don’ts

In our journey to get (most of) Software AG’s products containerized we’ve found that every product we use at inQdo can be put in a container. The real question is: Should you?
Products we tried:

  • Integration Server (IS, in our case includes TN packages)
  • Universal Messaging Server (UM)
  • My webMethods Server (MWS)

During our testing we found that all of the above products have Docker support. When we want to quickly set up a local development environment on a colleague’s laptop, we can push the images to that laptop, spin up a few containers and have a complete environment ready (see the sketch below).
That said, containers shouldn’t be used for everything. First and foremost, containers are meant for portability and for solving the “works on my machine” issue. This means we only want to containerize things that change often (and thus need a lot of testing) or need to scale easily. Products like MWS or UM might tick the scalability box (e.g. when we want to handle more concurrent connections (UM) or have a highly available management environment (MWS)), but portability is not much of a factor for them.
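The local development scenario mentioned above could, as a sketch, look like this; the image names are placeholders for whatever was pushed to the laptop, and the ports shown are the usual IS/UM/MWS defaults:

```bash
# Spin up a small local webMethods environment from pre-built images
docker run -d --name um-local  -p 9000:9000 inqdo/um:10.3           # Universal Messaging realm
docker run -d --name mws-local -p 8585:8585 inqdo/mws:10.3          # My webMethods Server
docker run -d --name is-local  -p 5555:5555 inqdo/is-orders:1.0.0   # Integration Server
```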

This leaves us with the IS. Once the entire CI/CD pipeline is properly set up, we think a containerized IS can be a great addition to, or even a replacement of, current environments: scalability is greatly improved and we can easily roll out new versions of services or (hot)fixes to the Integration Server itself.

The do’s and don’ts come down to this:

  1. Only containerize the IS unless the need arises (e.g. you need MWS or UM to be easily scalable, or you’re using container-only providers like Amazon ECS/EKS)
  2. Change the way of working to use stateless services as much as possible (this is necessary for the scalability)
  3. Containerize the IS to replace all current installations and have an environment that’s easier to support, update and re-create if necessary
  4. Script everything from creating the base image to the actual environment images and the deployment to prevent any issues along the way.


Conclusion

In our testing we found Docker to be immensely useful and valuable when it comes to Software AG products. Being able to spin up a complete environment with multiple Integration Servers and potentially other products within minutes means less downtime, more time to develop new functionality, and an environment that is easier to manage as well.

We’ll continue researching Docker and its application to Software AG’s products, and hopefully we’ll soon be able to use it to make both our business and yours better and more future-proof.

How can we help you?

Would you like to know more about our system integration services? We work with integration every day and would love to help you with your questions regarding IT. Call Peter Perebooms on: 06-45 34 40 46 or send an email to: info@inQdo.com.
