Deploying an application using AWS-only components
Recently, IceMobile asked for our help in designing their new application, which consists of both mobile and server components. The company required the application to be built from AWS-only components, with a focus on AWS' managed services.
By: Tom Nuijtemans
IceMobile’s application needed to run entirely on Amazon Web Services, and no third-party tools could be used for deployment and testing. We built it from AWS-only components and, at the customer's request, relied mainly on managed services.
Our customer case page provides a general description of this project; in this article, we go into more detail about the technical aspects.
Standard services of AWS
During the project, IceMobile’s developers focused on developing the application software, while we worked on the AWS infrastructure. The most important requirements were that every service run in Docker and that software deployments require as little manual intervention as possible. In addition, new versions of the software had to be tested automatically against the already running services before being promoted to the other environments.
In order to achieve our goals, we primarily used the standard managed services from AWS:
- EC2 Container Service (ECS) for the Docker clusters,
- CodeBuild and CodePipeline for the deployments,
- and a few Lambda functions for specific tasks like automated integration testing.
We also used the then-new MongoDB Atlas. It provides all of the features of MongoDB, while removing most of the operational overhead. The entire environment was built using CloudFormation templates, executed by Ansible.
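To give an idea of how Ansible drives the CloudFormation templates, here is a minimal sketch of a playbook task. The stack name, template path, and variable names are illustrative assumptions, not taken from the actual project:

```yaml
# Hypothetical Ansible task: deploy one CloudFormation stack per environment.
# All names and variables below are placeholders for illustration.
- name: Deploy the VPC stack
  cloudformation:
    stack_name: "icemobile-{{ env }}-vpc"
    state: present
    region: "{{ aws_region }}"
    template: "templates/vpc.yml"
    template_parameters:
      VpcCidr: "{{ vpc_cidr }}"
    tags:
      Environment: "{{ env }}"
```

Because the stack name and parameters are variables, the same task can build DEV, TEST, and ACC, or the whole environment in another region.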
CodePipeline played a central role in the deployments to the DEV, TEST, and ACC environments. Each Docker service had its own CodePipeline stack with its associated configuration items.
As soon as a new version was pushed to the Git repository by one of the developers, CodePipeline started the build process. CodeBuild received the Dockerfiles, the buildspec, the required variables, and the location of the EC2 Container Registry (ECR) repository to which the new container image should be pushed.
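A buildspec for such a build step could look roughly like the sketch below. The repository URI variable is a placeholder, and the exact commands are an assumption based on the standard docker-to-ECR workflow, not the project's actual buildspec:

```yaml
# Illustrative buildspec; $REPOSITORY_URI is a placeholder environment variable.
version: 0.2
phases:
  pre_build:
    commands:
      # Log Docker in to ECR before pushing
      - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
  build:
    commands:
      # Tag the image with the commit that triggered the pipeline
      - docker build -t $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION .
  post_build:
    commands:
      - docker push $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION
```

Tagging the image with `CODEBUILD_RESOLVED_SOURCE_VERSION` (the Git commit ID) makes every pipeline run traceable back to the exact source revision.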
After a successful build, a Lambda function checked whether an integration test was already running and, once the coast was clear, deployed the container to the test cluster.
The integration tests checked whether the container worked as expected and cooperated with the other running containers. If all tests passed, the container was deployed to the other clusters, replacing the previously running versions. Once the health checks came back clear, all connections were drained from the previous container and it was removed from the cluster. Any waiting CodePipeline stack or integration test then continued with the next deployment.
The entire process took only about 10 minutes, of which the build accounted for roughly 5.
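The rollout logic described above can be sketched in a few lines. This is a simplified, self-contained model for illustration only, not the actual Lambda code: the cluster structure and function names are hypothetical, and the AWS/ALB mechanics are reduced to plain Python so the control flow is easy to follow.

```python
"""Sketch of the test-gated rollout: promote a new container image to all
clusters only if the integration tests and health checks pass; otherwise
leave the running versions untouched. Names are hypothetical."""
from dataclasses import dataclass, field


@dataclass
class Cluster:
    name: str
    running: dict = field(default_factory=dict)   # service -> image tag
    drained: list = field(default_factory=list)   # old tags drained and removed


def promote(clusters, service, new_tag, tests_passed, healthy):
    """Roll the new image out to every cluster, or abort before touching them."""
    if not tests_passed:
        return "aborted: integration tests failed"
    if not healthy:
        return "aborted: health checks failed"
    for cluster in clusters:
        previous = cluster.running.get(service)
        cluster.running[service] = new_tag        # new task registered behind the ALB
        if previous is not None:
            cluster.drained.append(previous)      # old task drained, then removed
    return "deployed"
```

The important property is that a failed test or health check leaves every cluster exactly as it was; only a fully verified image ever replaces the running version.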
All logs are sent to CloudWatch Logs: those from CodePipeline and CodeBuild, from the integration tests, and even the local Docker logs. Because of the huge amount of data, only the most important events are filtered out and forwarded to a Slack channel. This way, both inQdo and IceMobile can quickly spot irregularities in the builds or the tests.
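The filter-and-forward step can be sketched as a small function pair. The event shape, the pattern list, and the function names are assumptions made for illustration; the real subscription filter and webhook URL are not shown here.

```python
"""Sketch of a CloudWatch-Logs-to-Slack filter: keep only important log
events and build a Slack webhook payload. Patterns are hypothetical."""
import json
import urllib.request

# Hypothetical filter terms; the real filter would match the project's log format.
IMPORTANT_PATTERNS = ("FAILED", "ERROR", "ROLLBACK")


def important_events(log_events):
    """Keep only events whose message matches one of the patterns."""
    return [e for e in log_events
            if any(p in e["message"] for p in IMPORTANT_PATTERNS)]


def slack_payload(log_group, log_events):
    """Build the JSON body for a Slack incoming webhook."""
    lines = ["*{}*".format(log_group)] + [e["message"] for e in log_events]
    return {"text": "\n".join(lines)}


def post_to_slack(webhook_url, payload):
    """Send the payload to Slack (not invoked in this sketch)."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Filtering before posting keeps the Slack channel readable: routine build output stays in CloudWatch Logs, and only failures surface in chat.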
Application Load Balancers (ALBs)
The rest of the environment consists of multiple Application Load Balancers (ALBs), CloudFront distributions, S3 buckets for the configuration files and certificates, and Redis clusters for caching.
Thanks to the Ansible and CloudFormation templates, the complete environment can be built within minutes, and, if necessary, in a different AWS region. The entire VPC, with all subnets, Security Groups, and route tables, is set up from a single file of variables. A change of region or IP ranges, for example, is quickly made and redeployed to the environment.
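A fragment of such a parameterized template could look like the sketch below. The parameter and resource names are illustrative, not the project's actual template; the point is that the CIDR ranges come in as parameters, so a redeploy with different IP ranges only changes the variables file:

```yaml
# Hypothetical fragment of a parameterized VPC template.
Parameters:
  VpcCidr:
    Type: String
    Default: 10.0.0.0/16
  PublicSubnetACidr:
    Type: String
    Default: 10.0.1.0/24

Resources:
  Vpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VpcCidr
  PublicSubnetA:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref Vpc
      CidrBlock: !Ref PublicSubnetACidr
```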
If you would like to know more about our AWS services, please contact us.