By: Mijndert Stuij
AWS Graviton Processor
At AWS re:Invent 2018, AWS announced the A1 class of instance types powered by the AWS Graviton processor. Graviton is a 64-bit Arm CPU built specifically for the AWS data center, delivering scalability at up to 40% lower cost than comparable instance types backed by Intel or even AMD CPUs. Graviton CPUs also typically offer more cores, which is great for multi-core workloads. Graviton is supported in a wide range of AWS services like ECS, EKS and of course EC2.
Recently AWS added support for manifest lists in ECR (Elastic Container Registry). Using manifest lists, you can store image variants for different hardware architectures such as x86 and Arm as a single container image in ECR. Docker can automatically pull the right image variant for each architecture when starting containers. This helps simplify your build and deploy workflow, as you use a single image and tag instead of embedding per-architecture image references throughout your CI/CD scripts.
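To make this concrete, a manifest list is a small JSON document that points to one image manifest per platform. An abridged, illustrative example is shown below; the digests are placeholders, not real values:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:<amd64-digest>",
      "platform": { "architecture": "amd64", "os": "linux" }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:<arm64-digest>",
      "platform": { "architecture": "arm64", "os": "linux" }
    }
  ]
}
```

When a client pulls the tag, the registry serves this list and the Docker engine selects the entry whose platform matches the host.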
Docker has an experimental feature called Buildx that allows you to build multi-arch Docker images in one go. You can include this tool in your Continuous Integration (CI) pipelines to automatically build Docker images for each of your ECS or EKS clusters without worrying about the underlying CPU architecture.
Creating a builder
First you have to install Docker for your operating system. I’m using a Mac for this tutorial, but any operating system will work. Also make sure to enable Docker’s experimental features to get access to Buildx.
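On Docker Desktop you can toggle experimental features in the Preferences screen; for the command-line client, it amounts to one setting in ~/.docker/config.json (alternatively, exporting DOCKER_CLI_EXPERIMENTAL=enabled in your shell has the same effect):

```json
{
  "experimental": "enabled"
}
```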
To give you an idea of how building multi-arch Docker images works, we’ll build a Docker image based on Alpine Linux that just installs Ansible.
I’ve prepared a Dockerfile that looks like this:
FROM alpine:latest

RUN apk update && \
    apk add --no-cache ansible && \
    rm -rf /tmp/* && \
    rm -rf /var/cache/apk/*
To start using Buildx we first need to create a builder: an isolated environment in which your images are built. The following command creates a new builder called 'ansible' and immediately switches to it with '--use'. We'll also bootstrap the new builder, which installs BuildKit, required for building Docker images locally.
docker buildx create --name ansible --use
docker buildx inspect --bootstrap
To emulate binaries for other architectures we need to register 'binfmt' handlers; this can also be done in Docker.
docker run --privileged linuxkit/binfmt:v0.8
Creating a new ECR repository
For storing our multi-arch Docker image we'll use Amazon Elastic Container Registry (ECR). Head over to the AWS Web Console and click Create Repository.
If for some reason you don't want to use Amazon ECR, check the documentation of your registry of choice to see whether it supports manifest lists as well.
Now that we've created a new ECR repository, make note of the push commands. We'll need those when we actually build and push the Docker image to ECR.
Building and pushing
Before we can log in to the ECR repository we just created, we first need active IAM credentials for the AWS Command Line Interface (CLI). AWS has a handy how-to article on configuring the AWS CLI to use IAM credentials.
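If you'd rather set up credentials by hand than run `aws configure`, the credentials file looks like this (the values below are placeholders, not real credentials):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id     = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
```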
Now that we have our IAM credentials active, we can log in to ECR.
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin .dkr.ecr.eu-west-1.amazonaws.com
After logging in, you can build and push the Docker image by using one simple command.
docker buildx build --progress plain --platform linux/amd64,linux/arm64,linux/arm/v7 -t .dkr.ecr.eu-west-1.amazonaws.com/ansible:latest --push .
Now we can inspect the Docker image. This shows all platforms included in the manifest list; Docker will automatically pick the correct variant based on the CPU architecture it's running on.
docker buildx imagetools inspect .dkr.ecr.eu-west-1.amazonaws.com/ansible:latest
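The output should look roughly like the following (digests abridged, layout may vary by Docker version), with one entry under Manifests per platform we built for:

```
Name:      <repository>/ansible:latest
MediaType: application/vnd.docker.distribution.manifest.list.v2+json
Digest:    sha256:…
Manifests:
  Name:      <repository>/ansible:latest@sha256:…
  Platform:  linux/amd64
  Name:      <repository>/ansible:latest@sha256:…
  Platform:  linux/arm64
  Name:      <repository>/ansible:latest@sha256:…
  Platform:  linux/arm/v7
```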
We've just built Docker images for multiple CPU architectures. This allows us to run both Arm and x86 workloads on AWS ECS, EKS or EC2. The Docker engine will automatically detect the multi-arch image and pull the correct variant.
Now we can move on and integrate these steps with our CI/CD pipelines using, for example, AWS CodePipeline, Jenkins, GitLab CI or GitHub Actions.
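As a sketch of how this might look, here is a minimal GitHub Actions workflow performing the same steps; the repository URI and region are placeholders, and the docker/setup-qemu-action and docker/setup-buildx-action steps replace our manual binfmt and builder setup:

```yaml
name: build-multi-arch
on: push

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: docker/setup-qemu-action@v1     # registers binfmt handlers for emulation
      - uses: docker/setup-buildx-action@v1   # creates and bootstraps a buildx builder
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-1
      - uses: aws-actions/amazon-ecr-login@v1
      - run: >
          docker buildx build
          --platform linux/amd64,linux/arm64,linux/arm/v7
          -t <repository-uri>/ansible:latest
          --push .
```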