Build and deploy immutable containers
Immutable containers package your application into an unchangeable unit that you build once and deploy one or more times. You create the container image with Docker or Packer, deploy it across all environments, and never modify it directly. When you need to make changes, you update your Dockerfile or Packer configuration, rebuild the image, and redeploy it. You can then scale your containers using orchestration tools like Kubernetes or Nomad.
Docker Engine is the container runtime that underlies Docker. It can run on a laptop, a physical server, a VM, or under a workload orchestrator that supports it, like Kubernetes or Nomad. Compared to a single instance running Docker Engine, workload orchestrators offer greater scalability and reliability, and because container orchestration is widely adopted, they also offer portability between cloud providers. Terraform can help you deploy the underlying orchestrator infrastructure across your local or cloud-based networks.
Containers have two parts that you can define with infrastructure as code (IaC): the container image and the running container. The container image is the blueprint for creating a container and contains all of the software that the application requires to run properly, including the application code and its dependencies. The contents are defined as code in a configuration file; for Docker containers, this is a Dockerfile.
The second part that you can define with IaC is the running container itself. You create the container with an orchestrator like Kubernetes or Nomad by submitting a deployment specification file. The deployment specification file contains the path to the container image and additional configuration like application ports, the number of containers to create, and storage requirements.
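As a concrete illustration, the following is a minimal sketch of a Nomad job specification for a containerized Python application. The job name, port number, instance count, and registry path are assumptions for illustration only; your deployment file will reflect your own application and orchestrator.
python-app.nomad.hcl
job "python-app" {
  datacenters = ["dc1"]

  group "app" {
    # Number of container instances to run
    count = 2

    network {
      # Application port the container listens on (assumed value)
      port "http" {
        to = 8000
      }
    }

    task "python-app" {
      driver = "docker"

      config {
        # Path to the immutable image in your registry
        image = "your-registry.com/team1/python-app:v1"
        ports = ["http"]
      }
    }
  }
}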
Create an immutable container image
You can create a Docker container image with a Dockerfile or a Packer template file. Try both methods to learn which one you prefer and which one works better for your workflow.
Create an image with Docker
The process of creating an image with Docker includes creating the Dockerfile, building the image, and pushing the image to a registry. The example below shows a Dockerfile that packages a Python application.
python.Dockerfile
FROM python:alpine
WORKDIR /usr/src/app
COPY app.py .
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
ENTRYPOINT ["python", "./app.py"]
You can then build the image with the Docker CLI, passing in the Dockerfile. Docker builds the image and saves it locally.
$ docker build --file python.Dockerfile --tag python-app .
You then need to push the image to a registry like Docker Hub or your private registry to make it accessible to your deployment processes. This involves tagging the image to match the registry address and pushing it with an appropriate tag name.
Tag the image with your private registry hostname.
$ docker tag python-app your-registry.com/team1/python-app:v1
Push the image to your private registry.
$ docker push your-registry.com/team1/python-app:v1
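Optionally, you can verify that the pushed image runs as expected by pulling and running it directly with Docker before deploying it with an orchestrator. The registry path below is the same placeholder used above.
$ docker run --rm your-registry.com/team1/python-app:v1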
Create an image with Packer
You can also create and publish Docker images with Packer. The process includes creating the Packer template file and then building and publishing the image with Packer. The example below shows a Packer template that builds a Docker image with the same Python application as the Docker process above.
python-app.pkr.hcl
packer {
  required_plugins {
    docker = {
      version = "~> 1.1.2"
      source  = "github.com/hashicorp/docker"
    }
  }
}
source "docker" "python-app" {
image = "python:alpine"
commit = true
changes = [
"WORKDIR /usr/src/app",
"ONBUILD RUN pip install --no-cache-dir -r requirements.txt",
"ENTRYPOINT [\"python\", \"./app.py\"]"
]
}
build {
name = "python-app-build"
sources = [
"source.docker.python-app"
]
provisioner "shell" {
inline = ["mkdir /usr/src/app"]
}
provisioner "file" {
source = "app.py"
destination = "/usr/src/app/app.py"
}
provisioner "file" {
source = "requirements.txt"
destination = "/usr/src/app/requirements.txt"
}
post-processors {
post-processor "docker-tag" {
repository = "brianmmcclain/python-app"
tags = ["v1"]
}
post-processor "docker-push" {}
}
}
You can then install the required Docker plugin with packer init and build the image with Packer. Packer tags and pushes the image to your registry as part of the build.
$ packer init python-app.pkr.hcl
$ packer build python-app.pkr.hcl
Deploy immutable containers
The general workflow for creating immutable containers involves creating the image, creating the deployment specification file, creating the container from the image with an orchestrator, and redeploying as necessary.
- Create the container image: Build the image with Docker or Packer and push it to your registry, as described in the previous sections.
- Create the deployment file: Create the deployment file for your specific orchestrator. This could be a Deployment manifest for Kubernetes or a job specification (jobspec) file for Nomad.
- Create the containers with the orchestrator: Submit the deployment file to the orchestrator to have it create the containers, as shown in the example commands after this list.
- Iterate and redeploy: Update your Dockerfile or Packer template and rebuild the container image. Update your deployment file and resubmit to the orchestrator. The orchestrator destroys any out-of-date containers and deploys new containers with the updated image.
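For example, assuming deployment files named deployment.yaml for Kubernetes and python-app.nomad.hcl for Nomad (both placeholder names), you submit them with the standard CLI commands below. Re-running the same command after you update the image reference performs the redeployment.
$ kubectl apply -f deployment.yaml
$ nomad job run python-app.nomad.hcl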
Improvements to the workflow can include:
- Using CI/CD and source code repository triggers to automate building the container image
- Using the Terraform providers for Kubernetes and Nomad to deploy containers with the respective orchestrator, as sketched in the example after this list
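As a minimal sketch of the second improvement, the following Terraform configuration uses the Nomad provider to submit the jobspec shown earlier. The cluster address and file name are assumptions, and a similar pattern applies to the Kubernetes provider.
terraform {
  required_providers {
    nomad = {
      source  = "hashicorp/nomad"
      version = "~> 2.0"
    }
  }
}

provider "nomad" {
  # Address of your Nomad cluster (placeholder)
  address = "http://nomad.example.com:4646"
}

resource "nomad_job" "python_app" {
  # Submit the job specification file that references the immutable image
  jobspec = file("${path.module}/python-app.nomad.hcl")
}
Running terraform apply registers the job with Nomad, and updating the jobspec and re-applying redeploys the containers with the new image.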
HashiCorp resources:
- Build a Docker image with Packer
- Manage Kubernetes resources with Terraform and the Kubernetes Terraform provider
- Manage Nomad resources with Terraform and the Nomad Terraform provider
- Whiteboard video: What is mutable vs. immutable infrastructure?
Next steps
In this section of Define your processes, you learned how to build and deploy immutable containers. Build and deploy immutable containers is part of the Define and automate processes pillar.
Refer to the following documents to learn more about creating immutable infrastructure:
- Create immutable infrastructure
- Create immutable virtual machines
- Create a Nomad cluster on AWS, GCP, and Azure