Use HCP Packer and Terraform to manage cloud container registries
Author: Bruce Harrison
This guide shows you how to build and manage Docker containers with HCP Packer and push the resulting images to cloud service provider (CSP) container registries. It demonstrates how to push images to the AWS, Azure, and GCP container registries.
This guide uses both Packer CE and the HCP Packer platform. You will use Packer CE to build containers and VM images. You will use HCP Packer to store container and VM metadata generated by Packer CE.
By using Packer CE and HCP Packer, you gain the following benefits:
- Capture of container metadata by the HCP platform
- Visibility into container lineage
- Notifications when a derived image becomes outdated because a new version of its parent image is released
- Access to HCP Packer webhooks for more advanced cloud registry management
- A common declarative file format (Packer HCL) that describes and builds both VM images and containers
Target audience
This guide references the following roles:
- Platform operator: Someone responsible for setting up the image repositories, building the containers, and pushing them to the respective cloud provider container registries.
Validated architecture
The following diagram shows the process of pushing a container to a cloud provider's container registry using HCP Packer.
This document focuses on running this process from a local machine, but you can apply the same concepts to a CI/CD platform such as GitHub Actions or GitLab Runners.
We recommend using Terraform to provision the underlying infrastructure.
Due to the way Packer post-processors and `docker login` work, if you plan to push images to multiple cloud providers, you should run each build in an isolated context. Attempting to push multiple images in parallel from a single context can cause transient failures because the builds compete for `docker login` access.
Packer offers a `docker-push` post-processor, but this guide does not cover it. The post-processor is inherently inflexible and causes the container push process to diverge widely across cloud providers. Instead, you will manually log in to each registry and push the containers. This method also applies broadly to CI/CD scenarios.
Every cloud provider has unique constraints on how credentials are handled. This guide uses long-lived credentials for brevity, but this approach may not work for all organizations.
Platform operators should consult with their internal security teams to ensure that the cloud provider's authentication method aligns with company policies and practices.
Prerequisites
- An active HCP service principal.
- An existing HCP Packer Account (HCP Platform).
- Packer CE CLI v1.11.0 or later installed.
- A working Packer HCL template that builds a Docker container (see the example after this list).
- Git CLI installed.
For your desired cloud provider, ensure you have the following:
- Necessary permissions to create resources within your chosen cloud provider.
- gcloud CLI installed locally and configured with your GCP credentials
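If you do not already have a template, the following minimal sketch builds a Debian-based container, registers its metadata with HCP Packer, and tags the result for the artifact repository created later in this guide. The base image, bucket name, and repository path are assumptions; replace them with your own values. Save the template as docker-debian-gcp.pkr.hcl to match the build command used later.
packer {
  required_plugins {
    docker = {
      source  = "github.com/hashicorp/docker"
      version = ">= 1.0.0"
    }
  }
}

source "docker" "debian" {
  image  = "debian:bookworm"
  commit = true
}

build {
  # Registers build metadata with HCP Packer under this bucket.
  hcp_packer_registry {
    bucket_name = "docker-debian-gcp"
  }

  sources = ["source.docker.debian"]

  # Tags the committed image locally so you can push it manually later.
  post-processor "docker-tag" {
    repository = "us-west1-docker.pkg.dev/<gcp-project-id>/packerexampleregistry/packer-demo-image"
    tags       = ["latest"]
  }
}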
Create image repository
You can create an artifact repository using either Terraform or the `gcloud` CLI.
Use the following Terraform configuration to deploy a Google Cloud artifact repository. This configuration uses the `google_artifact_registry_repository`, `google_service_account`, `google_project_iam_member`, and `google_service_account_key` resources.
Update `<gcp-project-id>` with the GCP project ID you want to deploy your artifact repository to.
resource "google_container_registry" "demo" {
project = "<gcp-project-id>"
location = "EU"
}
resource "google_service_account" "demo" {
account_id = "packer-service-account-id"
display_name = "Packer Service Account"
}
resource "google_project_iam_member" "demo" {
project = "packer-project"
role = "roles/artifactregistry.writer"
member = google_service_account.demo.email
}
resource "google_service_account_key" "demo" {
service_account_id = google_service_account.demo.name
}
output "service_account_keys" {
value = google_service_account_key.demo.private_key
}
After you create the Google Cloud artifact repository, store the `service_account_keys` output value in a file named `packer-user.key`. You will use this key file to log in to the artifact repository.
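Note that Terraform returns the `private_key` attribute base64-encoded, so decode it when you write the key file. A minimal sketch, assuming you run these commands from the directory containing the configuration above (on older macOS systems, use `base64 -D` instead of `base64 --decode`):
$ terraform init
$ terraform apply
$ terraform output -raw service_account_keys | base64 --decode > packer-user.key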
Enable the gcloud authentication helper
You need to enable the `gcloud` authentication helper so you can log in to the artifact repository through `docker login`.
First, set up the service account as the account Docker will use.
$ gcloud auth activate-service-account \
packer-user@<gcp-project-id>.iam.gserviceaccount.com \
--key-file=./packer-user.key
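To confirm that the service account is now the active credential, list the authenticated accounts:
$ gcloud auth list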
Next, enable the `gcloud`/`docker` authentication helper.
$ gcloud auth configure-docker us-west1-docker.pkg.dev
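This command registers `gcloud` as a Docker credential helper for the given registry host. Assuming a default Docker setup, it adds an entry like the following to `~/.docker/config.json`:
{
  "credHelpers": {
    "us-west1-docker.pkg.dev": "gcloud"
  }
}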
Test the credentials.
$ docker login us-west1-docker.pkg.dev
If the helper is configured correctly, Docker responds with "Login Succeeded".
Push container to registry
Locate your HCP service principal credentials and export them as environment variables.
$ export HCP_CLIENT_ID=
$ export HCP_CLIENT_SECRET=
Build the container, replacing the filename with your own template's filename.
$ packer build docker-debian-gcp.pkr.hcl
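If your template does not define an `hcp_packer_registry` block, you can instead select the HCP Packer bucket with an environment variable. The bucket name below is an assumption; use your own:
$ export HCP_PACKER_BUCKET_NAME=docker-debian-gcp
$ packer build docker-debian-gcp.pkr.hcl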
Log in to the Google Cloud artifact registry using the `gcloud` Docker credential helper.
$ docker login us-west1-docker.pkg.dev
Push the container to GCP.
$ docker push us-west1-docker.pkg.dev/<gcp-project-id>/packerexampleregistry/packer-demo-image:latest
Docker pushes the image to the artifact repository, where it is now available to pull.
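To verify the push, list the images in the repository with the `gcloud` CLI:
$ gcloud artifacts docker images list us-west1-docker.pkg.dev/<gcp-project-id>/packerexampleregistry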
Conclusion
In this guide, you learned how to push containers to cloud provider container registries using Packer CE and HCP Packer. To learn more, check out the following resources: