Standardize machine images across multiple cloud providers
As your organization grows, you may adopt a hybrid or multi-cloud strategy to enable innovation, increase resiliency, decrease costs, or integrate different systems. Packer is a cloud-agnostic tool that lets you build identical machine images for multiple platforms from a single template file. By tracking your build metadata through HCP Packer, you can query it for future downstream Packer builds or reference images in your Terraform configuration.
In this tutorial, you will build and deploy a machine image containing HashiCups, a fictional coffee-shop application, in AWS and Azure. To do so, you will use Packer to build and store the images in AWS and Azure, push the build metadata to HCP Packer, and use Terraform to deploy the images to their respective cloud providers. In the process, you will learn how to use Packer and HCP Packer to standardize machine images across multi-cloud and hybrid environments.
Prerequisites
This tutorial assumes that you are familiar with the workflows for Packer, HCP Packer, and either Terraform OSS or Terraform Cloud. If you are new to Packer, complete the Packer Get Started tutorials first. If you are new to HCP Packer, complete the Get Started with HCP Packer tutorials.
Terraform Cloud is a platform that you can use to manage and execute your Terraform projects. It includes features like remote state and execution, structured plan output, workspace resource summaries, and more. The workflow for Terraform Cloud is the same as Terraform OSS.
Select the Terraform OSS tab if you would rather complete this tutorial using Terraform OSS.
If you are new to Terraform, complete the Get Started tutorials first. If you are new to Terraform Cloud, complete the Terraform Cloud Get Started tutorials.
Next, you will need Terraform 1.2+ installed locally.
You will also need a Terraform Cloud account with Terraform Cloud locally authenticated.
In this tutorial, you will use the Terraform CLI to create a Terraform Cloud workspace and trigger remote apply runs.
Now, install Packer 1.7.10+ locally.
You will also need an HCP account with an HCP Packer Registry.
Next, create a new HCP service principal and set the following environment variables locally.
Environment Variable | Description |
---|---|
HCP_CLIENT_ID | The client ID generated by HCP when you created the HCP Service Principal |
HCP_CLIENT_SECRET | The client secret generated by HCP when you created the HCP Service Principal |
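For example, on Unix-like systems you can export these variables as follows. The values shown are placeholders; substitute the credentials HCP generated for your service principal.

```shell
# Replace the placeholder values with your HCP service principal credentials
export HCP_CLIENT_ID="your-hcp-client-id"
export HCP_CLIENT_SECRET="your-hcp-client-secret"
```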
You will also need an AWS account with credentials set as local environment variables.
Environment Variable | Description |
---|---|
AWS_ACCESS_KEY_ID | The access key ID from your AWS key pair |
AWS_SECRET_ACCESS_KEY | The secret access key from your AWS key pair |
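As with the HCP credentials, you can export these variables in your shell. The values shown are placeholders; substitute your own AWS key pair.

```shell
# Replace the placeholder values with your AWS credentials
export AWS_ACCESS_KEY_ID="your-aws-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-aws-secret-access-key"
```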
If you do not have one already, create an Azure account.
In your Azure account, create an Azure Active Directory Service Principal scoped to your Subscription, with the Contributor role, and an application secret. Be sure to copy the application secret value generated by Azure. Then, set the following environment variables.
Environment Variable | Description |
---|---|
ARM_CLIENT_ID | The Application (client) ID from your Azure Service Principal |
ARM_CLIENT_SECRET | The value generated by Azure when you created an application secret for your Azure Service Principal |
ARM_SUBSCRIPTION_ID | Your Azure subscription ID |
ARM_TENANT_ID | The Directory (tenant) ID from your Azure Service Principal |
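For example, export the Azure values like this. The values shown are placeholders; substitute the IDs and secret from your Azure Service Principal.

```shell
# Replace the placeholder values with your Azure service principal details
export ARM_CLIENT_ID="your-azure-client-id"
export ARM_CLIENT_SECRET="your-azure-client-secret"
export ARM_SUBSCRIPTION_ID="your-azure-subscription-id"
export ARM_TENANT_ID="your-azure-tenant-id"
```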
Next, create an Azure Resource Group in the US West 3 region and set the following environment variable.
Environment Variable | Description |
---|---|
TF_VAR_azure_resource_group | The name of the Azure Resource Group you created. Packer will store images here, and Terraform will create HashiCups infrastructure here. |
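If you use the Azure CLI, this step might look like the following sketch. The resource group name `learn-packer-hashicups` is a placeholder; use whatever name you prefer, and pass the same name to Terraform through the environment variable.

```shell
# Create a resource group in the US West 3 region (name is a placeholder)
az group create --name learn-packer-hashicups --location westus3

# Tell Terraform which resource group to use
export TF_VAR_azure_resource_group="learn-packer-hashicups"
```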
Clone repository
In your terminal, clone the example repository.
Navigate to the cloned repository.
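These two steps look like the following. The repository URL shown is illustrative; use the URL given in the tutorial's repository listing.

```shell
# Clone the example repository (URL is illustrative)
git clone https://github.com/hashicorp/learn-packer-multicloud-hashicups.git

# Navigate to the cloned repository
cd learn-packer-multicloud-hashicups
```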
Review Packer template
The `packer` directory contains files Packer uses to build images.

In your editor, open `variables.pkr.hcl`. Packer uses the environment variables you set earlier for the first four variables.
Now, open `packer/build.pkr.hcl`.

The `azure-arm.ubuntu-lts` source block uses the `client_id`, `client_secret`, and `subscription_id` parameters to authenticate to Azure. Packer retrieves an Ubuntu 22.04 image to use as the base image and stores built images in the resource group specified by the `managed_image_resource_group_name` parameter.
The `amazon-ebs.ubuntu-lts` source block retrieves an Ubuntu 22.04 image to use as the base image, from the region specified in the `region` variable. The Amazon plugin for Packer uses the AWS credential environment variables you set earlier to authenticate to AWS.
The `build` block references the image sources defined in the source blocks. Packer standardizes your machine images across clouds by following the same instructions to build both images.
First, Packer creates a virtual machine from each source image in both cloud providers. Then, it copies the HashiCups systemd unit file to each machine and runs the `setup-deps-hashicups.sh` script to install and configure HashiCups. When the script finishes, Packer asks each cloud provider to create a new image from each virtual machine.
Finally, Packer sends image metadata from the newly built images to the specified HCP Packer registry bucket so you can reference the images.
Build HashiCups images
Change into the `packer` directory.
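From the root of the cloned repository:

```shell
cd packer
```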
Initialize the Packer template to install the required AWS and Azure plugins.
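Run `packer init` against the current directory:

```shell
# Download and install the plugins the template requires
packer init .
```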
Packer installs the plugins specified in the `required_plugins` block of the `build.pkr.hcl` template. Packer plugins are standalone applications that perform tasks during builds. They extend Packer's capabilities, similar to Terraform providers.
Now, build the HashiCups images in both AWS and Azure.
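Run the build against the current directory:

```shell
# Build the HashiCups image in AWS and Azure in parallel
packer build .
```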
Note
It may take up to 15 minutes for Packer to build the images.
Packer builds images in parallel in each cloud provider, reducing the total build time.
Continue on to the next section while the build completes to learn how to deploy the images to multiple clouds using Terraform. To skip the deployment step, proceed to Clean up your infrastructure.
Review Terraform configuration for HashiCups
The `terraform` directory contains the Terraform configuration to deploy the HashiCups machine images to Azure and AWS.
Open `terraform/variables.tf`. This file contains variables used by the rest of the configuration. The `aws_region` and `azure_region` variables control which image metadata Terraform requests from HCP Packer, as well as the regions where Terraform deploys the HashiCups images and infrastructure.
Warning
Ensure that the values assigned to `aws_region` and `azure_region` match the values of the corresponding Packer variables. If you changed the Packer variable values during the build, change the Terraform variable values, too.
Now, open `terraform/hcp.tf`. This configuration retrieves image information from HCP Packer using data sources.
The `hcp_packer_iteration.hashicups` data source retrieves image iteration information from the `production` channel of the `learn-packer-multicloud-hashicups` bucket. These values are the defaults for the configuration's input variables.
The `hcp_packer_image` data sources use the iteration ID from the `hcp_packer_iteration` data source to retrieve an image for each cloud provider. Notice the differences in the `cloud_provider` and `region` attributes between the two data sources.
Open `terraform/aws.tf`. This configuration defines a VPC and network resources, an AWS EC2 instance running HashiCups, and a security group with public access on port 80. Notice that the HashiCups instance references the `hcp_packer_image.aws_hashicups` data source.
Open `terraform/azure.tf`. This configuration defines a virtual network and network resources, an Azure virtual machine running HashiCups, and a security group with public access on port 80. The virtual machine references the `hcp_packer_image.azure_hashicups` data source.
Warning
This configuration hardcodes admin credentials for the Azure virtual machine for demo purposes. Do not hardcode credentials in production.
Initialize configuration
Change to the `terraform` directory.
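From the repository root:

```shell
cd terraform
```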
Set the `TF_CLOUD_ORGANIZATION` environment variable to your Terraform Cloud organization name. This will configure your Terraform Cloud integration.
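For example, with a placeholder organization name:

```shell
# Replace the placeholder value with your Terraform Cloud organization name
export TF_CLOUD_ORGANIZATION="your-tfc-organization"
```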
Initialize your configuration. Terraform will automatically create the `learn-packer-multicloud` workspace in your Terraform Cloud organization.
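Run the standard initialization command:

```shell
terraform init
```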
In Terraform Cloud, navigate to the `learn-packer-multicloud` workspace.
Set the following workspace-specific variables. Set the correct type and be sure to mark the secrets as sensitive.
Variable name | Description | Type |
---|---|---|
ARM_CLIENT_ID | The Application (client) ID from your Azure Service Principal | Environment variable |
ARM_CLIENT_SECRET | The value generated by Azure when you created an application secret for your Azure Service Principal | Environment variable |
ARM_SUBSCRIPTION_ID | Your Azure subscription ID | Environment variable |
ARM_TENANT_ID | The Directory (tenant) ID from your Azure Service Principal | Environment variable |
AWS_ACCESS_KEY_ID | The access key ID from your AWS key pair | Environment variable |
AWS_SECRET_ACCESS_KEY | The secret access key from your AWS key pair | Environment variable |
HCP_CLIENT_ID | The client ID generated by HCP when you created the HCP Service Principal | Environment variable |
HCP_CLIENT_SECRET | The client secret generated by HCP when you created the HCP Service Principal | Environment variable |
azure_resource_group | The name of the Azure Resource Group you created. Packer will store images here, and Terraform will create HashiCups infrastructure here. | Terraform variable |
Wait for Packer to finish building your images, then continue with the tutorial.
Verify machine images
When Packer finishes building your images, navigate to your `learn-packer-multicloud-hashicups` bucket in the HCP Packer dashboard.
Click on Iterations, then select the first iteration, labeled `v1`. Notice that this iteration has two builds: one for Azure and one for AWS.
The respective AWS and Azure data sources in your Terraform configuration reference each of these artifacts.
Create HCP Packer channel
HCP Packer channels allow you to reference a specific build iteration in Packer or Terraform.
In the HCP console, click on Channels, then click on New Channel. Create a new channel named `production` and set it to the `v1` iteration of your `learn-packer-multicloud-hashicups` bucket.
Terraform will query the `production` channel to retrieve the Azure and AWS image IDs and deploy the appropriate images.
Deploy images
In your terminal, apply your configuration to deploy the HashiCups images in both Azure and AWS. Respond `yes` to the prompt to confirm the operation.
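Run the apply from the `terraform` directory:

```shell
terraform apply
```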
Visit the addresses from the `aws_public_ip` and `azure_public_ip` outputs in your browser to view the HashiCups application.
Tip
It may take several minutes for the setup script to complete on each instance. If you cannot view the HashiCups dashboard or receive an error response, wait a few minutes before trying again.
You successfully built and deployed identical machine images across multiple clouds with Packer and Terraform.
Clean up your infrastructure
Before moving on, destroy the infrastructure you created in this tutorial.
In the `terraform` directory, destroy the infrastructure for the HashiCups application. Respond `yes` to the prompt to confirm the operation.
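Run the destroy from the `terraform` directory:

```shell
terraform destroy
```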
Delete Azure resource group and machine images
Your Azure account still has machine images in the resource group you created for this tutorial.
In the Azure portal, visit Resource groups. Then, select the name of the resource group you created for this tutorial.
If you want to keep the resource group, select all images in the resource list whose names begin with `hashicups_`, then select Delete to delete them.
If you no longer need the resource group, select Delete resource group and follow the on-screen instructions to delete it. Azure will delete all resources contained in the resource group, including images, before deleting the group itself.
Delete AWS machine images
Your AWS account still has machine images and their respective snapshots, which you may be charged for depending on your other usage.
Note
Remember to delete the AMIs and their snapshots in the region where Packer created them. If you did not update the `aws_region` variable in the `terraform.tfvars` file, they are in the `us-west-1` region.
In your `us-west-1` AWS account, deregister the AMI by selecting it, clicking the Actions button, selecting the Deregister AMI option, and then clicking the Deregister AMI button in the confirmation dialog.
Delete the snapshots by selecting them, clicking the Actions button, selecting the Delete snapshot option, and then clicking the Delete button in the confirmation dialog.
Clean up Terraform Cloud resources
If you used the Terraform Cloud workflow, navigate to your `learn-packer-multicloud` workspace in Terraform Cloud and delete the workspace.
Next steps
In this tutorial, you built machine images from the same Packer template in AWS and Azure, pushed the metadata to HCP Packer, and deployed virtual machines using the built images. In the process, you learned how you can use Packer and HCP Packer to standardize your machine images as you adopt a multi-cloud strategy.
For more information on topics covered in this tutorial, check out the following resources.
- Complete the Build a Golden Image Pipeline with HCP Packer tutorial to build a sample application image with a golden image pipeline, and deploy it to AWS using Terraform.
- Complete the Set Up Terraform Cloud Run Task for HCP Packer tutorial to learn how to set up run tasks that ensure your Terraform configuration uses compliant machine images.