Well-Architected Framework
Make workflows consistent across cloud providers
In multi-cloud environments, or organizations with multiple teams, you can end up with several ways to do the same work. Different tools, runbooks, and approval paths slow down delivery, complicate handoffs, and increase operational risk.
Standardizing workflows means agreeing on a consistent change lifecycle, such as where configuration lives, how you review changes, how you validate the impact, and how you promote changes between environments. Even when the underlying cloud services differ, your team can use the same process to plan, review, and apply changes.
Why standardize workflows
Standardized workflows address the following operational challenges:
Reduce operational complexity across cloud providers: Managing infrastructure with provider-specific tools requires learning different workflows, CLIs, and APIs for each platform. Standardized workflows use consistent tooling and processes across all cloud providers, reducing the cognitive load on your team.
Enable flexible technology choices: When your workflows depend on specific tools or platforms, switching technologies requires retraining your team and rebuilding processes. Workflow standardization focuses on the process rather than the underlying technology, allowing you to introduce new tools without disrupting operations.
Accelerate team onboarding: When each team member follows different processes, institutional knowledge stays siloed with individuals. Standardized workflows document processes in code, making it easier for new team members to understand operations and contribute quickly.
Standardize multi-cloud workflows
Standardize the lifecycle of your changes: store configuration in version control, review updates through pull requests, validate impact before applying changes, and promote changes consistently between environments.
Standardize deployment with Terraform
You can use Terraform to standardize your infrastructure deployment process. Terraform provides a consistent workflow for managing infrastructure across cloud providers. You define infrastructure in HCL configuration files, run terraform plan to preview changes, and run terraform apply to create or modify resources. The workflow remains the same whether you provision AWS EC2 instances, Azure Virtual Machines, or GCP Compute Engine instances.
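For example, a single configuration can manage resources on more than one provider with the same plan and apply steps. The following sketch assumes AWS and Azure credentials are already configured in your environment; the bucket and resource group names are placeholders:
main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

provider "azurerm" {
  features {}
}

# The same review, plan, and apply workflow manages both providers
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-artifacts-bucket"
}

resource "azurerm_resource_group" "artifacts" {
  name     = "example-artifacts"
  location = "East US"
}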
HCP Terraform extends this standardization to team collaboration. Run triggers connect workflows by automatically starting downstream runs when upstream workspaces complete. When one workflow completes, such as creating a Kubernetes cluster, a run trigger starts a second workflow to deploy applications or configure monitoring.
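For example, the following sketch uses the tfe provider to connect two workspaces with a run trigger. The organization and workspace names are assumptions, and provider authentication is omitted:
run-triggers.tf
terraform {
  required_providers {
    tfe = {
      source = "hashicorp/tfe"
    }
  }
}

resource "tfe_workspace" "k8s_cluster" {
  name         = "k8s-cluster"
  organization = "example-org"
}

resource "tfe_workspace" "app_deploy" {
  name         = "app-deploy"
  organization = "example-org"
}

# Start a run in the app workspace whenever the cluster workspace applies successfully
resource "tfe_run_trigger" "deploy_after_cluster" {
  workspace_id  = tfe_workspace.app_deploy.id
  sourceable_id = tfe_workspace.k8s_cluster.id
}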
Standardize image creation with Packer
Packer standardizes image creation workflows across cloud providers. You define an image in a Packer template using HCL, specify provisioning steps with shell scripts or configuration management tools, and run packer build to create machine images for multiple platforms.
Standardized images help you reduce configuration drift and speed up provisioning because every environment starts from the same baseline such as OS version, packages, and hardening settings. When you update that baseline, you can rebuild images and roll them out through the same review and promotion process you use for infrastructure changes.
The same Packer template produces AMIs for AWS, VHDs for Azure, and GCP images from a single configuration file.
The following Packer template creates machine images for multiple cloud providers from a single configuration:
web-app.pkr.hcl
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = "~> 1.0"
    }
    azure = {
      source  = "github.com/hashicorp/azure"
      version = "~> 2.0"
    }
  }
}

# Build configuration for AWS
source "amazon-ebs" "web_app" {
  ami_name      = "web-app-{{timestamp}}"
  instance_type = "t3.micro"
  region        = "us-east-1"

  source_ami_filter {
    filters = {
      name                = "ubuntu/images/*ubuntu-jammy-22.04-amd64-server-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"]
  }

  ssh_username = "ubuntu"
}

# Build configuration for Azure
source "azure-arm" "web_app" {
  managed_image_name                = "web-app-{{timestamp}}"
  managed_image_resource_group_name = "packer-images"
  os_type                           = "Linux"
  image_publisher                   = "Canonical"
  image_offer                       = "0001-com-ubuntu-server-jammy"
  image_sku                         = "22_04-lts"
  location                          = "East US"
  vm_size                           = "Standard_B2s"

  # Authenticate with your local Azure CLI session (assumes you ran az login)
  use_azure_cli_auth = true
}

# Standard provisioning workflow for all platforms
build {
  sources = [
    "source.amazon-ebs.web_app",
    "source.azure-arm.web_app"
  ]

  # Install and configure application
  provisioner "shell" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
      "echo '<h1>Hello from a standardized image</h1>' | sudo tee /var/www/html/index.html >/dev/null",
      "sudo systemctl enable nginx"
    ]
  }
}
This Packer template uses the same provisioning workflow, such as installing packages and configuring services, across AWS and Azure. Running packer build web-app.pkr.hcl creates an AMI for AWS and a managed image for Azure in a single run. Teams learn one workflow that applies to all cloud platforms.
Standardize secrets with Vault
Consistent workflows also require consistent secret handling. If each team stores secrets differently across AWS, Azure, and GCP, you increase the chance of leaks, inconsistent access controls, and unclear rotation processes.
Vault gives you a centralized workflow for secrets across environments. You can authenticate workloads and users, request secrets through a consistent API, and audit access in one place. You can also issue dynamic, time-bound credentials such as database credentials, so you reduce long-lived static secrets.
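For example, Vault policies are written in HCL, so secret access follows the same review and version control workflow as your other configuration. The sketch below assumes a database secrets engine mounted at database/ with a role named web-app:
web-app-policy.hcl
# Allow the web-app workload to request short-lived database credentials
path "database/creds/web-app" {
  capabilities = ["read"]
}
You can apply the policy with vault policy write web-app web-app-policy.hcl and attach it to the authentication method your workloads use, so every environment requests credentials the same way.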
Standardize application deployment with Nomad
Nomad provides consistent application deployment workflows across cloud providers and platforms. You define applications in HCL job specifications, run nomad plan to preview changes, and run nomad run to deploy. The workflow remains identical whether your Nomad clusters run on AWS EC2 instances, Azure Virtual Machines, GCP Compute Engine instances, or bare metal servers.
The same Nomad job specification deploys applications to different infrastructure platforms by changing the task driver, such as Docker, exec, or Java, without changing the deployment workflow. Nomad's consistent scheduling and orchestration model works across public clouds, private data centers, and edge locations.
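For example, the following minimal job specification is a sketch that assumes a Nomad cluster with the Docker driver enabled; the job name and container image are placeholders:
web-app.nomad.hcl
job "web-app" {
  datacenters = ["dc1"]
  type        = "service"

  group "web" {
    count = 2

    network {
      port "http" {
        to = 80
      }
    }

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:1.25"
        ports = ["http"]
      }

      resources {
        cpu    = 200
        memory = 128
      }
    }
  }
}
Run nomad plan web-app.nomad.hcl to preview the change and nomad run web-app.nomad.hcl to deploy it, using the same commands regardless of where the cluster runs.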
Standardize service networking with Consul
Consul standardizes service discovery and networking across cloud providers. Services register with Consul using a consistent API regardless of where they run. Consul service mesh provides uniform traffic management, security policies, and observability across AWS, Azure, GCP, and on-premises environments.
When services need to communicate, they query Consul for service locations instead of hardcoding IP addresses or DNS names. The service discovery workflow remains the same whether services run in containers, virtual machines, or serverless platforms.
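For example, the following service definition is a sketch of a Consul agent registration; the service name, port, and health check path are assumptions:
web-app-service.hcl
# Register the service and a health check with the local Consul agent
service {
  name = "web-app"
  port = 8080

  check {
    http     = "http://localhost:8080/health"
    interval = "10s"
    timeout  = "2s"
  }
}
Place the file in the agent's configuration directory, and other services can resolve web-app through Consul DNS or the HTTP API, regardless of which cloud the service runs in.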
Standardize access workflows with Boundary
Boundary provides consistent access workflows to infrastructure regardless of location. Users authenticate once and access resources through Boundary sessions, whether connecting to AWS EC2 instances, Azure VMs, Kubernetes pods, or on-premises databases. This removes the need to manage SSH keys, VPN configurations, and bastion hosts differently for each environment.
Boundary's session recording and credential brokering work identically across all target platforms, providing consistent security controls and audit trails regardless of where infrastructure runs.
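For example, you can define Boundary targets as code with the Terraform Boundary provider so the same access workflow applies to every environment. The following sketch assumes an existing project scope and host set, passed in as variables, and omits provider authentication:
boundary-target.tf
terraform {
  required_providers {
    boundary = {
      source = "hashicorp/boundary"
    }
  }
}

variable "project_scope_id" {
  description = "ID of an existing Boundary project scope (assumed)"
  type        = string
}

variable "web_host_set_id" {
  description = "ID of an existing host set containing the web servers (assumed)"
  type        = string
}

# A TCP target that users reach through brokered Boundary sessions
resource "boundary_target" "web_servers_ssh" {
  name            = "web-servers-ssh"
  type            = "tcp"
  scope_id        = var.project_scope_id
  default_port    = 22
  host_source_ids = [var.web_host_set_id]
}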
Enforce consistent policies with Sentinel
Sentinel standardizes policy enforcement across your workflows. You define policies as code that validate Terraform plans, Vault secret access, Nomad job submissions, and Consul service registrations. The same policy language and workflow applies across all HashiCorp tools.
Policy-as-code ensures consistent governance whether you provision infrastructure with Terraform, deploy applications with Nomad, or manage secrets with Vault. Sentinel policies prevent configuration drift by validating changes against your organizational standards before execution.
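For example, a policy set's sentinel.hcl file, also written in HCL, declares which policies apply and how strictly HCP Terraform enforces them. The policy name and source file below are assumptions:
sentinel.hcl
# Enforce the policy on every run; a failure blocks the apply
policy "restrict-instance-type" {
  source            = "./restrict-instance-type.sentinel"
  enforcement_level = "hard-mandatory"
}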
HashiCorp resources
- Map your workflows to identify and document your current processes
- Define infrastructure as code to establish IaC foundation before standardizing workflows
- Create reusable modules to build standardized infrastructure components
Get started with workflow automation:
- Learn Terraform with the Terraform tutorials and read the Terraform documentation
- Learn Packer with the Packer tutorials and read the Packer documentation
- Learn Vault with the Vault tutorials and read the Vault documentation
- Learn Nomad with the Nomad tutorials and read the Nomad documentation
- Learn Consul with the Consul tutorials and read the Consul documentation
- Learn Boundary with the Boundary tutorials and read the Boundary documentation
- Get started with AWS, Azure, or GCP
Multi-cloud standardization:
- Read about Terraform multi-cloud provisioning for workflow consistency
- Build a golden image pipeline with HCP Packer for standardized images
- Configure HCP Terraform run triggers to chain workflows
Standardize workflows across HashiCorp tools:
- Learn Nomad job specifications for consistent application deployment
- Learn Consul service discovery for standardized service networking
- Learn Boundary session management for consistent access workflows
- Learn Sentinel policy as code for governance across tools
Next steps
In this section, you learned how to make workflows consistent across cloud providers by standardizing your change lifecycle and automating repeatable steps. You explored how Terraform and Packer create consistent workflows across AWS, Azure, and GCP, and how HCP Terraform run triggers connect workflows. Make workflows consistent across cloud providers is part of the Define and automate processes pillar.
Visit the following documents to continue building your automation strategy:
- Implement CI/CD to automate your standardized workflows
- Use version control to track changes to your workflow definitions
- Learn to centralize package management to standardize dependencies across environments