Migrate a workspace to a Stack
Stacks let you manage complex infrastructure deployments across multiple environments. Instead of a set of loosely coupled workspaces, a Stack consists of a set of components that you organize into one or more deployments. HCP Terraform manages the lifecycle of each deployment separately and provisions your Stack's components as a unit. Each deployment in your Stack provisions the same components with its own configuration, and represents a set of infrastructure that works together, such as a development, test, or production environment. HCP Terraform rolls out changes one deployment at a time, which lets you track changes across your environments.
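For example, a minimal Stack configuration pairs component blocks, which map Terraform modules to infrastructure, with deployment blocks, which supply per-environment input values. The sketch below uses illustrative names; you will generate a complete configuration for your own infrastructure later in this tutorial.
components.tfcomponent.hcl (illustrative sketch)
component "network" {
  # Each component sources its configuration from a Terraform module.
  source = "./modules/network"
  inputs = {
    vpc_cidr = var.vpc_cidr
  }
  providers = {
    aws = provider.aws.this
  }
}
deployment.tfdeploy.hcl (illustrative sketch)
# Each deployment provisions every component with its own input values.
deployment "development" {
  inputs = {
    vpc_cidr = "10.0.0.0/16"
  }
}
deployment "production" {
  inputs = {
    vpc_cidr = "10.1.0.0/16"
  }
}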
In this tutorial, you will migrate infrastructure from an HCP Terraform
workspace to a Stack. You will use HCP Terraform to deploy infrastructure in a
workspace, and then use the tf-migrate tool to migrate the workspace to a
Stack. Then you will add a second deployment to your Stack to manage a second
environment.
Prerequisites
This tutorial assumes that you are familiar with the Terraform and HCP Terraform workflows. If you are new to Terraform, complete the Get Started collection first. If you are new to HCP Terraform, complete the HCP Terraform Get Started tutorials first.
In order to complete this tutorial, you will need the following:
- An HCP Terraform account and organization.
- Your account must be a member of an organization and team with Stacks enabled.
- The Terraform CLI installed, v1.13 or later.
- The tf-migrate tool installed, v2.0 or later.
- An AWS account and associated credentials that allow you to create AWS resources including an EC2 instance, VPC, and security groups.
- A variable set configured in HCP Terraform with your AWS credentials.
- A GitHub account.
Configure an organization-wide variable set
To migrate a workspace to a Stack using tf-migrate, you must make a variable
set with your cloud provider credentials globally available to your entire
organization. Because of this limitation, you may want to use a non-production
organization to complete this tutorial.
To configure a global credentials variable set:
Navigate to your organization's variable sets page by selecting Settings > Variable sets from the left navigation.
Select the variable set containing your cloud provider credentials and click the Edit organization variable set button in the upper right.
Ensure that Apply to all projects, Stacks and workspaces is selected under Variable set scope.
Scroll to the bottom of the page and click the Save variable set button to save this change.
Clone example repository
Navigate to the example repository for this tutorial. This repository contains example configuration that you will provision as an HCP Terraform workspace and then migrate to a Stack.
In your terminal, clone the example repository.
$ git clone https://github.com/hashicorp-education/learn-terraform-stacks-migrate
Provision example workspace
Change to the repository directory.
$ cd learn-terraform-stacks-migrate/aws
Update the terraform block in terraform.tf to configure the cloud block
that connects your local workspace to HCP Terraform.
terraform.tf
terraform {
required_providers {
## ...
}
cloud {
organization = "your-organization-name"
workspaces {
name = "learn-terraform-stacks-migrate"
}
}
required_version = "~> 1.13"
}
Replace your-organization-name with your organization name, which you can find
in the HCP Terraform portal.
Now that you have configured your HCP Terraform integration, run terraform
init to initialize your HCP Terraform workspace.
$ terraform init
Initializing HCP Terraform...
Initializing modules...
Initializing provider plugins...
- Reusing previous version of hashicorp/tls from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/tls v4.1.0
- Using previously-installed hashicorp/aws v6.18.0
HCP Terraform has been successfully initialized!
You may now begin working with HCP Terraform. Try running "terraform plan" to
see any changes that are required for your infrastructure.
If you ever set or change modules or Terraform Settings, run "terraform init"
again to reinitialize your working directory.
HCP Terraform initializes your local configuration and creates your HCP Terraform workspace.
Next, apply your configuration. Respond to the confirmation prompt with a yes.
$ terraform apply
Running apply in HCP Terraform. Output will stream here. Pressing Ctrl-C
will cancel the remote apply if it's still pending. If the apply started it
will stop streaming the logs, but will not stop the apply running remotely.
Preparing the remote apply...
To view this run in a browser, visit:
https://app.terraform.io/app/your-organization-name/learn-terraform-stacks-migrate/runs/run-d586BjYZnVsvyBAC
Waiting for the plan to start...
Terraform v1.13.4
on linux_amd64
Initializing plugins and modules...
data.aws_availability_zones.available: Refreshing...
module.instance.data.aws_ami.ubuntu: Refreshing...
data.aws_availability_zones.available: Refresh complete after 0s [id=us-east-1]
module.instance.data.aws_ami.ubuntu: Refresh complete after 1s [id=ami-0ecb62995f68bb549]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_key_pair.instance will be created
+ resource "aws_key_pair" "instance" {
+ arn = (known after apply)
## ...
Plan: 28 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ intance_ids = [
+ (known after apply),
+ (known after apply),
+ (known after apply),
]
+ intance_private_dns = [
+ (known after apply),
+ (known after apply),
+ (known after apply),
]
Do you want to perform these actions in workspace "learn-terraform-stacks-migrate"?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
tls_private_key.instance: Creating...
tls_private_key.instance: Creation complete after 0s [id=969741004758c3001123f342b11f95ed5dde1f6f]
aws_key_pair.instance: Creating...
## ...
module.instance.aws_instance.private[0]: Creating...
module.instance.aws_instance.private[0]: Still creating... [10s elapsed]
module.instance.aws_instance.private[0]: Creation complete after 13s [id=i-04274e05c1437d3e0]
Apply complete! Resources: 28 added, 0 changed, 0 destroyed.
Outputs:
intance_ids = [
"i-064ef7aeaadc1561f",
"i-0fe529a10956df720",
"i-0d80938ea77640c43",
]
intance_private_dns = [
"ip-10-0-1-209.ec2.internal",
"ip-10-0-2-75.ec2.internal",
"ip-10-0-3-170.ec2.internal",
]
Review the resources created by your workspace. In the next step, you will migrate these resources to a new Stack.
Migrate workspace to a Stack
Before migrating a workspace to a Stack, you must ensure that your workspace is
fully modularized. Your top-level configuration cannot include any resources or
data sources, and your resources must be provisioned by sub-modules. You can
either manually modularize the configuration, or use tf-migrate to automate
the process.
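For reference, a fully modularized root configuration contains only module blocks, variables, and outputs, similar to the following sketch. The module names and arguments here are illustrative, not the exact contents of the example repository.
main.tf (illustrative sketch of a modularized root configuration)
# All resources and data sources live inside the sub-modules; the root
# configuration only wires the modules together.
module "vpc" {
  source   = "./modules/vpc"
  vpc_name = var.vpc_name
  vpc_cidr = var.vpc_cidr
}
module "instance" {
  source     = "./modules/instance"
  subnet_ids = module.vpc.private_subnet_ids
}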
Modularize configuration
Use the tf-migrate modules create command to automatically modularize your
workspace's configuration. Respond to the confirmation prompt with a yes.
$ tf-migrate modules create
✓ Found 4 terraform files in the root directory
✓ Extracted HCP Terraform data to identify the workspaces controlled by the configuration.
You're about to begin the modularization process.
Please read the following important notes carefully:
1. A folder named "modularized_config" will be created to store the new modularized
configuration generated from your current setup.
2. All directories containing locally referenced modules will be copied into the
"modularized_config" folder.
3. The "modularized_config" folder must not already exist in the current working directory.
4. All folders for locally referenced modules must be located within the current working
directory where tf-migrate is being run.
Confirmation required ... ?
Only 'yes' or 'no' will be accepted as input.
Type 'yes' to approve proceed with the modularization process.
Type 'no' to cancel and abort.
Enter a value: yes
✓ Found 1 HCP Terraform workspaces associated with the configuration.
✓ Deleted backend block cloud from terraform block during modularization
✓ Successfully generated modularized configuration in modularized_config directory
✓ Modularization process completed successfully
Your modularized configuration files are available in the "modularized_config" directory.
Next steps to migrate your workspaces to Terraform Stacks:
1. Change into the modularized configuration directory:
cd modularized_config
2. Copy any terraform.tfvars files (from the top-level directories)
into this directory as needed.
3. Initialize Terraform:
terraform init
4. Prepare the stack migration:
tf-migrate stacks prepare
The tf-migrate CLI created a new directory called modularized_config that
contains an equivalent configuration to the workspace you previously
provisioned. Navigate into the new directory.
$ cd modularized_config
Initialize the modularized configuration.
$ terraform init
Initializing HCP Terraform...
Initializing modules...
- terraform_module in terraform_modules
- terraform_module.instance in terraform_modules/modules/instance
Downloading registry.terraform.io/terraform-aws-modules/vpc/aws 6.5.0 for terraform_module.vpc...
- terraform_module.vpc in .terraform/modules/terraform_module.vpc
Initializing provider plugins...
- Finding hashicorp/tls versions matching "~> 4.1.0"...
- Finding hashicorp/aws versions matching ">= 6.0.0, ~> 6.18.0"...
- Installing hashicorp/aws v6.18.0...
- Installed hashicorp/aws v6.18.0 (signed by HashiCorp)
- Installing hashicorp/tls v4.1.0...
- Installed hashicorp/tls v4.1.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
HCP Terraform has been successfully initialized!
You may now begin working with HCP Terraform. Try running "terraform plan" to
see any changes that are required for your infrastructure.
If you ever set or change modules or Terraform Settings, run "terraform init"
again to reinitialize your working directory.
Prepare your Stack
Prepare your Stack for migration with the tf-migrate stacks prepare command.
When prompted, name your Stack learn-terraform-stacks-migrate and your project
learn-terraform-stacks-migrate.
$ tf-migrate stacks prepare
⚠️ This is a beta release. Features and behavior may change.
Report issues: https://forms.gle/jJj2DXvVSqzqkgJe8
✓ Environment readiness checks completed
✓ Extracted terraform configuration data from current directory
Enter the name of the stack to be created: learn-terraform-stacks-migrate
Enter the name of a new project under which the stack will be created (project must not already exist): learn-terraform-stacks-migrate
✓ Fetched latest state file for workspace: learn-terraform-stacks-migrate
✓ Parsed state file for workspace: learn-terraform-stacks-migrate
✓ Extracted variables from terraform configuration
✓ Extracted providers from terraform configuration
✓ Extracted outputs blocks from terraform configuration
✓ Created components from module blocks from terraform configuration
✓ Created deployments for workspaces provided
✓ Stack configuration files generated successfully
✓ Completed sanity check: terraform stacks init
✓ Completed sanity check: terraform stacks fmt
✓ Completed sanity check: terraform stacks validate
─────────────────────────────────────────────────────────────────────────────
🎉 The `tf-migrate stacks prepare` command completed successfully.
─────────────────────────────────────────────────────────────────────────────
Next steps:
1. Review the generated files before proceeding:
a. _stacks_generated — contains Terraform Stacks configuration files.
b. stacks_migration_infra — contains Terraform configuration files for creating stacks and migrating workspaces.
2. Update all PLACEHOLDER values:
- In output blocks (outputs.tfcomponent.hcl in _stacks_generated)
- In deployment blocks (deployment.tfdeploy.hcl in _stacks_generated)
3. Configure authentication if your stacks manage cloud resources
(AWS, GCP, Azure, etc.):
→ https://developer.hashicorp.com/terraform/language/stacks/deploy/authenticate
4. Apply the configuration once all updates are complete:
run: `tf-migrate stacks execute` to start the migration.
The project created in this step cannot already exist in your HCP Terraform organization. Use a new project to migrate your Stack.
The tf-migrate stacks prepare command created two directories:
- The stacks_migration_infra directory contains infrastructure that tf-migrate uses to create your project and Stack in HCP Terraform.
- The _stacks_generated directory contains the configuration for your new Stack.
Review the configuration in modularized_config/_stacks_generated. This
directory contains five files that define your Stack, and a terraform_modules
directory with the modules that will define your Stack's components.
- The providers.tfcomponent.hcl file defines the providers for your Stack.
- The components.tfcomponent.hcl file defines the components that will be provisioned for each of your Stack's deployments (see the sketch after this list). The Terraform configuration for each component is defined in terraform_modules.
- The deployment.tfdeploy.hcl file defines deployments for your Stack. Currently, this includes a single deployment that matches the workspace you provisioned earlier in this tutorial.
- The variables.tfcomponent.hcl file defines input variables that you can use to configure each deployment in your Stack.
- The outputs.tfcomponent.hcl file defines the output values for each of your Stack's deployments.
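For reference, the component blocks in components.tfcomponent.hcl follow the general shape below, mapping the module that tf-migrate copied into terraform_modules to a component and wiring it to the provider defined in providers.tfcomponent.hcl. The component name, inputs, and provider mappings in your generated file may differ.
modularized_config/_stacks_generated/components.tfcomponent.hcl (shape may vary)
component "terraform_module" {
  # The component's Terraform configuration is the module that tf-migrate
  # generated in the terraform_modules directory.
  source = "./terraform_modules"
  inputs = {
    vpc_name        = var.vpc_name
    vpc_cidr        = var.vpc_cidr
    private_subnets = var.private_subnets
    public_subnets  = var.public_subnets
  }
  providers = {
    # Additional providers, such as tls, are mapped here as well.
    aws = provider.aws.this
  }
}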
Configure credentials
You must configure your Stack to authenticate with your cloud provider. In this tutorial, you will use the same credentials variable set that you used to provision the infrastructure in your workspace. You can also authenticate your Stack by configuring a trust relationship with your cloud provider using OpenID Connect (OIDC).
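For reference, the OIDC approach replaces static credentials with an identity_token block and a role that trusts HCP Terraform, roughly as in the following sketch based on the authentication documentation linked above. The audience and role ARN are placeholders, and the rest of this tutorial uses the variable set approach instead.
deployment.tfdeploy.hcl (OIDC sketch, not used in this tutorial)
identity_token "aws" {
  # The audience must match the trust policy of your AWS IAM role.
  audience = ["aws.workload.identity"]
}
deployment "learn-terraform-stacks-migrate" {
  inputs = {
    # Placeholder ARN for a role that trusts HCP Terraform's OIDC provider.
    role_arn       = "arn:aws:iam::123456789012:role/your-stacks-role"
    identity_token = identity_token.aws.jwt
    aws_region     = "us-east-1"
  }
}
With OIDC, the provider configuration would consume these inputs through the AWS provider's assume_role_with_web_identity settings instead of static access keys.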
Now configure your Stack to use your credentials variable set. Open
deployment.tfdeploy.hcl and add a store
block
for your variable set. Replace your-credentials-varset with the name of your
global variable set containing your provider credentials, which you can find in
your HCP Terraform organization's variable sets page by navigating to Settings > Variable sets in HCP Terraform.
modularized_config/_stacks_generated/deployment.tfdeploy.hcl
store "varset" "credentials" {
name = "your-credentials-varset"
category = "env"
}
Next, add your credentials to the deployment block in the same file. When you
deploy this Stack with HCP Terraform, it will pass these values into your Stack
as variables.
modularized_config/_stacks_generated/deployment.tfdeploy.hcl
deployment "learn-terraform-stacks-migrate" {
inputs = {
access_key = store.varset.credentials.AWS_ACCESS_KEY_ID
secret_key = store.varset.credentials.AWS_SECRET_ACCESS_KEY
session_token = store.varset.credentials.AWS_SESSION_TOKEN
aws_region = "us-east-1"
vpc_name = "learn-stacks-vpc"
vpc_cidr = "10.0.0.0/16"
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
}
import = true
}
Next, add the corresponding variables for your credentials to
variables.tfcomponent.hcl.
modularized_config/_stacks_generated/variables.tfcomponent.hcl
variable "access_key" {
description = "AWS access key"
type = string
ephemeral = true
}
variable "secret_key" {
description = "AWS sensitive secret key."
type = string
sensitive = true
ephemeral = true
}
variable "session_token" {
description = "AWS session token."
type = string
sensitive = true
ephemeral = true
}
The ephemeral argument tells HCP Terraform not to store these values in
your Stack's state. The secret key and session token also have the sensitive
argument, so HCP Terraform redacts those values in its UI.
Update the provider configuration in providers.tfcomponent.hcl to authenticate
with these credentials.
modularized_config/_stacks_generated/providers.tfcomponent.hcl
provider "aws" "this" {
config {
region = var.aws_region
access_key = var.access_key
secret_key = var.secret_key
token = var.session_token
}
}
Migrate to your Stack
Migrate your workspace to a Stack using the tf-migrate stacks execute command.
$ tf-migrate stacks execute
⚠️ This is a beta release. Features and behavior may change.
Report issues: https://forms.gle/jJj2DXvVSqzqkgJe8
✓ Stack configuration path found: /Users/YOU/code/learn-terraform-stacks-migrate/aws/modularized_config/_stacks_generated
✓ Successfully validated stack configuration found in dir: /Users/YOU/code/learn-terraform-stacks-migrate/aws/modularized_config/_stacks_generated
✓ Using dir: /Users/YOU/code/learn-terraform-stacks-migrate/aws/modularized_config/stacks_migration_infra for terraform operations
✓ Init command ran successfully
✓ Plan command ran successfully and changes are detected
✓ Apply command ran successfully in 5m17.573512916s
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Migration Summary
┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Item │ URL │
├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Project │ https://app.terraform.io/app/your-organization-name/projects/prj-kWZW6GvfUd8NMEDa │
│ Stack │ https://app.terraform.io/app/your-organization-name/projects/prj-kWZW6GvfUd8NMEDa/stacks/st-Wk1YFXiQCjSvXxNz │
└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────┘
Terraform Migrate has successfully executed the migration plan for your stack.
Please verify the migration in the HCP Terraform UI.
Next Steps:
1. Visit https://app.terraform.io/app/your-organization-name/projects/prj-kWZW6GvfUd8NMEDa/stacks/st-Wk1YFXiQCjSvXxNz.
2. Review the uploaded configuration and confirm it has been applied successfully.
3. Ensure all deployments under the stacks have completed. You may need to approve some plans.
4. If you encounter issues, refer to the diagnostics in the UI and follow this troubleshooting guide:
a. If there are configuration errors, review your stack configurations in _stacks_generated and resolve them.
b. Delete the stack and project from the HCP Terraform UI if necessary.
c. Remove the `.terraform` and `terraform.tfstate` files from the stacks_migration_infra directory on your local machine.
d. Run `tf-migrate stacks execute` again to re-create the stacks and upload the corrected configuration.
Navigate to the URL output by the tf-migrate stacks execute command above to
review the resources you have imported to your new Stack.
Remove import argument
The deployment configuration generated by tf-migrate includes an import
argument to tell HCP Terraform to import your Stack's state during the migration
process. Now that your workspace's state has been migrated to your Stack, remove
the import argument from the deployment block in deployment.tfdeploy.hcl.
modularized_config/_stacks_generated/deployment.tfdeploy.hcl
deployment "learn-terraform-stacks-migrate" {
inputs = {
access_key = store.varset.credentials.AWS_ACCESS_KEY_ID
secret_key = store.varset.credentials.AWS_SECRET_ACCESS_KEY
session_token = store.varset.credentials.AWS_SESSION_TOKEN
aws_region = "us-east-1"
vpc_name = "learn-stacks-vpc"
vpc_cidr = "10.0.0.0/16"
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
}
- import = true
}
In your terminal, change into the directory with your generated Stack configuration.
$ cd _stacks_generated
After changing your Stack configuration, you must upload the new configuration
version so that HCP Terraform can apply the change. Upload your Stack
configuration with the terraform stacks configuration upload
command. Replace st-abcdef12345 with your Stack ID, which you can find in the
output of the previous command or on your Stack's Settings page.
$ terraform stacks configuration upload -stack-id st-abcdef12345
Uploading stack configuration...
Configuration for Stack (id: 'st-abcdef12345') was uploaded
Configuration ID: stc-WaLB68bJRggxJa2o
Sequence Number: 2
See run at: https://app.terraform.io/app/your-organization-name/projects/prj-SRVBr1Hjf3Df2ftr/stacks/st-cnqCbcyu2sJuU2yt/configurations/stc-WaLB68bJRggxJa2o
HCP Terraform will apply the change with a new deployment run. Since none of your underlying infrastructure has changed, the plan and apply will succeed without you having to confirm it.
Decommission workspace
You have successfully migrated the management of your infrastructure to a Stack. To avoid accidentally changing the infrastructure with the workspace you created earlier in this tutorial, delete or lock the workspace from HCP Terraform without destroying the infrastructure it managed.
To delete the workspace:
Navigate to your learn-terraform-stacks-migrate workspace.
Select Settings > Destruction and Deletion from the left navigation.
Scroll to the bottom of the page and click the Force Delete from HCP Terraform button.
Confirm the action, and click the Force delete button to remove your workspace without affecting the infrastructure it used to manage.
Move Stack to VCS workflow
Now that you have migrated your infrastructure to your new Stack, you can continue to manage it with the CLI-driven workflow, or move your Stack's configuration to a version control system (VCS).
For this tutorial, you will move your Stack's configuration to a new VCS repository. To do so, first create a repository to manage the Stack's configuration, commit your configuration to it, and then configure your Stack to use the repository.
Create VCS repository
To create your VCS repository:
Navigate to Create a new repository in GitHub.
Select the owner for the repository, and name it learn-terraform-stacks-migrated.
Turn on the Add README toggle.
In the Add .gitignore section, select Terraform from the dropdown.
Click the Create repository button to create the repository.
On your new repository's GitHub page, select the Code dropdown and copy the appropriate URL to clone your repository.
Commit Stack configuration to VCS repository
Now, copy your Stack configuration to the new repository and commit it. To do so:
Open a second terminal window.
In the second terminal, navigate to a directory where you want to store your local clone of the new Git repository.
Clone the repository to your local machine:
$ git clone <repository-url>
Replace <repository-url> with the git URL of your new repository.
Change into the repository directory.
$ cd learn-terraform-stacks-migrated
Print out the path to your new repository directory and copy it to your clipboard.
$ pwd
/Users/YOU/code/learn-terraform-stacks-migrated
In the first terminal window, ensure that you are still in the _stacks_generated subdirectory.
Copy your new Stack's configuration to the new repository's directory.
$ cp -r . /Users/YOU/code/learn-terraform-stacks-migrated
Replace /Users/YOU/code/learn-terraform-stacks-migrated with the output of the pwd command from the previous step.
In the second terminal window, ensure that you are still in the learn-terraform-stacks-migrated directory.
Add the Stack configuration to your repository.
$ git add .
Commit the configuration.
$ git commit -m "Migrated Stacks config"
[main 4e5fc87] Migrated Stacks config
 14 files changed, 402 insertions(+)
 create mode 100755 .terraform-version
 create mode 100755 .terraform.lock.hcl
 create mode 100755 components.tfcomponent.hcl
 create mode 100755 deployment.tfdeploy.hcl
 create mode 100755 outputs.tfcomponent.hcl
 create mode 100755 providers.tfcomponent.hcl
 create mode 100644 terraform_modules/main.tf
 create mode 100644 terraform_modules/modules/instance/main.tf
 create mode 100644 terraform_modules/modules/instance/outputs.tf
 create mode 100644 terraform_modules/modules/instance/variables.tf
 create mode 100644 terraform_modules/outputs.tf
 create mode 100644 terraform_modules/terraform.tf
 create mode 100644 terraform_modules/variables.tf
 create mode 100755 variables.tfcomponent.hcl
Push the changes to GitHub.
$ git push
Enumerating objects: 20, done.
Counting objects: 100% (20/20), done.
Delta compression using up to 8 threads
Compressing objects: 100% (17/17), done.
Writing objects: 100% (19/19), 5.87 KiB | 2.93 MiB/s, done.
Total 19 (delta 2), reused 0 (delta 0), pack-reused 0 (from 0)
remote: Resolving deltas: 100% (2/2), done.
To github.com:YOU/learn-terraform-stacks-migrated.git
   111f35a..4e5fc87  main -> main
Configure Stack to use VCS repository
Finally, configure your Stack to use the new VCS repository.
Return to HCP Terraform in your browser window.
Navigate to your Stack's Settings > Version Control page.
On the Connect to VCS step, select your GitHub integration for the organization you created your new repository in.
To manage Stack configuration changes with your VCS, you must integrate your version control system (VCS) provider with HCP Terraform. If your GitHub organization is not listed in this step, follow the steps in the Set up the GitHub.com OAuth VCS provider documentation to learn how to configure the integration.
On the Choose a repository step, select the learn-terraform-stacks-migrated repository you created earlier in this tutorial.
On the Configure settings step, leave the default values, and click the Update Stack button to connect your Stack to your new repository.
Create a new deployment
You have migrated your workspace to a Stack, and configured a new VCS repository
to manage your Stack's configuration. Now that you have moved your stack's
configuration to your VCS, you no longer need the configuration in the
modularized_config/_stacks_generated directory in the original repository.
Now update your Stack to create a new deployment. In the
learn-terraform-stacks-migrated VCS repository's directory, edit
deployment.tfdeploy.hcl to add the following deployment block.
learn-terraform-stacks-migrated/deployment.tfdeploy.hcl
deployment "learn-terraform-stacks-migrate-test" {
inputs = {
access_key = store.varset.credentials.AWS_ACCESS_KEY_ID
secret_key = store.varset.credentials.AWS_SECRET_ACCESS_KEY
session_token = store.varset.credentials.AWS_SESSION_TOKEN
aws_region = "us-west-2"
vpc_name = "learn-stacks-vpc-test"
vpc_cidr = "10.0.0.0/16"
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24"]
}
}
This block configures a new deployment of your Stack in a different region. HCP Terraform will provision all of the components for your Stack in each deployment.
Add this change to your repository.
$ git add deployment.tfdeploy.hcl
Commit the change.
$ git commit -m "Add test deployment"
[main 104cdde] Add test deployment
1 file changed, 13 insertions(+)
Push the change to GitHub.
$ git push
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 8 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 351 bytes | 351.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0), pack-reused 0 (from 0)
remote: Resolving deltas: 100% (2/2), completed with 2 local objects.
To github.com:YOU/learn-terraform-stacks-migrated.git
4e5fc87..104cdde main -> main
Return to your Stack's Configurations page in HCP Terraform. After HCP
Terraform loads your new configuration, it will prepare the two deployments.
Your original deployment succeeds with no changes, and the
learn-terraform-stacks-migrate-test deployment creates a plan to add the
resources for your new test environment. Select the
learn-terraform-stacks-migrate-test deployment, and once the plan is complete,
click the Approve plan button to approve the plan and create your resources.
Now that you have migrated your infrastructure to a Stack, you can continue to manage your resources with the Stacks VCS-driven workflow.
Clean up your infrastructure
Remove the resources that you created in this tutorial. In your new
learn-terraform-stacks-migrated repository directory, edit
deployment.tfdeploy.hcl to add the destroy argument to both deployments.
learn-terraform-stacks-migrated/deployment.tfdeploy.hcl
deployment "learn-terraform-stacks-migrate" {
inputs = {
access_key = store.varset.credentials.AWS_ACCESS_KEY_ID
secret_key = store.varset.credentials.AWS_SECRET_ACCESS_KEY
session_token = store.varset.credentials.AWS_SESSION_TOKEN
aws_region = "us-east-1"
vpc_name = "learn-stacks-vpc"
vpc_cidr = "10.0.0.0/16"
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
}
destroy = true
}
deployment "learn-terraform-stacks-migrate-test" {
inputs = {
access_key = store.varset.credentials.AWS_ACCESS_KEY_ID
secret_key = store.varset.credentials.AWS_SECRET_ACCESS_KEY
session_token = store.varset.credentials.AWS_SESSION_TOKEN
aws_region = "us-west-2"
vpc_name = "learn-stacks-vpc"
vpc_cidr = "10.0.0.0/16"
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24"]
}
destroy = true
}
Add the change to git.
$ git add deployment.tfdeploy.hcl
Commit the change.
$ git commit -m "Destroy both deployments"
[main 82c1e17] Destroy both deployments
1 file changed, 4 insertions(+)
Push to GitHub.
$ git push
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 8 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 338 bytes | 338.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0), pack-reused 0 (from 0)
remote: Resolving deltas: 100% (2/2), completed with 2 local objects.
To github.com:YOU/learn-terraform-stacks-migrated.git
d9abb99..82c1e17 main -> main
Return to your Stack in HCP Terraform. After HCP Terraform loads your new configuration, it will create plans to destroy the two deployments. Once the plans are created, approve both of them. Wait for the apply step to complete for both deployments, and verify that HCP Terraform has destroyed all of the infrastructure you created for this tutorial.
After you have confirmed that HCP Terraform has removed all of your resources, navigate to your Stack's Settings > Destruction and Deletion page. Click the Delete button, and on the confirmation dialog click Delete again to delete the Stack.
Next, navigate to the learn-terraform-stacks-migrate project you created when
you migrated your Stack. Confirm that the project is now empty, and navigate to
the Settings page. Scroll to the bottom of the page and click the Delete
button. Confirm the project name, and click the Delete button to delete your
project.
Next steps
In this tutorial, you learned how to migrate infrastructure from a workspace to a Stack with HCP Terraform. You also learned how Stacks support deploying the same configuration across multiple environments. In addition to allowing you to define any number of environments in a single configuration, Terraform Stacks include powerful orchestration and workflow features.
- Read the Terraform Migrate documentation for more details on migration features and workflow.
- Read the Compare Stacks and workspaces documentation.
- Learn how to use Stacks deferred actions to manage Kubernetes workloads by following the Manage Kubernetes workloads with Stacks tutorial.