Upgrade and refactor Terraform modules
Author: Craig Sloggett
This guide provides a step-by-step workflow for upgrading Terraform module versions, applicable to both the HashiCorp Cloud Platform (HCP) and the Community Edition. When you upgrade your Terraform modules, you help maintain the standards set by the teams producing the modules, ensuring consistency and best practices. Regular upgrades prevent the accumulation of technical debt and reduce the risk of failed runs due to outdated, unsupported versions. Upgrading modules enables access to new service features and functionality while improving deployment performance through optimizations.
Upgrading a module can present challenges, such as maintaining infrastructure integrity when IaC changes behave unexpectedly. It is important to understand how to implement module upgrades reliably and how to address issues that may arise.
The example in this guide upgrades an AWS S3 bucket module from v2.15.0 to v3.0.0, where the provider version is mismatched and requires an upgrade. The new module version splits the configuration into multiple resources, so you then import the resources to match the new configuration. The upgrade concludes with no changes to the infrastructure, and the deployment is validated.
Target audience
This guide references the following roles:
- Platform operators: Someone responsible for monitoring and publishing new modules.
- Producers and consumers: Someone who uses the new modules in their environments.
Prerequisites
The term platform used throughout this document refers to a standardized set of tools, processes, and services to support application development and deployment across various environments.
To follow this guide, you will need the following:
- An understanding of the core Terraform workflow.
- Familiarity with git for version control in a pull/merge request workflow.
Background and best practices
This section contains best practices for upgrading your Terraform modules. You must understand these concepts and best practices to ensure reliable, efficient, and secure module management.
- Upgrade modules often, ideally as they are published.
- Isolate module upgrades into their own git branch and pull request.
- If you have multiple issues during an upgrade, refactor for one issue at a time.
- Continuously verify changes with speculative plans. Speculative plans are plan-only runs that test changes to a configuration during editing and code review.
- Only run terraform apply once there are no errors, warnings, or actions in a speculative plan.
- Ensure a speculative plan output only shows the module version change during an upgrade.
- Leverage change requests to inform owners of new module versions.
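As a minimal sketch of the version-pinning practice, using the S3 bucket module from this guide's example, an exact version pin makes each upgrade an explicit, reviewable change in its own pull request:

```hcl
module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "2.15.0" # exact pin; bump this value in a dedicated upgrade branch

  # (module arguments omitted)
}
```

With an exact pin, any difference in a speculative plan can be attributed to the version bump alone.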
When using either HCP Terraform or Terraform Enterprise, Terraform manages infrastructure resources as organizations, projects, and workspaces, with access managed through user accounts, teams, and permissions. Planning how to organize these objects is outlined in the configuration for first use section of the Terraform: Operating Guide for Adoption.
People and process
There is a slight difference in responsibilities between the platform team and those consuming the platform. The platform team is responsible for monitoring and publishing new versions, but it's up to the consumers to implement the changes.
The following highlights the roles and responsibilities of the platform team, and producers and consumers of the platform.
The Platform team is responsible for:
- Staying informed about new releases of the Terraform modules in use on the platform.
- Updating existing modules based on user feedback and Terraform provider changes.
- Testing new versions in a staging environment before deployment to production.
- Ensuring all relevant documentation is updated to reflect changes in the new versions.
- Communicating upcoming upgrades and potential impacts to all stakeholders.
- Providing training and support to ensure the smooth adoption of new versions.
Producers and consumers of the platform are responsible for:
- Staying informed about new versions available for the Terraform modules that deploy their application infrastructure.
- Testing their configurations with the new module versions in a staging environment.
- Providing feedback to the platform team about any issues encountered during testing.
- Updating their documentation and workflows to align with the new versions.
- Attending training sessions and actively engaging with support resources the platform team provides.
To learn more details about operating people and processes, visit the Terraform: Operating Guide for Adoption.
Validated architecture
When upgrading a Terraform module in your code base, you may have to make changes to your configuration, depending on the output of a speculative plan. The first validated architecture diagram shows the overall process to upgrade a Terraform module, and the second details the refactoring workflow.
The following is an overview of the module upgrade process:
In the Terraform module upgrade process, you start by monitoring for a new module version. Once a new version is available, update the Terraform configuration to use it. Before applying any changes, verify that there are no unintended modifications by reviewing the speculative plan output. If necessary, refactor the Terraform configuration to ensure compatibility with the updated module. Finally, after upgrading, verify the deployment to confirm that everything functions as expected.
If you need to refactor your module, you can use the following workflow:
Checklist
- Terraform has been installed and configured on your machine with the ability to run a speculative plan against the code base being upgraded.
- Push access to a git repository containing Terraform configuration files.
- A GitHub account to receive notifications for new module versions.
- AWS credentials for an IAM role with access to deploy an S3 bucket.
Deploy stable configuration
To begin, the platform team will deploy the initial, stable Terraform configuration. In this example, you will use the v2.15.0 AWS S3 Terraform module to create an S3 bucket.
terraform {
  required_version = "~> 1.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.69.0"
    }
  }
}

provider "aws" {
  region = "ca-central-1"
}

module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "2.15.0"

  bucket               = "testing-module-upgrades-bucket-00001"
  attach_public_policy = false

  server_side_encryption_configuration = {
    rule = {
      apply_server_side_encryption_by_default = {
        sse_algorithm = "aws:kms"
      }
    }
  }
}
Authenticate the AWS provider. Refer to the Terraform Registry for more information.
Initialize your Terraform configuration.
$ terraform init
Initializing the backend...
Initializing modules...
Downloading registry.terraform.io/terraform-aws-modules/s3-bucket/aws 2.15.0 for s3_bucket...
- s3_bucket in .terraform/modules/s3_bucket
Initializing provider plugins...
- Finding hashicorp/aws versions matching "3.69.0, ~> 3.69"...
- Installing hashicorp/aws v3.69.0...
- Installed hashicorp/aws v3.69.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Then, apply your configuration to create the S3 bucket.
$ terraform apply

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.s3_bucket.aws_s3_bucket.this[0] will be created
  + resource "aws_s3_bucket" "this" {
      + acceleration_status         = (known after apply)
      + acl                         = "private"
      + arn                         = (known after apply)
      + bucket                      = "testing-module-upgrades-bucket-00001"
      + bucket_domain_name          = (known after apply)
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = false
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + region                      = (known after apply)
      + request_payer               = (known after apply)
      + tags_all                    = (known after apply)
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)

      + server_side_encryption_configuration {
          + rule {
              + apply_server_side_encryption_by_default {
                  + sse_algorithm = "aws:kms"
                }
            }
        }

      + versioning (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.s3_bucket.aws_s3_bucket.this[0]: Creating...
module.s3_bucket.aws_s3_bucket.this[0]: Creation complete after 2s [id=testing-module-upgrades-bucket-00001]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Monitor for a new module version
The platform team will monitor the Terraform module for any updates. We recommend using an automated tool like Renovate or GitHub's Dependabot. These tools will automatically generate a pull request against your project to upgrade the version of dependencies in use (like a Terraform module).
Alternatively, developers can subscribe to module release notifications by selecting Releases under the Watch menu of the upstream GitHub repository. You will now receive an email and a notification in your GitHub inbox whenever the module producer publishes a new version.
As a module producer, you cannot guarantee that consumers are watching for new versions, but you might want them to upgrade as soon as possible. To facilitate this, you can deprecate a module version to add warnings to the registry page and the run output of any plans using the deprecated version.
For example, the run output of a deprecated module would display the following warning to the consumer:
Warning: Deprecated modules found, consider installing an updated version. The following are affected:

Version X.X.X of <module-name>
Update module configuration
When the platform team receives a notification about a new module version, they should upgrade and install the new module version in a timely manner. This involves modifying the version argument within the affected module block.
Create a new git branch to isolate your changes.
$ git branch upgrade-s3-bucket-module
$ git push -u origin upgrade-s3-bucket-module
$ git checkout upgrade-s3-bucket-module
Update the version argument for the S3 bucket module:
module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "3.0.0"

  bucket               = "testing-module-upgrades-bucket-00001"
  attach_public_policy = false
  # (remaining arguments unchanged)
}
Note
We highly recommend you only update the module version in this change. This ensures any changes in the plan are solely due to the module update, enabling a clear and accurate evaluation of the impact.
Install the new module version by re-initializing your Terraform configuration with the -upgrade flag:
$ terraform init -upgrade
Initializing the backend...
Upgrading modules...
Downloading registry.terraform.io/terraform-aws-modules/s3-bucket/aws 3.0.0 for s3_bucket...
- s3_bucket in .terraform/modules/s3_bucket
Initializing provider plugins...
- Finding hashicorp/aws versions matching "3.69.0, >= 3.75.0"...
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/aws: no available releases match the given constraints 3.69.0, >= 3.75.0
│
│ To see which modules are currently depending on hashicorp/aws and what versions are specified, run the following command:
│ terraform providers
╵
Identify and document errors, warnings, or actions
The platform team will identify any errors, warnings, or required actions due to upgrading the module version.
For this S3 example, you have upgraded the S3 module version without updating the module configuration. As a result, the configured provider version does not match the constraints set by the updated module.
The latest version of the S3 module requires the AWS provider version to be >= 3.75.0.
Refer to the module CHANGELOG file to see the following notice:
> ⚠ BREAKING CHANGES
> Update to support AWS provider v3.75 and newer (including v4.x) (#139)
> [!Note]
> This is one example of many types of changes that need to be analyzed individually as not all changes are the same. Always reference the changelog for any errors or warnings presented to you during an upgrade.
When presented with any errors or warnings during an upgrade, the platform team must document these changes and prioritize them for review and remediation. Creating a ticket in the backlog helps organize and track the necessary modifications, ensuring they are addressed promptly and systematically.
Document the errors or warnings presented. Include details such as the specific resources affected, the nature of the actions (e.g., additions, modifications, deletions), and any potential impacts on the infrastructure. In this case, it is especially important to document the requirement to upgrade the underlying provider version since it can now affect more than the module being upgraded.
This informs your team of the provider version upgrade and sets the expectation that the upgrade might require more effort than originally thought. Ensure the ticket includes all relevant information and context to facilitate the assigned engineer's understanding and resolution.
Tip
Popular Terraform module projects, including the AWS modules used in this guide, typically publish guides to help with the upgrade process for major releases. Check the upstream GitHub repository to find specific guidance for your module upgrade.
Refactor Terraform configuration
After you have identified and documented the remediation actions, the platform team should refactor the configuration to address them.
In the S3 example, first address the change to the provider version constraints.
Provider version mismatch
Update the version argument for the aws provider in the required_providers block.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.75.0"
    }
  }
}
Root modules should use a ~> constraint to set both a lower and upper bound on versions for each provider they depend on. However, using a specific version helps communicate the exact steps in this guide.
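As a sketch, the pessimistic constraint for this example's provider would look like the following:

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # "~> 3.75" allows 3.75.x and newer 3.x releases, but not 4.0
      version = "~> 3.75"
    }
  }
}
```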
Install the new module version, this time with the updated AWS provider.
$ terraform init -upgrade
Initializing the backend...
Upgrading modules...
Downloading registry.terraform.io/terraform-aws-modules/s3-bucket/aws 3.0.0 for s3_bucket...
- s3_bucket in .terraform/modules/s3_bucket
Initializing provider plugins...
- Finding hashicorp/aws versions matching ">= 3.75.0, 3.75.0"...
- Installing hashicorp/aws v3.75.0...
- Installed hashicorp/aws v3.75.0 (signed by HashiCorp)
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
You have successfully addressed the provider mismatch error. After updating the provider version in the Terraform configuration, run a speculative plan. This step lets you preview the potential changes without actually applying them, ensuring that the new provider version does not introduce unexpected modifications to the infrastructure.
The goal is to have no errors, warnings, or actions when upgrading to a newer module version.
If you have configured a VCS-driven workflow, as defined in the Terraform workflows section of the Terraform: Operating Guide for Adoption, you could open a pull request right now and push your changes to run a speculative plan.
This workflow is designed to guide you, as a single developer, through a provider upgrade by quickly iterating through the core Terraform workflow using the CLI to run speculative plans.
If you run terraform plan without the -out=FILE option, it creates a speculative plan, which only presents the proposed changes.
Run a speculative plan.
$ terraform plan

module.s3_bucket.data.aws_canonical_user_id.this: Reading...
module.s3_bucket.aws_s3_bucket.this[0]: Refreshing state... [id=testing-module-upgrades-bucket-00001]
module.s3_bucket.data.aws_canonical_user_id.this: Read complete after 1s [id=efcda01167ad832d1f7be6d0fc54f59a4549f08f80e253ae09dd33aa18934b7a]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create
  ~ update in-place

Terraform will perform the following actions:

  # module.s3_bucket.aws_s3_bucket.this[0] will be updated in-place
  ~ resource "aws_s3_bucket" "this" {
        id   = "testing-module-upgrades-bucket-00001"
        tags = {}
        # (12 unchanged attributes hidden)

      - server_side_encryption_configuration {
          - rule {
              - bucket_key_enabled = false -> null

              - apply_server_side_encryption_by_default {
                  - sse_algorithm = "aws:kms" -> null
                    # (1 unchanged attribute hidden)
                }
            }
        }

        # (1 unchanged block hidden)
    }

  # module.s3_bucket.aws_s3_bucket_server_side_encryption_configuration.this[0] will be created
  + resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
      + bucket = "testing-module-upgrades-bucket-00001"
      + id     = (known after apply)

      + rule {
          + apply_server_side_encryption_by_default {
              + sse_algorithm = "aws:kms"
                # (1 unchanged attribute hidden)
            }
        }
    }

Plan: 1 to add, 1 to change, 0 to destroy.
While the speculative plan has no errors or warnings, there are a couple of unexpected actions that you need to address:
- The aws_s3_bucket will be updated in-place, which removes the server-side encryption configuration.
- Terraform will create an aws_s3_bucket_server_side_encryption_configuration resource.
Based on the output from the speculative plan, the updated AWS provider uses a separate resource to configure the server-side encryption for the aws_s3_bucket. The module maintainers have accounted for this in the latest release and updated the module code accordingly.
However, it is unclear whether adding the aws_s3_bucket_server_side_encryption_configuration resource with terraform apply will succeed without error; the AWS API might throw an error when Terraform tries to update this resource. The deployed infrastructure already matches our updated configuration. To guarantee the best experience, we recommend importing the new resource to align the Terraform state with the infrastructure in AWS, avoiding potential conflicts.
Import resources
Each resource in the Terraform Registry includes a section in the documentation that explains how to import resources of that type. For example, the aws_s3_bucket_server_side_encryption_configuration resource documentation explains how to import using the terraform import command or an import block (if you are using Terraform v1.5.0 or later).
Note
We recommend using configuration-driven import with import blocks over the legacy terraform import command because it is predictable, can be automated, and lets you preview an import operation before modifying state.
In the S3 example, look at the speculative plan output from earlier to identify the exact resource you need to import.
  # module.s3_bucket.aws_s3_bucket_server_side_encryption_configuration.this[0] will be created
  + resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
      + bucket = "testing-module-upgrades-bucket-00001"
      + id     = (known after apply)

      + rule {
          + apply_server_side_encryption_by_default {
              + sse_algorithm = "aws:kms"
                # (1 unchanged attribute hidden)
            }
        }
    }
Confirm that you want to import the module.s3_bucket.aws_s3_bucket_server_side_encryption_configuration.this[0] resource, using the id argument from the aws_s3_bucket resource created by the same module.
Add an import block for the aws_s3_bucket_server_side_encryption_configuration resource to the configuration.
import {
  to = module.s3_bucket.aws_s3_bucket_server_side_encryption_configuration.this[0]
  id = "testing-module-upgrades-bucket-00001"
}
Now check the speculative plan output to see if you have addressed the aws_s3_bucket_server_side_encryption_configuration resource creation message:
$ terraform plan

module.s3_bucket.data.aws_canonical_user_id.this: Reading...
module.s3_bucket.aws_s3_bucket.this[0]: Refreshing state... [id=testing-module-upgrades-bucket-00001]
module.s3_bucket.data.aws_canonical_user_id.this: Read complete after 0s [id=efcda01167ad832d1f7be6d0fc54f59a4549f08f80e253ae09dd33aa18934b7a]
module.s3_bucket.aws_s3_bucket_server_side_encryption_configuration.this[0]: Preparing import... [id=testing-module-upgrades-bucket-00001]
module.s3_bucket.aws_s3_bucket_server_side_encryption_configuration.this[0]: Refreshing state... [id=testing-module-upgrades-bucket-00001]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # module.s3_bucket.aws_s3_bucket.this[0] will be updated in-place
  ~ resource "aws_s3_bucket" "this" {
        id   = "testing-module-upgrades-bucket-00001"
        tags = {}
        # (12 unchanged attributes hidden)

      - server_side_encryption_configuration {
          - rule {
              - bucket_key_enabled = false -> null

              - apply_server_side_encryption_by_default {
                  - sse_algorithm = "aws:kms" -> null
                    # (1 unchanged attribute hidden)
                }
            }
        }

        # (1 unchanged block hidden)
    }

  # module.s3_bucket.aws_s3_bucket_server_side_encryption_configuration.this[0] will be imported
    resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
        bucket                = "testing-module-upgrades-bucket-00001"
        expected_bucket_owner = null
        id                    = "testing-module-upgrades-bucket-00001"

        rule {
            bucket_key_enabled = false

            apply_server_side_encryption_by_default {
                kms_master_key_id = null
                sse_algorithm     = "aws:kms"
            }
        }
    }

Plan: 1 to import, 0 to add, 1 to change, 0 to destroy.
Note
Imports are the only acceptable change to see in a speculative plan when upgrading to a newer module version.
You have successfully refactored the Terraform configuration to address the presence of an add action in the speculative plan for the aws_s3_bucket_server_side_encryption_configuration resource. However, there is still an unexpected change action (update in-place) for the original S3 bucket after removing the server_side_encryption_configuration block.
Discover a bug
There are common approaches to address unexpected changes when upgrading your Terraform module:
- Import the S3 bucket just like the other resources to see if that works (it does not).
- Run terraform apply "a few times" (this does not fix the issue).
- Give up?
We recommend searching for similar issues posted to the provider's GitHub repository.
There is a bug in the AWS provider version that you just upgraded to.
In this case, you could pause this effort and wait for the issues to be resolved, or you can choose to upgrade the provider to an unaffected version.
From the GitHub issue:
> Practitioners using Terraform AWS Provider v4.x are not affected as the server_side_encryption_configuration argument is already Computed (as it has been deprecated).
If you find that there are no relevant issues describing your problem, we encourage you to report an issue to help the community.
(Optional) Upgrade the Terraform provider (again)
Upgrade the AWS provider to v4.0.0, since this provider version addresses the bug.
Update the version argument for the aws provider in the required_providers block:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.0.0"
    }
  }
}
Re-initialize your Terraform configuration to upgrade to the latest version of the AWS provider.
$ terraform init -upgrade
Initializing the backend...
Upgrading modules...
Downloading registry.terraform.io/terraform-aws-modules/s3-bucket/aws 3.0.0 for s3_bucket...
- s3_bucket in .terraform/modules/s3_bucket
Initializing provider plugins...
- Finding hashicorp/aws versions matching ">= 3.75.0, 4.0.0"...
- Installing hashicorp/aws v4.0.0...
- Installed hashicorp/aws v4.0.0 (signed by HashiCorp)
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
After updating the provider version in the Terraform configuration, run a speculative plan. A major provider version bump could introduce issues outside of the scope of the module being upgraded. Running another plan ensures the new provider version does not introduce unexpected modifications to the infrastructure.
Verify that the speculative plan no longer contains errors, warnings, or unexpected actions:
$ terraform plan

module.s3_bucket.data.aws_canonical_user_id.this: Reading...
module.s3_bucket.aws_s3_bucket.this[0]: Refreshing state... [id=testing-module-upgrades-bucket-00001]
module.s3_bucket.data.aws_canonical_user_id.this: Read complete after 0s [id=efcda01167ad832d1f7be6d0fc54f59a4549f08f80e253ae09dd33aa18934b7a]
module.s3_bucket.aws_s3_bucket_server_side_encryption_configuration.this[0]: Preparing import... [id=testing-module-upgrades-bucket-00001]
module.s3_bucket.aws_s3_bucket_server_side_encryption_configuration.this[0]: Refreshing state... [id=testing-module-upgrades-bucket-00001]

Terraform will perform the following actions:

  # module.s3_bucket.aws_s3_bucket_server_side_encryption_configuration.this[0] will be imported
    resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
        bucket                = "testing-module-upgrades-bucket-00001"
        expected_bucket_owner = null
        id                    = "testing-module-upgrades-bucket-00001"

        rule {
            bucket_key_enabled = false

            apply_server_side_encryption_by_default {
                kms_master_key_id = null
                sse_algorithm     = "aws:kms"
            }
        }
    }

Plan: 1 to import, 0 to add, 0 to change, 0 to destroy.
There are no longer any errors, warnings, or infrastructure changes after upgrading to a newer module version!
In this example, you do not run into any issues outside of the module being upgraded. However, if there are further unexpected errors, warnings, or changes after upgrading the provider, continue to work through them systematically until there are 0 to add, 0 to change, and 0 to destroy.
Tip
Keep all of the refactoring work related to the module upgrade in a single pull request to make it clear why these changes are being made. Separating the actions taken into individual commits also lets you easily document the changes in the CHANGELOG.
Refactored Terraform configuration
Here is the final, refactored result of the Terraform configuration:
terraform {
  required_version = "~> 1.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.0.0"
    }
  }
}

provider "aws" {
  region = "ca-central-1"
}

import {
  to = module.s3_bucket.aws_s3_bucket_server_side_encryption_configuration.this[0]
  id = "testing-module-upgrades-bucket-00001"
}

module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "3.0.0"

  bucket               = "testing-module-upgrades-bucket-00001"
  attach_public_policy = false

  server_side_encryption_configuration = {
    rule = {
      apply_server_side_encryption_by_default = {
        sse_algorithm = "aws:kms"
      }
    }
  }
}
After deploying the changes, we recommend you remove import blocks from your configuration. Alternatively, you can leave them as a record of the resource's origin.
Review changes
After refactoring the Terraform code to address deprecated attributes and to ensure there are no actions in the speculative plan, submit the updated code for team review. This ensures that peers review the changes for accuracy and adherence to best practices, and surfaces any potential issues before deployment.
Push the refactored code to the upgrade-s3-bucket-module branch you created earlier and create a pull request for the team to review in your VCS provider (for example, GitHub).
Provide a clear and concise description of the changes made, including the reason for the refactor and any relevant context. Mention that you’ve ensured there are no errors, warnings, or changes in the speculative plan.
For example:
This PR upgrades the AWS S3 bucket module version from v2.15.0 to v3.0.0. As part of the upgrade, the AWS provider major version was also upgraded to meet the updated module requirements.
Normally, I wouldn't introduce this much change when upgrading a module, but I've found a [bug](https://github.com/hashicorp/terraform-provider-aws/issues/28701) in this provider version that required bumping major versions.
Please review for accuracy and adherence to best practices.
Respond to any comments or feedback provided by the reviewers. Make necessary adjustments to the code based on the review and update the pull request accordingly.
Merge changes
After the refactored Terraform code has been reviewed and approved by the team, merge the changes into the main branch. This step finalizes the code changes, making them part of the main codebase and preparing them for deployment.
Apply Terraform configuration
After merging the refactored code into the main branch, apply the Terraform configuration to deploy the changes using the upgraded module version. This ensures that the infrastructure state is updated and aligned with the new module version.
Depending on how your infrastructure is managed, your changes might be deployed automatically as part of merging your pull request into the main branch. Alternatively, there might be some other process that needs to be approved before deployment.
Refer to your team for how this step should ideally be executed.
Verify deployment
The final step is to verify a successful deployment. This involves checking that all resources are correctly provisioned and configured according to the updated Terraform state. Ensure that there are no errors or discrepancies, and confirm that the infrastructure behaves as expected with the new module version.
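One lightweight way to verify is to expose module outputs you can inspect after the apply. The sketch below assumes the module publishes an s3_bucket_arn output (confirm the name against the module documentation):

```hcl
output "bucket_arn" {
  description = "ARN of the upgraded bucket, for post-deployment verification"
  value       = module.s3_bucket.s3_bucket_arn
}
```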
This would be a good time to write some Terraform tests to capture any unexpected changes that happened during the deployment.
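As a minimal sketch of such a test (assuming Terraform v1.6+ and the module's s3_bucket_id output; adjust the names to your configuration), a file like tests/upgrade.tftest.hcl could assert that the bucket name is unchanged:

```hcl
# tests/upgrade.tftest.hcl
run "bucket_unchanged_after_upgrade" {
  # "plan" evaluates the assertion without creating infrastructure
  command = plan

  assert {
    condition     = module.s3_bucket.s3_bucket_id == "testing-module-upgrades-bucket-00001"
    error_message = "The bucket name changed unexpectedly during the module upgrade."
  }
}
```

Running terraform test executes every run block and reports any failed assertions.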
Conclusion
By following this guide, you have successfully navigated the process of upgrading the AWS S3 bucket module, including:
- Monitor for new module releases.
- Update the module configuration block to use the new version.
- Run speculative plans to identify potential issues.
The example workflow showed common issues that arise and how to address them:
- Mismatched provider versions caused by updated module constraints.
- Aligning your configuration with the latest provider changes.
- Importing resources to maintain state consistency.
- Finding provider bugs by searching the repository issues.
You should now understand the importance of peer reviews followed by deployment verification of the upgraded configuration, ensuring a smooth transition to the new module version.