Migrate to HCP Terraform Operator for Kubernetes v2
Warning: Version 1 of the HCP Terraform Operator for Kubernetes is deprecated and no longer maintained. If you are installing the operator for the first time, refer to Set up the HCP Terraform Operator for Kubernetes for guidance.
To upgrade the HCP Terraform Operator for Kubernetes from version 1 to version 2, you must complete a one-time migration process. This process upgrades the operator to the newest version and migrates your custom resources.
Prerequisites
The migration process requires the following tools to be installed locally: the Helm CLI, kubectl, curl, and jq.
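As a quick check before you begin, you can confirm that each tool is available locally. The exact versions are not critical as long as they are reasonably recent.
$ helm version
$ kubectl version --client
$ curl --version
$ jq --version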
Prepare for the upgrade
Configure an environment variable named RELEASE_NAMESPACE
with the value of the namespace that the Helm chart is installed in.
$ export RELEASE_NAMESPACE=<NAMESPACE>
Next, create an environment variable named RELEASE_NAME
with the name that you gave your Helm chart installation.
$ export RELEASE_NAME=<INSTALLATION_NAME>
Before you migrate to HCP Terraform Operator for Kubernetes v2, you must first update v1 of the operator to the latest version, including the custom resource definitions.
$ helm upgrade --namespace ${RELEASE_NAMESPACE} ${RELEASE_NAME} hashicorp/terraform
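You can confirm that the release now runs the latest v1 chart with helm list; the chart and app versions displayed depend on the most recent v1 release available at the time you upgrade.
$ helm list --namespace ${RELEASE_NAMESPACE} --filter ${RELEASE_NAME}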
Next, back up the workspace resources.
$ kubectl get workspace --all-namespaces -o yaml > backup_tfc_operator_v1.yaml
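As a sanity check, you can count the exported resources in the backup file and compare the result against the number of workspaces you expect, for example:
$ grep -c "kind: Workspace" backup_tfc_operator_v1.yaml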
Manifest schema migration
Version 2 of the HCP Terraform Operator for Kubernetes renames and moves many existing fields. When you migrate, you must update your specification to match version 2's field names.
Workspace controller
The table below lists the field mapping of the Workspace
controller between v1 and v2 of the operator.
Version 1 | Version 2 | Changes between versions |
---|---|---|
apiVersion: app.terraform.io/v1alpha1 | apiVersion: app.terraform.io/v1alpha2 | The apiVersion is now v1alpha2. |
kind: Workspace | kind: Workspace | None. |
metadata | metadata | None. |
spec.organization | spec.organization | None. |
spec.secretsMountPath | spec.token.secretKeyRef | In v2, the operator keeps the HCP Terraform access token in a Kubernetes Secret. |
spec.vcs | spec.versionControl | Renamed the vcs field to versionControl. |
spec.vcs.token_id | spec.versionControl.oAuthTokenID | Renamed the token_id field to oAuthTokenID. |
spec.vcs.repo_identifier | spec.versionControl.repository | Renamed the repo_identifier field to repository. |
spec.vcs.branch | spec.versionControl.branch | None. |
spec.vcs.ingress_submodules | spec.workingDirectory | Moved. |
spec.variables.[*] | spec.environmentVariables.[*] OR spec.terraformVariables.[*] | We split variables into two possible places. In v1's CRD, if spec.variables.environmentVariable was true, migrate those variables to spec.environmentVariables. If false, migrate those variables to spec.terraformVariables. |
spec.variables.[*]key | spec.environmentVariables.[*]name OR spec.terraformVariables.[*]name | Renamed the key field to name. Learn more. |
spec.variables.[*]value | spec.environmentVariables.[*]value OR spec.terraformVariables.[*]value | Learn more. |
spec.variables.[*]valueFrom | spec.environmentVariables.[*]valueFrom OR spec.terraformVariables.[*]valueFrom | Learn more. |
spec.variables.[*]hcl | spec.environmentVariables.[*]hcl OR spec.terraformVariables.[*]hcl | Learn more. |
spec.variables.[*]sensitive | spec.environmentVariables.[*]sensitive OR spec.terraformVariables.[*]sensitive | Learn more. |
spec.variables.environmentVariable | N/A | Removed. Variables are split between spec.environmentVariables and spec.terraformVariables. |
spec.runTriggers.[*] | spec.runTriggers.[*] | None. |
spec.runTriggers.[*].sourceableName | spec.runTriggers.[*].name | The sourceableName field is now name. |
spec.sshKeyID | spec.sshKey.id | Moved the sshKeyID to spec.sshKey.id. |
spec.outputs | N/A | Removed. |
spec.terraformVersion | spec.terraformVersion | None. |
spec.notifications.[*] | spec.notifications.[*] | None. |
spec.notifications.[*].type | spec.notifications.[*].type | None. |
spec.notifications.[*].enabled | spec.notifications.[*].enabled | None. |
spec.notifications.[*].name | spec.notifications.[*].name | None. |
spec.notifications.[*].url | spec.notifications.[*].url | None. |
spec.notifications.[*].token | spec.notifications.[*].token | None. |
spec.notifications.[*].triggers.[*] | spec.notifications.[*].triggers.[*] | None. |
spec.notifications.[*].recipients.[*] | spec.notifications.[*].emailAddresses.[*] | Renamed the recipients field to emailAddresses. |
spec.notifications.[*].users.[*] | spec.notifications.[*].emailUsers.[*] | Renamed the users field to emailUsers. |
spec.omitNamespacePrefix | N/A | Removed. In v1, spec.omitNamespacePrefix is a boolean field that affects how the operator generates a workspace name. In v2, you must explicitly set workspace names in spec.name. |
spec.agentPoolID | spec.agentPool.id | Moved the agentPoolID field to spec.agentPool.id. |
spec.agentPoolName | spec.agentPool.name | Moved the agentPoolName field to spec.agentPool.name. |
spec.module | N/A | Removed. You now configure modules with a separate Module CRD. Learn more. |
Below is an example of configuring a variable in v1 of the operator.
v1.yaml
apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: migration
spec:
  variables:
    - key: username
      value: "user"
      hcl: true
      sensitive: false
      environmentVariable: false
    - key: SECRET_KEY
      value: "s3cr3t"
      hcl: false
      sensitive: false
      environmentVariable: true
In v2 of the operator, you must configure Terraform variables in spec.terraformVariables
and environment variables in spec.environmentVariables.
v2.yaml
apiVersion: app.terraform.io/v1alpha2
kind: Workspace
metadata:
  name: migration
spec:
  terraformVariables:
    - name: username
      value: "user"
      hcl: true
      sensitive: false
  environmentVariables:
    - name: SECRET_KEY
      value: "s3cr3t"
      hcl: false
      sensitive: false
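The examples above only cover the variable mapping. A v2 Workspace resource also reads the HCP Terraform access token from a Kubernetes Secret through spec.token.secretKeyRef instead of spec.secretsMountPath, so each migrated manifest needs a reference to that secret. The fragment below is a sketch of that addition; <ORG-NAME>, <SECRET-NAME>, and <KEY-NAME> are placeholders for your organization and for the secret you create later in this guide.
spec:
  organization: <ORG-NAME>
  token:
    secretKeyRef:
      name: <SECRET-NAME>
      key: <KEY-NAME>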
Module controller
HCP Terraform Operator for Kubernetes v2 configures modules in a new Module
controller separate from the Workspace
controller. Below is a template of a custom resource manifest:
apiVersion: app.terraform.io/v1alpha2
kind: Module
metadata:
  name: <NAME>
spec:
  organization: <ORG-NAME>
  token:
    secretKeyRef:
      name: <SECRET-NAME>
      key: <KEY-NAME>
  name: operator
The table below describes the mapping between the Workspace
controller from v1 and the Module
controller in v2 of the operator.
Version 1 (Workspace CRD) | Version 2 (Module CRD) | Notes |
---|---|---|
spec.module | N/A | In v2 of the operator, a Module is a separate controller with its own CRD. |
N/A | spec.name: operator | In v1 of the operator, the name of the generated module is hardcoded to operator. In v2, the default name of the generated module is this, but you can rename it. |
spec.module.source | spec.module.source | This supports all Terraform module sources. |
spec.module.version | spec.module.version | Refer to module sources for versioning information for each module source. |
spec.variables.[*] | spec.variables.[*].name | You should include variable names in the module. This is a reference to variables in the workspace that is executing the module. |
spec.outputs.[*].key | spec.outputs.[*].name | You should include output names in the module. This is a reference to the output variables produced by the module. |
status.workspaceID OR metadata.namespace-metadata.name | spec.workspace.id OR spec.workspace.name | The workspace where the module is executed. The workspace must be in the same organization. |
Below is an example migration of a Module
between v1 and v2 of the operator:
v1.yaml
apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: migration
spec:
  module:
    source: app.terraform.io/org-name/module-name/provider
    version: 0.0.42
  variables:
    - key: username
      value: "user"
      hcl: true
      sensitive: false
      environmentVariable: false
    - key: SECRET_KEY
      value: "s3cr3t"
      hcl: false
      sensitive: false
      environmentVariable: true
In v2 of the operator, separate controllers manage workspaces and modules.
workspace-v2.yaml
apiVersion: app.terraform.io/v1alpha2
kind: Workspace
metadata:
  name: migration
spec:
  terraformVariables:
    - name: username
      value: "user"
      hcl: true
      sensitive: false
  environmentVariables:
    - name: SECRET_KEY
      value: "s3cr3t"
      hcl: false
      sensitive: false
module-v2.yaml
apiVersion: app.terraform.io/v1alpha2
kind: Module
metadata:
  name: migration
spec:
  name: operator
  module:
    source: app.terraform.io/org-name/module-name/provider
    version: 0.0.42
  workspace:
    name: migration
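If your v1 Workspace also passed variables to the module or consumed its outputs, map them under spec in module-v2.yaml with spec.variables and spec.outputs, which reference workspace variables and module outputs by name only, as the table above describes. The following sketch continues the example; <OUTPUT-NAME> is a placeholder for an output that your module actually produces.
  variables:
    - name: username
  outputs:
    - name: <OUTPUT-NAME>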
Upgrade the operator
Download Workspace CRD patch A:
$ curl -sO https://raw.githubusercontent.com/hashicorp/hcp-terraform-operator/main/docs/migration/crds/workspaces_patch_a.yaml
View the changes that patch A applies to the workspace CRD.
$ kubectl diff --filename workspaces_patch_a.yaml
Patch the workspace CRD with patch A. This patch adds app.terraform.io/v1alpha2
support, but excludes .status.runStatus
because it has a different format in app.terraform.io/v1alpha1
and causes JSON unmarshalling issues.
Upgrade warning: Once you apply a patch, Kubernetes converts existing app.terraform.io/v1alpha1
custom resources to app.terraform.io/v1alpha2
according to the updated schema, meaning that v1 of the operator can no longer serve custom resources. Before patching, update your existing custom resources to satisfy the v2 schema requirements. Learn more.
$ kubectl patch crd workspaces.app.terraform.io --patch-file workspaces_patch_a.yaml
Install the Operator v2 Helm chart with the helm install
command. Be sure to set the operator.watchedNamespaces
value to the list of namespaces that your Workspace resources are deployed to. If you do not set this value, the operator watches all namespaces in the Kubernetes cluster.
$ helm install \
  ${RELEASE_NAME} hashicorp/hcp-terraform-operator \
  --version 2.4.0 \
  --namespace ${RELEASE_NAMESPACE} \
  --set 'operator.watchedNamespaces={white,blue,red}' \
  --set controllers.agentPool.workers=5 \
  --set controllers.module.workers=5 \
  --set controllers.workspace.workers=5
Next, create a Kubernetes secret to store the HCP Terraform API token, following the Usage Guide. You can copy the API token from the Kubernetes secret that you created for v1 of the operator, which is named terraformrc
by default. Use the kubectl get secret
command to retrieve the API token.
$ kubectl --namespace ${RELEASE_NAMESPACE} get secret terraformrc -o json | jq '.data.credentials' | tr -d '"' | base64 -d
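Then create the new secret from that token value, as the Usage Guide describes. The secret name tfc-operator and key token below are examples only; use the same name and key that your updated custom resources reference in spec.token.secretKeyRef.
$ kubectl --namespace ${RELEASE_NAMESPACE} create secret generic tfc-operator --from-literal=token=<HCP_TERRAFORM_API_TOKEN>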
Update existing custom resources according to the schema migration guidance and apply your changes.
$ kubectl apply --filename <UPDATED_V2_WORKSPACE_MANIFEST.yaml>
Download Workspace CRD patch B.
$ curl -sO https://raw.githubusercontent.com/hashicorp/hcp-terraform-operator/main/docs/migration/crds/workspaces_patch_b.yaml
View the changes that patch B applies to the workspace CRD.
$ kubectl diff --filename workspaces_patch_b.yaml
Patch the workspace CRD with patch B. This patch adds .status.runStatus
support, which was excluded in patch A.
$ kubectl patch crd workspaces.app.terraform.io --patch-file workspaces_patch_b.yaml
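Optionally, confirm that the CRD now defines both API versions; the following command should list v1alpha1 and v1alpha2.
$ kubectl get crd workspaces.app.terraform.io -o jsonpath='{.spec.versions[*].name}'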
The v2 operator will fail to proceed if a custom resource has the v1 finalizer finalizer.workspace.app.terraform.io. If you encounter an error, check the logs for more information.
$ kubectl logs -f <POD_NAME>
Specifically, look for an error message such as the following.
ERROR Migration {"workspace": "default/<WORKSPACE_NAME>", "msg": "spec contains old finalizer finalizer.workspace.app.terraform.io"}
The finalizer
exists to provide greater control over the migration process. Verify the custom resource, and when you’re ready to migrate it, use the kubectl patch
command to update the finalizer
value.
$ kubectl patch workspace migration --type=merge --patch '{"metadata": {"finalizers": ["workspace.app.terraform.io/finalizer"]}}'
Review the operator logs once more and verify there are no error messages.
$ kubectl logs -f <POD_NAME>
The operator reconciles resources during the next sync period. This interval is set by the operator.syncPeriod
configuration of the operator and defaults to five minutes.
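If you prefer not to wait, you can shorten this interval to trigger reconciliation sooner. The command below is a sketch that assumes your chart version exposes operator.syncPeriod as a Helm value accepting a Go duration string.
$ helm upgrade ${RELEASE_NAME} hashicorp/hcp-terraform-operator --namespace ${RELEASE_NAMESPACE} --reuse-values --set operator.syncPeriod=1m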
If you have any migrated Module
custom resources, apply them now.
$ kubectl apply --filename <MIGRATED_V2_MODULE_MANIFEST.yaml>
In v2 of the operator, the applyMethod
is set to manual
by default. In this case, a new run in a managed workspace requires manual approval. Run the following command for each Workspace
resource to change it to auto
approval.
$ kubectl patch workspace <WORKSPACE_NAME> --type=merge --patch '{"spec": {"applyMethod": "auto"}}'
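To verify the change, read the field back; the command prints auto once the patch has been applied.
$ kubectl get workspace <WORKSPACE_NAME> -o jsonpath='{.spec.applyMethod}'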