Migrate to HCP Terraform Operator for Kubernetes v2
Warning: Version 1 of the HCP Terraform Operator for Kubernetes is deprecated and no longer maintained. If you are installing the operator for the first time, refer to Set up the HCP Terraform Operator for Kubernetes for guidance.
To upgrade the HCP Terraform Operator for Kubernetes from version 1 to version 2, you must complete a one-time process that upgrades the operator to the newest version and migrates your custom resources.
Prerequisites
The migration process requires the following tools, which the commands in this guide use, to be installed locally:
- Helm CLI
- kubectl
- curl
- jq
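You can optionally confirm that these tools are available on your PATH before you begin:
$ kubectl version --client
$ helm version
$ curl --version
$ jq --version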
Prepare for the upgrade
Configure an environment variable named RELEASE_NAMESPACE with the value of the namespace that the Helm chart is installed in.
$ export RELEASE_NAMESPACE=<NAMESPACE>
Next, create an environment variable named RELEASE_NAME with the value of the name that you gave your installation for the Helm chart.
$ export RELEASE_NAME=<INSTALLATION_NAME>
Before you migrate to HCP Terraform Operator for Kubernetes v2, you must first update v1 of the operator to the latest version, including the custom resource definitions.
$ helm upgrade --namespace ${RELEASE_NAMESPACE} ${RELEASE_NAME} hashicorp/terraform
Next, back up the Workspace resources.
$ kubectl get workspace --all-namespaces -o yaml > backup_tfc_operator_v1.yaml
Manifest schema migration
Version 2 of the HCP Terraform Operator for Kubernetes renames and moves many existing fields. When you migrate, you must update your specification to match version 2's field names.
Workspace controller
The table below lists the field mapping of the Workspace controller between v1 and v2 of the operator.
| Version 1 | Version 2 | Changes between versions | 
|---|---|---|
| apiVersion: app.terraform.io/v1alpha1 | apiVersion: app.terraform.io/v1alpha2 | The apiVersion is now v1alpha2. |
| kind: Workspace | kind: Workspace | None. | 
| metadata | metadata | None. | 
| spec.organization | spec.organization | None. | 
| spec.secretsMountPath | spec.token.secretKeyRef | In v2, the operator keeps the HCP Terraform access token in a Kubernetes Secret and reads it through spec.token.secretKeyRef. Refer to the example after this table. |
| spec.vcs | spec.versionControl | Renamed the vcs field to versionControl. |
| spec.vcs.token_id | spec.versionControl.oAuthTokenID | Renamed the token_id field to oAuthTokenID. |
| spec.vcs.repo_identifier | spec.versionControl.repository | Renamed the repo_identifier field to repository. |
| spec.vcs.branch | spec.versionControl.branch | None. |
| spec.vcs.ingress_submodules | spec.workingDirectory | Moved. |
| spec.variables.[*] | spec.environmentVariables.[*] OR spec.terraformVariables.[*] | We split variables into two possible places. In v1's CRD, if spec.variables.environmentVariable was true, migrate those variables to spec.environmentVariables. If false, migrate those variables to spec.terraformVariables. |
| spec.variables.[*].key | spec.environmentVariables.[*].name OR spec.terraformVariables.[*].name | Renamed the key field to name. Learn more. |
| spec.variables.[*].value | spec.environmentVariables.[*].value OR spec.terraformVariables.[*].value | Learn more. |
| spec.variables.[*].valueFrom | spec.environmentVariables.[*].valueFrom OR spec.terraformVariables.[*].valueFrom | Learn more. |
| spec.variables.[*].hcl | spec.environmentVariables.[*].hcl OR spec.terraformVariables.[*].hcl | Learn more. |
| spec.variables.[*].sensitive | spec.environmentVariables.[*].sensitive OR spec.terraformVariables.[*].sensitive | Learn more. |
| spec.variables.environmentVariable | N/A | Removed. Variables are split between spec.environmentVariables and spec.terraformVariables. |
| spec.runTriggers.[*] | spec.runTriggers.[*] | None. | 
| spec.runTriggers.[*].sourceableName | spec.runTriggers.[*].name | The sourceableName field is now name. |
| spec.sshKeyID | spec.sshKey.id | Moved the sshKeyID field to spec.sshKey.id. |
| spec.outputs | N/A | Removed. | 
| spec.terraformVersion | spec.terraformVersion | None. | 
| spec.notifications.[*] | spec.notifications.[*] | None. | 
| spec.notifications.[*].type | spec.notifications.[*].type | None. | 
| spec.notifications.[*].enabled | spec.notifications.[*].enabled | None. | 
| spec.notifications.[*].name | spec.notifications.[*].name | None. | 
| spec.notifications.[*].url | spec.notifications.[*].url | None. | 
| spec.notifications.[*].token | spec.notifications.[*].token | None. | 
| spec.notifications.[*].triggers.[*] | spec.notifications.[*].triggers.[*] | None. | 
| spec.notifications.[*].recipients.[*] | spec.notifications.[*].emailAddresses.[*] | Renamed the recipients field to emailAddresses. |
| spec.notifications.[*].users.[*] | spec.notifications.[*].emailUsers.[*] | Renamed the users field to emailUsers. |
| spec.omitNamespacePrefix | N/A | Removed. In v1, spec.omitNamespacePrefix is a boolean field that affects how the operator generates a workspace name. In v2, you must explicitly set workspace names in spec.name. |
| spec.agentPoolID | spec.agentPool.id | Moved the agentPoolID field to spec.agentPool.id. |
| spec.agentPoolName | spec.agentPool.name | Moved the agentPoolName field to spec.agentPool.name. |
| spec.module | N/A | Removed. You now configure modules with a separate Module CRD. Learn more. |
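As the spec.secretsMountPath and spec.omitNamespacePrefix rows note, v2 reads the HCP Terraform access token from a Kubernetes Secret and requires an explicit workspace name. A minimal v2 Workspace manifest that reflects both changes could look like the following sketch; replace the placeholders with the values from your own cluster and organization.
apiVersion: app.terraform.io/v1alpha2
kind: Workspace
metadata:
  name: migration
spec:
  organization: <ORG-NAME>
  # v2 reads the HCP Terraform API token from a Kubernetes Secret.
  token:
    secretKeyRef:
      name: <SECRET-NAME>
      key: <KEY-NAME>
  # v2 does not generate workspace names from the namespace prefix; set the name explicitly.
  name: migration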
Below is an example of configuring a variable in v1 of the operator.
v1.yaml
apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: migration
spec:
  variables:
    - key: username
      value: "user"
      hcl: true
      sensitive: false
      environmentVariable: false
    - key: SECRET_KEY
      value: "s3cr3t"
      hcl: false
      sensitive: false
      environmentVariable: true
In v2 of the operator, you must configure Terraform variables in spec.terraformVariables and environment variables in spec.environmentVariables.
v2.yaml
apiVersion: app.terraform.io/v1alpha2
kind: Workspace
metadata:
  name: migration
spec:
  terraformVariables:
    - name: username
      value: "user"
      hcl: true
      sensitive: false
  environmentVariables:
    - name: SECRET_KEY
      value: "s3cr3t"
      hcl: false
      sensitive: false
Module controller
HCP Terraform Operator for Kubernetes v2 configures modules in a new Module controller separate from the Workspace controller. Below is a template of a custom resource manifest:
apiVersion: app.terraform.io/v1alpha2
kind: Module
metadata:
  name: <NAME>
spec:
  organization: <ORG-NAME>
  token:
    secretKeyRef:
      name: <SECRET-NAME>
      key: <KEY-NAME>
  name: operator
The table below describes the mapping between the Workspace controller from v1 and the Module controller in v2 of the operator.
| Version 1 (Workspace CRD) | Version 2 (Module CRD) | Notes | 
|---|---|---|
| spec.module | N/A | In v2 of the operator, a Module is a separate controller with its own CRD. |
| N/A | spec.name: operator | In v1 of the operator, the name of the generated module is hardcoded to operator. In v2, the default name of the generated module is this, but you can rename it. |
| spec.module.source | spec.module.source | This supports all Terraform module sources. | 
| spec.module.version | spec.module.version | Refer to module sources for versioning information for each module source. | 
| spec.variables.[*] | spec.variables.[*].name | You should include variable names in the module. This is a reference to variables in the workspace that is executing the module. | 
| spec.outputs.[*].key | spec.outputs.[*].name | You should include output names in the module. This is a reference to the output variables produced by the module. | 
| status.workspaceID OR metadata.namespace-metadata.name | spec.workspace.id OR spec.workspace.name | The workspace where the module is executed. The workspace must be in the same organization. |
Below is an example migration of a Module between v1 and v2 of the operator:
v1.yaml
apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: migration
spec:
  module:
    source: app.terraform.io/org-name/module-name/provider
    version: 0.0.42
  variables:
    - key: username
      value: "user"
      hcl: true
      sensitive: false
      environmentVariable: false
    - key: SECRET_KEY
      value: "s3cr3t"
      hcl: false
      sensitive: false
      environmentVariable: true
In v2 of the operator, separate controllers manage workspaces and modules.
workspace-v2.yaml
apiVersion: app.terraform.io/v1alpha2
kind: Workspace
metadata:
  name: migration
spec:
  terraformVariables:
    - name: username
      value: "user"
      hcl: true
      sensitive: false
  environmentVariables:
    - name: SECRET_KEY
      value: "s3cr3t"
      hcl: false
      sensitive: false
module-v2.yaml
apiVersion: app.terraform.io/v1alpha2
kind: Module
metadata:
  name: migration
spec:
  name: operator
  module:
    source: app.terraform.io/org-name/module-name/provider
    version: 0.0.42
  workspace:
    name: migration
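If your v1 Workspace also passed workspace variables to the module or consumed its outputs, reference them by name in the v2 Module spec, as described in the mapping table above. The following illustrative variant of module-v2.yaml assumes the username variable from the workspace example and a hypothetical module output named hostname; substitute the names your module actually defines.
apiVersion: app.terraform.io/v1alpha2
kind: Module
metadata:
  name: migration
spec:
  name: operator
  module:
    source: app.terraform.io/org-name/module-name/provider
    version: 0.0.42
  workspace:
    name: migration
  # Reference workspace variables and module outputs by name.
  variables:
    - name: username    # workspace variable defined in workspace-v2.yaml
  outputs:
    - name: hostname    # hypothetical output name; use your module's output names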
Upgrade the operator
Download Workspace CRD patch A:
$ curl -sO https://raw.githubusercontent.com/hashicorp/hcp-terraform-operator/main/docs/migration/crds/workspaces_patch_a.yaml
View the changes that patch A applies to the workspace CRD.
$ kubectl diff --filename workspaces_patch_a.yaml
Patch the workspace CRD with patch A. This patch adds app.terraform.io/v1alpha2 support, but excludes .status.runStatus because it has a different format in app.terraform.io/v1alpha1 and causes JSON unmarshalling issues.
Upgrade warning: Once you apply a patch, Kubernetes converts existing app.terraform.io/v1alpha1 custom resources to app.terraform.io/v1alpha2 according to the updated schema, meaning that v1 of the operator can no longer serve custom resources. Before patching, update your existing custom resources to satisfy the v2 schema requirements. Learn more.
$ kubectl patch crd workspaces.app.terraform.io --patch-file workspaces_patch_a.yaml
Install the Operator v2 Helm chart with the helm install command. Be sure to set the operator.watchedNamespaces value to the list of namespaces your Workspace resources are deployed to. If this value is not provided, the operator will watch all namespaces in the Kubernetes cluster.
$ helm install \
  ${RELEASE_NAME} hashicorp/hcp-terraform-operator \
  --version 2.4.0 \
  --namespace ${RELEASE_NAMESPACE} \
  --set 'operator.watchedNamespaces={white,blue,red}' \
  --set controllers.agentPool.workers=5 \
  --set controllers.module.workers=5 \
  --set controllers.workspace.workers=5
Next, create a Kubernetes secret to store the HCP Terraform API token, following the Usage Guide. You can copy the API token from the Kubernetes secret that you created for v1 of the operator, which is named terraformrc by default. Use the kubectl get secret command to retrieve the API token.
$ kubectl --namespace ${RELEASE_NAMESPACE} get secret terraformrc -o json | jq '.data.credentials' | tr -d '"' | base64 -d
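With that token value, you can create the Secret that v2 reads. This command is a sketch; <SECRET-NAME> and <KEY-NAME> must match the values you reference in spec.token.secretKeyRef.
$ kubectl --namespace ${RELEASE_NAMESPACE} create secret generic <SECRET-NAME> --from-literal=<KEY-NAME>=<API_TOKEN>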
Update existing custom resources according to the schema migration guidance and apply your changes.
$ kubectl apply --filename <UPDATED_V2_WORKSPACE_MANIFEST.yaml>
Download Workspace CRD patch B.
$ curl -sO https://raw.githubusercontent.com/hashicorp/hcp-terraform-operator/main/docs/migration/crds/workspaces_patch_b.yaml
View the changes that patch B applies to the workspace CRD.
$ kubectl diff --filename workspaces_patch_b.yaml
Patch the workspace CRD with patch B. This patch adds .status.runStatus support, which was excluded in patch A.
$ kubectl patch crd workspaces.app.terraform.io --patch-file workspaces_patch_b.yaml
The v2 operator will fail to proceed if a custom resource has the v1 finalizer finalizer.workspace.app.terraform.io. If you encounter an error, check the logs for more information.
$ kubectl logs -f <POD_NAME>
Specifically, look for an error message such as the following.
ERROR   Migration   {"workspace": "default/<WORKSPACE_NAME>", "msg": "spec contains old finalizer finalizer.workspace.app.terraform.io"}
The finalizer exists to provide greater control over the migration process. Verify the custom resource, and when you’re ready to migrate it, use the kubectl patch command to update the finalizer value.
$ kubectl patch workspace migration --type=merge --patch '{"metadata": {"finalizers": ["workspace.app.terraform.io/finalizer"]}}'
Review the operator logs once more and verify there are no error messages.
$ kubectl logs -f <POD_NAME>
The operator reconciles resources during the next sync period. This interval is set by the operator.syncPeriod configuration of the operator and defaults to five minutes. 
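If you want the operator to reconcile the migrated resources sooner, you can adjust this interval on your installation. The following helm upgrade is a sketch that assumes the chart accepts a Go-style duration such as 1m; --reuse-values keeps your other settings unchanged.
$ helm upgrade \
  ${RELEASE_NAME} hashicorp/hcp-terraform-operator \
  --namespace ${RELEASE_NAMESPACE} \
  --reuse-values \
  --set operator.syncPeriod=1m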
If you have any migrated Module custom resources, apply them now.
$ kubectl apply --filename <MIGRATED_V2_MODULE_MANIFEST.yaml>
In v2 of the operator, the applyMethod is set to manual by default. In this case, a new run in a managed workspace requires manual approval. Run the following command for each Workspace resource to change it to auto approval.
$ kubectl patch workspace <WORKSPACE_NAME> --type=merge --patch '{"spec": {"applyMethod": "auto"}}'