Deploy Infrastructure with the Terraform Cloud Kubernetes Operator v2
Note
The Terraform Cloud Kubernetes Operator v2 is currently in private beta. Please contact your HashiCorp account team for more information on how to join.
The Terraform Cloud Operator for Kubernetes (Operator) allows you to manage the lifecycle of cloud and on-prem infrastructure through a single Kubernetes custom resource.
You can create application-related infrastructure from a Kubernetes cluster by adding the Operator to your Kubernetes namespace. The Operator uses a Kubernetes Custom Resource Definition (CRD) to manage Terraform Cloud workspaces. These workspaces execute a Terraform Cloud run to provision Terraform modules. Because it runs through Terraform Cloud, the Operator benefits from Terraform Cloud's state handling and locking, sequential execution of runs, and established patterns for injecting secrets and provisioning resources.
In this tutorial, you will configure and deploy the Operator to a Kubernetes cluster and use it to create a Terraform Cloud workspace. You will also use the Operator to provision a message queue that the example application needs for deployment to Kubernetes.
Prerequisites
This tutorial assumes basic familiarity with Kubernetes and kubectl.
You should also be familiar with:
- The Terraform workflow — All Get Started tutorials
- Terraform Cloud — All Get Started with Terraform Cloud tutorials
For this tutorial, you will need:
A Terraform Cloud account
An AWS account and AWS Access Credentials
Note
This tutorial will provision resources that qualify under the AWS free-tier. If your account doesn't qualify under the AWS free-tier, we're not responsible for any charges that you may incur.
Install and configure kubectl
To install kubectl (the Kubernetes CLI), follow the official installation instructions, or use a package manager for your operating system. For example, use the Homebrew package manager to install kubectl.
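```shell
# The kubernetes-cli formula installs kubectl.
brew install kubernetes-cli
```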
You will also need a sample kubectl config. We recommend using kind to provision a local Kubernetes cluster and using that config for this tutorial.
Use the Homebrew package manager to install kind.
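```shell
brew install kind
```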
Then, create a kind Kubernetes cluster called terraform-learn.
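```shell
kind create cluster --name terraform-learn
```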
Verify that your cluster exists by listing your kind clusters.
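```shell
kind get clusters
```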
Then, point kubectl to interact with this cluster.
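kind prefixes its contexts with kind-, so the context name is kind-terraform-learn:

```shell
kubectl cluster-info --context kind-terraform-learn
```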
Clone repository
In your terminal, clone the Learn Terraform Kubernetes Operator repository.
Navigate into the repository.
Check out the v2beta branch of the repository.
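Together, these steps look like the following. The repository URL is an assumption based on the tutorial's name; use the URL from the tutorial page if it differs.

```shell
git clone https://github.com/hashicorp/learn-terraform-kubernetes-operator
cd learn-terraform-kubernetes-operator
git checkout v2beta
```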
This repository contains the following files.
- The root directory of this repository contains the Terraform configuration for a Kubernetes namespace and the Operator Helm chart.
- The operator directory contains the Kubernetes .yml files that you will use to create a Terraform Cloud workspace using the Operator.
- The aws-sqs-test directory contains the files that build the Docker image that tests the message queue. This is provided as reference only. You will use an image from Docker Hub to test the message queue.
Configure the Operator
The Operator must have access to Terraform Cloud and your AWS account. It also needs to run in its own Kubernetes namespace. Below you will configure the Operator and deploy it into your Kubernetes cluster using a Terraform configuration that we have provided for you.
Configure Terraform Cloud access
The Operator must authenticate to Terraform Cloud. To do this, you must create a Terraform Cloud Team API token, then add it as a secret for the Operator to access.
First, sign in to your Terraform Cloud account, then navigate to Settings -> Teams.
If you are on the free tier, you will only find one team, called "owners", which has full access to the API. Click on "owners".
Scroll to the Team API Token section. Click on Create a team token and choose an expiration for the token. We recommend that all team tokens have a specified expiration, such as 30 days. Click Generate token to generate a new team token. Copy this token and store it somewhere secure for use later in this tutorial.
Warning
The Team token has global privileges for your organization. Ensure that the Kubernetes cluster using this token has proper role-based access control to limit access to the secret, or store it in a secret manager with access control policies.
Explore Terraform configuration
The main.tf file contains the Terraform configuration that will deploy the Operator into your Kubernetes cluster. It includes:
- Two Kubernetes namespaces. The Operator will be deployed in the tfc-operator-system namespace, and the workspace, module, and application will be deployed in the edu namespace.
- A terraformrc generic secret for your team API token. The workspace and module that you create later in this tutorial reference this secret.
- A generic secret named workspacesecrets containing your AWS credentials. In addition to the Terraform Cloud team token, Terraform Cloud needs your cloud provider credentials to create infrastructure. This configuration adds your credentials to the namespace, which is used when you create a workspace. You will add the credential values as variables and create a workspace later in this tutorial.
- The Operator Helm chart. This is the configuration for the Operator. It is configured to watch the edu namespace for changes such as creating and modifying workspaces. If you use Terraform Enterprise, uncomment the final set block, which specifies the installation endpoint using an input variable.
If your Terraform Enterprise installation uses a TLS certificate signed by a custom Certificate Authority, pass the custom CA bundle root certificate to the Helm chart by adding another set block to the helm_release resource. Update the value to the path of your CA bundle on your local machine.
To use this configuration, you need to define the variables that authenticate to the kind cluster, AWS, and Terraform Cloud.
Run the following command to generate a terraform.tfvars file with your kind cluster configuration.
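The repository provides the exact command for this step. As a rough sketch of what it does (the variable names host, cluster_ca_certificate, client_certificate, and client_key are assumptions based on a typical Kubernetes provider setup), it reads the kind cluster's connection details out of your kubeconfig and writes them to terraform.tfvars:

```shell
# Sketch only -- variable names are assumptions; prefer the command
# documented in the repository. The certificate values are stored
# base64-encoded in the kubeconfig.
CTX=kind-terraform-learn
cat > terraform.tfvars <<EOF
host                   = "$(kubectl config view --minify --context $CTX -o jsonpath='{.clusters[0].cluster.server}')"
cluster_ca_certificate = "$(kubectl config view --minify --flatten --context $CTX -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')"
client_certificate     = "$(kubectl config view --minify --flatten --context $CTX -o jsonpath='{.users[0].user.client-certificate-data}')"
client_key             = "$(kubectl config view --minify --flatten --context $CTX -o jsonpath='{.users[0].user.client-key-data}')"
EOF
```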
Open terraform.tfvars and set the aws_access_key_id, aws_secret_access_key, and tfc_token variables to your AWS and Terraform Cloud credentials. For Terraform Enterprise installations, also set the tfe_address variable.
You should end up with something similar to the following.
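The values below are placeholders; keep the kind cluster values that the previous command generated.

```hcl
# terraform.tfvars -- placeholder values only
aws_access_key_id     = "AKIAXXXXXXXXXXXXXXXX"
aws_secret_access_key = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
tfc_token             = "XXXXXXXXXXXXXX.atlasv1.XXXXXXXXXXXXXXXXX"
# tfe_address         = "https://tfe.example.com"  # Terraform Enterprise only
```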
Warning
Do not commit sensitive values into version control. The .gitignore file in this repository ignores all .tfvars files. Include it in all of your future Terraform repositories.
Deploy the Operator
Now that you have defined the variables, you are ready to create the Kubernetes resources.
Initialize your configuration.
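```shell
terraform init
```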
Apply your configuration. Remember to confirm your apply with a yes.
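```shell
terraform apply
```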
Verify the Operator pods are running.
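For example, list the pods in the Operator's namespace:

```shell
kubectl get pods --namespace tfc-operator-system
```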
In addition to deploying the Operator, the Helm chart adds custom resource definitions for Workspaces, Modules, and Agent Pools.
Explore the specifications
Now you are ready to create infrastructure using the Operator.
First, navigate to the operator directory.
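```shell
cd operator
```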
Open workspace.yml, the workspace specification, and customize it with your Terraform Cloud organization name. The workspace specification both creates a Terraform Cloud workspace and uses it to deploy your application's required infrastructure.
You can find the following items in workspace.yml, which you use to apply the Workspace custom resource to a Kubernetes cluster.
- The workspace name suffix. The workspace name is a combination of your namespace and metadata.name (in this case: edu-greetings).
- The Terraform Cloud organization. This organization must match the one you generated the team token for. Replace ORGANIZATION_NAME with your Terraform Cloud organization name.
- The Terraform Cloud token. The Workspace spec expects a token value, which it reads from the terraformrc secret at the key token.
- Terraform and environment variables. For variables that must be passed to the module, the terraformVariables key in the specification must match the name of the module variable. Environment variables can be set using the environmentVariables key. You can indicate that a variable is sensitive by setting sensitive to true. The value can either be set directly with the value key, or from a ConfigMap or Secret with the valueFrom key. We have set reasonable defaults for these values, which you will review in a later step.
Explore configmap.yml
In workspace.yml, the AWS_DEFAULT_REGION variable is defined by a ConfigMap named aws-configuration.
Open configmap.yml. Here you will find the specification for the aws-configuration ConfigMap.
Explore module.yml
Open module.yml. This specification is the Module custom resource equivalent of a Terraform configuration that calls the module with the variables defined in the workspace.
You can also find the following items in module.yml, which you use to apply the Module custom resource to a Kubernetes cluster.
- The Terraform Cloud organization. This organization must match the one you generated the team token for. Replace ORGANIZATION_NAME with your Terraform Cloud organization name.
- Outputs you would like to surface in the Kubernetes status. The example stores the queue_id output, which you will use in a later step.
Create the message queue
Create an environment variable named NAMESPACE and set it to edu.
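```shell
export NAMESPACE=edu
```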
Apply the ConfigMap specifications to the namespace.
Then, apply the Workspace specifications to the namespace.
Next, apply the Module specifications to the namespace.
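A sketch of these three steps, using the spec files in the operator directory:

```shell
kubectl apply --filename configmap.yml --namespace $NAMESPACE
kubectl apply --filename workspace.yml --namespace $NAMESPACE
kubectl apply --filename module.yml --namespace $NAMESPACE
```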
Debug the Operator by accessing its logs and checking if the workspace creation ran into any errors.
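For example (the deployment name below is an assumption; list the deployments in the Operator's namespace first to confirm it):

```shell
kubectl get deployments --namespace tfc-operator-system
# Replace the deployment name with the one listed above.
kubectl logs --namespace tfc-operator-system deploy/terraform-cloud-operator
```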
Check the status of the workspace via kubectl or the Terraform Cloud web UI to determine the run status, outputs, and run identifiers.
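```shell
kubectl get workspace --namespace $NAMESPACE
```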
The Workspace custom resource reflects that the run was applied.
You can also access the status directly.
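For example, assuming the Workspace's metadata.name is greetings (per the edu-greetings workspace name described earlier):

```shell
kubectl get workspace greetings --namespace $NAMESPACE --output jsonpath='{.status}'
```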
In addition to the workspace status, the Operator creates a Kubernetes ConfigMap containing the outputs of the Terraform Cloud workspace. The ConfigMap name follows the format <workspace_name>-outputs.
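List the ConfigMaps in the namespace to find it:

```shell
kubectl get configmaps --namespace $NAMESPACE
```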
Note
It may take several minutes for the Operator to create the Terraform output ConfigMap.
Verify message queue
Now that you have deployed the queue, you will send and receive messages on it.
The application.yml file contains a spec that runs a containerized application in your kind cluster. That app calls a script called message.sh, which sends and receives messages from the queue using the same AWS credentials that the Operator used.
To give the script access to the queue's location, the application.yml spec creates a new environment variable named QUEUE_URL and sets it from the Kubernetes ConfigMap containing the queue URL from the Terraform Cloud workspace output.
Tip
If you mount the Secret as a volume, rather than project it as an environment variable, you can update that Secret without redeploying the app.
Open aws-sqs-test/message.sh. This Bash script tests the message queue. To access the queue, it creates environment variables with your AWS credentials and the queue URL. Since the Terraform Cloud outputs from the Kubernetes Secret contain double quotes, the script strips the double quotes from the QUEUE_URL output to ensure the script works as expected.
Deploy the job and examine the logs from the pod associated with the job.
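```shell
kubectl apply --filename application.yml --namespace $NAMESPACE
```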
View the job's logs.
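For example (the job name is a placeholder; use the metadata.name from application.yml):

```shell
kubectl logs --namespace $NAMESPACE job/<job-name>
```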
Change the queue name
Once your infrastructure is running, you can use the Operator to modify it. Update the workspace.yml file to change the queue's name, and the type of the queue from FIFO to standard.
Apply the updated workspace configuration. The Terraform Operator retrieves the configuration update and updates the variables in the workspace.
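```shell
kubectl apply --filename workspace.yml --namespace $NAMESPACE
```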
A new run will not start unless there is a change to the Module resource. To trigger one, make a small change by updating the restartedAt value of the specification.
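One way to do this, assuming a spec.restartedAt field as described above (the Module resource name is a placeholder; use the metadata.name from module.yml):

```shell
# Set restartedAt to the current time to force a new run.
kubectl patch module <module-name> --namespace $NAMESPACE --type merge \
  --patch "{\"spec\": {\"restartedAt\": \"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}}"
```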
Examine the run for the workspace in the Terraform Cloud UI. The plan indicates that Terraform Cloud replaced the queue.
You can audit updates to the workspace from the Operator through Terraform Cloud, which maintains a history of runs and the current state.
Clean up resources
Now that you have created and modified a Terraform Cloud workspace using the Operator, delete the module and workspace.
Delete the application
Delete the job used to test the SQS queue.
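```shell
kubectl delete --filename application.yml --namespace $NAMESPACE
```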
Delete the module
Delete the Module custom resource.
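```shell
kubectl delete --filename module.yml --namespace $NAMESPACE
```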
You may notice that the command hangs for a few minutes. This is because the Operator executes a finalizer, a pre-delete hook, which runs a terraform destroy on the workspace's resources.
Once the finalizer completes, Kubernetes will delete the Module custom resource.
Delete the workspace
Delete the Workspace custom resource.
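```shell
kubectl delete --filename workspace.yml --namespace $NAMESPACE
```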
Delete resources and kind cluster
Navigate to the root directory.
Destroy the namespaces, secrets, and the Operator. Remember to confirm the destroy with a yes.
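```shell
cd ..
terraform destroy
```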
Finally, delete the kind cluster.
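```shell
kind delete cluster --name terraform-learn
```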
Next steps
Congrats! You have configured and deployed the Operator to a Kubernetes namespace, created a Terraform workspace, and deployed a message queue using the Operator. This pattern can extend to other application infrastructure, such as DNS servers, databases, and identity and access management rules.
Visit the following resources to learn more about the Terraform Cloud Operator for Kubernetes.
- To learn more about the Operator and its design, check out the hashicorp/terraform-cloud-operator repository.
- To discover more about managing Kubernetes with Terraform, review the HashiCorp Kubernetes tutorials.
- Learn how to Manage Agent Pools with the Terraform Cloud Kubernetes Operator v2.