Manage New AWS Resources with the Cloud Control Provider


Terraform manages your resources through providers that connect to your cloud platforms' APIs. The Terraform Cloud Control provider supports new AWS services sooner than the traditional provider by using Cloud Control, a new AWS feature that creates standard API endpoints for new AWS services soon after their launch. These endpoints provide a standard set of actions, parameters, and error types that the Cloud Control provider uses to generate resources for AWS services automatically. You can use the Cloud Control provider alongside other providers, including the traditional AWS provider.

The Amazon Keyspaces service offers managed Apache Cassandra keyspaces and tables. The traditional AWS provider does not yet support Amazon Keyspaces, but the Cloud Control provider does. In this tutorial, you will provision a KMS key with the traditional AWS provider. Then, you will use the Cloud Control provider to provision a Cassandra keyspace and table, using the KMS key to encrypt your data at rest.

Note: While the Cloud Control provider is in technical preview, we recommend using the traditional AWS provider for production workloads. Do not migrate configurations that use the traditional provider to the Cloud Control provider while it is still in technical preview.

Prerequisites

  • The Terraform CLI (1.0.7+).
  • An AWS account.
  • The AWS CLI (2.0+) installed, and configured for your AWS account.
  • Docker Desktop installed and running.
  • The Git CLI.
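
Before you begin, you can optionally confirm that these tools are installed and that your AWS credentials work. These checks are a quick sketch; the version numbers and account details in your output will differ.

$ terraform version
$ aws sts get-caller-identity
$ docker info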

Note: Some of the infrastructure in this tutorial may not qualify for the AWS free tier. Destroy the infrastructure at the end of the tutorial to avoid unnecessary charges. We are not responsible for any charges that you incur.

Clone example configuration

Clone the example repository for this tutorial.

$ git clone https://github.com/hashicorp/learn-terraform-aws-cloud-control.git

Change to the repository directory.

$ cd learn-terraform-aws-cloud-control

This configuration defines a KMS key managed by the traditional AWS provider. You will use this key to encrypt your Cassandra table.
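
The repository's exact contents may differ from this sketch, but the resource that later steps reference as aws_kms_key.terraform looks roughly like the following. The filename and argument values shown here are illustrative, not copied from the repository.

main.tf
provider "aws" {
  region = var.aws_region
}

resource "aws_kms_key" "terraform" {
  description             = "Encryption key for the Amazon Keyspaces table in this tutorial"
  deletion_window_in_days = 7
}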

Create KMS key

Initialize this configuration.

$ terraform init
Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Installing hashicorp/aws v3.59.0...
- Installed hashicorp/aws v3.59.0 (signed by HashiCorp)

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Apply the configuration to create your KMS key. Respond to the confirmation prompt with a yes.

$ terraform apply
## ...
Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + aws_region = "us-west-2"
  + kms_key_id = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_kms_key.terraform: Creating...
aws_kms_key.terraform: Still creating... [10s elapsed]
aws_kms_key.terraform: Still creating... [20s elapsed]
aws_kms_key.terraform: Creation complete after 22s [id=33198581-e648-46a3-b78d-1eb2edf9ab94]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

aws_region = "us-west-2"
kms_key_id = "33198581-e648-46a3-b78d-1eb2edf9ab94"

Add AWS Cloud Control provider

The traditional AWS provider does not currently support Amazon Keyspaces, but the Cloud Control provider does. Add the Cloud Control provider to your configuration so you can use Terraform to manage a Cassandra keyspace and table.

First, update the terraform block in main.tf to add the Cloud Control and random providers. You will use the random provider to generate a random keyspace name.

main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
    awscc = {
      source  = "hashicorp/awscc"
      version = "~> 0.1.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.1.0"
    }
  }
}

Next, add provider blocks for both the Cloud Control and random providers. Configure the Cloud Control provider to use the same region as the traditional AWS provider.

main.tf
provider "awscc" {
  region = var.aws_region
}

provider "random" {}

Reinitialize your configuration to install the new providers.

$ terraform init
Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/awscc...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Installing hashicorp/random v3.1.0...
- Installed hashicorp/random v3.1.0 (signed by HashiCorp)
- Using previously-installed hashicorp/aws v3.59.0

Now that you have installed the Cloud Control provider, you can create your Cassandra resources.

Add Cassandra keyspace and table

Add the following configuration to main.tf to configure a Cassandra keyspace with a random name, and a table to store your sample user data.

main.tf
resource "random_pet" "keyspace" {
  length    = 4
  separator = "_"
}

resource "awscc_cassandra_keyspace" "terraform" {
  keyspace_name = random_pet.keyspace.id
}

resource "awscc_cassandra_table" "users" {
  keyspace_name = awscc_cassandra_keyspace.terraform.keyspace_name
  table_name    = "users"

  partition_key_columns = [
    {
      column_name : "id"
      column_type : "int"
    }
  ]
  regular_columns = [
    {
      column_name : "first_name"
      column_type : "text"
    },
    {
      column_name : "last_name"
      column_type : "text"
    },
    {
      column_name : "email"
      column_type : "text"
    }
  ]

  encryption_specification = {
    encryption_type : "CUSTOMER_MANAGED_KMS_KEY"
    kms_key_identifier : aws_kms_key.terraform.arn
  }
}

Resource types begin with the name of the provider, so the Cloud Control provider manages awscc_cassandra_keyspace and awscc_cassandra_table resources.

Notice that your Cassandra table configuration uses the KMS key managed by the traditional provider: it sets encryption_type to CUSTOMER_MANAGED_KMS_KEY and references aws_kms_key.terraform.arn as the kms_key_identifier. You can use resources from both the traditional and Cloud Control providers in the same configuration.
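
For reference, the table that this configuration creates is roughly equivalent to the following CQL definition. This is a sketch for orientation only; Amazon Keyspaces creates the actual schema from the Cloud Control API call, so you do not run this statement yourself.

CREATE TABLE users (
    id int PRIMARY KEY,
    first_name text,
    last_name text,
    email text
);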

Next, add an output for your Cassandra keyspace name to outputs.tf.

outputs.tf
output "keyspace_name" {
  description = "Name of Cassandra keyspace."
  value       = awscc_cassandra_keyspace.terraform.keyspace_name
}
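
After you apply this configuration in the next step, you can read the generated keyspace name at any time with terraform output. The cqlsh connection command later in this tutorial uses the same technique to pass the keyspace name and region into Docker.

$ terraform output -raw keyspace_name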

Now, apply this configuration to create your keyspace and table. Respond to the confirmation prompt with a yes.

$ terraform apply
## ...
Plan: 3 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + keyspace_name = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

random_pet.keyspace: Creating...
random_pet.keyspace: Creation complete after 0s [id=rightly_barely_communal_buzzard]
awscc_cassandra_keyspace.terraform: Creating...
awscc_cassandra_keyspace.terraform: Still creating... [10s elapsed]
awscc_cassandra_keyspace.terraform: Still creating... [20s elapsed]
awscc_cassandra_keyspace.terraform: Creation complete after 27s [id=rightly_barely_communal_buzzard]
awscc_cassandra_table.users: Creating...
awscc_cassandra_table.users: Still creating... [10s elapsed]
awscc_cassandra_table.users: Still creating... [20s elapsed]
awscc_cassandra_table.users: Still creating... [30s elapsed]
awscc_cassandra_table.users: Still creating... [40s elapsed]
awscc_cassandra_table.users: Creation complete after 44s [id=rightly_barely_communal_buzzard|users]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

aws_region = "us-west-2"
keyspace_name = "rightly_barely_communal_buzzard"
kms_key_id = "33198581-e648-46a3-b78d-1eb2edf9ab94"

Load data into Cassandra table

Now that your table is ready, load the sample data from the data/ directory in the example repository into your newly provisioned Cassandra table.

You will use the cqlsh command line utility to load data into your table. Amazon provides a Docker image pre-configured with cqlsh and an authentication plugin that allows you to access your Amazon Keyspaces Cassandra table with your AWS credentials.

Build the Docker image now.

$ docker build --tag amazon/keyspaces-toolkit --build-arg CLI_VERSION=latest \
         https://github.com/aws-samples/amazon-keyspaces-toolkit.git

Note: It may take several minutes for Docker to build your container image.

By default, both the traditional AWS provider and the Cloud Control provider use the same authentication credentials as the aws command line utility. You must pass your AWS credentials to the Docker container so that it has permission to access your Cassandra table. In this tutorial, you will do so via environment variables.

If you are not already using environment variables to authenticate with AWS, configure them now.

For example, if you use an access key to authenticate with AWS, first set the access key ID environment variable to your own access key ID.

$ export AWS_ACCESS_KEY_ID=

Next, set the secret access key environment variable to your own secret access key.

$ export AWS_SECRET_ACCESS_KEY=

You do not need to set the AWS_DEFAULT_REGION environment variable, because the docker run command below sets it from your configuration's aws_region output.

Note: Depending on how you authenticate with AWS, you may need to set other environment variables such as AWS_SESSION_TOKEN and AWS_SESSION_EXPIRATION. If so, set those variables in your terminal session before you write your credentials to the aws_auth_env file in the next step.

Once you have set your AWS credentials as environment variables, write them to a file named aws_auth_env.

$ printenv | grep "^AWS" > aws_auth_env
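
The resulting file contains one NAME=value line per AWS variable. With placeholder values (not real credentials), it will look something like this:

AWS_ACCESS_KEY_ID=<your-access-key-id>
AWS_SECRET_ACCESS_KEY=<your-secret-access-key>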

Now, launch the amazon/keyspaces-toolkit Docker container to connect to your Cassandra database.

$ docker run -ti --rm --mount type=bind,src=$(pwd)/data,dst=/data \
         --env-file ./aws_auth_env \
         --env AWS_DEFAULT_REGION="$(terraform output -raw aws_region)" \
         --entrypoint cqlsh-expansion amazon/keyspaces-toolkit \
                      cassandra.$(terraform output -raw aws_region).amazonaws.com \
                      -k $(terraform output -raw keyspace_name) \
                      --ssl --auth-provider "SigV4AuthProvider"

In addition to configuring AWS authentication, the above command mounts the data directory from the example repository inside your Docker container. It then runs cqlsh-expansion, which starts cqlsh and connects to your keyspace using the SigV4 authentication provider from AWS. Once connected, cqlsh prints a summary of its configuration followed by a prompt that includes your keyspace name.

Connected to Amazon Keyspaces at cassandra.us-west-2.amazonaws.com:9142.
[cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh current consistency level is ONE.
cqlsh:rightly_barely_communal_buzzard> 

At the cqlsh prompt, copy the data from the example CSV file into your Cassandra table with the following command:

CONSISTENCY LOCAL_QUORUM; COPY users (id, first_name, last_name, email) FROM '/data/users.csv' WITH HEADER = TRUE;

Cassandra will load your data into the users table, and print output similar to the following.

cqlsh:rightly_barely_communal_buzzard> CONSISTENCY LOCAL_QUORUM; COPY users (id, first_name, last_name, email) FROM '/data/users.csv' WITH HEADER = TRUE;
Consistency level set to LOCAL_QUORUM.
cqlsh current consistency level is LOCAL_QUORUM.
Reading options from /root/.cassandra/cqlshrc:[copy]: {'maxattempts': '25', 'numprocesses': '16'}
Reading options from /root/.cassandra/cqlshrc:[copy-from]: {'minbatchsize': '1', 'chunksize': '30', 'maxparseerrors': '-1', 'maxinserterrors': '-1', 'ingestrate': '1500', 'maxbatchsize': '10'}
Reading options from the command line: {'header': 'TRUE'}
Using 16 child processes

Starting copy of rightly_barely_communal_buzzard.users with columns [id, first_name, last_name, email].
Processed: 4 rows; Rate:       0 rows/s; Avg. rate:       1 rows/s
4 rows imported from 1 files in 4.989 seconds (0 skipped).

Read data from table

Now, read the data from your Cassandra table with the following command:

SELECT * FROM users;

Cassandra will print output similar to the following.

cqlsh:rightly_barely_communal_buzzard> SELECT * FROM users;

 id | email                   | first_name | last_name
----+-------------------------+------------+------------
  2 |    gardener@example.com |    Samwise |     Gamgee
  4 |  thetallone@example.com |   Meriadoc | Brandybuck
  1 |  ringbearer@example.com |      Frodo |    Baggins
  3 | foolofatook@example.com |   Peregrin |       Took

(4 rows)

Exit the cqlsh prompt with exit.

cqlsh:rightly_barely_communal_buzzard> exit

Because you launched the container with the --rm flag, Docker removes it automatically when you exit cqlsh, so you do not need to delete it manually.

Clean up your infrastructure

Remove the infrastructure you created during this tutorial. Respond to the confirmation prompt with a yes.

$ terraform destroy
## ...
Plan: 0 to add, 0 to change, 4 to destroy.

Changes to Outputs:
  - aws_region    = "us-west-2" -> null
  - keyspace_name = "rightly_barely_communal_buzzard" -> null
  - kms_key_id    = "33198581-e648-46a3-b78d-1eb2edf9ab94" -> null

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

awscc_cassandra_table.users: Destroying... [id=rightly_barely_communal_buzzard|users]
awscc_cassandra_table.users: Still destroying... [id=rightly_barely_communal_buzzard|users, 10s elapsed]
awscc_cassandra_table.users: Still destroying... [id=rightly_barely_communal_buzzard|users, 20s elapsed]
## ...
awscc_cassandra_keyspace.terraform: Still destroying... [id=rightly_barely_communal_buzzard, 1m20s elapsed]
awscc_cassandra_keyspace.terraform: Still destroying... [id=rightly_barely_communal_buzzard, 1m30s elapsed]
awscc_cassandra_keyspace.terraform: Destruction complete after 1m36s
random_pet.keyspace: Destroying... [id=rightly_barely_communal_buzzard]
random_pet.keyspace: Destruction complete after 0s

Destroy complete! Resources: 4 destroyed.

Next, remove the file containing your AWS credentials.

$ rm aws_auth_env

Finally, remove the amazon/keyspaces-toolkit Docker image you built during this tutorial.

$ docker rmi amazon/keyspaces-toolkit
Untagged: amazon/keyspaces-toolkit:latest
Deleted: sha256:054b66c9dec91680b49ee30686a4065cd2fafbcbb7aeed5f5f2ecad839fb183e

Next steps

In this tutorial, you used the Cloud Control provider to manage Amazon Keyspaces resources that the traditional AWS provider does not yet support.

Review the following resources to learn more about the Cloud Control provider and Terraform providers in general.

  • Read the Cloud Control Provider announcement blog post.
  • Visit the Cloud Control provider documentation to learn more about authentication and supported resources.
  • Learn how to create custom Terraform providers.