
Serverless Consul service mesh with ECS and HCP

  • 12min

  • HCP
  • Consul
  • Terraform

Consul with Elastic Container Service (ECS) and HashiCorp Cloud Platform (HCP) provides you with a fully-managed service mesh ecosystem. Empowering your AWS ECS tasks with Consul service mesh connectivity enables you to take advantage of features such as zero-trust security, intentions, observability, traffic policy, and more.

In this tutorial, you will use Terraform to create an HCP Consul environment, an ECS cluster, and various AWS services. This environment will be used to highlight the ease of deployment, simplified scalability, and reduced operational overhead gained by utilizing this pattern.

Architecture Diagram

Specifically, you will:

  • Create a service principal and key in HashiCorp Cloud Platform (HCP)
  • Customize the Terraform environment deployment script
  • Deploy a HashiCorp Cloud Platform (HCP) Consul cluster and an Elastic Container Service (ECS) example application using the Terraform script
  • Explore your deployment with the HashiCorp Cloud Platform (HCP) portal UI
  • Inspect your environment using the Consul UI
  • Explore the sample application UI
  • Enable service mesh networking with Consul Intentions
  • Decommission the HashiCorp Cloud Platform (HCP) and Elastic Container Service (ECS) environment

While this tutorial uses elements that are not suitable for production environments, including a development-tier HCP cluster and a lack of redundancy within the architecture, it teaches you the core concepts for deploying and interacting with a fully-managed service mesh with AWS Elastic Container Service (ECS) and HashiCorp Cloud Platform (HCP). Refer to the Consul Reference Architecture for Consul best practices and the AWS Well-Architected documentation for AWS best practices.

Prerequisites

To complete this tutorial, you will need the following:

  • Basic command line access
  • Terraform v1.0.0+ installed
  • Git installed
  • Admin access to the HashiCorp Cloud Platform (HCP) Consul portal
  • An AWS account and associated credentials that allow you to create resources

Clone GitHub repository

Clone the GitHub repository containing the configuration files and resources.

$ git clone https://github.com/hashicorp/learn-consul-terraform.git

Or, if you prefer SSH:

$ git clone git@github.com:hashicorp/learn-consul-terraform.git

Change into the directory with the newly cloned repository. This directory contains the complete configuration files.

$ cd learn-consul-terraform/

Check out the v0.5 tag of the repository.

$ git checkout v0.5

Change directory to the ECS module.

$ cd datacenter-deploy-ecs-hcp/

Create and configure credential resources

Create HCP service principal and key

Create a Service Principal and key to enable HCP Consul deployment with Terraform.

In the HCP portal, select Access Control (IAM) in the Settings section of the left menu.

In the Access Control (IAM) page, click on the Service Principals tab and click on the Create a service principal link.

Specify a name for the service principal (learn-hcp in this tutorial). Choose the Contributor role from the drop-down menu.

Create Service Principal

Once created, click on the service principal's name to view its details.

From the detail page, click on the create service principal key link. A popup resembling the following will appear:

Create Service Principal Key

Note: Remember to copy the Client ID and secret. You will not be able to retrieve the secret later, and it is required in the next step.

Configure AWS and HCP environment variables

You can provide your AWS and HCP credentials to Terraform as environment variables. The required AWS environment variables are AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. The required HCP environment variables are HCP_CLIENT_ID and HCP_CLIENT_SECRET.

If you don't have AWS Access Credentials, create your AWS Access Key ID and Secret Access Key by navigating to your service credentials in the IAM service on AWS. Click "Create access key" on that page to view your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

In the same terminal where you will run Terraform commands, set your AWS environment variable values.

$ export AWS_ACCESS_KEY_ID="YOUR_AWS_ACCESS_KEY"
$ export AWS_SECRET_ACCESS_KEY="YOUR_AWS_SECRET_KEY"
$ export HCP_CLIENT_ID="YOUR_HCP_CLIENT_ID"
$ export HCP_CLIENT_SECRET="YOUR_HCP_SECRET"
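
Terraform reads these variables silently, so a typo in a variable name only surfaces mid-run. You can fail fast instead by checking them up front. A minimal bash sketch (`check_env` is an illustrative helper name, not part of Terraform or the AWS CLI):

```shell
# check_env: succeed only if every named environment variable is set and non-empty.
check_env() {
  for v in "$@"; do
    if [ -z "${!v:-}" ]; then   # ${!v} is bash indirect expansion
      echo "missing: $v" >&2
      return 1
    fi
  done
  return 0
}

# Example: verify the four credentials before invoking Terraform.
# check_env AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY HCP_CLIENT_ID HCP_CLIENT_SECRET \
#   && terraform plan
```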

Tip: If you don't have access to IAM user credentials, use another authentication method described in the AWS provider documentation.

Create a Terraform configuration file for your secrets

Terraform will utilize your unique credentials to build a complete HashiCorp Cloud Platform (HCP) Consul cluster and example application in Elastic Container Service (ECS).

Create a file named terraform.tfvars in your working directory and copy the following configuration into the file.

lb_ingress_ip = "YOUR_PUBLIC_IP"
region        = "us-east-1"
name          = "learn-hcp"

Replace the placeholders with your values and save the file.

To learn more about each of the Terraform attributes, see the respective resource documentation in the Terraform registry.

Note: By default, secrets created by AWS Secrets Manager require 30 days before they can be deleted. If this tutorial is destroyed and recreated, a name conflict error will occur for these secrets. This can be resolved by changing the value of name in your terraform.tfvars file.
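
If you hit that name conflict and would rather not wait out the recovery window, AWS Secrets Manager also supports immediate deletion via the AWS CLI. A hedged sketch (the secret name below is a placeholder, and the command is printed for review rather than executed):

```shell
# Placeholder name; substitute the actual conflicting secret's name or ARN.
secret_id="learn-hcp-example-secret"

# --force-delete-without-recovery skips the default 30-day recovery window.
cmd="aws secretsmanager delete-secret --secret-id $secret_id --force-delete-without-recovery"

echo "$cmd"   # review the command, then run it manually once you are sure
```

Force deletion is irreversible, so prefer renaming via terraform.tfvars unless you are certain the secret is disposable.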

Explore the Terraform manifest files

The Terraform manifest files used in this tutorial deploy various resources that enable your fully managed service mesh ecosystem. Below is the purpose of each Terraform manifest file.

  • data.tf - Data sources that allow Terraform to use information defined outside of Terraform.
  • ecs-clusters.tf - AWS ECS cluster deployment resources.
  • ecs-services.tf - AWS ECS service deployment resources.
  • hcp-consul.tf - HCP Consul cluster deployment resources.
  • hvn.tf - HashiCorp Virtual Network (HVN) deployment resources.
  • load-balancer.tf - AWS Application Load Balancer (ALB) deployment resources.
  • logging.tf - AWS Cloudwatch logging configuration.
  • modules.tf - AWS ECS task application definitions.
  • network-peering.tf - HCP and AWS network communication configuration.
  • outputs.tf - Unique values output after Terraform successfully completes a deployment.
  • providers.tf - AWS and HCP provider definitions for Terraform.
  • secrets-manager.tf - AWS Secrets Manager configuration.
  • security-groups.tf - AWS Security Group port management definitions.
  • variables.tf - Parameter definitions used to customize unique user environment attributes.
  • vpc.tf - AWS Virtual Private Cloud (VPC) deployment resources.
  • terraform.tfvars - Your unique credentials and environment attributes (created in the previous step).

Note: By default, the hcp-consul.tf file creates a "development" tier HCP cluster. Development tier clusters are single-server Consul datacenters recommended for testing or evaluation purposes only. For production, we recommend the "standard" or "plus" tier because each Consul datacenter will then have the recommended three server nodes.
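
For reference, the tier is set by the `tier` argument on the HCP provider's `hcp_consul_cluster` resource. A sketch of what a production-leaning override might look like (the resource label and IDs here are illustrative, not the exact values in hcp-consul.tf):

```hcl
# Illustrative only: check hcp-consul.tf for the actual resource block.
resource "hcp_consul_cluster" "example" {
  cluster_id = "learn-hcp-consul"   # placeholder cluster ID
  hvn_id     = "learn-hcp-hvn"      # placeholder HVN ID
  tier       = "standard"           # "development", "standard", or "plus"
}
```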

Deploy the HCP + ECS environment

With the Terraform manifest files and your custom credentials file, you are now ready to deploy your infrastructure.

Issue the terraform init command from your working directory to download the necessary providers and initialize the backend.

$ terraform init

Initializing the backend...

Initializing provider plugins...
...

Terraform has been successfully initialized!
...

Once Terraform has been initialized, you can verify the resources that will be created using the plan command.

$ terraform plan

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:
...

Finally, you can deploy the resources using the apply command.

$ terraform apply

...
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:

Remember to confirm the run by entering yes.

Once you confirm, it will take a few minutes to complete the deployment. Terraform will print the following output if the deployment is successful.

Apply complete! Resources: 64 added, 0 changed, 0 destroyed.

Note: The deployment could take up to 10 minutes to complete. Feel free to grab a cup of coffee while waiting for the cluster to complete initialization, or learn more about the Raft protocol in a fun, interactive way.

Perform testing procedures

Explore the HCP Portal UI

Once deployed, you can verify that the resources have been created in the HCP portal. Open https://portal.cloud.hashicorp.com in your browser.

Click on the Consul item on the left navigation pane to see the overview for your newly deployed Consul cluster, then select your dc1 cluster to review your cluster details.

HCP Consul Clusters

On your dc1 cluster detail page, click the Generate Token button to generate an ACL token for your Consul datacenter and copy it to a local text file.

HCP Consul primary datacenter

Note: The Terraform script in this tutorial will create a Consul datacenter that is publicly accessible from the internet for ease of use. For production, we recommend using private endpoints.

Access the Consul UI

Once HCP Consul is deployed, you can access the Consul UI by clicking Public from the list of Cluster URLs on the HCP Consul cluster overview page.

Consul overview tab

This will copy the public IP address to your clipboard. You can now paste this IP address into your browser to access the Consul UI.

HCP Consul is secure by default, so you will need an ACL token to view any data in the UI. Select the login option in the top right corner of the Consul UI, then paste the ACL token you generated in the previous step. This will authorize you to interact with the Consul UI.
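
The same ACL token also works outside the UI: Consul's HTTP API accepts it in the X-Consul-Token header, which is handy for scripting. A sketch (the address and token values are placeholders, and the request is printed for review rather than sent):

```shell
# Placeholders: substitute your cluster's public URL and the token you copied.
CONSUL_HTTP_ADDR="https://YOUR_CLUSTER_ADDRESS"
CONSUL_HTTP_TOKEN="YOUR_ACL_TOKEN"

# Consul's HTTP API authenticates requests via the X-Consul-Token header.
request="curl -s -H \"X-Consul-Token: $CONSUL_HTTP_TOKEN\" $CONSUL_HTTP_ADDR/v1/catalog/services"

echo "$request"   # review, then run manually to list the registered services
```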

HCP Consul Cluster

Open the menu item labeled Services on the left of the screen. Notice the informational text "in service mesh with proxy" on each ECS service.

HCP Consul Nodes in Service Mesh Image

Explore the sample application

One of the ECS service tasks defined in this environment deploys the application fake-service, a Consul client agent, and an Envoy sidecar proxy in your ECS cluster.

Visit the unique client_lb_address URL that was output by Terraform after your run to see the deployed fake-service application.

Apply complete! Resources: 64 added, 0 changed, 0 destroyed.

Outputs:

client_lb_address = "http://hcp-ecs-learn-example-client-app-1546746700.us-east-1.elb.amazonaws.com:9090/ui"

HCP Consul Nodes in Service Mesh Image

Notice the lack of communication between the two services. This is due to the deny-by-default service mesh communication behavior.

Enable service mesh networking

Consul Intentions are used to control which services may establish connections or make requests.
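
The intention you are about to create in the UI is equivalent to a `service-intentions` config entry. For reference, it would look roughly like this in HCL (a sketch of the standard config entry format, not a file that ships with this repository):

```hcl
# Allow the client app to call the server app; all other traffic stays denied.
Kind = "service-intentions"
Name = "hcp-ecs-example-server-app"       # destination service
Sources = [
  {
    Name   = "hcp-ecs-example-client-app" # source service
    Action = "allow"
  }
]
```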

Open the menu item labeled Intentions on the left of the screen. Click the Create button in the top right to create an intention.

Consul UI - Intentions Main Page

Set the source service as hcp-ecs-example-client-app, the destination service as hcp-ecs-example-server-app, both namespace fields as default, and communication behavior to Allow. Click the Save button in the bottom left once complete.

Consul UI - Intentions Details Page

Revisit your fake-service application at the unique client_lb_address URL that was output by Terraform after your run. Notice that the two services are now able to communicate with each other.

Fake Service Working

You have successfully deployed a serverless environment across ECS and HCP using Terraform. Within this environment, you enabled service mesh communication with Consul intentions.

Destroy resources

Use the terraform destroy command to clean up the resources you created.

$ terraform destroy

...
Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value:

Remember to confirm by entering yes.

Once you confirm, it will take a few minutes to complete the removal. Terraform will print the following output if the command is successful.

Destroy complete! Resources: 64 destroyed.

Deploy with an existing HCP Consul cluster

The remainder of this tutorial covers an alternate workflow that deploys the ECS example application against an HCP Consul cluster and HashiCorp Virtual Network (HVN) you have already created.

Prerequisites

To complete this tutorial, you will need the following:

  • Basic command line access
  • Terraform v1.0.0+ installed
  • Git installed
  • Admin access to the HashiCorp Cloud Platform (HCP) Consul portal
  • An AWS account and AWS Access Credentials
  • An AWS VPC with at least two private subnets, two public subnets, and internet access
  • An existing HCP Consul cluster
  • An existing HashiCorp Virtual Network (HVN) created with network connectivity to an AWS VPC

Clone GitHub repository

Clone the GitHub repository containing the configuration files and resources.

$ git clone https://github.com/hashicorp/learn-consul-terraform.git

Or, if you prefer SSH:

$ git clone git@github.com:hashicorp/learn-consul-terraform.git

Change into the directory with the newly cloned repository. This directory contains the complete configuration files.

$ cd learn-consul-terraform/

Check out the v0.5 tag of the repository.

$ git checkout v0.5

Change directory to the ECS module.

$ cd datacenter-deploy-ecs-hcp-existing/

Configure AWS authentication

Configure AWS credentials for your environment so that Terraform can authenticate with AWS and create resources.

To do this with IAM user authentication, set your AWS access key ID as an environment variable.

$ export AWS_ACCESS_KEY_ID="<YOUR_AWS_ACCESS_KEY_ID>"

Now set your secret key.

$ export AWS_SECRET_ACCESS_KEY="<YOUR_AWS_SECRET_ACCESS_KEY>"

Tip: If you don't have access to IAM user credentials, use another authentication method described in the AWS provider documentation.

Create a Terraform configuration file for your environment

Terraform will utilize your unique credentials to build an example application in Elastic Container Service (ECS) and integrate with your existing HCP Consul cluster.

Create a file named terraform.tfvars in your working directory and copy the following configuration into the file.

vpc_id                = "YOUR_VPC_ID_HERE"
user_public_ip        = "YOUR_PUBLIC_IP_HERE"
region                = "AWS_REGION_OF_YOUR_HCP_CLUSTER"
name                  = "ANY_CUSTOM_NAME_HERE"
consul_cluster_addr   = "YOUR_HCP_CONSUL_CLUSTER_IP_HERE"
consul_datacenter     = "YOUR_CONSUL_DATACENTER_NAME_HERE"
consul_acl_token      = "YOUR_CONSUL_ACL_TOKEN_HERE"
consul_gossip_key     = "YOUR_CONSUL_GOSSIP_KEY_HERE"
consul_client_ca_path = "THE_PATH_TO_YOUR_CA_PEM_KEY"
private_subnets_ids   = ["YOUR_FIRST_PRIVATE_SUBNET_HERE", "YOUR_SECOND_PRIVATE_SUBNET_HERE"]
public_subnets_ids    = ["YOUR_FIRST_PUBLIC_SUBNET_HERE", "YOUR_SECOND_PUBLIC_SUBNET_HERE"]

Replace the placeholders with your values and save the file.

To get the consul_gossip_key and consul_client_ca_path values, go to the Consul cluster Overview page, click Access Consul, and then click the download option to install client agents. The downloaded ZIP file contains the ca.pem file and the client_config.json file.

In the client_config.json file, the alphanumeric value of the "encrypt" field is the consul_gossip_key. The consul_datacenter is your Consul cluster name.

The consul_acl_token can be found by clicking the Access Consul button and then the Generate admin token link. To get the consul_cluster_addr, click the Private tab under Access Consul and copy the address.

Note: Ensure the Consul client CA ca.pem file is available locally and that the path to the file is correctly provided as the consul_client_ca_path variable input value.
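
To pull the gossip key out of client_config.json from the command line, you can parse the JSON directly. A sketch using python3, assuming the field is named "encrypt" (the standard Consul agent configuration key), shown against a stand-in file so it does not touch your real download:

```shell
# Stand-in config file; point the extraction at your real client_config.json instead.
cat > sample_client_config.json <<'EOF'
{"encrypt": "EXAMPLE_GOSSIP_KEY==", "datacenter": "dc1"}
EOF

# Extract the gossip key with python3 (avoids a jq dependency).
gossip_key=$(python3 -c 'import json,sys; print(json.load(sys.stdin)["encrypt"])' \
  < sample_client_config.json)
echo "$gossip_key"   # prints EXAMPLE_GOSSIP_KEY==
```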

Image of the client config payload download option

To learn more about each of the Terraform attributes, see the respective resource documentation in the Terraform registry.

Note: By default, secrets created by AWS Secrets Manager require 30 days before they can be deleted. If this tutorial is destroyed and recreated, a name conflict error will occur for these secrets. This can be resolved by changing the value of name in your terraform.tfvars file.

Explore the Terraform manifest files

The Terraform manifest files used in this tutorial deploy various resources that enable your fully managed service mesh ecosystem. Below is the purpose of each Terraform manifest file.

  • data.tf - Data sources that allow Terraform to use information defined outside of Terraform.
  • ecs-clusters.tf - AWS ECS cluster deployment resources.
  • ecs-services.tf - AWS ECS service deployment resources.
  • load-balancer.tf - AWS Application Load Balancer (ALB) deployment resources.
  • logging.tf - AWS Cloudwatch logging configuration.
  • modules.tf - AWS ECS task application definitions.
  • outputs.tf - Unique values output after Terraform successfully completes a deployment.
  • providers.tf - AWS provider definitions for Terraform.
  • secrets-manager.tf - AWS Secrets Manager configuration.
  • security-groups.tf - AWS Security Group port management definitions.
  • variables.tf - Parameter definitions used to customize unique user environment attributes.
  • terraform.tfvars - Your unique credentials and environment attributes (created in the previous step).

Deploy the ECS environment

With the Terraform manifest files and your custom credentials file, you are now ready to deploy your infrastructure.

Issue the terraform init command from your working directory to download the necessary providers and initialize the backend.

$ terraform init

Initializing the backend...

Initializing provider plugins...
...

Terraform has been successfully initialized!
...

Once Terraform has been initialized, you can verify the resources that will be created using the plan command.

$ terraform plan

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:
...

Finally, you can deploy the resources using the apply command.

$ terraform apply

...
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:

Remember to confirm the run by entering yes.

Once you confirm, it will take a few minutes to complete the deployment. Terraform will print the following output if the deployment is successful.

Apply complete! Resources: 37 added, 0 changed, 0 destroyed.

Note: The deployment could take up to 5 minutes to complete. Feel free to grab a cup of coffee while waiting for the ECS cluster to complete initialization, or learn more about the Raft protocol in a fun, interactive way.

Perform testing procedures

Access the Consul UI

HCP Consul is secure by default, so you will need an ACL token to view any data in the UI. Navigate to the Consul UI, select the login option in the top right corner, then paste the ACL token for your Consul cluster. This will authorize you to interact with the Consul UI.

HCP Consul Cluster

Open the menu item labeled Services on the left of the screen. Notice the informational text "in service mesh with proxy" on each ECS service.

HCP Consul Nodes in Service Mesh Image

Explore the sample application

One of the ECS service tasks defined in this environment deploys the application fake-service, a Consul client agent, and an Envoy sidecar proxy in your ECS cluster.

Visit the unique client_lb_address URL that was output by Terraform after your run to see the deployed fake-service application.

Apply complete! Resources: 37 added, 0 changed, 0 destroyed.

Outputs:

client_lb_address = "http://hcp-ecs-learn-example-client-app-1546746700.us-east-1.elb.amazonaws.com:9090/ui"

HCP Consul Nodes in Service Mesh Image

Notice the lack of communication between the two services. This is due to the deny-by-default service mesh communication behavior.

Enable service mesh networking

Consul Intentions are used to control which services may establish connections or make requests.

Open the menu item labeled Intentions on the left of the screen. Click the Create button in the top right to create an intention.

Consul UI - Intentions Main Page

Set the source service as hcp-ecs-example-client-app, the destination service as hcp-ecs-example-server-app, both namespace fields as default, and communication behavior to Allow. Click the Save button in the bottom left once complete.

Consul UI - Intentions Details Page

Revisit your fake-service application at the unique client_lb_address URL that was output by Terraform after your run. Notice that the two services are now able to communicate with each other.

Fake Service Working

You have successfully deployed a serverless environment across ECS and HCP using Terraform. Within this environment, you enabled service mesh communication with Consul intentions.

Destroy resources

Use the terraform destroy command to clean up the resources you created.

$ terraform destroy

...
Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value:

Remember to confirm by entering yes.

Once you confirm, it will take a few minutes to complete the removal. Terraform will print the following output if the command is successful.

Destroy complete! Resources: 37 destroyed.

Next steps

In this tutorial you learned how to deploy a fully-managed service mesh with AWS Elastic Container Service (ECS) and HashiCorp Cloud Platform (HCP) with Terraform. With Terraform, you accomplished this task using the Terraform HCP and AWS providers. You also learned how to enable service mesh communication between services using Consul intentions.

You can find the full documentation for the HashiCorp Cloud Platform and AWS providers in the Terraform registry documentation.

To get additional hands-on experience with Consul's service discovery and service mesh features, you can follow these guides to connect a Consul client deployed in a virtual machine or on Elastic Kubernetes Service (EKS).

If you encounter any issues, please contact the HCP team at support.hashicorp.com.
