Configure EC2 as a Consul Client for HCP Consul

  • 14min
  • HCP
  • Consul
  • Terraform

HashiCorp Cloud Platform (HCP) Consul is a fully managed Service Mesh as a Service (SMaaS) version of Consul. After you deploy an HCP Consul server cluster, you must deploy Consul clients into your network so you can leverage Consul's full feature set including service mesh and service discovery. HCP Consul supports Consul clients running on EKS, EC2, and ECS workloads.

HCP Consul cluster connecting to a Consul Client on a peered AWS VPC

In this tutorial, you will deploy and provision a Consul client running on an EC2 instance that connects to your HCP Consul cluster. In the process, you will review the provisioning script to better understand the steps required to properly configure an EC2 instance to connect and interact with an HCP Consul cluster.

Prerequisites

For this tutorial, you will need:

  • The Terraform 0.14+ CLI installed locally
  • An HCP account configured for use with Terraform
  • An AWS account with AWS credentials configured for use with Terraform

Clone example repository

In your terminal, clone the example repository. This repository contains Terraform configuration to deploy different types of Consul clusters, including the one you will need in this tutorial.

$ git clone https://github.com/hashicorp/learn-consul-terraform

Navigate to the project directory in the cloned repository.

$ cd learn-consul-terraform/datacenter-deploy-ec2-hcp

Review configuration

The project directory contains two sub-directories:

  1. The 1-vpc-hcp subdirectory contains Terraform configuration to deploy an AWS VPC and underlying networking resources, an HCP HashiCorp Virtual Network (HVN), and an HCP Consul cluster. In addition, it uses the hashicorp/hcp-consul/aws Terraform module to set up all the networking rules that allow a Consul client to communicate with the HCP Consul servers. This includes setting up the peering connection between the HVN and your VPC, setting up the HCP routes, and creating AWS ingress rules.

    datacenter-deploy-ec2-hcp/1-vpc-hcp/main.tf
    module "aws_hcp_consul" {
      source  = "hashicorp/hcp-consul/aws"
      version = "~> 0.7.0"
    
      hvn             = hcp_hvn.main
      vpc_id          = module.vpc.vpc_id
      subnet_ids      = module.vpc.public_subnets
      route_table_ids = module.vpc.public_route_table_ids
    }
    

    Note: The hashicorp/hcp-consul/aws Terraform module creates a security group that allows TCP/UDP ingress traffic on port 8301 and allows all egress. The egress security rule lets the EC2 instance download dependencies required for the Consul client, including the Consul binary and Docker.

  2. The 2-ec2-consul-client subdirectory contains Terraform configuration that creates an AWS key pair and deploys an EC2 instance. The EC2 instance uses a cloud-init script to automate the Consul client configuration. In the Review Consul client configuration for EC2 section, you will review the automation scripts in more detail.

This tutorial intentionally separates the Terraform configuration into two discrete steps. This process reflects Terraform best practices. By dividing the HCP Consul cluster management from the Consul client management, you can separate the duties and reduce the blast radius.
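
At this point, your copy of the project directory should contain both subdirectories described above. Your listing may include additional files.

$ ls
1-vpc-hcp    2-ec2-consul-client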

Deploy HCP Consul

Navigate to the 1-vpc-hcp directory.

$ cd 1-vpc-hcp

Initialize the Terraform configuration.

$ terraform init
Initializing modules...
Downloading registry.terraform.io/hashicorp/hcp-consul/aws 0.7.1 for aws_hcp_consul...
- aws_hcp_consul in .terraform/modules/aws_hcp_consul
Downloading registry.terraform.io/terraform-aws-modules/vpc/aws 3.10.0 for vpc...
- vpc in .terraform/modules/vpc

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/hcp from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Installing hashicorp/hcp v0.29.0...
- Installed hashicorp/hcp v0.29.0 (signed by HashiCorp)
- Installing hashicorp/aws v3.75.2...
- Installed hashicorp/aws v3.75.2 (signed by HashiCorp)

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Next, apply the configuration. Respond yes to the prompt to confirm.

$ terraform apply

## ...

Plan: 23 to add, 0 to change, 0 to destroy.

Outputs:

hcp_consul_cluster_id = "learn-hcp-consul-ec2-client"
hcp_consul_security_group = "sg-007d371f114a2f553"
subnet_id = "subnet-0a23cb052f4960d79"
vpc_cidr_block = "10.0.0.0/16"
vpc_id = "vpc-0b1d246078d615afc"

Notice that Terraform displays the outputs created from the apply.
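
You can re-display these outputs at any time with terraform output. The -raw flag prints a single value without surrounding quotes, which the next section relies on. For example:

$ terraform output -raw vpc_id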

Create terraform.tfvars file for Consul client directory

Since you created the underlying infrastructure with Terraform, you can use the outputs to help you deploy the Consul clients on an EC2 instance.

Create a terraform.tfvars file in the 2-ec2-consul-client directory with the Terraform outputs from this project.

$ echo "vpc_id=\"$(terraform output -raw vpc_id)\"
vpc_cidr_block=\"$(terraform output -raw vpc_cidr_block)\"
subnet_id=\"$(terraform output -raw subnet_id)\"
cluster_id=\"$(terraform output -raw hcp_consul_cluster_id)\"
hcp_consul_security_group_id=\"$(terraform output -raw hcp_consul_security_group)\"" > ../2-ec2-consul-client/terraform.tfvars
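
Confirm that the file was generated with the expected values before moving on.

$ cat ../2-ec2-consul-client/terraform.tfvars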

Review Consul client configuration for EC2

Navigate to the 2-ec2-consul-client directory.

$ cd ../2-ec2-consul-client


Review Terraform configuration

Open main.tf. This Terraform configuration creates an AWS key pair, a security group, and an EC2 instance. The EC2 instance uses a cloud-init script to automate the Consul client configuration so it can connect to your HCP Consul cluster. The AWS key pair and security group let you SSH into your EC2 instance.

Notice that the Terraform configuration uses data sources to retrieve information about your AWS and HCP resources.

datacenter-deploy-ec2-hcp/2-ec2-consul-client/main.tf
data "aws_vpc" "selected" {
  id = var.vpc_id
}

data "aws_subnet" "selected" {
  id = var.subnet_id
}

data "hcp_hvn" "selected" {
  hvn_id = data.hcp_consul_cluster.selected.hvn_id
}

data "hcp_consul_cluster" "selected" {
  cluster_id = var.cluster_id
}

The aws_instance.consul_client resource defines your EC2 instance that will serve as a Consul client. Notice the following attributes:

  • The count attribute lets you easily scale the number of Consul clients running on EC2 instances. The cloud-init script lets you automatically configure the EC2 instance to connect to your HCP Consul cluster.
  • The vpc_security_group_ids attribute references a security group that allows TCP/UDP ingress traffic on port 8301 and allows all egress. The ingress traffic lets the HCP Consul server cluster communicate with your Consul clients. The egress traffic lets you download the dependencies required for a Consul client, including the Consul binary.
  • The key_name attribute references a key pair that will let you SSH into the EC2 instance.
  • The user_data attribute references the scripts/user_data.sh and scripts/setup.sh automation scripts that configure and set up a Consul client on your EC2 instance. Notice that the automation scripts reference the HCP Consul cluster's CA certificate, configuration file, and ACL token. These values are crucial for the Consul client to securely connect to your HCP Consul cluster.
datacenter-deploy-ec2-hcp/2-ec2-consul-client/main.tf
resource "aws_instance" "consul_client" {
  count                       = 1
  ami                         = data.aws_ami.ubuntu.id
  instance_type               = "t2.small"
  associate_public_ip_address = true
  subnet_id                   = var.subnet_id
  vpc_security_group_ids = [
    var.hcp_consul_security_group_id,
    aws_security_group.allow_ssh.id
  ]
  key_name = aws_key_pair.consul_client.key_name

  user_data = templatefile("${path.module}/scripts/user_data.sh", {
    setup = base64gzip(templatefile("${path.module}/scripts/setup.sh", {
      consul_ca        = data.hcp_consul_cluster.selected.consul_ca_file
      consul_config    = data.hcp_consul_cluster.selected.consul_config_file
      consul_acl_token = hcp_consul_cluster_root_token.token.secret_id,
      consul_version   = data.hcp_consul_cluster.selected.consul_version,
      consul_service = base64encode(templatefile("${path.module}/scripts/service", {
        service_name = "consul",
        service_cmd  = "/usr/bin/consul agent -data-dir /var/consul -config-dir=/etc/consul.d/",
      })),
      vpc_cidr = var.vpc_cidr_block
    })),
  })

  tags = {
    Name = "hcp-consul-client-${count.index}"
  }
}

Review client configuration files

The client configuration file contains information that lets your Consul client connect to your specific HCP Consul cluster.

The Terraform configuration file retrieves the values directly from the HCP Consul data source.

datacenter-deploy-ec2-hcp/2-ec2-consul-client/main.tf
resource "aws_instance" "consul_client" {
  ## ...
  user_data = templatefile("${path.module}/scripts/user_data.sh", {
    setup = base64gzip(templatefile("${path.module}/scripts/setup.sh", {
      consul_ca        = data.hcp_consul_cluster.selected.consul_ca_file
      consul_config    = data.hcp_consul_cluster.selected.consul_config_file
      consul_acl_token = hcp_consul_cluster_root_token.token.secret_id,
      ## ...
    })),
  })
  ## ...
}

You can also retrieve the client configuration files from the HCP Portal UI. First, navigate to your HCP Consul cluster. Click Access Consul, then click Download to install Client Agents.

HCP Consul dashboard that shows how to download the client configuration files

This downloads a compressed file that contains the following client configuration files:

  • A ca.pem file that lets the Consul client authenticate to the HCP Consul cluster via mTLS.
  • A client_config.json file that contains information that lets your Consul client connect to your specific HCP Consul cluster.
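
After downloading, extract the archive to review the files locally. The archive name below is a placeholder; use the name of the file you actually downloaded.

$ unzip client_config_bundle.zip -d ./client-config
$ ls ./client-config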

The following is a sample client configuration file.

client_config.json
{
  "acl": {
    "enabled": true,
    "down_policy": "async-cache",
    "default_policy": "deny"
  },
  "ca_file": "./ca.pem",
  "verify_outgoing": true,
  "datacenter": "dc1",
  "encrypt": "GOSSIP_ENCRYPTION_KEY",
  "server": false,
  "log_level": "INFO",
  "ui": true,
  "retry_join": ["CONSUL_CLUSTER_PRIVATE_ENDPOINT"],
  "auto_encrypt": {
    "tls": true
  }
}

Notice these attributes in the client configuration file:

  • The acl.enabled setting is set to true which ensures that only requests with a valid token will be able to access resources in the datacenter. To add your client, you will need to configure an agent token. The automation script automatically configures this.
  • The ca_file setting references the ca.pem file. The automation script will update this path to /etc/consul.d/ca.pem.
  • The encrypt setting is set to your Consul cluster's gossip encryption key. Do not modify the encryption key that is provided for you in this file.
  • The retry_join is configured with the private endpoint address of your HCP Consul cluster's API. This is the address that your client will use to interact with the servers running in the HCP Consul cluster. Do not modify the value that is provided for you in this file.
  • The auto_encrypt.tls setting is set to true to ensure transport layer security is enforced on all traffic with and between Consul agents.
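
To quickly inspect these values in a downloaded client_config.json, you can use jq, the same tool the setup script uses, assuming the file is in your working directory:

$ jq '{datacenter, retry_join, encrypt}' client_config.json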

Review provisioning scripts

The 2-ec2-consul-client/scripts directory contains all the automation scripts.

  1. The user_data.sh file serves as an entrypoint. It loads, configures, and runs setup.sh.
  2. The service file is a template for a systemd service. This lets the Consul client run as a daemon (background) service on the EC2 instance. It will also automatically restart the Consul client if it fails.
  3. The setup.sh file contains the core logic to configure the Consul client. First, the script sets up container networking (setup_networking), then downloads the Consul binary and Docker (setup_deps).

The setup_consul function creates the /etc/consul.d and /var/consul directories, which serve as the Consul configuration and data directories respectively.

datacenter-deploy-ec2-hcp/2-ec2-consul-client/scripts/setup.sh
setup_consul() {
  mkdir --parents /etc/consul.d /var/consul
  chown --recursive consul:consul /etc/consul.d
  chown --recursive consul:consul /var/consul

  ## …
}
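
If you SSH into the instance after provisioning, you can confirm the directories and their ownership with a quick check (not part of the script):

$ ls -ld /etc/consul.d /var/consul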

The EC2 instance Terraform configuration defines the Consul configuration and data directory paths.

datacenter-deploy-ec2-hcp/2-ec2-consul-client/main.tf
resource "aws_instance" "consul_client" {
  ## ...
  user_data = templatefile("${path.module}/scripts/user_data.sh", {
    setup = base64gzip(templatefile("${path.module}/scripts/setup.sh", {
      ## ...
      consul_service = base64encode(templatefile("${path.module}/scripts/service", {
        service_name = "consul",
        service_cmd  = "/usr/bin/consul agent -data-dir /var/consul -config-dir=/etc/consul.d/",
      })),
      ## ...
    })),
  })
  ## ...
}

Next, the setup_consul function configures the CA file and client configuration file and moves them into their respective destinations in /etc/consul.d. Notice that the script updates the client configuration's ca_file path, ACL token, ports, and bind address.

datacenter-deploy-ec2-hcp/2-ec2-consul-client/scripts/setup.sh
setup_consul() {
  ## …

  echo "${consul_ca}" | base64 -d >/etc/consul.d/ca.pem
  echo "${consul_config}" | base64 -d >client.temp.0
  ip=$(hostname -I | awk '{print $1}')
  jq '.ca_file = "/etc/consul.d/ca.pem"' client.temp.0 >client.temp.1
  jq --arg token "${consul_acl_token}" '.acl += {"tokens":{"agent":"\($token)"}}' client.temp.1 >client.temp.2
  jq '.ports = {"grpc":8502}' client.temp.2 >client.temp.3
  jq '.bind_addr = "{{ GetPrivateInterfaces | include \"network\" \"'${vpc_cidr}'\" | attr \"address\" }}"' client.temp.3 >/etc/consul.d/client.json

}
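
After the script writes /etc/consul.d/client.json, you can optionally sanity-check the rendered configuration on the instance. This assumes the consul binary has already been installed by setup_deps:

$ consul validate /etc/consul.d/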

Finally, the setup.sh file enables and starts the Consul service.
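
The equivalent manual steps look like the following sketch, assuming the unit was installed as consul per the service template above:

$ sudo systemctl enable consul
$ sudo systemctl start consul
$ sudo systemctl status consul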

Create SSH key

Create an SSH key so that you can securely connect to your EC2 instance.

Generate a new SSH key named consul-client. The argument provided with the -f flag creates the key in the current directory, producing two files called consul-client and consul-client.pub. Change the placeholder email address to your email address.

$ ssh-keygen -t rsa -C "your_email@example.com" -f ./consul-client

When prompted, press enter to leave the passphrase blank on this key.

If you are on a Windows machine, use PuTTYgen to generate SSH keys.

Deploy Consul client on EC2

Find the terraform.tfvars file. This file contains information about your VPC and HCP deployment and should look like the following.

datacenter-deploy-ec2-hcp/2-ec2-consul-client/terraform.tfvars
vpc_id="vpc-0b1d246078d615afc"
vpc_cidr_block="10.0.0.0/16"
subnet_id="subnet-0a23cb052f4960d79"
cluster_id="learn-hcp-consul-ec2-client"
hcp_consul_security_group_id="sg-007d371f114a2f553"

If you do not have this file, create a file named terraform.tfvars with the following content.

datacenter-deploy-ec2-hcp/2-ec2-consul-client/terraform.tfvars
vpc_id="VPC_ID"
vpc_cidr_block="CIDR_BLOCK"
subnet_id="SUBNET_ID"
cluster_id="HCP_CLUSTER_ID"
hcp_consul_security_group_id="HCP_CONSUL_SG_ID"

Then, update your terraform.tfvars file to reference the appropriate values for your deployment. The example file at the beginning of this section shows sample values – your values should look similar.

Initialize the Terraform configuration.

$ terraform init
Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/hcp from the dependency lock file
- Installing hashicorp/aws v3.75.2...
- Installed hashicorp/aws v3.75.2 (signed by HashiCorp)
- Installing hashicorp/hcp v0.29.0...
- Installed hashicorp/hcp v0.29.0 (signed by HashiCorp)

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Next, apply the configuration. Respond yes to the prompt to confirm.

$ terraform apply

## ...

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Outputs:

consul_root_token = <sensitive>
consul_url = "https://learn-hcp-consul-ec2-client.consul.98a0dcc3-5473-4e4d-a28e-6c343c498530.aws.hashicorp.cloud"
ec2_client = "34.211.17.208"
next_steps = "Hashicups Application will be ready in ~2 minutes. Use 'terraform output consul_root_token' to retrieve the root token."

Verify Consul client

Now that you have deployed the Consul clients on an EC2 instance, you will verify that you have a Consul deployment with at least 1 server and 1 client.

Retrieve your HCP Consul dashboard URL and open it in your browser.

$ terraform output -raw consul_url
https://learn-hcp-consul-ec2-client.consul.98a0dcc3-5473-4e4d-a28e-6c343c498530.aws.hashicorp.cloud

Next, retrieve your Consul root token. You will use this token to authenticate your Consul dashboard.

$ terraform output -raw consul_root_token
00000000-0000-0000-0000-000000000000

In your HCP Consul dashboard, sign in with the root token you just retrieved. After you sign in, click on Nodes to find the Consul client.

Consul dashboard showing the Consul client running on the EC2 instance

Note: If your Consul client is unable to connect to your HCP Consul server cluster, verify that your VPC, HVN, peering connection, and routes are configured correctly. Refer to the example repository for each resource's configuration.
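
For example, you can check the status of the VPC peering connection with the AWS CLI. This is an optional check, assuming your AWS credentials and region are configured; a healthy connection reports active:

$ aws ec2 describe-vpc-peering-connections --query 'VpcPeeringConnections[].Status.Code'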

You can also SSH into your EC2 instance to verify that it is running the Consul client and connected to your HCP Consul cluster.

First, SSH into your EC2 instance.

$ ssh ubuntu@$(terraform output -raw ec2_client) -i ./consul-client

Then, view the members in your Consul datacenter. Replace ACL_TOKEN with the Consul root token (consul_root_token output). Notice that the command returns both the HCP Consul server nodes and client nodes.

$ consul members -token ACL_TOKEN
Node             Address            Status  Type    Build       Protocol  DC                           Partition  Segment
ip-172-25-33-11  172.25.33.11:8301  alive   server  1.11.5+ent  2         learn-hcp-consul-ec2-client  default    <all>
ip-10-0-1-114    10.0.1.114:8301    alive   client  1.11.5+ent  2         learn-hcp-consul-ec2-client  default    <default>
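
If the client node is missing from the output, inspect the Consul agent logs on the instance. This assumes the systemd unit is named consul, matching the service template:

$ journalctl -u consul --no-pager | tail -n 20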

Next steps

In this tutorial, you deployed a Consul client and connected it to your HCP Consul cluster. To learn more about Consul's features, and for step-by-step examples of how to perform common Consul tasks, complete one of the Get Started with Consul tutorials.

  • Register a Service with Consul Service Discovery
  • Secure Applications with Service Sidecar Proxies
  • Explore the Consul UI
  • Create a Consul service mesh on HCP using Envoy as a sidecar proxy

If you encounter any issues, please contact the HCP team at support.hashicorp.com.
