
Deploy HCP Consul

  • 17min
  • HCP
  • Consul
  • Terraform

HashiCorp Cloud Platform (HCP) Consul lets you start using Consul for service discovery and service mesh with less setup time. It does this by providing fully managed Consul servers. Consul offers service discovery, service mesh, traffic management, and automated updates to network infrastructure devices.

In this tutorial, you will deploy an HCP Consul server cluster, your choice of Kubernetes or virtual machine Consul clients, and a demo application. Then, you will explore how the demo application leverages Consul service mesh and interact with Consul using the CLI and UI.

In the following tutorials, you will interact with intentions to learn how to control service access within the service mesh, and route traffic using service resolvers and service splitters.

Prerequisites

The tutorial assumes that you are familiar with the standard Consul workflow. If you're new to Consul itself, refer first to the Getting Started tutorials for Kubernetes or virtual machines (VMs).

While you can deploy an HCP Consul server and connect Consul clients in your cloud environments manually, this tutorial uses a Terraform quickstart configuration to significantly reduce deployment time.

You do not need to be an expert with Terraform to complete this tutorial or use this quickstart template.

For this tutorial, you will need:

  • Terraform v1.0.0+ installed
  • Git installed
  • An HCP account configured for use with Terraform (see the example exports after this list)
  • An AWS account with AWS credentials configured for use with Terraform
  • The AWS CLI (awscli) v2.7.31+ configured
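
One common way to provide these credentials is through environment variables. For example, assuming you have already created an HCP service principal and an AWS access key, exports similar to the following would work (shown for illustration; any authentication method supported by the hcp and aws Terraform providers is fine):

$ export HCP_CLIENT_ID="<your-service-principal-client-id>"
$ export HCP_CLIENT_SECRET="<your-service-principal-client-secret>"
$ export AWS_ACCESS_KEY_ID="<your-aws-access-key-id>"
$ export AWS_SECRET_ACCESS_KEY="<your-aws-secret-access-key>"
$ export AWS_REGION="us-west-2"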

Tip: This Get Started with HCP collection currently only supports HCP Consul on AWS. Visit the Deploy HCP Consul with Azure VM using Terraform and Deploy HCP Consul with AKS using Terraform tutorials to learn how to deploy HCP Consul on Azure.

Retrieve end-to-end Terraform configuration

The HCP Portal has a quickstart template that deploys an end-to-end development environment so you can quickly observe HCP Consul in action. This Terraform configuration:

  1. Creates a new HashiCorp Virtual Network (HVN) and a single-node Consul development server
  2. Connects the HVN with your AWS virtual private cloud (VPC)
  3. Provisions your chosen runtime (an Amazon EKS cluster or an EC2 instance) and installs Consul clients
  4. Deploys HashiCups, a demo application that uses Consul service mesh

Note: These modules are intended for demonstration purposes only. While the Consul clients are deployed secure-by-default, the module exposes only a limited set of configuration options to help you get started quickly.

The following steps describe this workflow in more detail.

This architectural diagram shows a standard HCP Consul cluster peered to a virtual network (for example, an AWS VPC or an Azure VNet). The virtual network has Consul clients and services on the service mesh.

Step 1: Create an HVN and an HCP Consul cluster

The HashiCorp Virtual Network (HVN) is a fundamental abstraction that makes HCP networking possible. An HVN lets you delegate an IPv4 CIDR range to HCP, which HashiCorp then uses to automatically create resources in your cloud network. When you create an HCP Consul cluster, you must specify an HVN to deploy it to.
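
For reference, the end-to-end configuration you will download later in this tutorial creates the HVN and the Consul cluster with resources like the following (excerpted from the generated main.tf; your IDs, region, and CIDR block come from the locals that the HCP Portal generates):

resource "hcp_hvn" "main" {
  hvn_id         = local.hvn_id
  cloud_provider = "aws"
  region         = local.hvn_region
  cidr_block     = "172.25.32.0/20"
}

resource "hcp_consul_cluster" "main" {
  cluster_id      = local.cluster_id
  hvn_id          = hcp_hvn.main.hvn_id
  public_endpoint = true
  tier            = "development"
}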

Step 2: Create a virtual network

After you create an HVN and an HCP Consul cluster, you must create a virtual network (for example, an AWS VPC or Azure VNet) to host your workloads. You can create this network with Terraform or manually, but it must exist before you deploy your Consul clients and services to it.
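
In the quickstart configuration, the public terraform-aws-modules/vpc/aws module creates this network for you (abbreviated excerpt from the EKS-runtime main.tf shown later; the VM runtime uses a similar module call with public subnets only):

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.78.0"

  name            = "${local.cluster_id}-vpc"
  cidr            = "10.0.0.0/16"
  azs             = data.aws_availability_zones.available.names
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  private_subnets = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
  # ...
}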

Step 3: Connect HVN and virtual network

After you create an HVN and a virtual network, you must set up a peering connection between the two networks and update the corresponding route tables. This allows traffic to travel between your HCP Consul cluster (HVN) and Consul clients (hosted on your virtual network).
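
In the quickstart configuration, the hashicorp/hcp-consul/aws module creates the peering connection and updates the route tables for you (excerpt from the EKS-runtime main.tf shown later; the VM runtime passes its public subnets and route tables instead):

module "aws_hcp_consul" {
  source  = "hashicorp/hcp-consul/aws"
  version = "~> 0.8.9"

  hvn             = hcp_hvn.main
  vpc_id          = module.vpc.vpc_id
  subnet_ids      = module.vpc.private_subnets
  route_table_ids = module.vpc.private_route_table_ids
}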

Step 4: Deploy Consul clients

After you set up the peering connection, you are ready to deploy Consul clients onto your peered virtual network. You can deploy the Consul clients on multiple workloads – including compute instances (AWS EC2 or Azure VMs), Kubernetes clusters (AWS EKS or Azure AKS), or container platforms (AWS ECS). Your HCP Consul cluster has files that contain your Consul client configuration. This lets the Consul clients successfully connect and retrieve information from your HCP Consul cluster.
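
The quickstart configuration reads this client configuration directly from the hcp_consul_cluster resource. For example, the EKS client module shown later derives the CA certificate, join addresses, and gossip encryption key from the cluster's base64-encoded configuration attributes:

consul_ca_file        = base64decode(hcp_consul_cluster.main.consul_ca_file)
consul_hosts          = jsondecode(base64decode(hcp_consul_cluster.main.consul_config_file))["retry_join"]
gossip_encryption_key = jsondecode(base64decode(hcp_consul_cluster.main.consul_config_file))["encrypt"]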

Step 5: Deploy and run applications

Now that you have set up an HCP Consul cluster and deployed Consul clients onto your peered virtual network, you can deploy your applications and services and leverage HCP Consul's full capabilities.
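
In the quickstart configuration, a small module deploys the HashiCups demo application onto the mesh once the clients are ready (EKS-runtime excerpt; on the VM runtime the EC2 client module installs HashiCups when install_demo_app is set):

module "demo_app" {
  count   = local.install_demo_app ? 1 : 0
  source  = "hashicorp/hcp-consul/aws//modules/k8s-demo-app"
  version = "~> 0.8.9"

  depends_on = [module.eks_consul_client]
}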

To retrieve the end-to-end Terraform configuration, visit the HCP Portal, select Consul, then click Create Cluster.

Select AWS, then select the Terraform Automation creation method.

Select your runtime and scroll to the bottom to find the generated Terraform code.

Click on Copy code to copy it to your clipboard and save it in a file named main.tf.

Note: If you selected the Kubernetes (EKS) runtime, the content should resemble the example below. This example is not guaranteed to be up to date. Always refer to the Terraform configuration presented in the HCP Portal.

main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.43"
    }

    hcp = {
      source  = "hashicorp/hcp"
      version = ">= 0.18.0"
    }

    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.4.1"
    }

    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.3.0"
    }

    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.11.3"
    }
  }

}

provider "aws" {
  region = local.vpc_region
}

provider "helm" {
  kubernetes {
    host                   = local.install_eks_cluster ? data.aws_eks_cluster.cluster[0].endpoint : ""
    cluster_ca_certificate = local.install_eks_cluster ? base64decode(data.aws_eks_cluster.cluster[0].certificate_authority.0.data) : ""
    token                  = local.install_eks_cluster ? data.aws_eks_cluster_auth.cluster[0].token : ""
  }
}

provider "kubernetes" {
  host                   = local.install_eks_cluster ? data.aws_eks_cluster.cluster[0].endpoint : ""
  cluster_ca_certificate = local.install_eks_cluster ? base64decode(data.aws_eks_cluster.cluster[0].certificate_authority.0.data) : ""
  token                  = local.install_eks_cluster ? data.aws_eks_cluster_auth.cluster[0].token : ""
}

provider "kubectl" {
  host                   = local.install_eks_cluster ? data.aws_eks_cluster.cluster[0].endpoint : ""
  cluster_ca_certificate = local.install_eks_cluster ? base64decode(data.aws_eks_cluster.cluster[0].certificate_authority.0.data) : ""
  token                  = local.install_eks_cluster ? data.aws_eks_cluster_auth.cluster[0].token : ""
  load_config_file       = false
}
data "aws_availability_zones" "available" {
  filter {
    name   = "zone-type"
    values = ["availability-zone"]
  }
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.78.0"

  name                 = "${local.cluster_id}-vpc"
  cidr                 = "10.0.0.0/16"
  azs                  = data.aws_availability_zones.available.names
  public_subnets       = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  private_subnets      = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true
}

data "aws_eks_cluster" "cluster" {
  count = local.install_eks_cluster ? 1 : 0
  name  = module.eks[0].cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  count = local.install_eks_cluster ? 1 : 0
  name  = module.eks[0].cluster_id
}

module "eks" {
  count                  = local.install_eks_cluster ? 1 : 0
  source                 = "terraform-aws-modules/eks/aws"
  version                = "17.24.0"
  kubeconfig_api_version = "client.authentication.k8s.io/v1beta1"

  cluster_name    = "${local.cluster_id}-eks"
  cluster_version = "1.21"
  subnets         = module.vpc.private_subnets
  vpc_id          = module.vpc.vpc_id

  manage_aws_auth = false

  node_groups = {
    application = {
      name_prefix      = "hashicups"
      instance_types   = ["t3a.medium"]
      desired_capacity = 3
      max_capacity     = 3
      min_capacity     = 3
    }
  }
}

# The HVN created in HCP
resource "hcp_hvn" "main" {
  hvn_id         = local.hvn_id
  cloud_provider = "aws"
  region         = local.hvn_region
  cidr_block     = "172.25.32.0/20"
}

module "aws_hcp_consul" {
  source  = "hashicorp/hcp-consul/aws"
  version = "~> 0.8.9"

  hvn                = hcp_hvn.main
  vpc_id             = module.vpc.vpc_id
  subnet_ids         = module.vpc.private_subnets
  route_table_ids    = module.vpc.private_route_table_ids
  security_group_ids = local.install_eks_cluster ? [module.eks[0].cluster_primary_security_group_id] : [""]
}

resource "hcp_consul_cluster" "main" {
  cluster_id      = local.cluster_id
  hvn_id          = hcp_hvn.main.hvn_id
  public_endpoint = true
  tier            = "development"
}

resource "hcp_consul_cluster_root_token" "token" {
  cluster_id = hcp_consul_cluster.main.id
}

module "eks_consul_client" {
  source  = "hashicorp/hcp-consul/aws//modules/hcp-eks-client"
  version = "~> 0.8.9"

  boostrap_acl_token    = hcp_consul_cluster_root_token.token.secret_id
  cluster_id            = hcp_consul_cluster.main.cluster_id
  consul_ca_file        = base64decode(hcp_consul_cluster.main.consul_ca_file)
  consul_hosts          = jsondecode(base64decode(hcp_consul_cluster.main.consul_config_file))["retry_join"]
  consul_version        = hcp_consul_cluster.main.consul_version
  datacenter            = hcp_consul_cluster.main.datacenter
  gossip_encryption_key = jsondecode(base64decode(hcp_consul_cluster.main.consul_config_file))["encrypt"]
  k8s_api_endpoint      = local.install_eks_cluster ? module.eks[0].cluster_endpoint : ""

  # The EKS node group will fail to create if the clients are
  # created at the same time. This forces the client to wait until
  # the node group is successfully created.
  depends_on = [module.eks]
}

module "demo_app" {
  count   = local.install_demo_app ? 1 : 0
  source  = "hashicorp/hcp-consul/aws//modules/k8s-demo-app"
  version = "~> 0.8.9"

  depends_on = [module.eks_consul_client]
}
output "consul_root_token" {
  value     = hcp_consul_cluster_root_token.token.secret_id
  sensitive = true
}

output "consul_url" {
  value = hcp_consul_cluster.main.public_endpoint ? (
    hcp_consul_cluster.main.consul_public_endpoint_url
    ) : (
    hcp_consul_cluster.main.consul_private_endpoint_url
  )
}

output "kubeconfig_filename" {
  value = abspath(one(module.eks[*].kubeconfig_filename))
}

output "helm_values_filename" {
  value = abspath(module.eks_consul_client.helm_values_file)
}

output "hashicups_url" {
  value = one(module.demo_app[*].hashicups_url)
}

output "next_steps" {
  value = "HashiCups Application will be ready in ~2 minutes. Use 'terraform output consul_root_token' to retrieve the root token."
}

output "howto_connect" {
  value = <<EOF
  ${local.install_demo_app ? "The demo app, HashiCups, Has been installed for you and its components registered in Consul." : ""}
  ${local.install_demo_app ? "To access HashiCups navigate to: ${module.demo_app[0].hashicups_url}" : ""}

  To access Consul from your local client run:
  export CONSUL_HTTP_ADDR="${hcp_consul_cluster.main.consul_public_endpoint_url}"
  export CONSUL_HTTP_TOKEN=$(terraform output consul_root_token)
  
  ${local.install_eks_cluster ? "You can access your provisioned eks cluster by first running following command" : ""}
  ${local.install_eks_cluster ? "export KUBECONFIG=$(terraform output -raw kubeconfig_filename)" : ""}    

  Consul has been installed in the default namespace. To explore what has been installed run:
  
  kubectl get pods

  EOF
}

Note: If you selected the VM (EC2) runtime, the content should resemble the example below. This example is not guaranteed to be up to date. Always refer to the Terraform configuration presented in the HCP Portal.

main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.43"
    }
    hcp = {
      source  = "hashicorp/hcp"
      version = ">= 0.18.0"
    }
  }

}

provider "aws" {
  region = local.vpc_region
}

provider "consul" {
  address    = hcp_consul_cluster.main.consul_public_endpoint_url
  datacenter = hcp_consul_cluster.main.datacenter
  token      = hcp_consul_cluster_root_token.token.secret_id
}

data "aws_availability_zones" "available" {
  filter {
    name   = "zone-type"
    values = ["availability-zone"]
  }
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.10.0"

  azs                  = data.aws_availability_zones.available.names
  cidr                 = "10.0.0.0/16"
  enable_dns_hostnames = true
  name                 = "${local.cluster_id}-vpc"
  private_subnets      = []
  public_subnets       = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}

resource "hcp_hvn" "main" {
  hvn_id         = local.hvn_id
  cloud_provider = "aws"
  region         = local.hvn_region
  cidr_block     = "172.25.32.0/20"
}

module "aws_hcp_consul" {
  source  = "hashicorp/hcp-consul/aws"
  version = "~> 0.8.9"

  hvn             = hcp_hvn.main
  vpc_id          = module.vpc.vpc_id
  subnet_ids      = module.vpc.public_subnets
  route_table_ids = module.vpc.public_route_table_ids
}

resource "hcp_consul_cluster" "main" {
  cluster_id      = local.cluster_id
  hvn_id          = hcp_hvn.main.hvn_id
  public_endpoint = true
  tier            = "development"
}

resource "hcp_consul_cluster_root_token" "token" {
  cluster_id = hcp_consul_cluster.main.id
}

resource "tls_private_key" "ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "hcp_ec2" {
  count = local.ssh ? 1 : 0

  public_key = tls_private_key.ssh.public_key_openssh
  key_name   = "hcp-ec2-key-${local.cluster_id}"
}

resource "local_file" "ssh_key" {
  count = local.ssh ? 1 : 0

  content         = tls_private_key.ssh.private_key_pem
  file_permission = "400"
  filename        = "${path.module}/${aws_key_pair.hcp_ec2[0].key_name}.pem"
}

module "aws_ec2_consul_client" {
  source  = "hashicorp/hcp-consul/aws//modules/hcp-ec2-client"
  version = "~> 0.8.9"

  allowed_http_cidr_blocks = ["0.0.0.0/0"]
  allowed_ssh_cidr_blocks  = ["0.0.0.0/0"]
  client_ca_file           = hcp_consul_cluster.main.consul_ca_file
  client_config_file       = hcp_consul_cluster.main.consul_config_file
  consul_version           = hcp_consul_cluster.main.consul_version
  nat_public_ips           = module.vpc.nat_public_ips
  install_demo_app         = local.install_demo_app
  root_token               = hcp_consul_cluster_root_token.token.secret_id
  security_group_id        = module.aws_hcp_consul.security_group_id
  ssh_keyname              = local.ssh ? aws_key_pair.hcp_ec2[0].key_name : ""
  ssm                      = local.ssm
  subnet_id                = module.vpc.public_subnets[0]
  vpc_id                   = module.vpc.vpc_id
}
output "consul_root_token" {
  value     = hcp_consul_cluster_root_token.token.secret_id
  sensitive = true
}

output "consul_url" {
  value = hcp_consul_cluster.main.public_endpoint ? (
    hcp_consul_cluster.main.consul_public_endpoint_url
    ) : (
    hcp_consul_cluster.main.consul_private_endpoint_url
  )
}

output "nomad_url" {
  value = "http://${module.aws_ec2_consul_client.public_ip}:8081"
}

output "hashicups_url" {
  value = "http://${module.aws_ec2_consul_client.public_ip}"
}

output "next_steps" {
  value = local.install_demo_app ? "HashiCups Application will be ready in ~2 minutes. Use 'terraform output consul_root_token' to retrieve the root token." : null
}

output "howto_connect" {
  value = <<EOF
  ${local.install_demo_app ? "The demo app, HashiCups, is installed on a Nomad server we have deployed for you." : ""}
  ${local.install_demo_app ? "To access Nomad using your local client run the following command:" : ""}
  ${local.install_demo_app ? "export NOMAD_HTTP_AUTH=nomad:$(terraform output consul_root_token)" : ""}
  ${local.install_demo_app ? "export NOMAD_ADDR=http://${module.aws_ec2_consul_client.public_ip}:8081" : ""}

  To access Consul from your local client run:
  export CONSUL_HTTP_ADDR="${hcp_consul_cluster.main.consul_public_endpoint_url}"
  export CONSUL_HTTP_TOKEN=$(terraform output consul_root_token)
  
  To connect to the ec2 instance deployed: 
${local.ssh ? "  - To access via SSH run: ssh -i ${abspath(local_file.ssh_key[0].filename)} ubuntu@${module.aws_ec2_consul_client.public_ip}" : ""}
${local.ssm ? "  - To access via SSM run: aws ssm start-session --target ${module.aws_ec2_consul_client.host_id} --region ${local.vpc_region}" : ""}
  EOF
}

Locals

The HCP Consul UI guides you in selecting the correct values for these local variables. You can edit cluster_id and hvn_id, but make sure they do not conflict with other deployments in your organization.

  • vpc_region - This is the region where you deployed your VPC.
  • hvn_region - The HashiCorp Virtual Network (HVN) region.
  • cluster_id - The HCP Consul cluster ID. Use a unique name to identify your HCP Consul cluster. HCP will pre-populate this variable with a name that follows the naming pattern consul-quickstart-<unique-ID>.
  • hvn_id - The HCP HVN ID. Use a unique name to identify your HVN. HCP will pre-populate this variable with a name that follows the naming pattern consul-quickstart-<unique-ID>-hvn.

In addition, depending on the runtime you selected, you will have the following additional local variables (an example locals block appears after this list).

  • install_demo_app - This deploys the HashiCups demo application so you can quickly explore how services interact with Consul service mesh.
  • install_eks_cluster - This deploys an EKS cluster and configures it to connect to your HCP Consul cluster.
  • ssh - This configures SSH so you can connect to the deployed EC2 instances.
  • ssm - This configures AWS SSM so you can connect to the deployed EC2 instances.
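
For illustration only, a pre-populated locals block for the Kubernetes (EKS) runtime might look like the following. The HCP Portal generates the actual block for you, and the VM runtime defines ssh and ssm instead of install_eks_cluster:

locals {
  vpc_region          = "us-west-2"
  hvn_region          = "us-west-2"
  cluster_id          = "consul-quickstart-1663917827001"
  hvn_id              = "consul-quickstart-1663917827001-hvn"
  install_demo_app    = true
  install_eks_cluster = true
}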

Deploy end-to-end development environment

Now that you have the Terraform configuration, you are ready to deploy your infrastructure. Before you continue, verify that you have set up your AWS and HCP credentials as mentioned in the prerequisites.

Initialize the configuration to install the necessary providers and modules.

$ terraform init
Initializing the backend...

Initializing provider plugins...
## ...

Terraform has been successfully initialized!
## ...

Next, deploy the end-to-end development environment. Confirm the apply with a yes.

If you selected the Kubernetes (EKS) runtime, the output resembles the following:

$ terraform apply
## ...

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

## ...

Apply complete! Resources: 91 added, 0 changed, 0 destroyed.

Outputs:
consul_root_token = <sensitive>
consul_url = "https://consul-quickstart-1663917827001.consul.98a0dcc3-5473-4e4d-a28e-6c343c498530.aws.hashicorp.cloud"
hashicups_url = "http://a997491ad692947029c3bf826f2fbe72-1316116595.us-west-2.elb.amazonaws.com"
helm_values_filename = "/Users/dos/Desktop/gs-hcp-consul/eks/helm_values_consul-quickstart-1663917827001"
howto_connect = <<EOT
  The demo app, HashiCups, Has been installed for you and its components registered in Consul.
  To access HashiCups navigate to: http://a997491ad692947029c3bf826f2fbe72-1316116595.us-west-2.elb.amazonaws.com

  To access Consul from your local client run:
  export CONSUL_HTTP_ADDR="https://consul-quickstart-1663917827001.consul.98a0dcc3-5473-4e4d-a28e-6c343c498530.aws.hashicorp.cloud"
  export CONSUL_HTTP_TOKEN=$(terraform output consul_root_token)
  
  You can access your provisioned eks cluster by first running following command
  export KUBECONFIG=$(terraform output -raw kubeconfig_filename)    

  Consul has been installed in the default namespace. To explore what has been installed run:
  
  kubectl get pods

EOT
kubeconfig_filename = "/Users/dos/Desktop/gs-hcp-consul/eks/kubeconfig_consul-quickstart-1663917827001-eks"
next_steps = "HashiCups Application will be ready in ~2 minutes. Use 'terraform output consul_root_token' to retrieve the root token."

If you selected the VM (EC2) runtime, the output resembles the following:

$ terraform apply
## ...

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

## ...

Apply complete! Resources: 50 added, 0 changed, 0 destroyed.

Outputs:
consul_root_token = <sensitive>
consul_url = "https://consul-quickstart-1663917827002.consul.98a0dcc3-5473-4e4d-a28e-6c343c498530.aws.hashicorp.cloud"
hashicups_url = "http://52.34.236.127"
howto_connect = <<EOT
  The demo app, HashiCups, is installed on a Nomad server we have deployed for you.
  To access Nomad using your local client run the following command:
  export NOMAD_HTTP_AUTH=nomad:$(terraform output consul_root_token)
  export NOMAD_ADDR=http://52.34.236.127:8081

  To access Consul from your local client run:
  export CONSUL_HTTP_ADDR="https://consul-quickstart-1663917827002.consul.98a0dcc3-5473-4e4d-a28e-6c343c498530.aws.hashicorp.cloud"
  export CONSUL_HTTP_TOKEN=$(terraform output consul_root_token)
  
  To connect to the ec2 instance deployed: 
  - To access via SSH run: ssh -i /Users/dos/Desktop/gs-hcp-consul/ec2/hcp-ec2-key-consul-quickstart-1663917827002.pem ubuntu@52.34.236.127
  - To access via SSM run: aws ssm start-session --target i-00bc11b5c0b4a9e72 --region us-west-2

EOT
next_steps = "HashiCups Application will be ready in ~2 minutes. Use 'terraform output consul_root_token' to retrieve the root token."
nomad_url = "http://52.34.236.127:8081"

Once you confirm, it will take a few minutes for Terraform to set up your end-to-end development environment.

Verify created resources

Once Terraform completes, you can verify the resources using the Consul UI or CLI.

Verify with Consul UI

Retrieve your HCP Consul dashboard URL and open it in your browser.

$ terraform output consul_url
"https://consul-quickstart-1663917827002.consul.98a0dcc3-5473-4e4d-a28e-6c343c498530.aws.hashicorp.cloud"

Next, retrieve your Consul root token. You will use this token to authenticate your Consul dashboard.

$ terraform output consul_root_token
"00000000-0000-0000-0000-000000000000"

In your HCP Consul dashboard, sign in with the root token you just retrieved. If you deployed the Kubernetes (EKS) runtime, you should find a list of services that includes consul, ingress-gateway, and your HashiCups services.

If you deployed the VM (EC2) runtime, you should instead find a list of services that includes consul, hashicups-ingress, nomad-client, and your HashiCups services.

Verify with Consul CLI

In order to use the CLI, you must set environment variables that store your ACL token and HCP Consul cluster address.

First, set your CONSUL_HTTP_ADDR environment variable.

$ export CONSUL_HTTP_ADDR=$(terraform output -raw consul_url)

Then, set your CONSUL_HTTP_TOKEN environment variable.

$ export CONSUL_HTTP_TOKEN=$(terraform output -raw consul_root_token)

Retrieve a list of members in your datacenter to verify your Consul CLI is set up properly.

If you selected the Kubernetes (EKS) runtime, the output resembles the following:

$ consul members
Node                                      Address            Status  Type    Build       Protocol  DC                               Segment
ip-172-25-33-42                           172.25.33.42:8301  alive   server  1.11.8+ent  2         consul-quickstart-1663917827001  <all>
ip-10-0-4-201.us-west-2.compute.internal  10.0.4.72:8301     alive   client  1.11.8+ent  2         consul-quickstart-1663917827001  <default>
ip-10-0-5-235.us-west-2.compute.internal  10.0.5.247:8301    alive   client  1.11.8+ent  2         consul-quickstart-1663917827001  <default>
ip-10-0-6-135.us-west-2.compute.internal  10.0.6.184:8301    alive   client  1.11.8+ent  2         consul-quickstart-1663917827001  <default>

If you selected the VM (EC2) runtime, the output resembles the following:

$ consul members
Node             Address            Status  Type    Build       Protocol  DC                               Segment
ip-172-25-32-82  172.25.32.82:8301  alive   server  1.11.8+ent  2         consul-quickstart-1663917827002  <all>
ip-10-0-1-171    10.0.1.171:8301    alive   client  1.11.8+ent  2         consul-quickstart-1663917827002  <default>
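
As an additional check, you can list the services registered in the Consul catalog. The exact list depends on the runtime you selected, but it should include the HashiCups services:

$ consul catalog services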

Verify demo application

The end-to-end development environment deploys HashiCups. Visit the HashiCups URL to verify that Terraform deployed HashiCups successfully and that its services can communicate with each other.

Tip: If you deployed on Kubernetes, view the Kubernetes manifest files that define the HashiCups application to learn more about how the HashiCups services interact with each other.

Tip: If you deployed on VMs, view the Nomad job that defines the HashiCups application to learn more about how the HashiCups services interact with each other.

Retrieve your HashiCups URL and open it in your browser.

If you selected the Kubernetes (EKS) runtime, the URL points to a load balancer:

$ terraform output hashicups_url
"http://a997491ad692947029c3bf826f2fbe72-1316116595.us-west-2.elb.amazonaws.com"

If you selected the VM (EC2) runtime, the URL points to the EC2 instance's public IP:

$ terraform output hashicups_url
"http://52.34.236.127"

Next steps

In this tutorial, you deployed an HCP Consul server cluster, your choice of Kubernetes or virtual machine Consul clients, and a demo application. Then, you explored how the demo application leverages Consul service mesh and interacted with Consul using the CLI and UI.

In the next tutorial, you will interact with intentions to learn how to control service access within the service mesh.

To learn more about HCP Consul, visit the HCP Consul documentation. For additional runtimes and cloud providers, visit the following tutorials:

  • Deploy HCP Consul with ECS using Terraform
  • Deploy HCP Consul with Azure VM using Terraform
  • Deploy HCP Consul with AKS using Terraform