
Connect an Elastic Kubernetes Service Cluster to HCP Consul

HashiCorp Cloud Platform (HCP) Consul is a fully managed Service Mesh as a Service version of Consul. After you deploy an HCP Consul server cluster, you can connect the Consul resources in your network to HCP Consul to leverage Consul's full feature set, including service mesh and service discovery. For AWS, HCP Consul supports Consul resources running on Elastic Kubernetes Service (EKS), Elastic Container Service (ECS), and virtual machine (EC2) workloads.

In this tutorial, you will configure your EKS cluster to connect to your HCP Consul cluster. Then, you will deploy HashiCups, a demo application that lets you view and order customized HashiCorp branded coffee, to your EKS cluster to leverage HCP Consul's service discovery, service mesh, and ingress capabilities.

Prerequisites

For this tutorial, you will need:

  • An HCP HashiCorp Virtual Network (HVN)
  • An HCP Consul deployment
  • An EKS Cluster deployed in the VPC peered with your HVN
  • AWS CLI
  • kubectl
  • helm
  • jq

You must also authenticate with the AWS CLI and ensure that it targets the region where you created your EKS cluster. Review the AWS documentation for instructions on how to configure the AWS CLI.
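One way to confirm both, using standard AWS CLI commands:

$ aws sts get-caller-identity
$ aws configure get region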

To enable communication between your HCP Consul cluster servers and the Consul resources in your EKS cluster, you must complete the steps detailed in either the manual deployment or the Terraform deployment tutorial. Your EKS cluster must also be deployed in the VPC associated with your HVN. Without that peering connection, the control plane running on the HCP Consul cluster servers cannot communicate with the data plane provided by the Consul resources you will install on the EKS cluster.

Configure development host

Consul versions prior to 1.14.0 use a client-based architecture, while Consul 1.14.0 and later support a more lightweight Dataplane architecture. Feel free to explore the Consul Dataplane documentation to learn more.

Select the instructions that match your version of HCP Consul to ensure they are relevant to your architecture. The steps for HCP Consul 1.14.0 and later (Dataplane architecture) come first, followed by the equivalent steps for versions prior to 1.14.0 (client-based architecture).

HCP Consul 1.14.0 and later (Dataplane architecture)

Now that you have deployed HCP Consul, you need to retrieve the Consul configuration information. You will use this information to configure the Consul resources on your EKS cluster to securely connect to the HCP Consul cluster.

First, connect to your EKS cluster. Kubernetes stores cluster connection information in a special file called kubeconfig. You can retrieve the Kubernetes configuration settings for your EKS cluster and merge them into your local kubeconfig file by issuing the following command.

$ aws eks --region [your-region] update-kubeconfig --name [your-cluster-name]
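If the merge succeeded, your current context should now point at the EKS cluster. You can confirm with:

$ kubectl config current-context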

Customize and configure your Consul installation

In this section you will use the HCP Portal to retrieve the Consul configuration information you need to connect your EKS cluster Consul resources to your HCP Consul cluster.

Navigate to the Consul resource page in the HCP portal, and then select the Consul cluster you want to connect with your EKS cluster. Set the name of your HCP Consul cluster to the DATACENTER environment variable on your development host.

$ export DATACENTER=[your-cluster-name]

Within the HCP portal, click the Access Consul dropdown, then select the copy icon for the private address or public address field from the dialog box. The public or private address is now in your clipboard. Set this address to the RETRY_JOIN environment variable on your development host so that you can reference it later in the tutorial.

$ export RETRY_JOIN=[your-consul-address]
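Because this value is interpolated directly into the externalServers.hosts list in the Helm values file later in this tutorial, wrap the address in list syntax. For example, using the placeholder address format shown in the example output further below:

$ export RETRY_JOIN='["learn-hcp-eks-cluster.private.consul.00000000-0000-0000-0000-000000000000.aws.hashicorp.cloud"]'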

From the same Access Consul dropdown in the HCP UI, select Generate admin token, and then click the copy icon in the dialog box. A global-management root token is now in your clipboard. Set this token to the CONSUL_HTTP_TOKEN environment variable on your development host so that you can reference it later in the tutorial.

$ export CONSUL_HTTP_TOKEN=[your-token]

Create a consul namespace in your Kubernetes cluster. Your Consul secrets and resources will be created in this namespace.

$ kubectl create namespace consul
namespace/consul created

HCP Consul is secure by default, so your EKS-based Consul resources must be configured with an ACL token to establish secure communication with HCP. Create a Kubernetes secret with your Consul ACL token value.

$ kubectl create secret generic "consul-bootstrap-token" --from-literal="token=${CONSUL_HTTP_TOKEN}" --namespace consul
secret/consul-bootstrap-token created
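You can optionally verify that the secret stores the expected token value:

$ kubectl get secret consul-bootstrap-token --namespace consul -o jsonpath='{.data.token}' | base64 --decode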

Set the KUBE_API_URL environment variable to the API server URL of your EKS cluster.

Note: The following command relies on your cluster name matching your current-context name. If you have created an alias for your context, or the current-context name does not match the cluster name for any other reason, you must manually set KUBE_API_URL to the API server URL of your EKS cluster. You can use kubectl config view to view your cluster and retrieve the API server URL.

$ export KUBE_API_URL=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"$(kubectl config current-context)\")].cluster.server}")
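If that command returns an empty value, set the variable manually. For example, using the API server URL reported by kubectl config view (the URL below is the placeholder from the example output):

$ kubectl config view
$ export KUBE_API_URL=https://000000000000.gr7.us-west-2.eks.amazonaws.com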

Set the Consul version to match your HCP Consul cluster version; the values file you generate next references it. For example:

$ export CONSUL_VERSION=1.14.0

Validate that all of your environment variables have been set.

$ echo $DATACENTER && \
  echo $RETRY_JOIN && \
  echo $KUBE_API_URL && \
  echo $CONSUL_VERSION

Example output:

learn-hcp-eks-cluster
["learn-hcp-eks-cluster.private.consul.00000000-0000-0000-0000-000000000000.aws.hashicorp.cloud"]
https://000000000000.gr7.us-west-2.eks.amazonaws.com
1.14.0

Note: If any of these environment variables are not correctly set, the following script will generate an incomplete Helm values file, and the Consul Helm installation will not succeed.

Warning: The value for the global.name configuration must be unique for each Kubernetes cluster where Consul clients are installed and configured to join Consul as a shared service, such as HCP Consul. You can change the global name through the global.name value in the Helm chart.
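For example, if you later connect a second EKS cluster to the same HCP Consul cluster, you could give its Consul installation a distinct name in that cluster's values file (consul-eks-2 is a hypothetical example):

global:
  name: consul-eks-2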

Install Consul on EKS

How you install Consul resources depends on your HCP Consul Cluster:

  • To install Consul resources on a single Consul cluster, choose the Single Consul Cluster tab.
  • To install Consul resources on a primary or secondary Consul cluster that is part of a federated environment, select the Federated Consul Cluster tab.

Generate your customized Consul configuration file.

Single Consul Cluster:

$ cat > values.yaml << EOF
global:
  name: consul
  enabled: false
  datacenter: ${DATACENTER}
  image: "hashicorp/consul-enterprise:${CONSUL_VERSION}-ent"
  acls:
    manageSystemACLs: true
    bootstrapToken:
      secretName: consul-bootstrap-token
      secretKey: token
  tls:
    enabled: true
  enableConsulNamespaces: true
externalServers:
  enabled: true
  hosts: ${RETRY_JOIN}
  httpsPort: 443
  useSystemRoots: true
  k8sAuthMethodHost: ${KUBE_API_URL}
server:
  enabled: false
connectInject:
  enabled: true
ingressGateways:
  enabled: true
  defaults:
    replicas: 1
  gateways:
    - name: ingress-gateway
      service:
        type: LoadBalancer
        ports:
          - port: 8080
EOF

Federated Consul Cluster:

$ cat > values.yaml << EOF
global:
  name: consul
  enabled: false
  datacenter: ${DATACENTER}
  image: "hashicorp/consul-enterprise:${CONSUL_VERSION}-ent"
  acls:
    manageSystemACLs: true
    bootstrapToken:
      secretName: consul-bootstrap-token
      secretKey: token
  tls:
    enabled: true
  federation:
    enabled: true
  enableConsulNamespaces: true
externalServers:
  enabled: true
  hosts: ${RETRY_JOIN}
  httpsPort: 443
  useSystemRoots: true
  k8sAuthMethodHost: ${KUBE_API_URL}
server:
  enabled: false
connectInject:
  enabled: true
ingressGateways:
  enabled: true
  defaults:
    replicas: 1
  gateways:
    - name: ingress-gateway
      service:
        type: LoadBalancer
        ports:
          - port: 8080
meshGateway:
  enabled: true
EOF

Validate that the config file is populated correctly.

$ more values.yaml

Now that you have customized your values.yaml file, you can deploy Consul with Helm or the Consul K8S CLI. This should only take a few minutes. The Consul pods should appear in the pod list immediately.

Tip: HashiCorp Cloud Platform offers Enterprise features. To interact with these features, you need to install the Enterprise Consul binary for your client agents. Learn more about Consul Enterprise and the Helm chart in the Consul Enterprise Helm Chart documentation.

Helm:

$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories
$ helm install consul hashicorp/consul --values values.yaml --namespace consul --version "1.0.0"
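Optionally, confirm that the release deployed successfully:

$ helm status consul --namespace consul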

Note: You can review the official Helm chart values to learn more about the default settings.

Consul K8S CLI:

$ brew tap hashicorp/tap
$ brew install hashicorp/tap/consul-k8s
$ consul-k8s install -config-file=values.yaml -set global.image=hashicorp/consul:1.14.0

Note: You can review the official Consul K8S CLI documentation to learn more about additional settings.

To check the deployment progress on the command line, use kubectl get pods.

$ kubectl get pods --namespace consul
NAME                                          READY   STATUS    RESTARTS   AGE
consul-connect-injector-6cbdb8d6c6-j4lqw      1/1     Running   0          2m11s
consul-connect-injector-6cbdb8d6c6-mmjxb      1/1     Running   0          2m11s
consul-controller-6c4c67bc87-f458m            1/1     Running   0          2m11s
consul-ingress-gateway-bf7df675b-vqvs4        1/1     Running   0          2m11s
consul-webhook-cert-manager-75d696b9d-bt8mp   1/1     Running   0          2m11s

HCP Consul versions prior to 1.14.0 (client-based architecture)

Now that you have deployed HCP Consul, you need to retrieve the Consul configuration information. You will use this information to configure the Consul resources on your EKS cluster to securely connect to the HCP Consul cluster. In addition, you will retrieve the ACL token and HCP Consul public URL to authenticate your Consul CLI.

Kubernetes stores cluster connection information in a special file called kubeconfig. You can retrieve the Kubernetes configuration settings for your EKS cluster and merge them into your local kubeconfig file by issuing the following command.

$ aws eks --region [your-region] update-kubeconfig --name [your-cluster-name]

You can use the HCP Portal to retrieve the client configuration information you need to connect your EKS cluster Consul resources to your HCP Consul cluster. Navigate to the Consul resource page in the HCP portal, and then select the Consul cluster you want to connect with your EKS cluster.

Click the Access Consul dropdown and then select Download Consul configuration files to download a zip archive that contains the Consul configuration files. The archive includes a default client configuration and certificate. Both should be considered secrets, and should be kept in a secure location.

Unzip the client config package into the current working directory, and then use ls to confirm that both the client_config.json and ca.pem files are available.
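For example, assuming the archive was saved to the current directory (your downloaded filename may differ):

$ unzip client_config_bundle.zip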

$ ls
ca.pem             client_config.json

From the same Access Consul dropdown in the HCP UI, select Generate admin token, and then click the copy icon in the dialog box. A global-management root token is now in your clipboard. Set this token to the CONSUL_HTTP_TOKEN environment variable on your development host so that you can reference it later in the tutorial.

$ export CONSUL_HTTP_TOKEN=[your-token]

Create a consul namespace in your Kubernetes cluster. Your Consul secrets and resources will be created in this namespace.

$ kubectl create namespace consul
namespace/consul created

Configure Consul secrets

HCP Consul is secure by default, so your EKS-based Consul resources must be configured with the gossip encryption key, the Consul CA cert, and a root ACL token. All three of these secrets must be stored as Kubernetes secrets so that they can be referenced and retrieved during the Helm chart installation.

Use the ca.pem file in the current directory to create a Kubernetes secret to store the Consul CA certificate.

$ kubectl create secret generic "consul-ca-cert" --from-file='tls.crt=./ca.pem' --namespace consul
secret/consul-ca-cert created

The Consul gossip encryption key is embedded in the client_config.json file that you downloaded and extracted into your current directory. Issue the following command to create a Kubernetes secret that stores the Consul gossip encryption key. The command uses jq to extract the value from the client_config.json file.

$ kubectl create secret generic "consul-gossip-key" --from-literal="key=$(jq -r .encrypt client_config.json)" --namespace consul
secret/consul-gossip-key created

The last secret you need to add is an ACL bootstrap token. You can use the one you set to your CONSUL_HTTP_TOKEN environment variable earlier. Issue the following command to create a Kubernetes secret to store the bootstrap ACL token.

Note: If you are configuring a production environment, you should create a client token with a minimum set of privileges. For an in-depth review of how to configure ACLs for Consul, refer to the Secure Consul with Access Control Lists tutorial or the official documentation.

$ kubectl create secret generic "consul-bootstrap-token" --from-literal="token=${CONSUL_HTTP_TOKEN}" --namespace consul
secret/consul-bootstrap-token created

Install Consul on EKS

Extract some more configuration values from the client_config.json file and set them to environment variables that can be used to generate your Helm values file. Issue the following command to set the DATACENTER environment variable.

$ export DATACENTER=$(jq -r .datacenter client_config.json)

Extract the private server URL from the client config so that it can be set in the Helm values file as the externalServers:hosts entry. This value will be passed as the retry-join option to the Consul clients.

$ export RETRY_JOIN=$(jq -r --compact-output .retry_join client_config.json)

Extract the public server URL from the client config so that it can be set in the Helm values file as the k8sAuthMethodHost entry.

Note: The following script relies on your cluster name matching your current-context name. If you have created an alias for your context, or the current-context name does not match the cluster name for any other reason, you must manually set KUBE_API_URL to the API server URL of your EKS cluster. You can use kubectl config view to view your cluster and retrieve the API server URL.

$ export KUBE_API_URL=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"$(kubectl config current-context)\")].cluster.server}")

Set the Consul version to match your HCP cluster version.

$ export CONSUL_VERSION=1.13.2

Validate that all of your environment variables have been set.

$ echo $DATACENTER && \
  echo $RETRY_JOIN && \
  echo $KUBE_API_URL && \
  echo $CONSUL_VERSION

Example output:

dc1
["learn-hcp-eks-cluster.private.consul.00000000-0000-0000-0000-000000000000.aws.hashicorp.cloud"]
https://000000000000.gr7.us-west-2.eks.amazonaws.com
1.13.2

Note: If any of these environment variables are not correctly set, the following script will generate an incomplete Helm values file, and the Consul Helm installation will not succeed.

Warning: The value for the global.name configuration must be unique for each Kubernetes cluster where Consul clients are installed and configured to join Consul as a shared service, such as HCP Consul. You can change the global name through the global.name value in the Helm chart.

How you install Consul clients depends on your HCP Consul Cluster:

  • To install clients on a single Consul cluster, choose the Single Consul Cluster tab.
  • To install clients on a primary or secondary Consul cluster that is part of a federated environment, select the Federated Consul Cluster tab.

Generate the Helm values file.

Single Consul Cluster:

$ cat > values.yaml << EOF
global:
  name: consul
  enabled: false
  datacenter: ${DATACENTER}
  image: "hashicorp/consul-enterprise:${CONSUL_VERSION}-ent"
  acls:
    manageSystemACLs: true
    bootstrapToken:
      secretName: consul-bootstrap-token
      secretKey: token
  gossipEncryption:
    secretName: consul-gossip-key
    secretKey: key
  tls:
    enabled: true
    enableAutoEncrypt: true
    caCert:
      secretName: consul-ca-cert
      secretKey: tls.crt
  enableConsulNamespaces: true
externalServers:
  enabled: true
  hosts: ${RETRY_JOIN}
  httpsPort: 443
  useSystemRoots: true
  k8sAuthMethodHost: ${KUBE_API_URL}
server:
  enabled: false
client:
  enabled: true
  join: ${RETRY_JOIN}
connectInject:
  enabled: true
controller:
  enabled: true
ingressGateways:
  enabled: true
  defaults:
    replicas: 1
  gateways:
    - name: ingress-gateway
      service:
        type: LoadBalancer
        ports:
          - port: 8080
EOF

Federated Consul Cluster:

$ cat > values.yaml << EOF
global:
  name: consul
  enabled: false
  datacenter: ${DATACENTER}
  image: "hashicorp/consul-enterprise:${CONSUL_VERSION}-ent"
  acls:
    manageSystemACLs: true
    bootstrapToken:
      secretName: consul-bootstrap-token
      secretKey: token
  gossipEncryption:
    secretName: consul-gossip-key
    secretKey: key
  tls:
    enabled: true
    enableAutoEncrypt: true
    caCert:
      secretName: consul-ca-cert
      secretKey: tls.crt
  federation:
    enabled: true
  enableConsulNamespaces: true
externalServers:
  enabled: true
  hosts: ${RETRY_JOIN}
  httpsPort: 443
  useSystemRoots: true
  k8sAuthMethodHost: ${KUBE_API_URL}
server:
  enabled: false
client:
  enabled: true
  join: ${RETRY_JOIN}
connectInject:
  enabled: true
controller:
  enabled: true
ingressGateways:
  enabled: true
  defaults:
    replicas: 1
  gateways:
    - name: ingress-gateway
      service:
        type: LoadBalancer
        ports:
          - port: 8080
meshGateway:
  enabled: true
EOF

Validate that the config file is populated correctly.

$ more values.yaml

Deploy Consul

Now that you have customized your values.yaml file, you can deploy Consul with Helm or the Consul K8S CLI. We recommend you deploy Consul into its own dedicated namespace as shown below. This should only take a few minutes. The Consul pods should appear in the pod list immediately.

Tip: HashiCorp Cloud Platform offers Enterprise features. To interact with these features, you need to install the Enterprise Consul binary for your client agents. Learn more about Consul Enterprise and the Helm chart in the Consul Enterprise Helm Chart documentation.

Helm:

$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories
$ helm install consul hashicorp/consul --values values.yaml --namespace consul --version "0.49.0"

Note: You can review the official Helm chart values to learn more about the default settings.

Consul K8S CLI:

$ brew tap hashicorp/tap
$ brew install hashicorp/tap/consul-k8s
$ consul-k8s install -config-file=values.yaml

Note: You can review the official Consul K8S CLI documentation to learn more about additional settings.

To check the deployment progress on the command line, use kubectl get pods.

Single Consul Cluster:

$ kubectl get pods --namespace consul
NAME                                           READY   STATUS    RESTARTS   AGE
consul-client-lqfg2                            1/1     Running   0          74s
consul-client-n2bgm                            1/1     Running   0          74s
consul-client-r7kq6                            1/1     Running   0          74s
consul-connect-injector-67498ddc87-5z9jh       1/1     Running   0          74s
consul-connect-injector-67498ddc87-85q2g       1/1     Running   0          74s
consul-controller-66dbfd4f77-l59dp             1/1     Running   0          74s
consul-ingress-gateway-584dcbc5bd-xfbcj        2/2     Running   0          74s
consul-webhook-cert-manager-7cf6df6c4f-tbx82   1/1     Running   0          74s

Federated Consul Cluster:

$ kubectl get pods --namespace consul
NAME                                           READY   STATUS    RESTARTS   AGE
consul-client-5mvcz                            1/1     Running   0          2m5s
consul-client-gz8mb                            1/1     Running   0          2m5s
consul-client-zhrk7                            1/1     Running   0          2m5s
consul-connect-injector-67498ddc87-f5d7r       1/1     Running   0          2m4s
consul-connect-injector-67498ddc87-jm5nf       1/1     Running   0          2m5s
consul-controller-66dbfd4f77-f5dmc             1/1     Running   0          2m5s
consul-ingress-gateway-584dcbc5bd-q2dw4        2/2     Running   0          2m5s
consul-mesh-gateway-556988f6fb-2l4v5           2/2     Running   0          2m5s
consul-mesh-gateway-556988f6fb-rx2qk           2/2     Running   0          2m5s
consul-webhook-cert-manager-7cf6df6c4f-6h7pm   1/1     Running   0          2m5s

In the federated configuration, meshGateway.enabled is set to true in values.yaml for this cluster. Mesh gateways enable traffic between services across different clusters. Make sure that every cluster in your federation also sets meshGateway.enabled to true, and redeploy if needed; otherwise, this cluster's mesh gateway has no counterpart to talk to.

There are multiple modes for running mesh gateways across your federation. Follow this guide to set the mode to local, which makes every service in your cluster use the mesh gateway that is local to this cluster.
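As a minimal sketch, the local mode can be set with a ProxyDefaults custom resource, assuming you apply it to this cluster with kubectl:

$ cat > proxy-defaults.yaml << EOF
apiVersion: consul.hashicorp.com/v1alpha1
kind: ProxyDefaults
metadata:
  name: global
spec:
  meshGateway:
    mode: local
EOF
$ kubectl apply -f proxy-defaults.yaml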

Deploy an example workload

Now that the Consul resources have been deployed on EKS, it is time to deploy an application workload. This tutorial will use the HashiCups demo application, an ingress gateway, and service intentions.

Issue the following command to clone the repository.

$ git clone https://github.com/hashicorp/terraform-aws-hcp-consul.git

Change directory into the example repository.

$ cd terraform-aws-hcp-consul/modules/k8s-demo-app

Issue the following command to deploy the demo application, ingress gateway, and service intentions to your EKS cluster.

$ kubectl apply -f services/

Example output:

service/frontend created
serviceaccount/frontend created
configmap/nginx-configmap created
deployment.apps/frontend created
service/product-api-service created
serviceaccount/product-api created
configmap/db-configmap created
deployment.apps/product-api created
service/postgres created
serviceaccount/postgres created
deployment.apps/postgres created
service/public-api created
serviceaccount/public-api created
deployment.apps/public-api created
serviceintentions.consul.hashicorp.com/ingress-gateway-to-frontend created
serviceintentions.consul.hashicorp.com/frontend-to-public-api created
serviceintentions.consul.hashicorp.com/public-api-to-product-api created
serviceintentions.consul.hashicorp.com/product-api-to-postgres created
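Optionally, watch the application pods start. Assuming the demo manifests deploy into the default namespace:

$ kubectl get pods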

Access the HashiCups UI

With the resources in place, retrieve the public URL and port of the ingress gateway.

$ kubectl get svc --namespace consul
NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                         AGE
consul-ingress-gateway      LoadBalancer   172.20.81.249    a7b5d9e6f022548259fa0d616ca8aaf1-1451754202.us-east-1.elb.amazonaws.com   8080:32166/TCP,8443:31069/TCP   8m54s
...TRUNCATED

Verify you have successfully deployed the application by visiting your unique ingress gateway EXTERNAL-IP DNS address in your browser, or use the following curl command.

$ INGRESS_GATEWAY=$(kubectl get svc/consul-ingress-gateway --namespace consul -o json | jq -r '.status.loadBalancer.ingress[0].hostname') && \
    echo "Connecting to \"$INGRESS_GATEWAY\"" && \
    curl -H "Host: nginx.ingress.consul" "http://$INGRESS_GATEWAY:8080"

Example output:

Connecting to "a32115aa6b19546e188afc44fbb8f3b3-1889150178.us-west-2.elb.amazonaws.com"
<!doctype html>
...TRUNCATED...
</html>

This validates that Consul service discovery is working, because the services are able to resolve their upstreams. It also validates that the Consul service mesh is working, because the intentions you created allow the services to interact with one another.
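You can optionally list the intentions that the demo manifests created. Assuming they were applied to the default namespace:

$ kubectl get serviceintentions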

Next steps

In this tutorial, you connected your EKS environment to HCP Consul and deployed a demo application. To keep learning about Consul's features, and for step-by-step examples of how to perform common Consul tasks, complete one of the following tutorials.

  • Explore the Consul UI
  • Review recommended practices for Consul on Kubernetes
  • Deploy a metrics pipeline with Prometheus and Grafana

If you encounter any issues, please contact the HCP team at support.hashicorp.com.
