Connect Services in Different Consul Clusters with Cluster Peering

  • 21min

  • Terraform
  • Consul

Service meshes provide secure communication across your services within and across your infrastructure, including on-premises and cloud environments. As your organization scales, it may need to deploy services in multiple cloud providers in different regions. Cluster peering enables you to connect multiple Consul clusters, letting services in one cluster securely communicate with services in the other.

Cluster peering removes some of the administrative burden associated with WAN federation. Because there is no primary cluster, administrative boundaries are clearly separated per cluster, and changes in one Consul cluster do not affect peered clusters. For more information on the differences between WAN federation and cluster peering, refer to the cluster peering documentation.

In this tutorial you will:

  • Deploy two managed Kubernetes environments with Terraform
  • Deploy Consul in each Kubernetes cluster
  • Deploy the microservices from HashiCups, a demo application, in both Kubernetes clusters
  • Peer the two Consul clusters
  • Connect the services across the peered service mesh

Scenario overview

HashiCups is a coffee-shop demo application. It has a microservices architecture and uses Consul service mesh to securely connect the services. In this tutorial, you will deploy HashiCups services on Kubernetes clusters in two different AWS regions. By peering the Consul clusters, the frontend services in one region will be able to communicate with the API services in the other.

Diagram showing HashiCups deployment architecture

HashiCups uses the following microservices:

  • The nginx service is an NGINX instance that routes requests to the frontend microservice and serves as a reverse proxy to the public-api service.
  • The frontend service provides a React-based UI.
  • The public-api service is a GraphQL public API that communicates with the products-api and the payments services.
  • The products-api service stores the core HashiCups application logic, including authentication, coffee (product) information, and orders.
  • The postgres service is a Postgres database instance that stores user, product, and order information.
  • The payments service is a gRPC-based Java application service that handles customer payments.

Prerequisites

If you are not familiar with Consul's core functionality, refer to the Consul Getting Started tutorials collection first.

For this tutorial, you will need:

  • An AWS account configured for use with Terraform
  • aws-cli v2.0 or later
  • A Google Cloud account configured for use with Terraform
  • gcloud CLI with the gke-cloud-auth-plugin plugin installed
  • kubectl v1.21 or later
  • git v2.0 or later
  • terraform v1.2 or later
  • consul-k8s v1.0.1 or later
  • jq v1.6 or later

This tutorial uses Terraform automation to deploy the demo environment and includes configurations for both AWS (EKS) and Google Cloud (GKE); follow the commands for the provider you choose. You do not need to know Terraform to successfully complete this tutorial.
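
Before you begin, you can optionally confirm that the required tools are installed and meet the minimum versions listed above. The following commands only print version information; run aws or gcloud depending on the cloud provider you choose.

$ terraform version
$ kubectl version --client
$ consul-k8s version
$ git --version
$ jq --version
$ aws --version
$ gcloud version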

Clone example repository

Clone the GitHub repository containing the configuration files and resources.

$ git clone https://github.com/hashicorp-education/learn-consul-cluster-peering.git

Change into the directory with the newly cloned repository.

$ cd learn-consul-cluster-peering

This repository has the following:

  • The aws/dc1 and aws/dc2 directories contain Terraform configuration to deploy EKS clusters in us-east-2 and eu-west-2, respectively.
  • The google-cloud/dc1 and google-cloud/dc2 directories contain Terraform configuration to deploy GKE clusters in us-central1-a and us-west1-b, respectively.
  • The k8s-yamls directory contains YAML configuration files that support this tutorial.
  • The hashicups-v1.0.2 directory contains YAML configuration files for deploying HashiCups.

Deploy Kubernetes clusters and Consul

In this section, you will create a Kubernetes cluster in each datacenter and install Consul to provide service mesh functionality.

Initialize the Terraform configuration for dc1 to download the necessary providers and modules.

$ terraform -chdir=aws/dc1 init
Initializing the backend...
## ...
Initializing provider plugins...
## ...
Terraform has been successfully initialized!
## ...
$ terraform -chdir=google-cloud/dc1 init
Initializing the backend...
## ...
Initializing provider plugins...
## ...
Terraform has been successfully initialized!
## ...

Open a new terminal window and initialize the Terraform configuration for dc2.

$ terraform -chdir=aws/dc2 init
Initializing the backend...
## ...
Initializing provider plugins...
## ...
Terraform has been successfully initialized!
## ...
$ terraform -chdir=google-cloud/dc2 init
Initializing the backend...
## ...
Initializing provider plugins...
## ...
Terraform has been successfully initialized!
## ...

If you are using Google Cloud, use the terraform.tfvars.example template file to create a terraform.tfvars file.

$ cp google-cloud/terraform.tfvars.example google-cloud/terraform.tfvars

Edit this file to specify your project ID and zones. By default, dc1 deploys to us-central1-a and dc2 deploys to us-west1-b.

google-cloud/terraform.tfvars
project = ""
dc1_zone = ""
dc2_zone = ""

Then, deploy the resources for dc1. Confirm the run by entering yes. This will take about 15 minutes to deploy your infrastructure.

$ terraform -chdir=aws/dc1 apply
## ...
Plan: 56 to add, 0 to change, 0 to destroy.
## ...
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes
## ...

Apply complete! Resources: 56 added, 0 changed, 0 destroyed.

Outputs:

cluster_endpoint = "https://113671C8440A71181D0337DFDE61D5FB.gr7.us-east-2.eks.amazonaws.com"
cluster_id = "education-eks-oE9BEJM0"
cluster_name = "education-eks-oE9BEJM0"
cluster_security_group_id = "sg-08e046443a0f8552b"
region = "us-east-2"
$ terraform -chdir=google-cloud/dc1 apply
## ...
Plan: 3 to add, 0 to change, 0 to destroy.
## ...
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes
## ...

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

get-credentials_command = "gcloud container clusters get-credentials consul-peering-gke-y50vupbt --zone us-central1-a --project instruqt-hashicorp"
rename-context_cmd = "kubectl config rename-context gke_instruqt-hashicorp_us-central1-a_consul-peering-gke-y50vupbt dc1"

Deploy the resources for dc2. Confirm the run by entering yes.

$ terraform -chdir=aws/dc2 apply
## ...
Plan: 56 to add, 0 to change, 0 to destroy.
## ...
Apply complete! Resources: 56 added, 0 changed, 0 destroyed.
$ terraform -chdir=google-cloud/dc2 apply
## ...
Plan: 3 to add, 0 to change, 0 to destroy.
## ...
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Configure kubectl

Now that you have deployed the two datacenters, configure the kubectl tool to interact with the Kubernetes cluster in the first datacenter.

On AWS (EKS), run the following command. Notice that it stores the cluster connection information in the dc1 alias.

$ aws eks \
    update-kubeconfig \
    --region $(terraform -chdir=aws/dc1 output -raw region) \
    --name $(terraform -chdir=aws/dc1 output -raw cluster_name) \
    --alias=dc1

On Google Cloud (GKE), the Terraform output contains the commands to retrieve the kubeconfig and set it to the dc1 alias.

First, update your kubeconfig to connect to your GKE cluster. Alternatively, you can copy the Terraform output and run it in your terminal.

$ eval $(terraform -chdir=google-cloud/dc1 output -raw get-credentials_command)

Then, rename the current context to dc1. This lets you target this specific Kubernetes cluster.

$ eval $(terraform -chdir=google-cloud/dc1 output -raw rename-context_cmd)

Configure the kubectl tool to interact with the Kubernetes cluster in the second datacenter.

On AWS (EKS), run the following command. Notice that it stores the cluster connection information in the dc2 alias.

$ aws eks \
    update-kubeconfig \
    --region $(terraform -chdir=aws/dc2 output -raw region) \
    --name $(terraform -chdir=aws/dc2 output -raw cluster_name) \
    --alias=dc2

On Google Cloud (GKE), the Terraform output contains the commands to retrieve the kubeconfig and set it to the dc2 alias.

First, update your kubeconfig to connect to your GKE cluster. Alternatively, you can copy the Terraform output and run it in your terminal.

$ eval $(terraform -chdir=google-cloud/dc2 output -raw get-credentials_command)

Then, rename the current context to dc2.

$ eval $(terraform -chdir=google-cloud/dc2 output -raw rename-context_cmd)
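
Before moving on, optionally confirm that both contexts exist in your kubeconfig. You should see entries named dc1 and dc2.

$ kubectl config get-contexts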

Deploy Consul on both Kubernetes clusters

You will now deploy Consul on your Kubernetes platforms with the Consul K8S CLI. By default, Consul deploys into its own dedicated namespace (consul). The Consul installation will use the Consul Helm values file in the k8s-yamls directory. Cluster peering requires Consul v1.13.1+ and the global.peering.enabled parameter set to true. Deploying Consul on each Kubernetes cluster should only take a few minutes.

k8s-yamls/consul-helm.yaml
global:
##...
  peering:
    enabled: true # mandatory for cluster peering
  tls:
    enabled: true # mandatory for cluster peering
##...
meshGateway:
  enabled: true # mandatory for k8s cluster peering
##...

Deploy Consul on the Kubernetes cluster in the first datacenter. Confirm the installation with a y. Notice that this command sets global.datacenter to dc1.

$ consul-k8s install -context=dc1 -config-file=k8s-yamls/consul-helm.yaml --set=global.datacenter=dc1
##...
 ✓ Consul installed in namespace "consul".

Verify that you have installed Consul in dc1 by inspecting the Kubernetes pods in the consul namespace.

$ kubectl --context=dc1 --namespace consul get pods
NAME                                          READY   STATUS    RESTARTS   AGE
consul-connect-injector-7d68465cf9-fmkxc      1/1     Running   0          107s
consul-mesh-gateway-5554894784-w7pm4          1/1     Running   0          107s
consul-server-0                               1/1     Running   0          106s
consul-webhook-cert-manager-f59d67cb9-xjv7h   1/1     Running   0          107s

Then, deploy Consul on the Kubernetes cluster in the second datacenter. Confirm the installation with a y. Notice that this command sets global.datacenter to dc2.

$ consul-k8s install -context=dc2 -config-file=k8s-yamls/consul-helm.yaml --set=global.datacenter=dc2
##...
 ✓ Consul installed in namespace "consul".

Verify that you have installed Consul in dc2 by inspecting the Kubernetes pods in the consul namespace.

$ kubectl --context=dc2 --namespace consul get pods
NAME                                          READY   STATUS    RESTARTS   AGE
consul-connect-injector-5f68c84545-ndc45      1/1     Running   0          110s
consul-mesh-gateway-675696cd94-hwvzz          1/1     Running   0          110s
consul-server-0                               1/1     Running   0          110s
consul-webhook-cert-manager-f59d67cb9-nkc6t   1/1     Running   0          110s
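
Cluster peering requires Consul v1.13.1 or later. If you want to confirm the version that consul-k8s installed, you can run consul version inside one of the server pods; the exact output depends on the release you installed.

$ kubectl exec --namespace consul -it --context dc1 consul-server-0 -- consul version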

Deploy HashiCups

You will now deploy the HashiCups microservices on your Kubernetes clusters. The dc1 Kubernetes cluster will host the frontend services, while the dc2 Kubernetes cluster will host the API and database services. Later in this tutorial, you will connect the Consul datacenters to form the complete HashiCups deployment. The following diagram illustrates how HashiCups will be deployed across the two clusters.

HashiCups on K8s architecture

Deploy HashiCups on first cluster

Deploy the frontend, nginx, public-api, and payments services, along with intentions-dc1, to the dc1 Kubernetes cluster.

$ for service in {frontend,nginx,public-api,payments,intentions-dc1}; do kubectl --context=dc1 apply -f hashicups-v1.0.2/$service.yaml; done
service/frontend created
serviceaccount/frontend created
servicedefaults.consul.hashicorp.com/frontend created
##...

You can view the HashiCups frontend, but the demo application will not display any products because products-api is not deployed.

Verify that you have successfully deployed the services by listing the Kubernetes services.

$ kubectl --context=dc1 get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
frontend     ClusterIP   172.20.9.134     <none>        3000/TCP   22s
kubernetes   ClusterIP   172.20.0.1       <none>        443/TCP    26m
nginx        ClusterIP   172.20.9.95      <none>        80/TCP     19s
payments     ClusterIP   172.20.22.222    <none>        1800/TCP   13s
public-api   ClusterIP   172.20.153.101   <none>        8080/TCP   16s

List the services registered with Consul in dc1. This command runs consul catalog services in one of the Consul server agents.

$ kubectl exec --namespace consul -it --context dc1 consul-server-0 -- consul catalog services
consul
frontend
frontend-sidecar-proxy
mesh-gateway
nginx
nginx-sidecar-proxy
payments
payments-sidecar-proxy
public-api
public-api-sidecar-proxy

Deploy products-api microservice on second cluster

Deploy the products-api and postgres services, along with intentions-dc2, to the dc2 Kubernetes cluster.

$ for service in {products-api,postgres,intentions-dc2}; do kubectl --context=dc2 apply -f hashicups-v1.0.2/$service.yaml; done
service/products-api created
serviceaccount/products-api created
servicedefaults.consul.hashicorp.com/products-api created
##...

Verify that you have successfully deployed the services by listing the Kubernetes services.

$ kubectl --context=dc2 get service
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   172.20.0.1       <none>        443/TCP    15m
postgres       ClusterIP   172.20.165.165   <none>        5432/TCP   28s
products-api   ClusterIP   172.20.222.90    <none>        9090/TCP   33s

List the services registered with Consul in dc2.

$ kubectl exec --namespace consul -it --context dc2 consul-server-0 -- consul catalog services
consul
mesh-gateway
postgres
postgres-sidecar-proxy
products-api
products-api-sidecar-proxy

Explore the Consul UI (optional)

Retrieve the Consul UI address for dc1 and open it in your browser.

$ kubectl --context=dc1 --namespace consul get services consul-ui -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
a50cc674a11fe4603a018583364a19aa-749640116.eu-north-1.elb.amazonaws.com
$ kubectl --context=dc1 --namespace consul get services consul-ui -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
35.238.125.40

Retrieve the Consul UI address for dc2 and open it in your browser.

$ kubectl --context=dc2 --namespace consul get services consul-ui -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' 
a94d753af861e436ca9e2d3ff5faef07-1571001778.eu-north-1.elb.amazonaws.com
$ kubectl --context=dc2 --namespace consul get services consul-ui -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
34.83.93.166

Explore HashiCups in browser

Open the HashiCups application. First, open a new terminal and port forward the nginx service locally to port 8080.

$ kubectl --context=dc1 port-forward deploy/nginx 8080:80

Open localhost:8080 in your browser to view the HashiCups UI. Notice that it displays no products, since there is no instance of product-api on dc1.

HashiCups UI showing no products

Configure Consul cluster peering

Tip: Consul cluster peering works on both Enterprise and OSS versions of Consul. On Consul OSS, you can only peer clusters between the default partitions. On Consul Enterprise, you can peer clusters between any partition.

You will now peer the two datacenters to enable services in dc1 to communicate with products-api in dc2. Consul cluster peering works by defining two cluster roles:

  1. A peering acceptor is the cluster that generates a peering token and accepts an incoming peering connection.
  2. A peering dialer is the cluster that uses a peering token to make an outbound peering connection with the cluster that generated the token.

Configure cluster peering traffic routing

You can peer Consul clusters by either directly connecting Consul server nodes or connecting the Consul mesh gateways.

Most Kubernetes deployments will not let services connect outside the cluster. This prevents the Consul server pods from communicating with other Kubernetes clusters. Therefore, we recommend configuring the clusters to use mesh gateways for peering. The following file configures the Consul clusters to use mesh gateways for cluster peering:

./k8s-yamls/peer-through-meshgateways.yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: Mesh
metadata:
  name: mesh
spec:
  peering:
    peerThroughMeshGateways: true

Configure cluster peering traffic for both dc1 and dc2 to be routed via the mesh gateways.

$ for dc in {dc1,dc2}; do kubectl --context=$dc apply -f k8s-yamls/peer-through-meshgateways.yaml; done
mesh.consul.hashicorp.com/mesh created
mesh.consul.hashicorp.com/mesh created
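
Optionally, confirm that the Mesh configuration entry synced to Consul in both datacenters by inspecting the custom resource. The exact columns may vary with your Consul version, but the SYNCED column should report True.

$ for dc in {dc1,dc2}; do kubectl --context=$dc get mesh mesh; done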

There are two modes for routing traffic from local services to remote services when cluster peering connections are routed through mesh gateways. In remote mode, your local services contact the remote mesh gateway in order to reach remote services. In local mode, your local services contact their local gateway in order to reach remote services. Refer to the modes documentation as well as the mesh architecture diagram for more information.

We recommend you use local mode because most Kubernetes deployments do not allow local services to connect outside the cluster. The following configuration specifies local mode for traffic routed over the mesh gateways:

./k8s-yamls/originate-via-meshgateways.yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ProxyDefaults
metadata:
  name: global
spec:
  meshGateway:
    mode: local

Configure local mode for traffic routed over the mesh gateways for both dc1 and dc2.

$ for dc in {dc1,dc2}; do kubectl --context=$dc apply -f k8s-yamls/originate-via-meshgateways.yaml; done
proxydefaults.consul.hashicorp.com/global created
proxydefaults.consul.hashicorp.com/global created

Create a peering token

Configuring the peering acceptor role for a cluster generates a peering token and waits to accept an incoming peering connection. The following configuration sets dc1 as the peering acceptor:

./k8s-yamls/acceptor-on-dc1-for-dc2.yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: PeeringAcceptor
metadata:
  name: dc2
spec:
  peer:
    secret:
      name: "peering-token-dc2"
      key: "data"
      backend: "kubernetes"

Configure a PeeringAcceptor role for dc1.

$ kubectl --context=dc1 apply -f k8s-yamls/acceptor-on-dc1-for-dc2.yaml
peeringacceptor.consul.hashicorp.com/dc2 created

Confirm that you successfully created the PeeringAcceptor custom resource.

$ kubectl --context=dc1 get peeringacceptors
NAME         SYNCED   LAST SYNCED   AGE
dc2   True     5s            5s

Confirm that the PeeringAcceptor resource generated a peering token secret.

$ kubectl --context=dc1 get secrets peering-token-dc2
NAME                       TYPE     DATA   AGE
peering-token-dc2   Opaque   1      97s

Import the peering token generated in dc1 into dc2.

$ kubectl --context=dc1 get secret peering-token-dc2 -o yaml | kubectl --context=dc2 apply -f -
secret/peering-token-dc2 created
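
If you want to confirm the token was copied successfully, list the secret in dc2 the same way you did in dc1.

$ kubectl --context=dc2 get secrets peering-token-dc2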

Establish a connection between clusters

Configuring a peering dialer role for a cluster makes an outbound peering connection towards a peering acceptor cluster using the specified peering token. The following configuration sets dc2 as the peering dialer and peering-token-dc2 as its token.

./k8s-yamls/dialer-dc2.yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: PeeringDialer
metadata:
  name: dc1
spec:
  peer:
    secret:
      name: "peering-token-dc2"
      key: "data"
      backend: "kubernetes"

Configure a PeeringDialer role for dc2. This will create a peering connection from the second datacenter towards the first one.

$ kubectl --context=dc2 apply -f k8s-yamls/dialer-dc2.yaml
peeringdialer.consul.hashicorp.com/dc1 created

Verify that the two Consul clusters are peered. This command queries the peering API endpoint on the Consul server agent in dc1.

$ kubectl exec --namespace consul -it --context dc1 consul-server-0 \
-- curl --cacert /consul/tls/ca/tls.crt --header "X-Consul-Token: $(kubectl --context=dc1 --namespace consul get secrets consul-bootstrap-acl-token -o go-template='{{.data.token|base64decode}}')" "https://127.0.0.1:8501/v1/peering/dc2" \
 | jq

Notice the state is Active, which means that the two clusters are peered successfully.

{
  "ID": "1aa44921-8081-a16d-4290-0210b205bcc9",
  "Name": "dc2",
  "State": "ACTIVE",
  "PeerCAPems": [
    "-----BEGIN CERTIFICATE-----\nMIICDTCCAbOgAwIBAgIBCjAKBggqhkjOPQQDAjAwMS4wLAYDVQQDEyVwcmktMTNn\nZGx5bC5jb25zdWwuY2EuNjk5MmE2YjYuY29uc3VsMB4XDTIyMTEyMzE1MzYzN1oX\nDTMyMTEyMDE1MzYzN1owMDEuMCwGA1UEAxMlcHJpLTEzZ2RseWwuY29uc3VsLmNh\nLjY5OTJhNmI2LmNvbnN1bDBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABIaEuBVb\nGFJlqqXCCwhyYEwyKvhQV19IfCS+K6Uc+W5VI6+t7zsQHVFl2qy+i2Z0Rj8QnEYD\nI0YelQuVrRFV7x6jgb0wgbowDgYDVR0PAQH/BAQDAgGGMA8GA1UdEwEB/wQFMAMB\nAf8wKQYDVR0OBCIEIAoE4KUzkS2rUvB2ra1EC1BuubYh8fJmPp4dwXIf+mTMMCsG\nA1UdIwQkMCKAIAoE4KUzkS2rUvB2ra1EC1BuubYh8fJmPp4dwXIf+mTMMD8GA1Ud\nEQQ4MDaGNHNwaWZmZTovLzY5OTJhNmI2LWVlMGUtNDUzMC1iMGRkLWM2ZTA3Y2Iy\nOGU3My5jb25zdWwwCgYIKoZIzj0EAwIDSAAwRQIhAPRoqiwv4o5urXQnrP3cxU4y\n6dSffViR1ZBbFUdPvYTFAiAS8jGNn3Me0NyhRTCgK+bEJfw8wVLJK4wWZRCr42/e\nxw==\n-----END CERTIFICATE-----\n"
  ],
  "StreamStatus": {
    "ImportedServices": null,
    "ExportedServices": null,
    "LastHeartbeat": "2022-11-23T18:21:08.069062738Z",
    "LastReceive": "2022-11-23T18:21:08.069062738Z",
    "LastSend": "2022-11-23T18:20:53.071747214Z"
  },
  "CreateIndex": 695,
  "ModifyIndex": 701,
  "Remote": {
    "Partition": "",
    "Datacenter": "dc2"
  }
}
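
As an alternative to the HTTP API, recent Consul versions also include a peering CLI. A quick check from inside the server pod might look like the following; depending on your ACL configuration, you may also need to provide a token through the CONSUL_HTTP_TOKEN environment variable.

$ kubectl exec --namespace consul -it --context dc1 consul-server-0 -- consul peering list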

Export the products-api service

After you peer the Consul clusters, you need to create a configuration entry that defines the services you want to export to other clusters. Consul uses this configuration entry to advertise those services' information and connect those services across Consul clusters. The following configuration exports the products-api service into the dc1 peer.

k8s-yamls/exportedsvc-products-api.yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ExportedServices
metadata:
  name: default ## The name of the partition containing the service
spec:
  services:
    - name: products-api ## The name of the service you want to export
      consumers:
      - peer: dc1 ## The name of the peering connection that receives the service

In dc2, apply the ExportedServices custom resource file that exports the products-api service to dc1.

$ kubectl --context=dc2 apply -f k8s-yamls/exportedsvc-products-api.yaml
exportedservices.consul.hashicorp.com/default created

Confirm that the Consul cluster in dc1 can access the products-api service in dc2. This command queries the health API endpoint on the Consul server in dc1 for the products-api service imported from the dc2 peer, and returns its sidecar service ID and the peer name.

$ kubectl \
--context=dc1 --namespace consul exec -it consul-server-0 \
-- curl --cacert /consul/tls/ca/tls.crt \
--header "X-Consul-Token: $(kubectl --context=dc1 --namespace consul get secrets consul-bootstrap-acl-token -o go-template='{{.data.token|base64decode}}')" "https://127.0.0.1:8501/v1/health/connect/products-api?peer=dc2" \
| jq '.[].Service.ID,.[].Service.PeerName'

Notice the output contains the products-api sidecar service ID and the name of the related cluster peering.

"products-api-sidecar-proxy-instance-0"
"dc2"

Create a cross-cluster service intention

In order for the public-api service in dc1 to reach the products-api service in dc2, you must define a ServiceIntentions custom resource that allows communication from the public-api service in dc1.

k8s-yamls/intention-dc1-public-api-to-dc2-products-api.yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: dc1-public-api-to-dc2-products-api
spec:
  destination:
    name: products-api
  sources:
   - name: public-api
     action: allow
     peer: dc1

Create a service intention in dc2 that allows communication from the public-api service in dc1 to the products-api service in dc2.

$ kubectl --context=dc2 apply -f k8s-yamls/intention-dc1-public-api-to-dc2-products-api.yaml
serviceintentions.consul.hashicorp.com/dc1-public-api-to-dc2-products-api created
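
Optionally, confirm that the intention synced to Consul by inspecting the custom resource in dc2. The SYNCED column should report True.

$ kubectl --context=dc2 get serviceintentions dc1-public-api-to-dc2-products-api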

Set up new upstream for the public-api service

At this point, the public-api service in dc1 is set to connect to its local instance of the products-api service. Because products-api is in another datacenter, you must define the upstream to point to the products-api service in dc2. Consul uses service virtual IP lookups to look up services offered across mesh gateways. The format of the lookup is <service>.virtual[.<namespace>].<peer>.<domain>. The namespace segment is only available in Consul Enterprise. In this case, the correct lookup is products-api.virtual.dc2.consul. The following configuration is for the updated public-api service in dc1:

k8s-yamls/public-api-peer.yaml
##...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: public-api
spec:
##...
  template:
  ##...
    spec:
      serviceAccountName: public-api
      containers:
        - name: public-api
          image: hashicorpdemoapp/public-api:v0.0.6
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: BIND_ADDRESS
              value: ":8080"
            - name: PRODUCT_API_URI
              value: "http://products-api.virtual.dc2.consul"
            - name: PAYMENT_API_URI
              value: "http://payments:1800"
##...

Override the public-api service definition in dc1 with the updated configuration pointing to the upstream in dc2.

$ kubectl --context=dc1 apply -f k8s-yamls/public-api-peer.yaml
service/public-api unchanged
serviceaccount/public-api unchanged
servicedefaults.consul.hashicorp.com/public-api unchanged
deployment.apps/public-api configured
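
If you want to confirm that the new upstream address took effect, you can read the PRODUCT_API_URI environment variable back from the deployment; it should now point at products-api.virtual.dc2.consul. The jsonpath filter below assumes the container is named public-api, as in the manifest above.

$ kubectl --context=dc1 get deployment public-api -o jsonpath='{.spec.template.spec.containers[?(@.name=="public-api")].env[?(@.name=="PRODUCT_API_URI")].value}'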

Verify peered Consul services

Port forward the nginx service locally to port 8080.

$ kubectl --context=dc1 port-forward deploy/nginx 8080:80

Then, open localhost:8080 in your browser. Notice that it now displays a curated selection of coffee drinks.

HashiCups UI

Destroy environment

Now that you have peered the two Consul clusters, you will remove the exported service and the cluster peering before destroying the environment.

Remove exported service

Stop the products-api service from being exported.

$ kubectl --context=dc2 delete -f k8s-yamls/exportedsvc-products-api.yaml
exportedservices.consul.hashicorp.com "default" deleted

Remove cluster peering

To remove a peering connection, delete both the PeeringAcceptor and PeeringDialer resources.

First, delete the PeeringDialer from dc2.

$ kubectl --context=dc2 delete -f k8s-yamls/dialer-dc2.yaml
peeringdialer.consul.hashicorp.com "dc1" deleted

Then, delete the PeeringAcceptor from dc1.

$ kubectl --context=dc1 delete -f k8s-yamls/acceptor-on-dc1-for-dc2.yaml
peeringacceptor.consul.hashicorp.com "dc2" deleted

Verify the two clusters are no longer peered by querying the health API endpoint in dc1 for the previously imported products-api service. The response should be an empty list.

$ kubectl exec --namespace consul -it --context dc1 consul-server-0 -- curl --insecure 'https://127.0.0.1:8501/v1/health/connect/products-api?peer=dc2'
[]

Delete supporting infrastructure

To destroy the environment, first uninstall Consul from both Kubernetes clusters.

First, uninstall Consul from dc1. Confirm with a y.

$ consul-k8s uninstall -context=dc1
##...
    Proceed with uninstall? (y/N) y
##...
   Only approve if all data from this installation can be deleted. (y/N) y
##...

Then, uninstall Consul from dc2. Confirm with a y.

$ consul-k8s uninstall -context=dc2
##...
    Proceed with uninstall? (y/N) y
##...
   Only approve if all data from this installation can be deleted. (y/N) y
##...

Note: Before running the terraform destroy command, make sure that all the services in the Consul namespace have been terminated. If you try to perform a destroy before that, your Terraform run will fail and you will have to restart it.

Verify that you have removed all services from your first datacenter.

$ kubectl --context=dc1 --namespace consul get service
No resources found in consul namespace.

Then, destroy the supporting infrastructure in your first datacenter.

$ terraform -chdir=aws/dc1 destroy

##...

Plan: 0 to add, 0 to change, 56 to destroy.

##...

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

##...

Destroy complete! Resources: 56 destroyed.
$ terraform -chdir=google-cloud/dc1 destroy

##...

Plan: 0 to add, 0 to change, 3 to destroy.

##...

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

##...

Destroy complete! Resources: 3 destroyed.

Verify that you have removed all services from your second datacenter.

$ kubectl --context=dc2 --namespace consul get service
No resources found in consul namespace.

Then, destroy the supporting infrastructure in your second datacenter.

$ terraform -chdir=aws/dc2 destroy

##...

Plan: 0 to add, 0 to change, 56 to destroy.

##...

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

##...

Destroy complete! Resources: 56 destroyed.
$ terraform -chdir=google-cloud/dc2 destroy

##...

Plan: 0 to add, 0 to change, 3 to destroy.

##...

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

##...

Destroy complete! Resources: 3 destroyed.

Next steps

In this tutorial, you used Consul cluster peering to route traffic across service meshes in two Consul clusters. In the process, you learned the benefits of using cluster peering for cluster interconnections with minimal shared administrative overhead.

Feel free to explore these tutorials and collections to learn more about Consul service mesh, microservices, and Kubernetes security.

  • Consul Kubernetes Deployment Guide
  • Migrate to Microservices collection
  • Consul Kubernetes Security
  • Consul Cluster Peering documentation