Cluster peering overview
This topic provides an overview of cluster peering, which lets you connect two or more independent Consul clusters so that services deployed to different partitions or datacenters can communicate. Cluster peering is enabled in Consul by default. For specific information about cluster peering configuration and usage, refer to the following pages.
What is cluster peering?
Consul supports cluster peering connections between two admin partitions in different datacenters. Deployments without an Enterprise license can still use cluster peering because every datacenter automatically includes a default partition. Meanwhile, admin partitions in the same datacenter do not require cluster peering connections because you can export services between them without generating or exchanging a peering token.
The following diagram describes Consul's cluster peering architecture.
In this diagram, the `default` partition in Consul DC 1 has a cluster peering connection with the `web` partition in Consul DC 2. Enforced by their respective mesh gateways, this cluster peering connection enables Service B to communicate with Service C as a service upstream.
Cluster peering leverages several components of Consul's architecture to enforce secure communication between services:
- A peering token contains an embedded secret that securely establishes communication when shared symmetrically between datacenters. Sharing this token enables each datacenter's server agents to recognize requests from authorized peers, similar to how the gossip encryption key secures agent LAN gossip.
- A mesh gateway encrypts outgoing traffic, decrypts incoming traffic, and directs traffic to healthy services. Consul's service mesh features must be enabled in order to use mesh gateways. Mesh gateways support the specific admin partitions they are deployed in. Refer to Mesh gateways for more information.
- An exported service communicates with downstreams deployed in other admin partitions. Exported services are explicitly defined in an `exported-services` configuration entry.
- A service intention secures service-to-service communication in a service mesh. Intentions enable identity-based access between services by exchanging TLS certificates, which the service's sidecar proxy verifies upon each request.
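As a sketch, an `exported-services` configuration entry that makes a hypothetical `backend` service available to a peer named `cluster-02` could look like the following (the service and peer names are illustrative):

```hcl
Kind = "exported-services"
Name = "default"
Services = [
  {
    # Local service to share with the peer
    Name = "backend"
    Consumers = [
      {
        # Name of the established peering connection
        Peer = "cluster-02"
      }
    ]
  }
]
```

You can apply a configuration entry like this with `consul config write`.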
Compared with WAN federation
WAN federation and cluster peering are different ways to connect services through mesh gateways so that they can communicate across datacenters. WAN federation connects multiple datacenters to make them function as if they were a single cluster, while cluster peering treats each datacenter as a separate cluster. As a result, WAN federation requires a primary datacenter to maintain and replicate global states such as ACLs and configuration entries, but cluster peering does not.
WAN federation and cluster peering also treat encrypted traffic differently. While mesh gateways between WAN federated datacenters use mTLS to keep data encrypted, mesh gateways between peers terminate mTLS sessions, decrypt data to HTTP services, and then re-encrypt traffic to send to services. Data must be decrypted in order to evaluate and apply dynamic routing rules at the destination cluster, which reduces coupling between peers.
Regardless of whether you connect your clusters through WAN federation or cluster peering, human and machine users can use either method to discover services in other clusters or dial them through the service mesh.
| | WAN Federation | Cluster Peering |
|---|---|---|
| Connects clusters across datacenters | ✅ | ✅ |
| Shares support queries and service endpoints | ✅ | ✅ |
| Connects clusters owned by different operators | ❌ | ✅ |
| Functions without declaring primary datacenter | ❌ | ✅ |
| Can use sameness groups for identical services | ❌ | ✅ |
| Replicates exported services for service discovery | ❌ | ✅ |
| Gossip protocol: Requires LAN gossip only | ❌ | ✅ |
| Forwards service requests for service discovery | ✅ | ❌ |
| Shares key/value stores | ✅ | ❌ |
| Can replicate ACL tokens, policies, and roles | ✅ | ❌ |
Guidance
The following resources are available to help you use Consul's cluster peering features.
Tutorials
- To learn how to peer clusters and connect services across peers in AWS Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE) environments, complete the Connect services between Consul datacenters with cluster peering tutorial.
Usage documentation
- Establish cluster peering connections
- Manage cluster peering connections
- Manage L7 traffic with cluster peering
- Create sameness groups
Kubernetes documentation
- Cluster peering on Kubernetes technical specifications
- Establish cluster peering connections on Kubernetes
- Manage cluster peering connections on Kubernetes
- Manage L7 traffic with cluster peering on Kubernetes
- Create sameness groups on Kubernetes
HCP Consul Central documentation
- Cluster peering
- Cluster peering topologies
- Establish cluster peering connections on HCP Consul Central
- Cluster peering with HCP Consul Central
Reference documentation
- Cluster peering technical specifications
- HTTP API reference: `/peering/` endpoint
- CLI reference: `peering` command
Basic troubleshooting
If you experience errors when using Consul's cluster peering features, refer to the following list of technical constraints.
- Peer names can only contain lowercase characters.
- Services with node, instance, and check definitions totaling more than 8MB cannot be exported to a peer.
- Two admin partitions in the same datacenter cannot be peered. Use the `exported-services` configuration entry instead.
- To manage intentions that specify services in peered clusters, use configuration entries. The `consul intention` CLI command is not supported.
- The Consul UI does not support exporting services between clusters or creating service intentions. Use either the API or the CLI to complete these required steps when establishing new cluster peering connections.
- Accessing key/value stores across peers is not supported.
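Because the `consul intention` command is not supported for peered clusters, intentions for peer traffic are written as `service-intentions` configuration entries. As an illustrative sketch, the following entry allows a hypothetical `frontend` service in a peer named `cluster-01` to call a local `backend` service (the service and peer names are assumptions):

```hcl
Kind = "service-intentions"
# The local destination service
Name = "backend"
Sources = [
  {
    # The calling service in the peered cluster
    Name   = "frontend"
    Peer   = "cluster-01"
    Action = "allow"
  }
]
```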