Consul
Single Consul Datacenter in Multiple Kubernetes Clusters
Note: For running Consul across multiple Kubernetes clusters in production, it is generally recommended to use Admin Partitions. This Consul Enterprise feature accommodates multiple tenants without the risk of resource collisions when administering a cluster at scale, and it allows Consul on Kubernetes clusters to run across a non-flat network.
This page describes deploying a single Consul datacenter in multiple Kubernetes clusters, with servers and clients running in one cluster and only clients in the rest of the clusters. This example uses two Kubernetes clusters, but the approach can be extended to more than two.
Requirements
- Consul-Helm version v0.32.1 or higher.
- This deployment topology requires that the Kubernetes clusters have a flat network for both pods and nodes, so that pods or nodes from one cluster can connect to pods or nodes in another. In many hosted Kubernetes environments, this may have to be explicitly configured based on the hosting provider's network. Refer to your provider's network documentation for instructions.
- Either the Helm release name for each Kubernetes cluster must be unique, or global.name for each Kubernetes cluster must be unique, to prevent collisions of ACL resources with the same prefix.
Prepare Helm release name ahead of installs
The Helm release name must be unique for each Kubernetes cluster. The Helm chart uses the Helm release name as a prefix for the ACL resources that it creates, such as tokens and auth methods. If the names of the Helm releases are identical, or if global.name for each cluster is identical, the installation in subsequent Consul on Kubernetes clusters will overwrite existing ACL resources and cause the clusters to fail.
Before proceeding with installation, prepare the Helm release names as environment variables for both the server and client install.
$ export HELM_RELEASE_SERVER=cluster1
$ export HELM_RELEASE_CLIENT=cluster2
...
$ export HELM_RELEASE_CLIENT2=cluster3
Deploying Consul servers and clients in the first cluster
First, we will deploy the Consul servers with Consul clients in the first cluster. For that, we will use the following Helm configuration:
cluster1-config.yaml
global:
  datacenter: dc1
  tls:
    enabled: true
    enableAutoEncrypt: true
  acls:
    manageSystemACLs: true
  gossipEncryption:
    secretName: consul-gossip-encryption-key
    secretKey: key
connectInject:
  enabled: true
controller:
  enabled: true
ui:
  service:
    type: NodePort
Note that we are deploying in a secure configuration, with gossip encryption, TLS for all components, and ACLs enabled. We are enabling Consul service mesh and the controller for CRDs so that we can use them later to verify that our services can connect with each other across clusters.
We're also setting the UI's service type to NodePort. This is needed so that we can connect to the servers from another cluster without using the pod IPs of the servers, which are likely to change.
To deploy, we first need to generate a gossip encryption key and save it as a Kubernetes secret.
$ kubectl create secret generic consul-gossip-encryption-key --from-literal=key=$(consul keygen)
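If the HashiCorp Helm repository isn't configured yet on the machine you're installing from, add it first:
$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm repo update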
Now we can install our Consul cluster with Helm:
$ helm install ${HELM_RELEASE_SERVER} --values cluster1-config.yaml hashicorp/consul
Note: The Helm release name must be unique for each Kubernetes cluster. That is because the Helm chart will use the Helm release name as a prefix for the ACL resources that it creates, such as tokens and auth methods. If the names of the Helm releases are the same, the Helm installation in subsequent clusters will overwrite existing ACL resources.
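You can watch the rollout until all pods are running and ready; this uses the app=consul label that the chart applies to its pods:
$ kubectl get pods --selector app=consul --watch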
Once the installation finishes and all components are running and ready, we need to extract the gossip encryption key we created, the CA certificate, and the ACL bootstrap token generated during installation, so that we can apply them to our second Kubernetes cluster.
$ kubectl get secret consul-gossip-encryption-key ${HELM_RELEASE_SERVER}-consul-ca-cert ${HELM_RELEASE_SERVER}-consul-bootstrap-acl-token --output yaml > cluster1-credentials.yaml
Deploying Consul clients in the second cluster
Note: If multiple Kubernetes clusters will be joined to the Consul datacenter, the following instructions will need to be repeated for each additional Kubernetes cluster.
Now we can switch to the second Kubernetes cluster where we will deploy only the Consul clients that will join the first Consul cluster.
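If you manage both clusters from the same workstation, switch your kubectl context to the second cluster before running the commands below. The context name cluster2 is an assumption here; use whatever your second cluster's context is actually called.
$ kubectl config use-context cluster2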
First, we need to apply credentials we've extracted from the first cluster to the second cluster:
$ kubectl apply --filename cluster1-credentials.yaml
To deploy in the second cluster, we will use the following Helm configuration:
cluster2-config.yaml
global:
  enabled: false
  datacenter: dc1
  acls:
    manageSystemACLs: true
    bootstrapToken:
      secretName: cluster1-consul-bootstrap-acl-token
      secretKey: token
  gossipEncryption:
    secretName: consul-gossip-encryption-key
    secretKey: key
  tls:
    enabled: true
    enableAutoEncrypt: true
    caCert:
      secretName: cluster1-consul-ca-cert
      secretKey: tls.crt
externalServers:
  enabled: true
  # This should be any node IP of the first k8s cluster
  hosts: ["10.0.0.4"]
  # The node port of the UI's NodePort service
  httpsPort: 31557
  tlsServerName: server.dc1.consul
  # The address of the kube API server of this Kubernetes cluster
  k8sAuthMethodHost: https://kubernetes.example.com:443
client:
  enabled: true
  join: ["provider=k8s kubeconfig=/consul/userconfig/cluster1-kubeconfig/kubeconfig label_selector=\"app=consul,component=server\""]
  extraVolumes:
    - type: secret
      name: cluster1-kubeconfig
      load: false
connectInject:
  enabled: true
Note that we're referencing secrets from the first cluster in the ACL, gossip, and TLS configuration.
Next, we need to set up the externalServers configuration. The externalServers.hosts and externalServers.httpsPort values refer to the IP and port of the UI's NodePort service deployed in the first cluster. Set externalServers.hosts to any node IP of the first cluster, which you can see by running kubectl get nodes --output wide. Set externalServers.httpsPort to the nodePort of the cluster1-consul-ui service. In our example, the port is 31557.
$ kubectl get service cluster1-consul-ui --context cluster1
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cluster1-consul-ui NodePort 10.0.240.80 <none> 443:31557/TCP 40h
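If you'd rather pull these values from the command line, the following one-liners print a node's internal IP and the UI service's node port. Picking the first node is arbitrary; any node IP of the first cluster works.
$ kubectl get nodes --context cluster1 --output jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'
$ kubectl get service cluster1-consul-ui --context cluster1 --output jsonpath='{.spec.ports[0].nodePort}'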
We set externalServers.tlsServerName to server.dc1.consul. This is the DNS SAN (Subject Alternative Name) that is present in the Consul server's certificate. We need to set it because we're connecting to the Consul servers over the node IP, but that IP isn't present in the server's certificate. To make sure that hostname verification succeeds during the TLS handshake, we need to set the TLS server name to a DNS name that is present in the certificate.
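If the handshake fails, it can help to confirm which DNS SANs the server certificate actually presents. As a sketch using the example node IP and port from above (substitute your own), you can inspect the certificate with openssl:
$ echo | openssl s_client -connect 10.0.0.4:31557 -servername server.dc1.consul 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'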
Next, we need to set externalServers.k8sAuthMethodHost to the address of the second cluster's Kubernetes API server. This should be an address that is reachable from the first cluster, so it cannot be the internal DNS name that is only available within each Kubernetes cluster. Consul needs it so that consul login with the Kubernetes auth method will work from the second cluster. More specifically, the Consul server needs to verify the Kubernetes service account whenever consul login is called, and to verify service accounts from the second cluster it needs to reach the Kubernetes API in that cluster. The easiest way to get this address is to take it from your kubeconfig by running kubectl config view and grabbing the value of cluster.server for the second cluster.
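For example, if the cluster entry for the second cluster in your kubeconfig is named cluster2 (an assumption; adjust the name to match your kubeconfig), you can print the value directly:
$ kubectl config view --output jsonpath='{.clusters[?(@.name=="cluster2")].cluster.server}'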
Lastly, we need to set up the clients so that they can discover the servers in the first cluster. For this, we will use Consul's cloud auto-join feature for the Kubernetes provider. To use it, we need to provide a way for the Consul clients to reach the first Kubernetes cluster. We do that by saving a kubeconfig for the first cluster as a Kubernetes secret in the second cluster and referencing it in the client.join value, as sketched below. Note that we're making that secret available to the client pods by setting it in client.extraVolumes.
Note: The kubeconfig you're providing to the client should have minimal permissions. The cloud auto-join provider will only need permission to read pods. Please see Kubernetes Cloud auto-join for more details.
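As a sketch, assuming you've saved a minimal kubeconfig for the first cluster locally as cluster1-kubeconfig (the local file name is an assumption), create the secret in the second cluster with a key named kubeconfig so that it matches the path referenced in client.join above:
$ kubectl create secret generic cluster1-kubeconfig --from-file=kubeconfig=./cluster1-kubeconfig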
Now we're ready to install!
$ helm install ${HELM_RELEASE_CLIENT} --values cluster2-config.yaml hashicorp/consul
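Once the install completes, it's worth confirming that the client pods in the second cluster are running and ready before deploying any workloads; this uses the app=consul and component=client labels that the chart applies to client pods:
$ kubectl get pods --selector app=consul,component=client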
Verifying the Consul Service Mesh works
When transparent proxy is enabled, services in one Kubernetes cluster that need to communicate with a service in another Kubernetes cluster must have an explicit upstream configured through the "consul.hashicorp.com/connect-service-upstreams" annotation.
Now that we have our Consul datacenter up and running across multiple Kubernetes clusters, we will deploy two services and verify that they can connect to each other.
First, we'll deploy the static-server service in the first cluster:
static-server.yaml
---
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: static-server
spec:
  destination:
    name: static-server
  sources:
    - name: static-client
      action: allow
---
apiVersion: v1
kind: Service
metadata:
  name: static-server
spec:
  type: ClusterIP
  selector:
    app: static-server
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: static-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-server
  template:
    metadata:
      name: static-server
      labels:
        app: static-server
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
    spec:
      containers:
        - name: static-server
          image: hashicorp/http-echo:latest
          args:
            - -text="hello world"
            - -listen=:8080
          ports:
            - containerPort: 8080
              name: http
      serviceAccountName: static-server
Note that we're defining a ServiceIntentions resource so that our services are allowed to talk to each other.
Then we'll deploy static-client in the second cluster with the following configuration:
static-client.yaml
apiVersion: v1
kind: Service
metadata:
  name: static-client
spec:
  selector:
    app: static-client
  ports:
    - port: 80
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: static-client
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-client
  template:
    metadata:
      name: static-client
      labels:
        app: static-client
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
        "consul.hashicorp.com/connect-service-upstreams": "static-server:1234"
    spec:
      containers:
        - name: static-client
          image: curlimages/curl:latest
          command: [ "/bin/sh", "-c", "--" ]
          args: [ "while true; do sleep 30; done;" ]
      serviceAccountName: static-client
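To deploy these workloads, apply each manifest to its respective cluster. The context names are the ones assumed earlier in this example; substitute your own:
$ kubectl apply --context cluster1 --filename static-server.yaml
$ kubectl apply --context cluster2 --filename static-client.yaml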
Once both services are up and running, we can connect to static-server from static-client:
$ kubectl exec deploy/static-client -- curl --silent localhost:1234
"hello world"