Multi-Tenancy with Administrative Partitions
Enterprise Only
The functionality described in this tutorial is available only in Consul Enterprise. To explore Consul Enterprise features, you can sign up for a free 30-day trial.
Admin partitions (available in Consul 1.11+) allow enterprises to provide a shared service-networking solution for multiple tenants—across scheduler-based (Kubernetes, Nomad) and VM-based deployments—within a single Consul datacenter.
Two major challenges organizations face when scaling a service mesh are:
Increased operational complexity associated with deploying and managing a service mesh across multiple tenants. This can become unmanageable for operators when there are dozens, if not hundreds, of Kubernetes clusters and virtual machines deployed within an organization.
Lack of organizational autonomy when providing resources such as networking, namespaces, and services to individual teams.
Without admin partitions, operators would have to manage separate control planes for each Kubernetes cluster or VM group, as shown in the architecture below.
Using admin partitions, operators can manage a single control plane while still providing individual teams with organizational autonomy and isolation in managing their services.
This diagram illustrates a single Consul control plane for multiple Kubernetes clusters and VMs.
Consul 1.11 multi-tenancy with admin partitions allows multiple namespaces with the same name to exist independently of each other.
In this tutorial, you will:
Configure and deploy a (single-node) Consul cluster with admin partitions enabled into a Kubernetes cluster.
Deploy two Consul admin partitions—part1 and part2—into two additional Kubernetes clusters.
Observe that namespaces and services with the same name deployed into different admin partitions are isolated by deploying the following:
The countdash sample application into the default namespace in part1.
The countdash application with a mock counting service into the default namespace in part2.
For the best learning experience, we recommend using the interactive environment. This guide requires running several Kubernetes clusters. The interactive environment does that non-trivial setup for you, but you can also perform that setup in a hosted service such as AKS, EKS, or GKE. The steps in this guide are written for the interactive environment, but map directly to the steps necessary in other environments.
This tutorial includes a free interactive command-line lab that lets you follow along on actual cloud infrastructure.
Requirements
If you are not using the interactive environment, you will need the following to perform the steps in this tutorial.
- Kubernetes 1.21.5+ (The interactive environment uses k3s v1.21.5+k3s2)
- Helm v3.7.1+ (Preinstalled in the interactive environment)
- Consul Helm chart v0.43.0+
- Consul Enterprise v1.12.0+ent
- Three Kubernetes Clusters with network access to each other
Visual overview of the scenario
The scenario uses three Kubernetes clusters named cluster-a, cluster-b, and cluster-c. The interactive environment implements these with three k3s clusters, with cluster-a having a dedicated worker node—cluster-a-w1.
Install Consul Servers in cluster-a
Prepare the Kubernetes namespaces
We recommend deploying Consul into its own dedicated namespace in each cluster, as shown below. Create a namespace named consul in each cluster with the following commands.
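A sketch of those commands, using one kubectl invocation per kubeconfig context (the context names cluster-a, cluster-b, and cluster-c match the contexts described later in this tutorial):

```shell
# Create the dedicated consul namespace in each of the three clusters.
kubectl --context cluster-a create namespace consul
kubectl --context cluster-b create namespace consul
kubectl --context cluster-c create namespace consul
```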
Install your trial license
Obtain a Consul Enterprise license key. If you do not have a Consul Enterprise license, you can register for a 30-day trial license.
Switch to the Editor tab, open the consul.hclic file, and paste in your license key. Click the save icon to save the file.
The Consul Helm chart reads the Consul Enterprise license from a Kubernetes secret. Write it into Kubernetes with the following command.
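A sketch of that command, assuming the secret is named consul-ent-license with a key named key (whatever names you choose must match the enterprise license settings referenced by your values.yaml):

```shell
# Store the Enterprise license file as a Kubernetes secret in the consul namespace.
kubectl create secret generic consul-ent-license \
  --namespace consul \
  --from-file=key=consul.hclic
```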
Deploy the license to other clusters
The interactive environment collects the kubeconfigs for each cluster—cluster-a, cluster-b, and cluster-c—into the /root/.kube/config file. You can use these contexts to push the license to the other clusters. In the interactive environment, the current-context in the kubeconfig file on each cluster node is also set to that node's own cluster.
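For example, assuming the same hypothetical secret name as above:

```shell
# Recreate the license secret in cluster-b and cluster-c using their kubeconfig contexts.
kubectl --context cluster-b create secret generic consul-ent-license \
  --namespace consul --from-file=key=consul.hclic
kubectl --context cluster-c create secret generic consul-ent-license \
  --namespace consul --from-file=key=consul.hclic
```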
Review the values.yaml file
Switch to the Editor tab and select the values.yaml file in the sidebar. Take a moment to review it. To enable Consul admin partitions, you must:
- Create a dictionary named adminPartitions as a child of the global dictionary.
- Set exposeGossipAndRPCPorts: true in the server dictionary.
The adminPartitions dictionary
In the provided values.yaml file, the adminPartitions dictionary contains the following.
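It will look similar to the following sketch (the exact service type and any explicit ports depend on your environment, as the note below explains):

```yaml
global:
  adminPartitions:
    enabled: true
    # The server cluster hosts the built-in "default" partition.
    name: "default"
    service:
      type: LoadBalancer
```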
Note
Typically, you use a LoadBalancer service type. However, some small-scale Kubernetes environments can encounter port conflicts when using a LoadBalancer service type. In that case, you can use a NodePort service type and specify the https, rpc, and serf ports manually to prevent a conflict.
Install Consul with Helm
Switch back to the cluster-a tab. Install Consul with the following command.
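A sketch of that install, assuming the chart is pulled from the official HashiCorp Helm repository, the release is named consul, and the chart version matches the requirements list (the interactive environment provides the exact command):

```shell
# Add the HashiCorp Helm repository if it is not already configured.
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update

# Install Consul into the consul namespace using the reviewed values file.
helm install consul hashicorp/consul \
  --namespace consul \
  --values values.yaml \
  --version "0.43.0"
```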
The command will take a little while to run. Once complete, you will see the following output and be returned to the command line.
Verify the status of Consul
Run the kubectl get pods --namespace consul command to check that all the pods starting with consul-consul- are in the Running state and that they are passing their readiness checks.
The easiest way to watch this over time is by using the watch command to run kubectl get pods --namespace consul.
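For example:

```shell
# Re-list the pods every two seconds until they are all Running and Ready.
watch kubectl get pods --namespace consul
```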
Once all the services are started, your output should look like the following. Of specific note are the READY and STATUS columns.
Once all pods are in Running status and have passed their readiness checks, quit the watch command by pressing Control-C.
Run the consul members command in the consul-consul-server-0 pod using the kubectl exec command.
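For example:

```shell
# Run consul members inside the server pod.
kubectl exec --namespace consul consul-consul-server-0 -- consul members
```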
Verify that there are three members listed—one server and two clients. The output should look like the following. Note that your Address values will likely differ.
Export Consul internal CA certificate and key
The partitioned clusters will need access to Consul's internal CA certificate and key. They are stored as secrets in Kubernetes and can be exported to file by using the kubectl get secret command.
Save certificate and key as cluster-b secret
Use the kubectl command to export the CA certificate and apply it to cluster-b with the following commands.
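A sketch of those commands, assuming the CA certificate secret is named consul-consul-ca-cert (the default for a Helm release named consul; check kubectl get secrets --namespace consul for the actual name):

```shell
# Read the CA certificate secret from cluster-a and recreate it in cluster-b.
kubectl --context cluster-a get secret consul-consul-ca-cert \
  --namespace consul --output yaml | \
  kubectl --context cluster-b apply --namespace consul --filename -
```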
Use the kubectl command to export the CA key and apply it to cluster-b.
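Assuming the key is stored in a secret named consul-consul-ca-key:

```shell
# Read the CA key secret from cluster-a and recreate it in cluster-b.
kubectl --context cluster-a get secret consul-consul-ca-key \
  --namespace consul --output yaml | \
  kubectl --context cluster-b apply --namespace consul --filename -
```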
Save certificate and key as cluster-c secret
Repeat the previous steps to export the CA certificate and key to cluster-c.
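Using the same hypothetical secret names, the cluster-c variant looks like this:

```shell
# Export the CA certificate and key from cluster-a into cluster-c.
kubectl --context cluster-a get secret consul-consul-ca-cert \
  --namespace consul --output yaml | \
  kubectl --context cluster-c apply --namespace consul --filename -

kubectl --context cluster-a get secret consul-consul-ca-key \
  --namespace consul --output yaml | \
  kubectl --context cluster-c apply --namespace consul --filename -
```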
Get the Consul partition service IP address
Run kubectl get services
to get the IP address of the Consul partition service
load-balancer.
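For example:

```shell
kubectl get services --namespace consul consul-consul-partition-service
```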
The service information will be output, including the external IP of the load balancer. The following is sample output; your external IP address will likely differ.
You will use the external IP of the LoadBalancer service to connect your Consul clients back to the server cluster. Make a note of it now.
Click the Check button to continue.
Install Consul in cluster-b
Verify required secrets are present
Recall that you pushed several secrets into cluster-b in the previous step.
Run the kubectl get secrets --namespace consul command to see the secrets that are available in cluster-b.
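For example:

```shell
kubectl get secrets --namespace consul
```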
Verify that the following secrets are present:
Update the Consul Helm chart values file
When deploying Consul in a Consul admin partition, as before, you need to create a global.adminPartitions dictionary in your values file.
You also need to provide the address of the consul-consul-partition-service service in the client.join list. Since the Consul servers are external to the Kubernetes cluster you are deploying into, you need to add an externalServers dictionary at the document root.
Finally, you need to set client values. Set client.exposeGossipPorts to true and the client.join list to the same value as the externalServers.hosts list.
Click on the Editor tab to switch to the code editor. From the sidebar, select the values.yaml file. Note the following.
- The global.adminPartitions value has been created for you with the name of the admin partition that the cluster will join. In this case, it's part1.
- The externalServers.hosts and the client.join lists have been preset to the consul-consul-partition-service LoadBalancer's external IP address.
- The externalServers.tlsServerName value has been preset to dc1, the default Consul datacenter name.
- The client.exposeGossipPorts value has been set to true.
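For reference, a partial sketch of the shape of that file, showing only the keys discussed above and using a placeholder external IP address (the full file in the scenario also configures the enterprise image, TLS, the license secret, and disables the server components in this client-only cluster):

```yaml
global:
  adminPartitions:
    enabled: true
    name: "part1"            # the admin partition this cluster joins

externalServers:
  enabled: true
  hosts: ["10.0.0.10"]       # placeholder: external IP of consul-consul-partition-service

client:
  enabled: true
  exposeGossipPorts: true
  join: ["10.0.0.10"]        # same address as externalServers.hosts
```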
Install Consul in cluster-b using Helm
Switch back to the cluster-b tab and run the following helm install command to install Consul.
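The install has the same shape as the one you ran in cluster-a, for example:

```shell
helm install consul hashicorp/consul \
  --namespace consul \
  --values values.yaml \
  --version "0.43.0"
```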
Wait for the command to finish; when complete it outputs the following.
Verify the status of Consul
Run the kubectl get pods --namespace consul command to check that all eleven of the Consul pods are in the Running state and that they are passing their readiness checks.
Once all the services are started, your output should look like the following. Of specific note are the READY and STATUS columns. Your pod names will have different decoration, because they are generated at runtime.
Check nodes in the Consul UI
Open the Consul UI and select the Nodes view. Choose part1 in the Admin Partition dropdown. Verify that your cluster-b node is present in the output. You might have to refresh the Consul UI for the admin partition or nodes to appear.
Click the Check button to continue.
Install Consul in cluster-c
Verify required secrets are present
As with cluster-b, run the kubectl get secrets --namespace consul command to see the secrets that are available in cluster-c.
Verify that the following secrets are present:
Inspect the values.yaml file
Because the values.yaml file is identical to cluster-b's with the exception of the partition name, the scenario environment copies it from cluster-b and updates the global.adminPartitions.name value for you. You can use the Editor tab to view it now if you wish, or you can continue on to the next step.
Install Consul in cluster-c using Helm
If you switched to the Editor tab, switch back to the cluster-c tab. Run the following helm install command to install Consul.
Wait for the command to finish. Helm will output the release information as before and return to the command line.
Verify the status of Consul
Run the kubectl get pods --namespace consul command to check that all eleven of the Consul pods are in the Running state and that they are passing their readiness checks.
Use the watch command to run kubectl get pods --namespace consul.
Once all the services are started, your output should look like the following. Of specific note are the READY and STATUS columns.
Check nodes in the Consul UI
Open the Consul UI and select the Nodes view. Choose part2 in the Admin Partition dropdown. Verify that your cluster-c node is present in the output. You might have to refresh the Consul UI for the admin partition or nodes to appear.
Click the Check button to continue.
Deploy countdash into cluster-b
You will now deploy a two-tier application made of a backend data service that returns a number (the counting service), and a frontend dashboard that pulls from the counting service over HTTP and displays the number.
Use kubectl to deploy the counting service.
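The scenario provides the manifest; a sketch of what it likely contains, assuming the public hashicorp/counting-service demo image and Consul's connect-inject annotation (the actual file also defines a Service and ServiceAccount):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: counting
spec:
  replicas: 1
  selector:
    matchLabels:
      app: counting
  template:
    metadata:
      labels:
        app: counting
      annotations:
        # Ask Consul to inject an Envoy sidecar so the service joins the mesh.
        consul.hashicorp.com/connect-inject: "true"
    spec:
      containers:
        - name: counting
          image: hashicorp/counting-service:0.0.2
          ports:
            - containerPort: 9001
```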
Use kubectl to deploy the dashboard service.
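Similarly, a sketch of the dashboard manifest, assuming the public hashicorp/dashboard-service demo image and an upstream annotation that maps local port 9001 to the counting service:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashboard
  template:
    metadata:
      labels:
        app: dashboard
      annotations:
        consul.hashicorp.com/connect-inject: "true"
        # Expose the counting upstream on localhost:9001 inside the pod.
        consul.hashicorp.com/connect-service-upstreams: "counting:9001"
    spec:
      containers:
        - name: dashboard
          image: hashicorp/dashboard-service:0.0.4
          ports:
            - containerPort: 9002
          env:
            # Point the dashboard at the locally exposed upstream.
            - name: COUNTING_SERVICE_URL
              value: "http://localhost:9001"
```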
To verify the services were deployed, select the Consul UI tab. Switch to the Services view and select the part1 admin partition. Watch for the counting and dashboard services to be running and to transition to healthy—indicated by green checkmarks.
Connect to the dashboard service
Run the following command to create a proxy to the dashboard service.
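A sketch of that proxy command, assuming the dashboard Deployment is named dashboard and listens on port 9002 (the scenario provides the exact command):

```shell
# Forward local port 9999 to the dashboard pod's HTTP port.
kubectl port-forward deploy/dashboard 9999:9002 --namespace default
```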
Once the proxy is running, select the Port 9999 tab to view the dashboard application.
Click the Check button to continue.
Deploy countdash with a mock counting service into cluster-c
For cluster-c, you are going to deploy the counting application again. However, this time you will deploy a mock counting service using http-echo. These services use the same namespace and upstream names as the deployment in cluster-b.
Start the dashboard service as before.
This time, use kubectl to deploy the mock counting service.
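One way such a mock might look, assuming the hashicorp/http-echo image serving a fixed JSON body on the same port and service name the real counting service uses (the scenario provides the actual manifest and constant):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: counting
spec:
  replicas: 1
  selector:
    matchLabels:
      app: counting
  template:
    metadata:
      labels:
        app: counting
      annotations:
        consul.hashicorp.com/connect-inject: "true"
    spec:
      containers:
        - name: counting
          image: hashicorp/http-echo:0.2.3
          args:
            # Always answer with a fixed count so the dashboard shows a constant value.
            - "-listen=:9001"
            - '-text={"count": 9999}'
          ports:
            - containerPort: 9001
```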
To verify the services were deployed, select the Consul UI tab. Switch to the Services view and select part2 in the Admin Partition dropdown. Watch for the counting and dashboard services to be running and to transition to healthy—indicated by green checkmarks.
Connect to the dashboard service
Run the following command to create a proxy to the dashboard service.
Once the proxy is running, select the Port 9999 tab to view the dashboard application.
Verify the services are isolated
Finally, look at both applications to verify that they are consulting the proper upstream counting service. Some experiments to consider at this point are:
Try updating the application in cluster-b to use its own mock counting service with different constants.
Try stopping and starting the backend service in cluster-c.
Click Check once you are ready to complete the tutorial.
Next steps
Consul admin partitions enable multi-tenancy by allowing multiple namespaces with the same name to exist independently of each other. Access control lists (ACLs) are augmented to allow a partition administrator full control over their local resources without needing operator permissions for the entire cluster. That helps operations teams to better scale and manage complex architectures, as shown below.
In this tutorial, you deployed a Consul server cluster and two Consul admin partitions across three Kubernetes clusters. You deployed a sample application and created a "backend" service in each partition with no conflicts between them.
Learn more about Consul admin partitions in the Consul documentation.