Secure Consul and Registered Services on Kubernetes

When operating pre-production environments, it is common to initialize an environment without security features enabled in order to make the development process more transparent. Cleartext network traffic can be a useful debugging tool.

To secure an existing environment, or create an environment that is secured from the beginning, you can apply security controls using the official Consul Helm chart.

In this tutorial, you will:

  • Review the types of Consul service mesh traffic
  • Install an unsecured Consul service mesh on Kubernetes for development or debugging
  • Verify that gossip encryption, TLS, and ACLs are not enabled
  • Upgrade the installation to enable gossip encryption, TLS, and ACLs
  • Verify that gossip encryption, TLS, and ACLs are enabled
  • Deploy two example services to the service mesh
  • Configure zero-trust networking using Consul intentions

Intended audience

  • admins with privileged permissions to administer local, pre-production, and production clusters
  • developers with privileged permissions to administer at least local or pre-production clusters
  • contributors who wish to contribute to HashiCorp open source projects

Prerequisites

To complete this tutorial in your own environment you will need the following:

  • Consul
  • consul-helm
  • helm
  • kubectl (CLI)
  • Administrative access to an active Kubernetes cluster

This tutorial was tested using:

  • Consul 1.12.0
  • consul-k8s 0.49.0
  • kind v0.17.0
  • helm v3.8.1
  • kubectl v1.25.0

Types of Consul service mesh traffic

The diagram below depicts the conceptual model of an active Consul service mesh on Kubernetes. The number of nodes varies based on your configuration, but the fundamental architecture and communication flows are represented.

Consul on Kubernetes

  • Consul uses a gossip protocol to manage membership and broadcast messages to the service mesh.
  • Consul uses the remote procedure call (RPC) pattern for communication between client and server nodes. Each server provides an HTTP API that supports read and write operations on the catalog which tracks the status of nodes, services, and other state information.
  • Consul uses Access Control Lists (ACLs) to secure the UI, API, CLI, service communications, and agent communications. At the core, ACLs operate by grouping rules into policies, then associating one or more policies with a token.
  • Consul uses intentions to control which services may establish connections. Intentions can be managed via the API, CLI, or UI.
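
For example, intentions can also be managed directly from the Consul CLI. A minimal sketch, where the service names web and db are placeholders (later in this tutorial you will manage intentions with Kubernetes CRDs instead):

$ consul intention create -allow web db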

Create a Kubernetes cluster (optional)

Run a local Kubernetes cluster using kind.

$ kind create cluster --image kindest/node:v1.24.7
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.24.7) đŸ–ŧ
 ✓ Preparing nodes đŸ“Ļ
 ✓ Writing configuration 📜
 ✓ Starting control-plane đŸ•šī¸
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day! 👋

Kind will configure your kubectl tool to interact with your local K8s cluster. Verify that kubectl is configured to interact with the K8s control plane running on your local machine (127.0.0.1).

$ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:65419
CoreDNS is running at https://127.0.0.1:65419/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Install an unsecured Consul service mesh

Create an unsecured config file

Now you will create an unsecured Consul Helm configuration file called dc1.yaml. Consul server containers in Kubernetes do not run as root by default, and the Alpine Linux image they are based on intentionally does not include tools like tcpdump. To modify the running container, you will add Consul Helm configuration that overrides the default and runs the Consul agent containers as root. This allows you to use apk to install the utilities you need for this tutorial. Specifically, you will add a securityContext stanza under the server stanza, as shown in the following YAML example.

Create the Consul Helm configuration file.

$ cat > dc1.yaml <<EOF
global:
  name: consul
  enabled: true
  datacenter: dc1
server:
  replicas: 1
  securityContext:
    runAsNonRoot: false
    runAsUser: 0
connectInject:
  enabled: true
controller:
  enabled: true
EOF
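
The securityContext stanza relaxes the default security posture for this tutorial only. Once Consul is installed, you can confirm the override took effect by checking the user inside the server container; uid=0 indicates root (an optional sanity check):

$ kubectl exec --namespace consul consul-server-0 -- id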

Install Consul in your cluster

You can now deploy a complete Consul datacenter in your Kubernetes cluster using the official Consul Helm chart or the Consul K8s CLI.

Helm chart preparation

Consul on Kubernetes provides a Helm chart to deploy a Consul datacenter on Kubernetes in a highly customizable configuration. Review the Helm chart configuration docs to learn more about the available options.

Add the official HashiCorp Helm repository and download the latest official consul-helm chart now.

$ helm repo add hashicorp https://helm.releases.hashicorp.com && helm repo update
"hashicorp" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "hashicorp" chart repository
Update Complete. ⎈Happy Helming!⎈

Verify chart version

To ensure you have version 0.43.0 of the Helm chart, search your local repo.

$ helm search repo hashicorp/consul
NAME                CHART VERSION   APP VERSION DESCRIPTION
hashicorp/consul    0.43.0          1.12.0      Official HashiCorp Consul Chart
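
If your local repo returns a different version by default, you can list all available chart versions and pin the one this tutorial was tested with (the --version flag on install does the pinning):

$ helm search repo hashicorp/consul --versions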

Install the chart

Now, issue the helm install command. The following command specifies that the installation should:

  • Use the custom values file dc1.yaml you created earlier
  • Use the hashicorp/consul chart
  • Set your Consul installation name to consul
  • Create Consul resources in the consul namespace
  • Use consul-helm chart version 0.43.0
$ helm install consul hashicorp/consul --values dc1.yaml --create-namespace --namespace consul --version "0.43.0" --wait

NAME: consul
LAST DEPLOYED: Sat Apr 23 11:00:16 2022
NAMESPACE: consul
STATUS: deployed
REVISION: 1
NOTES:
Thank you for installing HashiCorp Consul!

Your release is named consul.

To learn more about the release, run:

  $ helm status consul
  $ helm get all consul

Consul on Kubernetes Documentation:
https://www.consul.io/docs/platform/k8s

Consul on Kubernetes CLI Reference:
https://www.consul.io/docs/k8s/k8s-cli

Alternatively, you can install Consul with the Consul K8s CLI, a tool for quickly installing and interacting with Consul on Kubernetes. Ensure that you install the correct version of the CLI for your Consul on Kubernetes deployment, as the CLI and the control plane are version dependent.

To install the latest version of the Consul K8s CLI for your operating system, follow the instructions in the Consul K8s CLI installation documentation.

$ consul-k8s install -config-file=dc1.yaml -set global.image=hashicorp/consul:1.12.0

Note: You can review the official Consul K8S CLI documentation to learn more about additional settings.
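
Once the CLI is installed, you can confirm that its version matches your deployment (an optional check):

$ consul-k8s version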

When Consul is installed successfully, expect the following output:

==> Installing Consul
 ✓ Downloaded charts
 --> creating 1 resource(s)
 --> creating 46 resource(s)
 --> beginning wait for 46 resources with timeout of 10m0s
 ✓ Consul installed in namespace "consul".

Verify installation

Use kubectl get pods to verify your installation.

$ watch kubectl get pods --namespace consul
NAME                                           READY   STATUS    RESTARTS   AGE
consul-client-477vv                            1/1     Running   0          6m
consul-connect-injector-5f485f9cb6-8fvkh       1/1     Running   0          6m
consul-connect-injector-5f485f9cb6-s97m5       1/1     Running   0          6m
consul-controller-5cc8d5867-vqh5l              1/1     Running   0          6m
consul-server-0                                1/1     Running   0          6m
consul-webhook-cert-manager-5cf7f8d655-k7ktd   1/1     Running   0          6m

Once all pods have a status of Running, enter CTRL-C to stop the watch.

At this point, the Consul service mesh has been installed in the Kubernetes cluster, but no security features have been enabled. This means that:

  • All gossip traffic between agents is in cleartext
  • All RPC communication between agents is in cleartext
  • There are no access controls in place
  • No intentions have been defined
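
You can get a first hint of this state without any packet capture: the server agent answers queries without a token. The sections below verify each point more thoroughly.

$ kubectl exec --namespace consul consul-server-0 -- consul info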

Verify security not enabled (optional)

To verify that network traffic is in cleartext, inspect it with tcpdump.

View server traffic

To view network traffic, connect to the consul-server-0 container with kubectl, and observe its traffic using the tcpdump program.

$ kubectl exec -it --namespace consul consul-server-0 -- /bin/sh

The container images used by the Consul Helm chart are lightweight Alpine images that ship with limited tools. Issue the following command to install tcpdump:

$ apk update && apk add tcpdump
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
...TRUNCATED...

Now start tcpdump to view traffic to the server container. The following command limits the capture to the range of ports used by Consul.

$ tcpdump -an portrange 8300-8700 -A

Traffic occurs rapidly. The following output is an abbreviated example.

... m m.q.w...Node.consul-server-0.SeqNo....SourceAddr.
....SourceNode.control-plane.SourcePort. m
14:43:11.325622 IP 10.244.0.8.8301 > 10.244.0.5.8301: UDP, length 152
E....x@.@.}.

Inspect the output and observe that the traffic is in cleartext. Note the UDP operations; these entries are the gossip protocol at work. This proves that gossip encryption is not enabled.

Next, issue a Consul CLI command to prove two things:

  • RPC traffic is currently unencrypted
  • ACLs are not enabled

If the previous tcpdump process is still active, type CTRL-C to end it. This slightly modified version of the tcpdump command writes results to a log file. Start it now so that you can grep for interesting log entries later.

$ tcpdump -an portrange 8300-8700 -A > /tmp/tcpdump.log
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes

Traffic is now being captured in a log file rather than being output to the terminal.

Next, generate some Consul traffic in a different terminal using kubectl to exec a command against the Consul CLI on the client container. This traffic will originate from the client to the server, and will prove that RPC traffic is in cleartext.

$ kubectl exec --namespace consul $(kubectl get pods --namespace consul -l component=client -o jsonpath='{.items[0].metadata.name}') -- consul catalog services

The command succeeds, but notice that you did not pass a -token option nor did you set the CONSUL_HTTP_TOKEN environment variable. One or the other is required when ACLs are enabled. This proves that ACLs are not enabled.
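
For a second, more direct check, the ACL endpoints themselves report that the ACL system is off; this command is expected to fail with an error indicating that ACL support is disabled:

$ kubectl exec --namespace consul consul-server-0 -- consul acl token list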

Now, from the terminal session on the server container type CTRL-C to stop the tcpdump process, and then search the log file for the CLI operation with the following command:

$ grep 'ServiceMethod' /tmp/tcpdump.log
....A...Seq..ServiceMethod.Catalog.ListServices..AllowStale..Datacenter.dc1.Filter..MaxAge..MaxQueryTime..MaxStaleDuration..MinQueryIndex..MustRevalidate..NodeMetaFilters..RequireConsistent..Source..Datacenter..Ip..Node..Segment..StaleIfError..Token..UseCache.

Note that you are able to inspect the RPC operation in cleartext. This proves that RPC traffic is not encrypted.

Exit the terminal session on the server container.

$ exit

Upgrade to a secured Consul service mesh

Next, upgrade your Consul service mesh installation to enable gossip encryption, TLS, and ACLs. You can upgrade the service mesh by updating your custom YAML configuration file and then passing the new configuration via Helm or the Consul K8s CLI.

The following command will generate the secure-dc1.yaml file with gossip encryption, TLS, and ACLs enabled.

$ cat > secure-dc1.yaml <<EOF
global:
  name: consul
  enabled: true
  datacenter: dc1
  gossipEncryption:
    autoGenerate: true
  tls:
    enabled: true
    enableAutoEncrypt: true
    verify: true
  acls:
    manageSystemACLs: true
server:
  replicas: 1
  securityContext:
    runAsNonRoot: false
    runAsUser: 0
connectInject:
  enabled: true
controller:
  enabled: true
EOF

Notice that the file now includes a gossipEncryption stanza.

gossipEncryption:
  # This stanza instructs Kubernetes to generate a
  # gossip encryption key and register it as a Kubernetes
  # secret. Consul will use this key automatically at
  # runtime.
  autoGenerate: true

TLS is enabled by this stanza.

tls:
  enabled: true
  # By enabling TLS and setting `enableAutoEncrypt` to true,
  # the TLS system will configure itself. You
  # do not need to take any action beyond setting these values
  # in the config file.
  enableAutoEncrypt: true
  # The `verify` setting instructs Consul to verify the
  # authenticity of servers and clients.
  verify: true

ACLs are enabled by this stanza.

acls:
  # By setting `manageSystemACLs` to true, the ACL system
  # will configure itself. You do not need to take any
  # action beyond setting the value in the config file.
  manageSystemACLs: true

Upgrade the deployment

The config file for the chart has been properly configured, and all necessary secrets have been registered with Kubernetes. Execute the following command to upgrade the installation with these changes, using the same tool you used to install Consul. The upgrade may take a minute or two to complete.

$ helm upgrade consul hashicorp/consul --namespace consul --version "0.43.0" --values ./secure-dc1.yaml --wait

Alternatively, if you installed with the Consul K8s CLI:

$ consul-k8s upgrade -config-file=secure-dc1.yaml -set global.image=hashicorp/consul:1.12.0

Verify the upgrade

Use kubectl get pods to verify your installation.

$ watch kubectl get pods --namespace consul
NAME                                          READY   STATUS    RESTARTS   AGE
consul-client-fxctc                           1/1     Running   0          99s
consul-connect-injector-6fc7d6bc7f-7mdvg      1/1     Running   0          57s
consul-connect-injector-6fc7d6bc7f-czgfh      1/1     Running   0          107s
consul-controller-556c46db8b-9wv2c            1/1     Running   0          107s
consul-server-0                               1/1     Running   0          99s
consul-webhook-cert-manager-b6d9bb4fc-7wv22   1/1     Running   0          6m12s

Once all pods have a status of Running you can proceed to the next step.

Verify security enabled

Now, you can verify that gossip encryption and TLS are enabled, and that ACLs are being enforced. This verification is optional; if you prefer, you can skip ahead to Configure Consul intentions.

In a separate terminal, forward port 8501 from the Consul server on Kubernetes so that you can interact with the Consul CLI from the development host.

Note: With TLS enabled, Consul uses port 8501 instead of 8500 by default.

$ kubectl port-forward --namespace consul consul-server-0 8501:8501

Set the CONSUL_HTTP_ADDR environment variable to use the HTTPS address/port on the development host.

$ export CONSUL_HTTP_ADDR=https://127.0.0.1:8501

Export the CA file from Kubernetes so that you can pass it to the CLI.

$ kubectl get secret --namespace consul consul-ca-cert -o jsonpath="{.data['tls\.crt']}" | base64 --decode > ca.pem
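
Alternatively, instead of passing -ca-file to each command below, you can point the CLI at the CA file once with the CONSUL_CACERT environment variable:

$ export CONSUL_CACERT=$(pwd)/ca.pem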

Now, execute consul members and provide Consul with the ca-file option to verify TLS connections. You will observe a list of all members of the service mesh.

$ consul members -ca-file ca.pem
Node             Address          Status  Type    Build   Protocol  DC   Partition  Segment
consul-server-0  172.17.0.5:8301  alive   server  1.12.0  2         dc1  default    <all>
minikube         172.17.0.3:8301  alive   client  1.12.0  2         dc1  default    <default>

The actions you performed in this section of the tutorial prove that TLS is being enforced.

Set an ACL token

Now, try launching a debug session. The command will fail.

$ consul debug -ca-file ca.pem
==> Capture validation failed: error querying target agent: Unexpected response code: 403 (Permission denied: token with AccessorID '00000000-0000-0000-0000-000000000002' lacks permission 'agent:read' on "consul-server-0"). verify connectivity and agent address

The 403 response proves that ACLs are being enforced. You have not yet supplied an ACL token, so the command fails. The consul members command worked because consul-helm created an anonymous token and set the following policy for it:

node_prefix "" {
   policy = "read"
}
service_prefix "" {
   policy = "read"
}

Note: This policy is necessary for DNS; more specifically, it allows the Kubernetes DNS server to make DNS queries against Consul. This is why queries like consul members work without the client specifically providing a token. If you don't want to use Consul DNS, you can disable it in the chart by setting dns.enabled to false, which configures the ACL bootstrapping job not to create a policy for the anonymous token. However, if you want to use Consul DNS, this policy is required.
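
For reference, disabling Consul DNS in the Helm values would look like the following stanza (not applied in this tutorial):

dns:
  enabled: false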

In order to perform operations that require a higher level of authority, you must provide a token with the necessary permissions. In this tutorial you will set the CONSUL_HTTP_TOKEN environment variable.

Note: You could alternately pass the token as the -token option to your CLI commands.

The consul-helm chart created several secrets during the initialization process and registered them with Kubernetes. For a list of all Kubernetes secrets, issue the following command:

$ kubectl get secrets --namespace consul
NAME                                                         TYPE                                  DATA   AGE
consul-bootstrap-acl-token                                   Opaque                                1      5m36s
consul-ca-cert                                               Opaque                                1      6m13s
consul-ca-key                                                Opaque                                1      6m12s
...TRUNCATED...

Notice that one of the secrets is named consul-bootstrap-acl-token. To view the Kubernetes secret, execute the following command:

$ kubectl get secret --namespace consul consul-bootstrap-acl-token -o yaml | more
apiVersion: v1
data:
  token: OTk5ZGU1MGUtY2Q2Zi1iZGZiLTlkZmQtMDgwYjc0YTJjM2Jh
kind: Secret
...TRUNCATED...

This secret contains the Consul ACL bootstrap token. The bootstrap token is a full access token that can perform any operation in the service mesh. In a production scenario, you should avoid using the bootstrap token, and instead create tokens with specific permissions. In this tutorial, you will use it for convenience.

The value of interest is the string in the data stanza's token field. That value is a base64 encoded string that contains the bootstrap token generated during the consul-helm ACL init process.

For this tutorial you can retrieve the value, decode it, and set it to the CONSUL_HTTP_TOKEN environment variable with the following command.

$ export CONSUL_HTTP_TOKEN=$(kubectl get --namespace consul secrets/consul-bootstrap-acl-token --template={{.data.token}} | base64 -d)

Start a debug session again with an ACL token set.

$ consul debug -ca-file ca.pem
==> Starting debugger and capturing static information...
     Agent Version: '1.12.0'
          Interval: '30s'
          Duration: '2m0s'
            Output: 'consul-debug-2022-04-28T13-20-08+0300.tar.gz'
           Capture: 'metrics, logs, pprof, host, agent, members'
==> Beginning capture interval 2022-04-28 10:20:08.76562 +0000 UTC (0)
==> Capture successful 2022-04-28 10:20:09.037701 +0000 UTC (0)

The command succeeds now that you have supplied a token with sufficient privileges. Type CTRL-C to end the debug session in the terminal.
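
As noted earlier, production environments should prefer narrowly scoped tokens over the bootstrap token. With the bootstrap token exported, a minimal sketch of minting one (the policy name agent-read and its single rule are illustrative):

$ consul acl policy create -ca-file ca.pem -name agent-read -rules 'agent_prefix "" { policy = "read" }'
$ consul acl token create -ca-file ca.pem -description "read-only agent token" -policy-name agent-read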

Verify that network traffic is encrypted

Now that you have proven that ACLs are enabled, and that TLS verification is being enforced, you will prove that all gossip and RPC traffic is encrypted.

Start a shell session on the server container.

$ kubectl exec -it --namespace consul consul-server-0 -- /bin/sh

Since the containers were recycled during the upgrade, you will have to install tcpdump again.

$ apk update && apk add tcpdump
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
...TRUNCATED...

Next, start tcpdump and observe the gossip traffic.

$ tcpdump -an portrange 8300-8700 -A
... m m......R...[...%;..c...P.t3.y.)..\....Og...[....$.ySr.,q...........K3.......F......7.8.U.Ny.Z6........y.!oj....I.a..... ...^.
19:48:09.379382 IP 10.244.0.16.8301 > 10.244.0.15.8301: UDP, length 181
E.....@.?.W.
...
... m m.....F.#N.2.u}..C.4d..c*..$...G....d...G...e<.eE...>Fv.> ..-......\r.$F..5........6..U.  y._.3....M.............O.uAel[..]..# a#K....q..EX.3K.8;3.\.T.."d.....q....%...hm.c..y^D.{.[l {..%

Notice that none of the traffic is in cleartext, as it was before. This proves that gossip traffic is now encrypted.

Type CTRL-C to stop the tcpdump session. Use the following command to restart tcpdump and pipe the results to a log file so that you can search for cleartext RPC traffic.

$ tcpdump -an portrange 8300-8700 -A > /tmp/tcpdump.log

From a different terminal, list services with the Consul CLI. There will be only one service: consul.

$ kubectl exec --namespace consul $(kubectl get pods --namespace consul -l component=client -o jsonpath='{.items[0].metadata.name}') -- consul catalog services -token $(kubectl get --namespace consul secrets/consul-bootstrap-acl-token --template={{.data.token}} | base64 -d)
consul

Switch back to the terminal session on the server container, type CTRL-C to stop tcpdump, and grep the log for an RPC entry.

$ grep 'ServiceMethod' /tmp/tcpdump.log

Notice that no rows were found this time. This proves that RPC traffic is now encrypted. Exit the terminal session on the server container.

$ exit

Configure Consul intentions

Now, deploy two sample services, and manage them using Consul intentions.

Deploy example services

To simulate an active environment, you will deploy a client and an upstream backend service. First, issue the following command to create a file named server.yaml that will be used to create an HTTP echo server on Kubernetes:

$ cat > server.yaml <<EOF
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: static-server
spec:
  protocol: 'http'
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: static-server
---
apiVersion: v1
kind: Service
metadata:
  name: static-server
spec:
  selector:
    app: static-server
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: static-server
  name: static-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-server
  template:
    metadata:
      annotations:
        consul.hashicorp.com/connect-inject: 'true'
      labels:
        app: static-server
    spec:
      serviceAccountName: static-server
      containers:
        - name: static-server
          image: hashicorp/http-echo:latest
          args:
            - -text="hello world"
            - -listen=:8080
          ports:
            - containerPort: 8080
EOF

Next, deploy the sample backend service.

$ kubectl apply -f server.yaml
servicedefaults.consul.hashicorp.com/static-server created
serviceaccount/static-server created
service/static-server created
deployment.apps/static-server created

Next, create a file named client.yaml that defines the sample client service.

$ cat > client.yaml <<EOF
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: static-client
spec:
  protocol: 'http'
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: static-client
---
apiVersion: v1
kind: Service
metadata:
  name: static-client
spec:
  selector:
    app: static-client
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: static-client
  name: static-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-client
  template:
    metadata:
      annotations:
        consul.hashicorp.com/connect-inject: 'true'
      labels:
        app: static-client
    spec:
      serviceAccountName: static-client
      containers:
        - name: static-client
          image: rancher/curlimages-curl:7.73.0
          command: ['/bin/sh', '-c', '--']
          args: ['while true; do sleep 30; done;']
EOF

Next, deploy the sample client.

$ kubectl apply -f client.yaml
servicedefaults.consul.hashicorp.com/static-client created
serviceaccount/static-client created
service/static-client created
deployment.apps/static-client created

Finally, ensure all pods/containers have a status of Running before proceeding to the next section.

$ watch kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
static-client-755f485c45-dzg47   2/2     Running   0          14m
static-server-6d5fb5f5d5-cz7sz   2/2     Running   0          14m

With manageSystemACLs set to true, the Consul Helm chart will, by default, create a deny all intention. This means that services will not be able to communicate until an explicit intention is defined that allows them to communicate.
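
You can also ask Consul directly whether a connection would be authorized. From the terminal where you exported CONSUL_HTTP_ADDR and CONSUL_HTTP_TOKEN, the following check should report "Denied" at this point:

$ consul intention check -ca-file ca.pem static-client static-server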

Issue the following command to validate that the default deny all intention is enforced.

$ kubectl exec deploy/static-client -c static-client -- curl -s http://static-server
error: unable to upgrade connection: container not found ("static-client")

The command fails, which proves that the default deny intention is being enforced.

Run the following command to create a YAML file containing an allow ServiceIntentions resource for client-to-server traffic.

$ cat > client-to-server-intention.yaml <<EOF
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: client-to-server
spec:
  destination:
    name: static-server
  sources:
    - name: static-client
      action: allow
EOF

Use kubectl to apply the ServiceIntentions resource to the service mesh.

$ kubectl apply -f client-to-server-intention.yaml
serviceintentions.consul.hashicorp.com/client-to-server created

Finally, validate the intention allows traffic from the client to the server. If this fails, wait a few seconds for the intention to be applied, and try again.

$ kubectl exec deploy/static-client -c static-client -- curl -s http://static-server
"hello world"

This proves the intention is allowing traffic from the client to the server.

Next steps

In this tutorial, you enabled Consul security controls for Consul on Kubernetes using the Consul Helm chart.

Specifically, you:

  • Installed an unsecured Consul service mesh on Kubernetes for development or debugging
  • Verified that gossip encryption, TLS, and ACLs were not enabled
  • Upgraded the installation to enable gossip encryption, TLS, and ACLs
  • Verified that gossip encryption, TLS, and ACLs were now enabled
  • Deployed two example services to the service mesh
  • Configured zero-trust networking using Consul intentions

Next, consider reviewing our L7 observability tutorial to learn more techniques for monitoring or debugging Consul on Kubernetes.
