
Layer 7 Observability with Prometheus, Grafana, and Kubernetes


Consul service mesh deploys an Envoy sidecar proxy alongside each service instance in a datacenter. The sidecar proxy brokers traffic between the local service instance and other services registered with Consul. Because the proxy is aware of all traffic that passes through it, it can collect and expose data about the service instance in addition to securing inter-service communication. Starting with version 1.5, Consul service mesh can configure Envoy to expose layer 7 metrics, such as HTTP status codes or request latency. As of version 1.10, Consul can also export telemetry for the rest of the mesh, including Consul agents and gateways.

In this tutorial, you will:

  • Configure Consul to expose Envoy metrics to Prometheus.
  • Deploy Prometheus and Grafana using their official Helm charts.
  • Deploy Consul using the official Helm chart or the Consul K8S CLI.
  • Deploy a multi-tier demo application that is configured to be scraped by Prometheus.
  • Start a traffic simulation deployment, and observe the application traffic in Grafana.

Tip: While this tutorial shows you how to deploy a metrics pipeline on Kubernetes, all the technologies the tutorial uses are platform agnostic; Kubernetes is not necessary to collect and visualize layer 7 metrics with Consul service mesh.

Prerequisites

If you already have a Kubernetes cluster running with helm and kubectl installed, you can start the tutorial right away. If not, set up a Kubernetes cluster using your preferred method that supports persistent volume claims, or install and start Minikube v1.10.1+ or kind v0.8.1+.
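For example, if you choose kind, a single-node cluster is sufficient for this tutorial. The cluster name below is an arbitrary choice.

$ kind create cluster --name consul-l7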

You must also install kubectl, and both install and initialize Helm. To ensure you have the latest Helm charts for Consul, Prometheus, and Grafana, run the following command.

$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts && \
helm repo add grafana https://grafana.github.io/helm-charts && \
helm repo add hashicorp https://helm.releases.hashicorp.com && \
helm repo update

Next, clone the GitHub repository that contains the files you'll use with this tutorial.

$ git clone https://github.com/hashicorp/learn-consul-kubernetes.git

Change directories into the tutorial specific folder in the repository you just cloned.

$ cd learn-consul-kubernetes/layer7-observability

Check out the tagged version verified for this tutorial.

$ git checkout tags/v0.0.15

We'll refer to this directory as your working directory, and you'll run the rest of the commands in this tutorial from this directory.

Deploy Consul

You can deploy a complete Consul datacenter using the official Consul Helm chart or the Consul K8S CLI. Feel free to review the Consul Kubernetes installation documentation to learn more about these installation options.

Review the custom values file

Once you have installed the prerequisites, you're ready to install Consul. Open the file in your working directory called helm/consul-values.yaml and review the configuration. It should match the YAML shown below.

consul-values.yaml
global:
  enabled: true
  name: consul
  datacenter: dc1
  metrics:
    enabled: true
    enableAgentMetrics: true
    agentMetricsRetentionTime: "1m"
server:
  replicas: 1
ui:
  enabled: true
  metrics:
    enabled: true
    provider: "prometheus"
    baseURL: http://prometheus-server.default.svc.cluster.local
connectInject:
  enabled: true
  default: true
controller:
  enabled: true

Warning: By default, the chart will install an insecure configuration of Consul. This provides a less complicated out-of-box experience for new users, but is not appropriate for a production setup. Review the Secure Consul and Registered Services on Kubernetes tutorial for instructions on how to secure your datacenter for production.
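If you want to inspect the manifests these values will render before installing anything, helm template performs a client-side render of the chart. This step is optional; the release name and namespace below simply mirror the install command used in the next section.

$ helm template consul hashicorp/consul \
    --values helm/consul-values.yaml \
    --namespace consul | less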

Install Consul in your cluster

You can now deploy a complete Consul datacenter in your Kubernetes cluster using either the official Consul Helm chart or the Consul K8S CLI. To install with the Helm chart, add the HashiCorp repository (if you have not already) and run helm install with the custom values file.

$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories
$ helm install --values helm/consul-values.yaml consul hashicorp/consul --create-namespace --namespace consul --version "0.43.0"

Note: You can review the official Helm chart values to learn more about the default settings.

Alternatively, you can install Consul with the Consul K8S CLI. The following commands install the CLI with Homebrew on macOS and then run the installer with the same custom values file; refer to the Consul K8S CLI documentation for other installation methods.

$ brew tap hashicorp/tap
$ brew install hashicorp/tap/consul-k8s
$ consul-k8s install -config-file=helm/consul-values.yaml -set global.image=hashicorp/consul:1.12.0

Note: You can review the official Consul K8S CLI documentation to learn more about additional settings.

Check that Consul is running in your Kubernetes cluster using kubectl. Consul setup is complete when all pods have a status of Running, as illustrated in the following output.

$ kubectl get pods --namespace consul
NAME                                           READY   STATUS    RESTARTS   AGE
consul-client-72kt4                            1/1     Running   0          42s
consul-connect-injector-67d6495b75-rmx2f       1/1     Running   0          42s
consul-connect-injector-67d6495b75-xck5f       1/1     Running   0          42s
consul-controller-559465fd96-zq869             1/1     Running   0          42s
consul-server-0                                1/1     Running   0          42s
consul-webhook-cert-manager-7cf6df6c4f-bhsqq   1/1     Running   0          42s
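If you prefer not to poll manually, you can block until every pod in the namespace reports ready. The timeout below is an arbitrary choice.

$ kubectl wait pods --all --namespace consul --for=condition=Ready --timeout=5m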

Deploy the metrics pipeline

Consul service mesh can integrate with a variety of metrics tooling, but in this tutorial you will use Prometheus and Grafana to collect and visualize metrics. Note that Envoy is not injected into the Prometheus and Grafana pods: they are support tooling rather than application workloads, and keeping them out of the service catalog more accurately reflects the list of application services. You may still inject Envoy into these pods if you choose.

You can disable Envoy injection by configuring podAnnotations with the "consul.hashicorp.com/connect-inject": "false" annotation for each of those applications. Refer to their respective values files in the helm directory for more information.
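For reference, the relevant stanza looks roughly like the following sketch. In the Grafana chart, podAnnotations sits at the top level of the values file; in the Prometheus chart, it is nested under each component, such as server.

podAnnotations:
  "consul.hashicorp.com/connect-inject": "false"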

Deploy Prometheus with Helm

Install the official Prometheus Helm chart using the values in helm/prometheus-values.yaml.

$ helm install -f helm/prometheus-values.yaml prometheus prometheus-community/prometheus --version "15.5.3" --wait
NAME: prometheus
LAST DEPLOYED: Thu Mar  3 13:57:57 2022
NAMESPACE: default
STATUS: deployed
...TRUNCATED...
For more information on running Prometheus, visit:
https://prometheus.io/

Check that Prometheus is running in your Kubernetes cluster using kubectl. Prometheus setup is complete when all pods have a status of Running, as illustrated in the following output.

$ kubectl get pods --namespace default
NAME                                             READY   STATUS    RESTARTS   AGE
prometheus-kube-state-metrics-644f869f97-hgxr8   1/1     Running   0          3m21s
prometheus-node-exporter-v8jsj                   1/1     Running   0          3m20s
prometheus-pushgateway-67cf8576b7-vsjfl          1/1     Running   0          3m20s
prometheus-server-84dfcc8695-mkds2               2/2     Running   0          3m20s

Deploy Grafana with Helm

Installing Grafana will follow a similar process. Install the official Grafana Helm chart using the values in helm/grafana-values.yaml. This configuration will tell Grafana to use Prometheus as a datasource, and set the admin password to password.
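For orientation, the datasource and admin password portions of helm/grafana-values.yaml look roughly like the following excerpt; check the file in the repository for the exact contents.

grafana-values.yaml
adminPassword: password
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        url: http://prometheus-server.default.svc.cluster.local
        access: proxy
        isDefault: true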

$ helm install -f helm/grafana-values.yaml grafana grafana/grafana --version "6.23.1" --wait
NAME: grafana
LAST DEPLOYED: Thu Mar  3 14:03:25 2022
NAMESPACE: default
STATUS: deployed
...TRUNCATED...

Check that Grafana is running in your Kubernetes cluster using kubectl. Grafana setup is complete when all pods have a status of Running, as illustrated in the following output.

$ kubectl get pods --namespace default
NAME                                             READY   STATUS    RESTARTS   AGE
grafana-d7fcbc6b8-7v5zz                          1/1     Running   0          74s
prometheus-kube-state-metrics-644f869f97-hgxr8   1/1     Running   0          6m42s
prometheus-node-exporter-v8jsj                   1/1     Running   0          6m41s
prometheus-pushgateway-67cf8576b7-vsjfl          1/1     Running   0          6m41s
prometheus-server-84dfcc8695-mkds2               2/2     Running   0          6m41s

To expose the Grafana UI outside the cluster, issue the following command.

$ kubectl port-forward svc/grafana 3000:3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000

Leave this port-forward session active so that you can visit the UI again later once metrics are being collected.
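The Grafana Helm chart also stores the admin credentials in a Kubernetes secret named after the release, so if you later change the password in the values file you can always recover it with kubectl. The secret name below assumes the release name grafana used above.

$ kubectl get secret grafana --output jsonpath="{.data.admin-password}" | base64 --decode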

Navigate to http://localhost:3000 in a browser tab and log in to the Grafana UI using admin as the username and password as the password. Once you have logged into the Grafana UI, hover over the dashboards icon (four squares in the left-hand menu), and then click the Browse option.

Add a dashboard using the Grafana GUI

This will take you to a page that gives you some choices about how to upload Grafana dashboards. Click the Import button on the right-hand side of the screen. Open the file in the grafana subdirectory named hashicups-dashboard.json and copy the contents into the JSON window of the Grafana UI. Click through the rest of the options, and you will end up with a dashboard waiting for data to display.

Deploy a demo application on Kubernetes

Now that your monitoring pipeline is set up, deploy a demo application that will generate data. You will use HashiCups, an application that emulates an online ordering app for a coffee shop. For this tutorial, the HashiCups application includes a React front end, a GraphQL API, a REST API, and a Postgres database.

All the files defining HashiCups are in the hashicups directory. Open a new terminal, and deploy the demo application.

$ kubectl apply -f hashicups
service/frontend created
serviceaccount/frontend created
servicedefaults.consul.hashicorp.com/frontend created
configmap/nginx-configmap created
deployment.apps/frontend created
service/postgres created
serviceaccount/postgres created
servicedefaults.consul.hashicorp.com/postgres created
deployment.apps/postgres created
service/product-api created
serviceaccount/product-api created
servicedefaults.consul.hashicorp.com/product-api created
configmap/db-configmap created
deployment.apps/product-api created
service/public-api created
serviceaccount/public-api created
servicedefaults.consul.hashicorp.com/public-api created
deployment.apps/public-api created

Check that HashiCups is running in your Kubernetes cluster using kubectl. HashiCups setup is complete when all pods have a status of Running, as illustrated in the following output.

$ kubectl get pods --namespace default
NAME                                             READY   STATUS    RESTARTS   AGE
frontend-65f9ff786f-ppkf6                        2/2     Running   0          83s
grafana-d7fcbc6b8-7v5zz                          1/1     Running   0          6m12s
postgres-f6f5ff9d5-d96lt                         2/2     Running   0          83s
product-api-5f869745dd-tsb79                     2/2     Running   0          83s
prometheus-kube-state-metrics-644f869f97-hgxr8   1/1     Running   0          11m
prometheus-node-exporter-v8jsj                   1/1     Running   0          11m
prometheus-pushgateway-67cf8576b7-vsjfl          1/1     Running   0          11m
prometheus-server-84dfcc8695-mkds2               2/2     Running   0          11m
public-api-5b7c6cf5cc-5c8rw                      2/2     Running   0          83s

Test the application by viewing the React front end. You can do this by forwarding the frontend deployment's port 80 to your development host.

$ kubectl port-forward deploy/frontend 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

Navigate to http://localhost:8080 in a browser window. You should observe the following screen.

View the HashiCups Front End

Now that the application is running, verify that Consul did, in fact, configure Envoy to publish metrics on port 20200. The Envoy sidecar proxy's admin interface can be reached on port 19000. Enter CTRL-C to stop the port-forward session for the frontend service, and then issue the following command to open a tunnel to the Envoy proxy.

$ kubectl port-forward deploy/frontend 19000:19000
Forwarding from 127.0.0.1:19000 -> 19000
Forwarding from [::1]:19000 -> 19000

Navigate to http://localhost:19000/config_dump in a browser window. You should observe what looks like a raw JSON document dumped to the screen. This is the Envoy configuration. Search for 20200 and you should find two different stanzas that reference this port. One of them is included next for reference.

{
  "name": "envoy_prometheus_metrics_listener",
  "address": {
    "socket_address": {
      "address": "0.0.0.0",
      "port_value": 20200
    }
  }
}

This confirms that Consul has configured Envoy to publish Prometheus metrics. Enter CTRL-C to stop the port-forward session from the side-car proxy. You will not need to reference it again for the remainder of the tutorial.
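If you would like to observe the raw metrics themselves, you can also forward the metrics port and fetch the payload directly. The /metrics path below is the endpoint that the pod's Prometheus scrape annotations point at, assuming the default consul-k8s metrics configuration.

$ kubectl port-forward deploy/frontend 20200:20200

Then, in a second terminal, print the first few metric lines.

$ curl -s http://localhost:20200/metrics | head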

Visualize application metrics

While Grafana is optimized for at-a-glance observability, the Prometheus UI can be useful as well. Issue the following command to expose the Prometheus UI to your development host.

$ kubectl port-forward deploy/prometheus-server 9090:9090
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090

Discover available metrics with Prometheus

Navigate to http://localhost:9090 in a browser window, and you should observe the default Prometheus UI. In the text box at the top of the screen, paste sum by(__name__)({app="product-api"}) != 0 and then click the button labeled Execute. Your screen will now look similar to the following.

Discover products-api available metrics

You have now performed a PromQL query that lists all available metrics for resources that have the product-api label. In this case, that is the REST API resource you deployed from the hashicups directory. This list of metrics can be used as constraints for further PromQL queries, both here and in the Grafana UI.
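The same query can also be issued against Prometheus's HTTP API, which is convenient for scripting. The following uses the standard query endpoint while your port-forward session is still active.

$ curl -s http://localhost:9090/api/v1/query \
    --data-urlencode 'query=sum by(__name__)({app="product-api"}) != 0'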

Next, from within the Prometheus UI, click the dropdown menu item labeled Status in the menu bar at the top of the screen, and click on the option labeled Targets. This will navigate the UI to the Targets screen where you can review the resources that Prometheus is monitoring. The Prometheus Helm chart you installed earlier is configured to monitor the Kubernetes infrastructure as well as Prometheus itself.

Click on the button labeled show less next to each section header except the one labeled kubernetes-pods. Your screen should appear similar to the following.

View Prometheus targets

Notice that several pods are running, and have a State of UP. Look at the first label for each pod in the Labels column. You should have entries for app="frontend", app="postgres", app="product-api", and app="public-api". This confirms that Prometheus is collecting metrics from all of your annotated resources, and that they are all up and running.

Go back to the terminal where your port-forward session for the Prometheus UI is running, and type CTRL-C to end the session. You will not need to access the Prometheus UI for the remainder of the tutorial.

Simulate traffic

Now that you know the application is running, start generating some load so that you will have some metrics to look at in Grafana.

$ kubectl apply -f traffic.yaml
configmap/k6-configmap created
service/traffic created
deployment.apps/traffic created
serviceintentions.consul.hashicorp.com/traffic-to-frontend created

Envoy exposes a huge number of metrics. Which metrics are important to monitor will depend on your application. For this tutorial we have preconfigured a HashiCups-specific Grafana dashboard with a couple of basic metrics, but you should systematically consider what others you will need to collect as you move from testing into production.
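As one example, and assuming the default Envoy metric names, a query like the following would chart the per-service rate of upstream requests. Treat it as a starting point for your own dashboards rather than a recommendation.

sum by (app) (rate(envoy_cluster_upstream_rq_total[1m]))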

View Grafana dashboard

Now that you have metrics flowing through your pipeline, and a traffic simulation deployment running, navigate back to your Grafana dashboard in the Grafana UI, and you should observe a screen similar to the following.

View Dashboard traffic metrics

Notice that once you started the traffic simulation deployment Prometheus started to log active connections. This dashboard is simplistic, but illustrates that metrics are flowing through the pipeline, and should give you a reasonable starting point for setting up your own observability tooling.

View traffic in the Consul UI

You can also view service metrics in the Consul UI. In a new terminal session, issue the following command to expose the Consul UI to the development host.

$ kubectl port-forward consul-server-0 --namespace consul 8500:8500
Forwarding from 127.0.0.1:8500 -> 8500
Forwarding from [::1]:8500 -> 8500

Open http://localhost:8500 in a new browser tab, and navigate to the Services screen. Select the postgres service. You should observe that a chart with some basic metrics is embedded in the service tile.

View Consul embedded Dashboard

Hover over the four tile elements to get tooltip descriptions of the different metrics.

You can also hover the timeline chart embedded in the tile, and review additional metrics for any point in time during the last 15 minutes.

Clean up

If you want to get rid of the configuration files and Consul Helm chart, recursively remove the learn-consul-kubernetes directory.

$ cd ../.. && rm -rf learn-consul-kubernetes
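Alternatively, if you want to tear down the workloads while keeping the repository, delete the demo manifests and uninstall the Helm releases from the working directory. (If you installed Consul with the Consul K8S CLI, consul-k8s uninstall works as well.)

$ kubectl delete -f traffic.yaml -f hashicups
$ helm uninstall grafana
$ helm uninstall prometheus
$ helm uninstall consul --namespace consul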

Next steps

In this tutorial, you set up layer 7 metrics collection and visualization in a Kubernetes cluster using Consul service mesh, Prometheus, and Grafana, all deployed via Helm charts. Specifically, you:

  • Configured Consul and Envoy to expose application metrics to Prometheus.
  • Deployed Consul using the official Helm chart.
  • Deployed Prometheus and Grafana using their official Helm charts.
  • Deployed a multi-tier demo application that was configured to be scraped by Prometheus.
  • Started a traffic simulation deployment, and observed the metrics in Prometheus and Grafana.

Because all of these programs can run outside of Kubernetes, you can set this pipeline up in any environment or collect metrics from workloads running on mixed infrastructure.

To learn more about the configuration options in Consul that enable layer 7 metrics collection with or without Kubernetes, refer to our documentation. For more information on centrally configuring Consul, take a look at the centralized configuration documentation.
