Consul
Securely connect your services with Consul service mesh
In the previous tutorial, you deployed Consul client agents and registered services to your Consul catalog.
In this tutorial, you will connect workloads using Consul service mesh to enable secure service-to-service communication. A service mesh also allows you to leverage Consul's full suite of features.
To create your service mesh, you will edit the service definitions on your Consul clients, launch Envoy sidecar proxies, and create service intentions to allow traffic across your services in your network.
In this tutorial, you will:
- Review and create intentions to manage traffic permissions
- Modify Consul services' configuration for Consul service mesh
- Start an Envoy sidecar proxy for each service in the mesh
- Restart the services to listen on the localhost interface
Note
Because this tutorial is part of the Get Started on VMs tutorial collection, the following workflow was designed for education and demonstration. It uses scripts to generate agent configurations and requires you to execute commands manually on different nodes. If you are setting up a production environment, you should codify and automate the installation and deployment process according to your infrastructure and networking needs. Refer to the VM production patterns tutorial collection for Consul production deployment considerations and best practices.
Tutorial scenario
This tutorial uses HashiCups, a demo coffee shop application made up of several microservices running on VMs.
At the beginning of the tutorial, you have a Consul datacenter that consists of one server and four clients running on VMs. The services connect directly to each other using the VM's address and every VM has access to every service in the network.
By the end of this tutorial, you will have a fully deployed Consul service mesh with Envoy sidecar proxies running alongside each service. The services will be configured so that they are not reachable unless explicitly allowed through Consul service intentions.
Prerequisites
If you completed the previous tutorial, the infrastructure is already in place with all prerequisites.
Log in to the bastion host VM
The Terraform output provides useful information, including the bastion host IP address.
Log in to the bastion host using SSH.
$ ssh -i certs/id_rsa.pem testadmin@`terraform output -raw ip_bastion`
Verify Envoy binary
Verify that Envoy is installed on each of the client nodes:
- NGINX: hashicups-nginx-0
- Frontend: hashicups-frontend-0
- API: hashicups-api-0
- Database: hashicups-db-0
For example, to check the Envoy installation on the Database VM, log in to it from the bastion host.
$ ssh -i certs/id_rsa hashicups-db-0
Verify the Envoy binary is installed.
$ envoy --version
envoy version: 688c4bbe47f4d05bb8ed268f5172bb026cf03242/1.31.5/Clean/RELEASE/BoringSSL
Check that the installed Envoy version is compatible with your Consul version using the Envoy compatibility matrix.
Return to the bastion host by exiting the SSH session.
$ exit
logout
Connection to hashicups-db-0 closed.
testadmin@bastion:~$
Repeat the steps for all VMs you want to add to the Consul service mesh.
Configure environment
The tutorial uses scripts to create all files in a destination folder. Export the path where you want to create the configuration files for the scenario.
$ export OUTPUT_FOLDER=/home/testadmin/assets/scenario/conf/
Make sure the folder exists.
$ mkdir -p ${OUTPUT_FOLDER}
Source the env-scenario.env file to set the variables in the terminal session.
$ source assets/scenario/env-scenario.env
Configure the Consul CLI to interact with the Consul server.
$ export CONSUL_HTTP_ADDR="https://consul-server-0:8443" && \
export CONSUL_HTTP_SSL=true && \
export CONSUL_CACERT="${OUTPUT_FOLDER}secrets/consul-agent-ca.pem" && \
export CONSUL_TLS_SERVER_NAME="server.${CONSUL_DATACENTER}.${CONSUL_DOMAIN}" && \
export CONSUL_HTTP_TOKEN=`cat ${OUTPUT_FOLDER}secrets/acl-token-bootstrap.json | jq -r ".SecretID"`
Verify your Consul CLI can interact with your Consul server.
$ consul members
Node Address Status Type Build Protocol DC Partition Segment
consul-server-0 172.18.0.2:8301 alive server 1.20.2 2 dc1 default <all>
hashicups-api-0 172.18.0.6:8301 alive client 1.20.2 2 dc1 default <default>
hashicups-db-0 172.18.0.11:8301 alive client 1.20.2 2 dc1 default <default>
hashicups-frontend-0 172.18.0.3:8301 alive client 1.20.2 2 dc1 default <default>
hashicups-nginx-0 172.18.0.4:8301 alive client 1.20.2 2 dc1 default <default>
Review and create service intentions
The initial Consul configuration denies all service connections by default. We recommend this setting in production environments because it follows the least-privilege principle by restricting all network access unless explicitly defined.
Service intentions let you allow and restrict access between services. Intentions are destination-oriented, meaning you create intentions for the destination service and then define which services can access it.
The following intentions are required for HashiCups:
- The db service needs to be reached by the api service.
- The api service needs to be reached by the nginx service.
- The frontend service needs to be reached by the nginx service.
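A service intention is a config entry with Kind set to service-intentions. As a minimal sketch, the intention that allows the api service to reach the db service could look like the following; the generated intention-db.hcl file in the next step is the authoritative version.

# Sketch of a service-intentions config entry; the generated file is authoritative.
Kind = "service-intentions"
Name = "hashicups-db"
Sources = [
  {
    Name   = "hashicups-api"
    Action = "allow"
  }
]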
Use the provided script to generate service intentions.
$ ~/ops/scenarios/00_base_scenario_files/supporting_scripts/generate_global_config_hashicups.sh
[generate_global_config_hashicups.sh] - - Generate configuration for HashiCups application
+ --------------------
| Parameter Check
+ --------------------
[WARN] Generated configuration will be placed under:
[WARN] OUTPUT_FOLDER = ~/assets/scenario/conf/
[WARN] ----------
+ --------------------
| Prepare folder
+ --------------------
- Cleaning folder from pre-existing files
- Generate scenario config folders.
+ --------------------
| Create global configuration definition files
+ --------------------
- Create global proxy configuration
- Create hashicups-db service defaults configutation
+ --------------------
| Create intention definition files
+ --------------------
- Intentions for Database service
- Intentions for API service
- Intentions for Frontend service
Check the files generated by the script.
$ tree ${OUTPUT_FOLDER}global
~/assets/scenario/conf/global
|-- config-global-proxy-default.hcl
|-- config-global-proxy-default.json
|-- config-hashicups-db-service-defaults.hcl
|-- config-hashicups-db-service-defaults.json
|-- intention-api.hcl
|-- intention-api.json
|-- intention-db.hcl
|-- intention-db.json
|-- intention-frontend.hcl
`-- intention-frontend.json
1 directory, 10 files
Finally, apply the intentions to your Consul datacenter using consul config write.
Create the intentions for the hashicups-db service.
$ consul config write ${OUTPUT_FOLDER}global/intention-db.hcl
Config entry written: service-intentions/hashicups-db
Create the intentions for the hashicups-api service.
$ consul config write ${OUTPUT_FOLDER}global/intention-api.hcl
Config entry written: service-intentions/hashicups-api
Create the intentions for the hashicups-frontend service.
$ consul config write ${OUTPUT_FOLDER}global/intention-frontend.hcl
Config entry written: service-intentions/hashicups-frontend
Apply global proxy configurations
To make sure Consul service mesh recognizes your services correctly, apply a global proxy configuration for the sidecars to your datacenter.
Use consul config write to apply the configuration.
$ consul config write ${OUTPUT_FOLDER}global/config-global-proxy-default.hcl
Config entry written: proxy-defaults/global
$ consul config write ${OUTPUT_FOLDER}global/config-hashicups-db-service-defaults.hcl
Config entry written: service-defaults/hashicups-db
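For reference, these two files follow the proxy-defaults and service-defaults config entry formats. The following sketch shows plausible contents; the protocol values are assumptions, and the generated files under ${OUTPUT_FOLDER}global are authoritative.

# config-global-proxy-default.hcl (sketch; protocol value is an assumption)
Kind = "proxy-defaults"
Name = "global"
Config {
  protocol = "http"
}

# config-hashicups-db-service-defaults.hcl (sketch)
# Postgres speaks a TCP protocol, so the db service overrides the global default.
Kind = "service-defaults"
Name = "hashicups-db"
Protocol = "tcp"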
Register services in Consul service mesh
To register services in the Consul service mesh, you need newly generated service definitions that include the sidecar proxy configuration.
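A mesh-enabled service definition adds a connect block with a sidecar_service stanza to the service registration. The following is a minimal sketch for the db service; the port, check, and token values are assumptions, and the generated svc-hashicups-db.hcl file is authoritative.

# Minimal sketch of a mesh-enabled service definition (values are assumptions).
service {
  name = "hashicups-db"
  port = 5432

  # ACL token used to register the service; taken from the generated file.
  token = "<service-token>"

  # Registers an Envoy sidecar proxy for this service in the mesh.
  connect {
    sidecar_service {}
  }

  check {
    id       = "check-hashicups-db"
    tcp      = "localhost:5432"
    interval = "5s"
    timeout  = "5s"
  }
}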
Copy the service configuration files generated in the previous section to the remote nodes.
First, define the path of the Consul configuration directory on the remote nodes.
$ export CONSUL_REMOTE_CONFIG_DIR=/etc/consul.d/
Copy configuration for Database
Use rsync to copy the service configuration file to the remote node.
$ rsync -av --no-g --no-t --no-p \
-e "ssh -i ~/certs/id_rsa" \
${OUTPUT_FOLDER}hashicups-db-0/svc/service_mesh/svc-hashicups-db.hcl \
hashicups-db-0:${CONSUL_REMOTE_CONFIG_DIR}/svc.hcl
The output is similar to the following:
sending incremental file list
svc-hashicups-db.hcl
sent 526 bytes received 41 bytes 1,134.00 bytes/sec
total size is 405 speedup is 0.71
Copy configuration for API
Use rsync to copy the service configuration file to the remote node.
$ rsync -av --no-g --no-t --no-p \
-e "ssh -i ~/certs/id_rsa" \
${OUTPUT_FOLDER}hashicups-api-0/svc/service_mesh/svc-hashicups-api.hcl \
hashicups-api-0:${CONSUL_REMOTE_CONFIG_DIR}/svc.hcl
The output is similar to the following:
sending incremental file list
svc-hashicups-api.hcl
sent 990 bytes received 47 bytes 2,074.00 bytes/sec
total size is 1,035 speedup is 1.00
Copy configuration for Frontend
Use rsync to copy the service configuration file to the remote node.
$ rsync -av --no-g --no-t --no-p \
-e "ssh -i ~/certs/id_rsa" \
${OUTPUT_FOLDER}hashicups-frontend-0/svc/service_mesh/svc-hashicups-frontend.hcl \
hashicups-frontend-0:${CONSUL_REMOTE_CONFIG_DIR}/svc.hcl
The output is similar to the following:
sending incremental file list
svc-hashicups-frontend.hcl
sent 568 bytes received 41 bytes 1,218.00 bytes/sec
total size is 441 speedup is 0.72
Copy configuration for NGINX
Use rsync to copy the service configuration file to the remote node.
$ rsync -av --no-g --no-t --no-p \
-e "ssh -i ~/certs/id_rsa" \
${OUTPUT_FOLDER}hashicups-nginx-0/svc/service_mesh/svc-hashicups-nginx.hcl \
hashicups-nginx-0:${CONSUL_REMOTE_CONFIG_DIR}/svc.hcl
The output is similar to the following:
sending incremental file list
svc-hashicups-nginx.hcl
sent 851 bytes received 41 bytes 594.67 bytes/sec
total size is 728 speedup is 0.82
Start sidecar proxies for services
After you copy the configuration files to each of the VMs, log into each Consul client VM and start the Envoy sidecar proxy for the service.
Start sidecar proxy for Database
Log into hashicups-db-0 from the bastion host.
$ ssh -i certs/id_rsa hashicups-db-0
##..
testadmin@hashicups-db-0:~
Define the Consul configuration directory.
$ export CONSUL_CONFIG_DIR=/etc/consul.d/
Set up a valid token to interact with the Consul agent.
$ export CONSUL_HTTP_TOKEN=`cat ${CONSUL_CONFIG_DIR}/svc.hcl | grep token | awk '{print $3}'| sed 's/"//g'`
Finally, start the Envoy sidecar proxy for the service.
$ /usr/bin/consul connect envoy \
-token=${CONSUL_HTTP_TOKEN} \
-envoy-binary /usr/bin/envoy \
-sidecar-for hashicups-db-0 > /tmp/sidecar-proxy.log 2>&1 &
The command starts the Envoy sidecar proxy in the background to avoid a lock on the terminal. You can access the Envoy log through the /tmp/sidecar-proxy.log file.
Exit the SSH session to return to the bastion host.
$ exit
logout
Connection to hashicups-db-0 closed.
testadmin@bastion:~$
Start sidecar proxy for API
Log into hashicups-api-0 from the bastion host.
$ ssh -i certs/id_rsa hashicups-api-0
##..
testadmin@hashicups-api-0:~
Define the Consul configuration directory.
$ export CONSUL_CONFIG_DIR=/etc/consul.d/
Set up a valid token to interact with the Consul agent.
$ export CONSUL_HTTP_TOKEN=`cat ${CONSUL_CONFIG_DIR}/svc.hcl | grep token | awk '{print $3}'| sed 's/"//g'`
Finally, start the Envoy sidecar proxy for the service.
$ /usr/bin/consul connect envoy \
-token=${CONSUL_HTTP_TOKEN} \
-envoy-binary /usr/bin/envoy \
-sidecar-for hashicups-api-0 > /tmp/sidecar-proxy.log 2>&1 &
The command starts the Envoy sidecar proxy in the background to avoid a lock on the terminal. You can access the Envoy log through the /tmp/sidecar-proxy.log file.
Exit the SSH session to return to the bastion host.
$ exit
logout
Connection to hashicups-api-0 closed.
testadmin@bastion:~$
Start sidecar proxy for Frontend
Log into hashicups-frontend-0 from the bastion host.
$ ssh -i certs/id_rsa hashicups-frontend-0
##..
testadmin@hashicups-frontend-0:~
Define the Consul configuration directory.
$ export CONSUL_CONFIG_DIR=/etc/consul.d/
Set up a valid token to interact with the Consul agent.
$ export CONSUL_HTTP_TOKEN=`cat ${CONSUL_CONFIG_DIR}/svc.hcl | grep token | awk '{print $3}'| sed 's/"//g'`
Finally, start the Envoy sidecar proxy for the service.
$ /usr/bin/consul connect envoy \
-token=${CONSUL_HTTP_TOKEN} \
-envoy-binary /usr/bin/envoy \
-sidecar-for hashicups-frontend-0 > /tmp/sidecar-proxy.log 2>&1 &
The command starts the Envoy sidecar proxy in the background to avoid a lock on the active terminal. You can access the Envoy log through the /tmp/sidecar-proxy.log file.
Exit the SSH session to return to the bastion host.
$ exit
logout
Connection to hashicups-frontend-0 closed.
testadmin@bastion:~$
Start sidecar proxy for NGINX
Log into hashicups-nginx-0 from the bastion host.
$ ssh -i certs/id_rsa hashicups-nginx-0
##..
testadmin@hashicups-nginx-0:~
Define the Consul configuration directory.
$ export CONSUL_CONFIG_DIR=/etc/consul.d/
Set up a valid token to interact with the Consul agent.
$ export CONSUL_HTTP_TOKEN=`cat ${CONSUL_CONFIG_DIR}/svc.hcl | grep token | awk '{print $3}'| sed 's/"//g'`
Finally, start the Envoy sidecar proxy for the service.
$ /usr/bin/consul connect envoy \
-token=${CONSUL_HTTP_TOKEN} \
-envoy-binary /usr/bin/envoy \
-sidecar-for hashicups-nginx-0 > /tmp/sidecar-proxy.log 2>&1 &
The command starts the Envoy sidecar proxy in the background to avoid a lock on the active terminal. You can access the Envoy log through the /tmp/sidecar-proxy.log file.
Exit the SSH session to return to the bastion host.
$ exit
logout
Connection to hashicups-nginx-0 closed.
testadmin@bastion:~$
Restart services to listen on localhost
Now that you have applied the service configuration and intentions, and have started Envoy sidecars for each service, all the components for the Consul service mesh are in place. The Consul sidecar proxies will route the services' traffic to the target destination.
Because traffic moves through the sidecar proxies, you no longer need to expose your services externally. Reconfigure the service nodes to listen only on the loopback interface to improve your overall security.
Reload the services to operate on the localhost interface.
$ ssh -i ~/certs/id_rsa hashicups-db-0 "bash -c 'bash ./start_service.sh start --local'"; \
ssh -i ~/certs/id_rsa hashicups-api-0 "bash -c 'bash ./start_service.sh start --local'"; \
ssh -i ~/certs/id_rsa hashicups-frontend-0 "bash -c 'bash ./start_service.sh start --local'"; \
ssh -i ~/certs/id_rsa hashicups-nginx-0 "bash -c 'bash ./start_service.sh start --ingress'";
The output is similar to the following:
# ------------------------------------ #
| HashiCups DB service |
# ------------------------------------ #
Stop pre-existing instances.
START - Start services on all interfaces.
START LOCAL - Start services on local interface.
Start service instance.
Postgres is still starting - sleeping ...
DB started on local insteface
# ------------------------------------ #
| HashiCups API service |
# ------------------------------------ #
Stop pre-existing instances.
START - Start services on all interfaces.
START LOCAL - Start services on local interface.
Start service instance.
Service started on local insteface
{
"db_connection": "host=localhost port=5432 user=hashicups password=hashicups_pwd dbname=products sslmode=disable",
"bind_address": "localhost:9090",
"metrics_address": "localhost:9103"
}
Starting payments application
Starting Product API
Starting Public API
# ------------------------------------ #
| HashiCups Frontend service |
# ------------------------------------ #
Checking for NEXT_PUBLIC_PUBLIC_API_URL env var
Checking for NEXT_PUBLIC_FOOTER_FLAG env var
Stop pre-existing instances.
START - Start services on all interfaces.
START LOCAL - Start services on local interface.
Start service instance.
Starting HashiCups Frontend on local interface.
# ------------------------------------ #
| HashiCups NGINX service |
# ------------------------------------ #
Stop pre-existing instances.
START - Start services on all interfaces.
START AS INGRESS - Starts the service on all interfaces and connects to upstreams in the service mesh.
Start service instance.
Service started on local insteface
upstream frontend_upstream {
server localhost:3000;
}
upstream api_upstream {
server localhost:8081;
}
Starting NGINX...attempt 1
Verify configuration
Use the Consul CLI to query the service catalog.
$ consul catalog services
consul
hashicups-api
hashicups-api-sidecar-proxy
hashicups-db
hashicups-db-sidecar-proxy
hashicups-frontend
hashicups-frontend-sidecar-proxy
hashicups-nginx
hashicups-nginx-sidecar-proxy
The catalog shows the -sidecar-proxy services registered alongside the regular services.
Note
This tutorial configures the NGINX service to listen on the VM's IP so you can access it remotely. For production, we recommend using an API gateway to manage access to the service mesh.
Retrieve the HashiCups UI address from Terraform.
$ terraform output -raw ui_hashicups
Open the address in a browser.
Confirm that HashiCups still works despite its services being configured to communicate on localhost. The Envoy sidecar proxies route each service's local traffic to the relevant upstream.
Next steps
In this tutorial, you learned how to migrate your Consul services from service discovery to Consul service mesh by updating each service's definition, starting Envoy sidecar proxies for each service, and updating the services' dependencies to bind to localhost.
In the process, you applied zero trust security to your network and learned how to define explicit service-to-service permissions using service intentions.
At this point, the NGINX service used to expose the application externally is still accessible over an insecure connection. While it is possible to configure it for secure traffic using TLS, Consul offers an integrated solution through the Consul API gateway.
If you want to stop at this tutorial, use Terraform to destroy the infrastructure.
Otherwise, continue to the next tutorial, Access services in your service mesh, to learn about deploying the Consul API Gateway to further secure external access to traffic in the Consul service mesh.
From the ./self-managed/infrastructure/azure folder of the repository, use terraform to destroy the infrastructure.
$ terraform destroy --auto-approve
In the next tutorial, you will learn how to add a Consul API Gateway to your service mesh and secure external network access to applications and services running in your Consul service mesh.
You can automate service deployment in your Consul service mesh using Nomad. Learn more in the Integrate service mesh and API gateway Nomad tutorial.
For more information about the topics covered in this tutorial, refer to the following resources: