Consul
Register your services to Consul
In the previous tutorial, you deployed a Consul server, enabled ACL security features, and explored how to use Consul as a KV store and DNS server.
Consul also serves as a centralized service catalog. This use case requires Consul client agents, which act as a distributed health monitoring platform for your services.
In this tutorial, you will deploy Consul client agents to your virtual machine (VM) workloads. Then, you will register the services to the Consul catalog and set up a distributed monitoring system using Consul health checks.
In this tutorial, you will:
- (Optional) Deploy your VM environment on AWS EC2 using Terraform
- Configure Consul client agents for the different VMs
- Start Consul client instances on your workload VMs
- Configure your terminal to communicate with the Consul datacenter
- Verify Consul datacenter members
- Query the Consul catalog using CLI, API, and DNS interfaces
- Modify a service definition and update the service in Consul catalog
Note
Because this tutorial is part of the Get Started on VMs tutorial collection, the following workflow was designed for education and demonstration. It uses scripts to generate agent configurations and requires you to execute commands manually on different nodes. If you are setting up a production environment, you should codify and automate the installation and deployment process according to your infrastructure and networking needs. Refer to the VM production patterns tutorial collection for Consul production deployment considerations and best practices.
Tutorial scenario
This tutorial uses HashiCups, a demo coffee shop application made up of several microservices running on VMs.
At the beginning of this tutorial, there are six VMs: an instance of the HashiCups application running on four VMs, the VM running the Consul server that you deployed in the previous tutorial, and one bastion host to interact with the other VMs.
At the end of this tutorial, a Consul client agent runs on each VM that hosts a HashiCups service. The HashiCups services are registered in the Consul catalog, and there are health checks set up for each service.
Prerequisites
If you completed the previous tutorial, the infrastructure is already in place with all prerequisites.
Log in to the bastion host VM
The Terraform output provides useful information, including the bastion host IP address.
Log in to the bastion host using ssh.
$ ssh -i certs/id_rsa.pem admin@`terraform output -raw ip_bastion`
Verify the Consul binary on the service VMs
Verify that the VMs you want to deploy the Consul agents on have the Consul binary.
For example, to check the Consul installation on the Database VM, log in to the Database VM from the bastion host.
$ ssh -i certs/id_rsa hashicups-db-0
Verify that the Consul binary is installed.
$ consul version
Consul v1.20.2
Revision 33e5727a
Build Date 2025-01-03T14:38:40Z
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
Return to the bastion host by exiting the SSH session.
$ exit
logout
Connection to hashicups-db-0 closed.
admin@bastion:~$
Repeat the steps for all VMs (hashicups-nginx, hashicups-frontend, hashicups-api) you want to add to the Consul datacenter.
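If you prefer to check all of the workload VMs in one pass, you can wrap the same check in a loop from the bastion host. This is a convenience sketch, not part of the original workflow; it assumes the node names used in this scenario and the same SSH key as above.

$ for node in hashicups-db-0 hashicups-nginx-0 hashicups-frontend-0 hashicups-api-0; do
  ssh -i certs/id_rsa ${node} "consul version"
done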
Configure environment
The tutorial uses scripts to create all the files in a destination folder. Export the path where you want to create the configuration files for the scenario.
$ export OUTPUT_FOLDER=/home/admin/assets/scenario/conf/
Make sure the folder exists.
$ mkdir -p ${OUTPUT_FOLDER}
Source the env-scenario.env file to set the variables in the terminal session.
$ source assets/scenario/env-scenario.env
Configure the Consul CLI to interact with the Consul server.
$ export CONSUL_HTTP_ADDR="https://consul-server-0:8443" \
export CONSUL_HTTP_SSL=true \
export CONSUL_CACERT="${OUTPUT_FOLDER}secrets/consul-agent-ca.pem" \
export CONSUL_TLS_SERVER_NAME="server.${CONSUL_DATACENTER}.${CONSUL_DOMAIN}" \
export CONSUL_HTTP_TOKEN=`cat ${OUTPUT_FOLDER}secrets/acl-token-bootstrap.json | jq -r ".SecretID"`
Verify your Consul CLI can interact with your Consul server.
$ consul members
Node Address Status Type Build Protocol DC Partition Segment
consul-server-0 172.18.0.4:8301 alive server 1.20.2 2 dc1 default <all>
Generate the Consul client configuration
The Consul datacenter is configured with ACLs enabled by default, so you must define the ACL tokens you want to pass to the Consul clients when you create the configuration.
First, export the token you generated for DNS so you can use it as the default token for the clients.
$ export CONSUL_DNS_TOKEN=`cat ${OUTPUT_FOLDER}secrets/acl-token-dns.json | jq -r ".SecretID"`
Then define a second Consul token for the service definition file to secure communication with the Consul datacenter.
In this example, you will use the bootstrap token.
$ export CONSUL_AGENT_TOKEN="${CONSUL_HTTP_TOKEN}"
Generate configuration for Database node
First, define the Consul node name.
$ export NODE_NAME="hashicups-db-0"
Then, generate the Consul configuration.
$ ~/ops/scenarios/00_base_scenario_files/supporting_scripts/generate_consul_client_config.sh
[generate_consul_client_config.sh] - - Generate configuration for [hashicups-db-0]
+ --------------------
| Parameter Check
+ --------------------
[WARN] Script is running with the following values:
[WARN] ----------
[WARN] CONSUL_DATACENTER = dc1
[WARN] CONSUL_DOMAIN = consul
[WARN] CONSUL_RETRY_JOIN = consul-server-0
[WARN] CONSUL_CONFIG_DIR = /etc/consul.d/
[WARN] CONSUL_DATA_DIR = /opt/consul/
[WARN] ----------
[WARN] Generated configuration will be placed under:
[WARN] OUTPUT_FOLDER = ~/assets/scenario/conf/
[WARN] ----------
+ --------------------
| Generate configuration for Consul agent hashicups-db-0
+ --------------------
- Cleaning folder from pre-existing files
[WARN] Removing pre-existing configuration in ~/assets/scenario/conf/
- Generate folder structure
- Copy available configuration
- Generate configuration files
- Validate configuration for hashicups-db-0
To complete Consul agent configuration, you need to set up tokens for the client. For this tutorial, you are using the bootstrap token. We recommend you create more restrictive tokens for the client agents in production.
$ tee ${OUTPUT_FOLDER}${NODE_NAME}/agent-acl-tokens.hcl > /dev/null << EOF
acl {
  tokens {
    agent   = "${CONSUL_HTTP_TOKEN}"
    default = "${CONSUL_DNS_TOKEN}"
    config_file_service_registration = "${CONSUL_HTTP_TOKEN}"
  }
}
EOF
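For this tutorial the bootstrap token is convenient, but it grants full access to the datacenter. As a sketch of the more restrictive approach recommended for production, you could instead create a token tied to a node identity and place its SecretID in the agent and config_file_service_registration fields above. The command below is illustrative and reuses the NODE_NAME and CONSUL_DATACENTER variables already set in this scenario.

$ consul acl token create \
  -description="Agent token for ${NODE_NAME}" \
  -node-identity="${NODE_NAME}:${CONSUL_DATACENTER}" \
  -format=json

A node identity grants write permission on the agent's own node entry and read access to services, which is typically all an agent token needs.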
After you generate the Consul agent configuration, copy the configuration to the remote node.
First, define the Consul configuration directory.
$ export CONSUL_REMOTE_CONFIG_DIR=/etc/consul.d/
Then, remove the existing configuration from the VM.
$ ssh -i ~/certs/id_rsa ${NODE_NAME} "sudo rm -rf ${CONSUL_REMOTE_CONFIG_DIR}*"
Finally, use rsync to copy the configuration files into the remote node.
$ rsync -av --no-g --no-t --no-p \
-e "ssh -i ~/certs/id_rsa" \
${OUTPUT_FOLDER}${NODE_NAME}/ \
${NODE_NAME}:${CONSUL_REMOTE_CONFIG_DIR}
The output is similar to the following:
sending incremental file list
./
agent-acl-tokens.hcl
agent-gossip-encryption.hcl
consul-agent-ca.pem
consul.hcl
sent 3,153 bytes received 95 bytes 6,496.00 bytes/sec
total size is 2,800 speedup is 0.86
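Optionally, before starting the agent, you can ask Consul to sanity-check the copied files on the remote node. This step is not part of the original flow, but consul validate catches syntax and configuration errors early.

$ ssh -i ~/certs/id_rsa ${NODE_NAME} "consul validate ${CONSUL_REMOTE_CONFIG_DIR}"

If the configuration is valid, the command reports success and exits with a zero status.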
Generate configuration for API node
First, define the Consul node name.
$ export NODE_NAME="hashicups-api-0"
Then, generate the Consul configuration.
$ ~/ops/scenarios/00_base_scenario_files/supporting_scripts/generate_consul_client_config.sh
[generate_consul_client_config.sh] - - Generate configuration for [hashicups-api-0]
+ --------------------
| Parameter Check
+ --------------------
[WARN] Script is running with the following values:
[WARN] ----------
[WARN] CONSUL_DATACENTER = dc1
[WARN] CONSUL_DOMAIN = consul
[WARN] CONSUL_RETRY_JOIN = consul-server-0
[WARN] CONSUL_CONFIG_DIR = /etc/consul.d/
[WARN] CONSUL_DATA_DIR = /opt/consul/
[WARN] ----------
[WARN] Generated configuration will be placed under:
[WARN] OUTPUT_FOLDER = ~/assets/scenario/conf/
[WARN] ----------
+ --------------------
| Generate configuration for Consul agent hashicups-api-0
+ --------------------
- Cleaning folder from pre-existing files
[WARN] Removing pre-existing configuration in ~/assets/scenario/conf/
- Generate folder structure
- Copy available configuration
- Generate configuration files
- Validate configuration for hashicups-api-0
To complete Consul agent configuration, you need to set up tokens for the client. For this tutorial, you are using the bootstrap token. In production environments, we recommend that you create more restrictive tokens for the client agents.
$ tee ${OUTPUT_FOLDER}${NODE_NAME}/agent-acl-tokens.hcl > /dev/null << EOF
acl {
  tokens {
    agent   = "${CONSUL_HTTP_TOKEN}"
    default = "${CONSUL_DNS_TOKEN}"
    config_file_service_registration = "${CONSUL_HTTP_TOKEN}"
  }
}
EOF
After you generate the Consul agent configuration, copy the configuration to the remote node.
First, define the Consul configuration directory.
$ export CONSUL_REMOTE_CONFIG_DIR=/etc/consul.d/
Then, remove the existing configuration from the VM.
$ ssh -i ~/certs/id_rsa ${NODE_NAME} "sudo rm -rf ${CONSUL_REMOTE_CONFIG_DIR}*"
Finally, use rsync to copy the configuration files into the remote node.
$ rsync -av --no-g --no-t --no-p \
-e "ssh -i ~/certs/id_rsa" \
${OUTPUT_FOLDER}${NODE_NAME}/ \
${NODE_NAME}:${CONSUL_REMOTE_CONFIG_DIR}
The output is similar to the following:
sending incremental file list
./
agent-acl-tokens.hcl
agent-gossip-encryption.hcl
consul-agent-ca.pem
consul.hcl
sent 3,154 bytes received 95 bytes 6,498.00 bytes/sec
total size is 2,801 speedup is 0.86
Generate configuration for Frontend node
First, define the Consul node name.
$ export NODE_NAME="hashicups-frontend-0"
Then, generate the Consul configuration.
$ ~/ops/scenarios/00_base_scenario_files/supporting_scripts/generate_consul_client_config.sh
[generate_consul_client_config.sh] - - Generate configuration for [hashicups-frontend-0]
+ --------------------
| Parameter Check
+ --------------------
[WARN] Script is running with the following values:
[WARN] ----------
[WARN] CONSUL_DATACENTER = dc1
[WARN] CONSUL_DOMAIN = consul
[WARN] CONSUL_RETRY_JOIN = consul-server-0
[WARN] CONSUL_CONFIG_DIR = /etc/consul.d/
[WARN] CONSUL_DATA_DIR = /opt/consul/
[WARN] ----------
[WARN] Generated configuration will be placed under:
[WARN] OUTPUT_FOLDER = ~/assets/scenario/conf/
[WARN] ----------
+ --------------------
| Generate configuration for Consul agent hashicups-frontend-0
+ --------------------
- Cleaning folder from pre-existing files
[WARN] Removing pre-existing configuration in ~/assets/scenario/conf/
- Generate folder structure
- Copy available configuration
- Generate configuration files
- Validate configuration for hashicups-frontend-0
To complete Consul agent configuration, you need to set up tokens for the client. For this tutorial, you are using the bootstrap token. In production environments, we recommend that you create more restrictive tokens for the client agents.
$ tee ${OUTPUT_FOLDER}${NODE_NAME}/agent-acl-tokens.hcl > /dev/null << EOF
acl {
  tokens {
    agent   = "${CONSUL_HTTP_TOKEN}"
    default = "${CONSUL_DNS_TOKEN}"
    config_file_service_registration = "${CONSUL_HTTP_TOKEN}"
  }
}
EOF
After you generate the Consul agent configuration, copy the configuration to the remote node.
First, define the Consul configuration directory.
$ export CONSUL_REMOTE_CONFIG_DIR=/etc/consul.d/
Then remove any existing configurations from the VM. This step is useful when you run the Getting Started tutorials out of order or more than once in a row on the same devices.
$ ssh -i ~/certs/id_rsa ${NODE_NAME} "sudo rm -rf ${CONSUL_REMOTE_CONFIG_DIR}*"
Finally, use rsync to copy the configuration files into the remote node.
$ rsync -av --no-g --no-t --no-p \
-e "ssh -i ~/certs/id_rsa" \
${OUTPUT_FOLDER}${NODE_NAME}/ \
${NODE_NAME}:${CONSUL_REMOTE_CONFIG_DIR}
The output is similar to the following:
sending incremental file list
./
agent-acl-tokens.hcl
agent-gossip-encryption.hcl
consul-agent-ca.pem
consul.hcl
sent 3,154 bytes received 95 bytes 6,498.00 bytes/sec
total size is 2,806 speedup is 0.86
Generate configuration for NGINX node
First, define the Consul node name.
$ export NODE_NAME="hashicups-nginx-0"
Then, generate the Consul configuration.
$ ~/ops/scenarios/00_base_scenario_files/supporting_scripts/generate_consul_client_config.sh
[generate_consul_client_config.sh] - - Generate configuration for [hashicups-nginx-0]
+ --------------------
| Parameter Check
+ --------------------
[WARN] Script is running with the following values:
[WARN] ----------
[WARN] CONSUL_DATACENTER = dc1
[WARN] CONSUL_DOMAIN = consul
[WARN] CONSUL_RETRY_JOIN = consul-server-0
[WARN] CONSUL_CONFIG_DIR = /etc/consul.d/
[WARN] CONSUL_DATA_DIR = /opt/consul/
[WARN] ----------
[WARN] Generated configuration will be placed under:
[WARN] OUTPUT_FOLDER = ~/assets/scenario/conf/
[WARN] ----------
+ --------------------
| Generate configuration for Consul agent hashicups-nginx-0
+ --------------------
- Cleaning folder from pre-existing files
[WARN] Removing pre-existing configuration in ~/assets/scenario/conf/
- Generate folder structure
- Copy available configuration
- Generate configuration files
- Validate configuration for hashicups-nginx-0
To complete Consul agent configuration, you need to set up tokens for the client. For this tutorial, you are using the bootstrap token. In production environments, we recommend that you create more restrictive tokens for the client agents.
$ tee ${OUTPUT_FOLDER}${NODE_NAME}/agent-acl-tokens.hcl > /dev/null << EOF
acl {
  tokens {
    agent   = "${CONSUL_HTTP_TOKEN}"
    default = "${CONSUL_DNS_TOKEN}"
    config_file_service_registration = "${CONSUL_HTTP_TOKEN}"
  }
}
EOF
After you generate the Consul agent configuration, copy the configuration to the remote node.
First, define the Consul configuration directory.
$ export CONSUL_REMOTE_CONFIG_DIR=/etc/consul.d/
Then remove any existing configurations from the VM. This step is useful when you run the Getting Started tutorials out of order or more than once in a row on the same devices.
$ ssh -i ~/certs/id_rsa ${NODE_NAME} "sudo rm -rf ${CONSUL_REMOTE_CONFIG_DIR}*"
Finally, use rsync to copy the configuration files into the remote node.
$ rsync -av --no-g --no-t --no-p \
-e "ssh -i ~/certs/id_rsa" \
${OUTPUT_FOLDER}${NODE_NAME}/ \
${NODE_NAME}:${CONSUL_REMOTE_CONFIG_DIR}
The output is similar to the following:
sending incremental file list
./
agent-acl-tokens.hcl
agent-gossip-encryption.hcl
consul-agent-ca.pem
consul.hcl
sent 3,151 bytes received 95 bytes 6,492.00 bytes/sec
total size is 2,803 speedup is 0.86
Start Consul on client nodes
Now that each client VM has access to an agent configuration file, start the Consul client agent on each VM.
Start Consul on Database node
Log in to the Database VM from the bastion host.
$ ssh -i certs/id_rsa hashicups-db-0
##..
admin@hashicups-db-0:~
Ensure your user has write permission to the Consul data directory.
$ sudo chmod g+w /opt/consul/
Finally, start the Consul client process.
$ consul agent -config-dir=/etc/consul.d/ > /tmp/consul-client.log 2>&1 &
The command starts the Consul client agent in the background so it does not lock the terminal. You can access the Consul client log in the /tmp/consul-client.log file.
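Optionally, inspect the log to confirm the agent started and joined the datacenter. The exact messages vary between Consul versions, but you should see the agent connect to consul-server-0 and sync its node information.

$ tail -n 20 /tmp/consul-client.log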
Exit the SSH session to return to the bastion host.
$ exit
logout
Connection to hashicups-db-0 closed.
admin@bastion:~$
Start Consul on API node
Log in to the API VM from the bastion host.
$ ssh -i certs/id_rsa hashicups-api-0
##..
admin@hashicups-api-0:~
Ensure your user has write permission to the Consul data directory.
$ sudo chmod g+w /opt/consul/
Finally, start the Consul client process.
$ consul agent -config-dir=/etc/consul.d/ > /tmp/consul-client.log 2>&1 &
The command starts the Consul client agent in the background so it does not lock the terminal. You can access the Consul client log in the /tmp/consul-client.log file.
Exit the SSH session to return to the bastion host.
$ exit
logout
Connection to hashicups-api-0 closed.
admin@bastion:~$
Start Consul on Frontend node
Log in to the Frontend VM from the bastion host.
$ ssh -i certs/id_rsa hashicups-frontend-0
##..
admin@hashicups-frontend-0:~
Ensure your user has write permission to the Consul data directory.
$ sudo chmod g+w /opt/consul/
Finally, start the Consul client process.
$ consul agent -config-dir=/etc/consul.d/ > /tmp/consul-client.log 2>&1 &
The command starts the Consul client agent in the background so it does not lock the terminal. You can access the Consul client log in the /tmp/consul-client.log file.
Exit the SSH session to return to the bastion host.
$ exit
logout
Connection to hashicups-frontend-0 closed.
admin@bastion:~$
Start Consul on NGINX node
Log in to the NGINX VM from the bastion host.
$ ssh -i certs/id_rsa hashicups-nginx-0
##..
admin@hashicups-nginx-0:~
Ensure your user has write permission to the Consul data directory.
$ sudo chmod g+w /opt/consul/
Finally, start the Consul client process.
$ consul agent -config-dir=/etc/consul.d/ > /tmp/consul-client.log 2>&1 &
The command starts the Consul client agent in the background so it does not lock the terminal. You can access the Consul client log in the /tmp/consul-client.log file.
Exit the SSH session to return to the bastion host.
$ exit
logout
Connection to hashicups-nginx-0 closed.
admin@bastion:~$
Verify Consul datacenter members
After you start all of the Consul agents, verify that they successfully joined the Consul datacenter.
Retrieve the agents in the Consul datacenter.
$ consul members
Node Address Status Type Build Protocol DC Partition Segment
consul-server-0 172.18.0.4:8301 alive server 1.20.2 2 dc1 default <all>
hashicups-api-0 172.18.0.5:8301 alive client 1.20.2 2 dc1 default <default>
hashicups-db-0 172.18.0.7:8301 alive client 1.20.2 2 dc1 default <default>
hashicups-frontend-0 172.18.0.2:8301 alive client 1.20.2 2 dc1 default <default>
hashicups-nginx-0 172.18.0.8:8301 alive client 1.20.2 2 dc1 default <default>
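The consul members command reflects the gossip pool. If you also want to confirm that the nodes were registered in the Consul catalog, you can query the catalog directly; the output should list the same five nodes.

$ consul catalog nodes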
Register services in the Consul catalog
After the Consul client agent is running on a node, you can use that node to register services in the Consul catalog and make them discoverable to your network.
Register the Database service
First, define the Consul node name.
$ export NODE_NAME="hashicups-db-0"
Then, generate the service configuration for the node.
$ ~/ops/scenarios/00_base_scenario_files/supporting_scripts/generate_hashicups_service_config.sh
[generate_hashicups_service_config.sh] - [hashicups-db-0]
+ --------------------
| Parameter Check
+ --------------------
- Check if a service token is defined
+ --------------------
| Generate service configuration files for hashicups-db-0
+ --------------------
- Get service details
- Prepare folders
- Generate checks definition
- Generate service definition for service discovery
- Generate service definition for service mesh
Then, copy the service definition file into the remote node.
$ rsync -av --no-g --no-t --no-p \
-e "ssh -i ~/certs/id_rsa" \
${OUTPUT_FOLDER}${NODE_NAME}/svc/service_discovery/svc-hashicups-db.hcl \
${NODE_NAME}:${CONSUL_REMOTE_CONFIG_DIR}/svc.hcl
The output is similar to the following:
sending incremental file list
svc-hashicups-db.hcl
sent 549 bytes received 35 bytes 1,168.00 bytes/sec
total size is 429 speedup is 0.73
Register the API service
First, define the Consul node name.
$ export NODE_NAME="hashicups-api-0"
Then, generate the service configuration for the node.
$ ~/ops/scenarios/00_base_scenario_files/supporting_scripts/generate_hashicups_service_config.sh
[generate_hashicups_service_config.sh] - [hashicups-api-0]
+ --------------------
| Parameter Check
+ --------------------
- Check if a service token is defined
+ --------------------
| Generate service configuration files for hashicups-api-0
+ --------------------
- Get service details
- Prepare folders
- Generate checks definition
- Generate service definition for service discovery
- Generate service definition for service mesh
Then, copy the service definition file into the remote node.
$ rsync -av --no-g --no-t --no-p \
-e "ssh -i ~/certs/id_rsa" \
${OUTPUT_FOLDER}${NODE_NAME}/svc/service_discovery/svc-hashicups-api.hcl \
${NODE_NAME}:${CONSUL_REMOTE_CONFIG_DIR}/svc.hcl
The output is similar to the following:
sending incremental file list
svc-hashicups-api.hcl
sent 992 bytes received 35 bytes 2,054.00 bytes/sec
total size is 871 speedup is 0.85
Register the Frontend service
First, define the Consul node name.
$ export NODE_NAME="hashicups-frontend-0"
Then, generate the service configuration for the node.
$ ~/ops/scenarios/00_base_scenario_files/supporting_scripts/generate_hashicups_service_config.sh
[generate_hashicups_service_config.sh] - [hashicups-frontend-0]
+ --------------------
| Parameter Check
+ --------------------
- Check if a service token is defined
+ --------------------
| Generate service configuration files for hashicups-frontend-0
+ --------------------
- Get service details
- Prepare folders
- Generate checks definition
- Generate service definition for service discovery
- Generate service definition for service mesh
Then, copy the service definition file into the remote node.
$ rsync -av --no-g --no-t --no-p \
-e "ssh -i ~/certs/id_rsa" \
${OUTPUT_FOLDER}${NODE_NAME}/svc/service_discovery/svc-hashicups-frontend.hcl \
${NODE_NAME}:${CONSUL_REMOTE_CONFIG_DIR}/svc.hcl
The output is similar to the following:
sending incremental file list
svc-hashicups-frontend.hcl
sent 592 bytes received 35 bytes 1,254.00 bytes/sec
total size is 465 speedup is 0.74
Register the NGINX service
First, define the Consul node name.
$ export NODE_NAME="hashicups-nginx-0"
Then, generate the service configuration for the node.
$ ~/ops/scenarios/00_base_scenario_files/supporting_scripts/generate_hashicups_service_config.sh
[generate_hashicups_service_config.sh] - [hashicups-nginx-0]
+ --------------------
| Parameter Check
+ --------------------
- Check if a service token is defined
+ --------------------
| Generate service configuration files for hashicups-nginx-0
+ --------------------
- Get service details
- Prepare folders
- Generate checks definition
- Generate service definition for service discovery
- Generate service definition for service mesh
Then, copy the service definition file into the remote node.
$ rsync -av --no-g --no-t --no-p \
-e "ssh -i ~/certs/id_rsa" \
${OUTPUT_FOLDER}${NODE_NAME}/svc/service_discovery/svc-hashicups-nginx.hcl \
${NODE_NAME}:${CONSUL_REMOTE_CONFIG_DIR}/svc.hcl
The output is similar to the following:
sending incremental file list
svc-hashicups-nginx.hcl
sent 567 bytes received 35 bytes 1,204.00 bytes/sec
total size is 443 speedup is 0.74
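The client agents were already running when you copied the service definition files, and Consul reads config-file service definitions when it loads its configuration. If a service does not appear in the catalog in the next step, you may need to tell each agent to reload. One way to do that, sketched below, is to send the agent a SIGHUP, which triggers a configuration reload; consul reload performs the same operation through the agent's HTTP API if that API and a suitable token are reachable on the node.

$ for node in hashicups-db-0 hashicups-api-0 hashicups-frontend-0 hashicups-nginx-0; do
  ssh -i ~/certs/id_rsa ${node} "pkill -HUP consul"
done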
Query services in Consul catalog
You can query services using the Consul CLI, API, or DNS to return a healthy instance.
Use the Consul CLI to query the service catalog.
$ consul catalog services -tags
consul
hashicups-api v1
hashicups-db v1
hashicups-frontend v1
hashicups-nginx v1
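The same catalog data is available through the other interfaces. The following examples are sketches: the DNS query assumes the server's DNS interface listens on the default port 8600, and the API query assumes the server certificate is only valid for the name in CONSUL_TLS_SERVER_NAME, so curl is told to connect to consul-server-0 while validating against that name.

$ dig @consul-server-0 -p 8600 hashicups-db.service.consul

$ curl --silent \
  --cacert ${CONSUL_CACERT} \
  --connect-to "${CONSUL_TLS_SERVER_NAME}:8443:consul-server-0:8443" \
  --header "X-Consul-Token: ${CONSUL_HTTP_TOKEN}" \
  "https://${CONSUL_TLS_SERVER_NAME}:8443/v1/catalog/service/hashicups-db" | jq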
Modify service definition tags
When you use the Consul CLI or the API endpoints to return information about services, Consul can return custom metadata associated with the services. In this tutorial, you registered each service with the v1 tag.
In this section, you will learn how to update Consul service definitions. You must run these commands on the virtual machine that hosts the services.
Edit the service definition file for the Database service to add a v2 tag to the service.
$ sed 's/"v1"/"v1","v2"/' \
${OUTPUT_FOLDER}hashicups-db-0/svc/service_discovery/svc-hashicups-db.hcl \
> ${OUTPUT_FOLDER}hashicups-db-0/svc/service_discovery/svc-hashicups-db-multi-tag.hcl
Review the file to verify the change was applied.
$ cat ${OUTPUT_FOLDER}hashicups-db-0/svc/service_discovery/svc-hashicups-db-multi-tag.hcl
The tags array now contains both v1 and v2.
svc-hashicups-db-multi-tag.hcl
## -----------------------------
## svc-hashicups-db.hcl
## -----------------------------
service {
  name = "hashicups-db"
  id   = "hashicups-db-0"
  tags = [ "v1","v2" ]
  port = 5432
  token = "69b2a14e-7030-0232-e56c-db1aae43f102"

  check {
    id         = "check-hashicups-db",
    name       = "hashicups-db status check",
    service_id = "hashicups-db-0",
    tcp        = "localhost:5432",
    interval   = "5s",
    timeout    = "5s"
  }
}
Finally, copy the service definition file to the remote node to apply the change.
$ rsync -av --no-g --no-t --no-p \
-e "ssh -i ~/certs/id_rsa" \
${OUTPUT_FOLDER}hashicups-db-0/svc/service_discovery/svc-hashicups-db-multi-tag.hcl \
hashicups-db-0:${CONSUL_REMOTE_CONFIG_DIR}svc.hcl
The output is similar to the following:
sending incremental file list
svc-hashicups-db-multi-tag.hcl
sent 565 bytes received 41 bytes 1,212.00 bytes/sec
total size is 434 speedup is 0.72
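As with the initial registration, a running agent only picks up the changed file when it reloads its configuration. If the new tag does not appear in the next query, you can signal the agent on the Database node as in the earlier sketch.

$ ssh -i ~/certs/id_rsa hashicups-db-0 "pkill -HUP consul"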
Query services by tags
Now that you have updated the database service definition, query it to verify the new tag.
Retrieve the tags associated with each service and verify the new v2 tag for the database service.
$ consul catalog services -tags
consul
hashicups-api v1
hashicups-db v1,v2
hashicups-frontend v1
hashicups-nginx v1
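Tags are also exposed through Consul DNS using the tag.name.service.consul format. As a sketch, again assuming the DNS interface listens on the default port 8600, the following query resolves only the instances that carry the v2 tag.

$ dig @consul-server-0 -p 8600 v2.hashicups-db.service.consul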
Next steps
In this tutorial, you deployed Consul clients on each HashiCups node VM. In addition, you created service definitions that include health checks, registered the services, and updated a service definition.
You now have a distributed system to monitor and resolve your services, all without changing your services' configuration or implementation. At this stage, you can use Consul to automatically configure and monitor your services. However, your services have the same security posture they had before you introduced Consul.
If you want to stop at this tutorial, you can destroy the infrastructure now.
From the ./self-managed/infrastructure/aws folder of the repository, use terraform to destroy the infrastructure.
$ terraform destroy --auto-approve
In the next tutorial, you will learn how to implement Consul's service mesh to introduce zero trust security in your network.
You can automate Consul service deployment in your datacenter using Nomad. To learn more, complete the Integrate service discovery tutorial in the Nomad tutorials.
For more information about the topics covered in this tutorial, refer to the following resources: