Consul
Deploy Consul on VMs
Consul is a service networking solution that helps you manage secure network connectivity between services across on-premises and multi-cloud environments and runtimes. Consul offers service discovery, service mesh, traffic management, and automated updates to network infrastructure devices. Check out the What is Consul? page to learn more.
In this tutorial, you will configure, deploy, and bootstrap a Consul server on
a virtual machine (VM). After deploying Consul, you will interact with Consul
using the UI, CLI, and API.
You will use this server in the other tutorials of the Get Started on VMs tutorial collection. In those tutorials, you will deploy a demo application, configure it to use Consul service discovery, secure it with service mesh, allow external traffic into the service mesh, and enhance observability into your service mesh. During the process, you will learn how to leverage Consul to securely connect your services running in any environment.
In this tutorial, you will:
- Deploy your VM environment on AWS EC2 using Terraform
- Configure a Consul server
- Start a Consul server instance
- Configure your terminal to communicate with the Consul datacenter
- Bootstrap the Consul ACL system and create tokens for Consul management
- Interact with the Consul API, the KV store, and the UI
Note
Because this tutorial is part of the Get Started on VMs tutorial collection, the following workflow was designed for education and demonstration. It uses scripts to generate agent configurations and requires you to execute commands manually on different nodes. If you are setting up a production environment, you should codify and automate the installation and deployment process according to your infrastructure and networking needs. Refer to the VM production patterns tutorial collection for Consul production deployment considerations and best practices.
Tutorial scenario
This tutorial uses HashiCups, a demo coffee shop application made up of several microservices running on VMs.
At the beginning of this tutorial, the scenario contains six VMs: one instance of the HashiCups application spread across four VMs, one empty VM where you will deploy the Consul server, and one bastion host that you will use to interact with the other VMs.
At the end of this tutorial, you will have deployed a Consul server agent running on one of the machines.
Prerequisites
For this tutorial, you will need:
- An AWS account configured for use with Terraform
- aws-cli >= 2.0
- terraform >= 1.0
- git >= 2.0
Clone GitHub repository
Clone the GitHub repository containing the configuration files and resources.
$ git clone https://github.com/hashicorp-education/learn-consul-get-started-vms
Change into the directory that contains the complete configuration files for this tutorial.
$ cd learn-consul-get-started-vms/self-managed/infrastructure/aws
Create infrastructure
With these Terraform configuration files, you are ready to deploy your infrastructure.
Issue the `terraform init` command from your working directory to download the necessary providers and initialize the backend.
$ terraform init
Initializing the backend...
Initializing provider plugins...
## ...
Terraform has been successfully initialized!
## ...
Then, deploy the resources. Enter `yes` to confirm the run.
$ terraform apply --var-file=../../ops/conf/GS_00_base_scenario.tfvars
## ...
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
## ...
Apply complete! Resources: 49 added, 0 changed, 0 destroyed.
The Terraform deployment could take up to 15 minutes to complete. Feel free to explore the other sections of this tutorial while you wait for the environment to finish initialization.
When the deployment completes, Terraform returns a list of outputs you can use to interact with the newly created environment.
Outputs:
connection_string = "ssh -i certs/id_rsa.pem admin@`terraform output -raw ip_bastion`"
ip_bastion = "54.185.247.184"
retry_join = <sensitive>
ui_consul = "https://18.236.102.73:8443"
ui_grafana = "http://54.185.247.184:3000/d/hashicups/hashicups"
ui_hashicups = "http://34.220.205.132"
ui_hashicups_API_GW = "https://35.91.171.47:8443"
The Terraform output provides useful information, including the bastion host IP address. The following is a brief description of the Terraform outputs:
- `connection_string` provides the command to connect to the bastion host over SSH.
- `ip_bastion` provides the IP address of the bastion host. You will use the bastion host to run the rest of the commands in this tutorial.
- `retry_join` lists Consul's `retry_join` configuration parameter. The next tutorials will use this to generate the Consul server and client configuration.
- `ui_consul` lists the Consul UI address. The Consul UI is not currently running. You will use this later in the tutorial to verify Consul started correctly.
- `ui_grafana` lists the Grafana UI address. You will use this in the service mesh monitoring tutorial.
- `ui_hashicups` lists the HashiCups UI address. You can use it to verify the HashiCups demo application is running properly.
- `ui_hashicups_API_GW` lists the API gateway address that permits access to the HashiCups UI. The API gateway is not currently running. You will use this address in the service mesh access tutorial to verify the HashiCups demo application is running properly in the service mesh.
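You can print any of these values again later without scrolling back through the apply output. For example, the following standard Terraform commands print a single output; querying `retry_join` directly with `-raw` reveals the value that the output listing masks as `<sensitive>`:
$ terraform output -raw ip_bastion
54.185.247.184
$ terraform output -raw retry_join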
Log in to the bastion host VM
Log in to the bastion host using `ssh`.
$ ssh -i certs/id_rsa.pem admin@`terraform output -raw ip_bastion`
Verify Consul binary
You will use the Consul binary on the bastion host to generate and validate the Consul server configuration, so make sure the binary is installed.
$ consul version
Consul v1.20.2
Revision 33e5727a
Build Date 2025-01-03T14:38:40Z
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
Configure a Consul server
The scenario setup installs scripts and supporting files from the `learn-consul-get-started-vms` repository on GitHub. Verify that the scripts are on the bastion host.
$ tree ~/ops/scenarios/00_base_scenario_files/supporting_scripts/
~/ops/scenarios/00_base_scenario_files/supporting_scripts/
|-- download_consul_esm.sh
|-- download_consul_template.sh
|-- generate_consul_client_config.sh
|-- generate_consul_monitoring_config.sh
|-- generate_consul_server_config.sh
|-- generate_consul_server_tokens.sh
|-- generate_global_config_hashicups.sh
`-- generate_hashicups_service_config.sh
0 directories, 8 files
The scripts use environment variables to generate the configuration files. Source the `env-scenario.env` file to set the variables in the terminal session.
$ source assets/scenario/env-scenario.env
Verify that the variables were exported correctly in the environment.
$ env | grep CONSUL_
CONSUL_SERVER_NUMBER=1
CONSUL_RETRY_JOIN=<retry join string>
CONSUL_DATACENTER=dc1
CONSUL_DOMAIN=consul
CONSUL_DATA_DIR=/opt/consul/
CONSUL_CONFIG_DIR=/etc/consul.d/
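The `env-scenario.env` file itself is not reproduced in this tutorial, but based on the variables above it is effectively a shell snippet of `export` statements. A hand-written equivalent would look similar to the following sketch (the real file may differ, and the retry join string is specific to your environment):
export CONSUL_SERVER_NUMBER=1
export CONSUL_RETRY_JOIN="<retry join string>"
export CONSUL_DATACENTER=dc1
export CONSUL_DOMAIN=consul
export CONSUL_DATA_DIR=/opt/consul/
export CONSUL_CONFIG_DIR=/etc/consul.d/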
The scripts also require a destination folder for the files they create. Export the path where you want to create the configuration files for the scenario.
$ export OUTPUT_FOLDER=/home/admin/assets/scenario/conf/
Note
When following this tutorial, we suggest you use the default values to help you avoid typos and focus on the process. If you decide to use custom values, verify that `export` commands always use the correct custom values.
Make sure the folder exists.
$ mkdir -p ${OUTPUT_FOLDER}
Generate all necessary files to configure and run the Consul server agent.
$ ~/ops/scenarios/00_base_scenario_files/supporting_scripts/generate_consul_server_config.sh
[generate_consul_server_config.sh] - Generate Consul servers configuration
+ --------------------
| Parameter Check
+ --------------------
[WARN] Script is running with the following values
[WARN] ----------
[WARN] CONSUL_DATACENTER = dc1
[WARN] CONSUL_DOMAIN = consul
[WARN] CONSUL_SERVER_NUMBER = 1
[WARN] CONSUL_RETRY_JOIN = consul-server-0
[WARN] CONSUL_CONFIG_DIR = /etc/consul.d/
[WARN] CONSUL_DATA_DIR = /opt/consul
[WARN] ----------
[WARN] Generated configuration will be placed under:
[WARN] OUTPUT_FOLDER = ~/assets/scenario/conf/
[WARN] ----------
+ --------------------
| Prepare folder
+ --------------------
- Cleaning folder from pre-existing files
[WARN] Removing pre-existing configuration in ~/assets/scenario/conf/
- Generate scenario config folders.
+ --------------------
| Generate secrets
+ --------------------
Generating Gossip Encryption Key.
Generate CA for *.dc1.consul
==> Saved consul-agent-ca.pem
==> Saved consul-agent-ca-key.pem
Generate Server Certificates
==> WARNING: Server Certificates grants authority to become a
server and access all state in the cluster including root keys
and all ACL tokens. Do not distribute them to production hosts
that are not server nodes. Store them as securely as CA keys.
==> Using consul-agent-ca.pem and consul-agent-ca-key.pem
==> Saved dc1-server-consul-0.pem
==> Saved dc1-server-consul-0-key.pem
+ --------------------
| Generate Consul server agent configuration
+ --------------------
Generating Configuration for consul-server-0
- Copy certificate files
- Generate consul.hcl - requirement for systemd service
- Generate agent-server-specific.hcl - server specific configuration
- Generate agent-server-specific-ui.hcl - server specific UI configuration
- Generate agent-server-networking.hcl - server networking configuration
- Generate agent-server-tls.hcl - server TLS configuration
- Generate agent-server-acl.hcl - server ACL configuration
- Generate agent-server-telemetry.hcl - server telemetry configuration
- Validate configuration for consul-server-0
+ --------------------
When the script completes, list the generated files.
$ tree ${OUTPUT_FOLDER}
~/assets/scenario/conf/
|-- consul-server-0
| |-- agent-gossip-encryption.hcl
| |-- agent-server-acl.hcl
| |-- agent-server-networking.hcl
| |-- agent-server-specific-ui.hcl
| |-- agent-server-specific.hcl
| |-- agent-server-telemetry.hcl
| |-- agent-server-tls.hcl
| |-- consul-agent-ca.pem
| |-- consul-agent-key.pem
| |-- consul-agent.pem
| `-- consul.hcl
`-- secrets
|-- agent-gossip-encryption.hcl
|-- consul-agent-ca-key.pem
|-- consul-agent-ca.pem
|-- dc1-server-consul-0-key.pem
`-- dc1-server-consul-0.pem
2 directories, 16 files
Test configuration
Verify that the configuration generated for `consul-server-0` is valid. Despite the warning messages about skipped certificate files and bootstrap mode, the Consul configuration files are valid.
$ consul validate ${OUTPUT_FOLDER}/consul-server-0
skipping file ~/assets/scenario/conf/consul-server-0/consul-agent-ca.pem, extension must be .hcl or .json, or config format must be set
skipping file ~/assets/scenario/conf/consul-server-0/consul-agent-key.pem, extension must be .hcl or .json, or config format must be set
skipping file ~/assets/scenario/conf/consul-server-0/consul-agent.pem, extension must be .hcl or .json, or config format must be set
BootstrapExpect is set to 1; this is the same as Bootstrap mode.
bootstrap = true: do not enable unless necessary
Configuration is valid!
Copy configuration to the Consul server node
Copy the configuration files to the `consul-server-0` VM.
First, set the path of the Consul configuration directory on the remote server.
$ export CONSUL_REMOTE_CONFIG_DIR=/etc/consul.d/
Then, remove existing configuration from the server.
$ ssh -i certs/id_rsa consul-server-0 "sudo rm -rf ${CONSUL_REMOTE_CONFIG_DIR}*"
Finally, use `rsync` to copy the configuration onto the server node.
$ rsync -av --no-g --no-t --no-p \
-e "ssh -i ~/certs/id_rsa" \
${OUTPUT_FOLDER}consul-server-0/ \
consul-server-0:${CONSUL_REMOTE_CONFIG_DIR}
Output is similar to the following:
sending incremental file list
./
agent-gossip-encryption.hcl
agent-server-acl.hcl
agent-server-networking.hcl
agent-server-specific-ui.hcl
agent-server-specific.hcl
agent-server-telemetry.hcl
agent-server-tls.hcl
consul-agent-ca.pem
consul-agent-key.pem
consul-agent.pem
consul.hcl
sent 6,877 bytes received 228 bytes 14,210.00 bytes/sec
total size is 6,024 speedup is 0.85
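Optionally, confirm the files landed in the remote configuration directory by reusing the same SSH key:
$ ssh -i certs/id_rsa consul-server-0 "ls ${CONSUL_REMOTE_CONFIG_DIR}"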
Start Consul server
Log in to `consul-server-0` from the bastion host.
$ ssh -i certs/id_rsa consul-server-0
##..
admin@consul-server-0:~
Make sure your user has write permissions in the Consul data directory.
$ sudo chmod g+w /opt/consul/
Finally, start the Consul server process.
$ consul agent -config-dir=/etc/consul.d/ > /tmp/consul-server.log 2>&1 &
The command starts the Consul server in the background to avoid locking the terminal.
You can access the Consul server log through the `/tmp/consul-server.log` file.
$ cat /tmp/consul-server.log
==> Starting Consul agent...
Version: '1.20.2'
Build Date: '2025-01-03 14:38:40 +0000 UTC'
Node ID: 'bc1c1796-57f6-22c4-808c-4b8af45034da'
Node name: 'consul-server-0'
Datacenter: 'dc1' (Segment: '<all>')
Server: true (Bootstrap: true)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: 8443, gRPC: -1, gRPC-TLS: 8503, DNS: 53)
Cluster Addr: 172.18.0.3 (LAN: 8301, WAN: 8302)
Gossip Encryption: true
Auto-Encrypt-TLS: true
ACL Enabled: true
Reporting Enabled: false
ACL Default Policy: deny
HTTPS TLS: Verify Incoming: false, Verify Outgoing: true, Min Version: TLSv1_2
gRPC TLS: Verify Incoming: false, Min Version: TLSv1_2
Internal RPC TLS: Verify Incoming: true, Verify Outgoing: true (Verify Hostname: true), Min Version: TLSv1_2
==> Log data will now stream in as it occurs:
[WARN] agent: skipping file /etc/consul.d/consul-agent-ca.pem, extension must be .hcl or .json, or config format must be set
[WARN] agent: skipping file /etc/consul.d/consul-agent-key.pem, extension must be .hcl or .json, or config format must be set
[WARN] agent: skipping file /etc/consul.d/consul-agent.pem, extension must be .hcl or .json, or config format must be set
[WARN] agent: BootstrapExpect is set to 1; this is the same as Bootstrap mode.
[WARN] agent: bootstrap = true: do not enable unless necessary
[INFO] agent.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:bc1c1796-57f6-22c4-808c-4b8af45034da Address:172.18.0.3:8300}]"
[INFO] agent.server.raft: entering follower state: follower="Node at 172.18.0.3:8300 [Follower]" leader-address= leader-id=
[INFO] agent.server.serf.wan: serf: EventMemberJoin: consul-server-0.dc1 172.18.0.3
[INFO] agent.server.serf.lan: serf: EventMemberJoin: consul-server-0 172.18.0.3
[INFO] agent.router: Initializing LAN area manager
[DEBUG] agent.grpc.balancer: switching server: target=consul://dc1.bc1c1796-57f6-22c4-808c-4b8af45034da/server.dc1 from=<none> to=dc1-172.18.0.3:8300
[INFO] agent.server.autopilot: reconciliation now disabled
[INFO] agent.server: Adding LAN server: server="consul-server-0 (Addr: tcp/172.18.0.3:8300) (DC: dc1)"
[INFO] agent.server: Handled event for server in area: event=member-join server=consul-server-0.dc1 area=wan
[INFO] agent.server.cert-manager: initialized server certificate management
##...
[INFO] agent: Starting server: address=127.0.0.1:8500 network=tcp protocol=http
[INFO] agent: Starting server: address=[::]:8443 network=tcp protocol=https
[INFO] agent: Started gRPC listeners: port_name=grpc_tls address=127.0.0.1:8503 network=tcp
[INFO] agent: Retry join is supported for the following discovery methods: cluster=LAN discovery_methods="aliyun aws azure digitalocean gce hcp k8s linode mdns os packet scaleway softlayer tencentcloud triton vsphere"
[INFO] agent: Joining cluster...: cluster=LAN
[INFO] agent: (LAN) joining: lan_addresses=["consul-server-0"]
[INFO] agent: started state syncer
[INFO] agent: Consul agent running!
[INFO] agent: (LAN) joined: number_of_nodes=1
[INFO] agent: Join cluster completed. Synced with initial agents: cluster=LAN num_agents=1
##...
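Before exiting, you can optionally confirm the agent answers on its local HTTP port (assuming curl is available on the node). Consul's status endpoints do not require an ACL token, and the address matches the Client Addr line in the startup output above:
$ curl --silent http://127.0.0.1:8500/v1/status/leader
"172.18.0.3:8300"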
Exit the SSH session to return to the bastion host.
$ exit
logout
Connection to consul-server-0 closed.
admin@bastion:~$
Configure Consul CLI to interact with Consul server
To interact with the Consul server, you need to set up your terminal.
Make sure the scenario environment variables are still defined.
$ export CONSUL_DOMAIN=consul \
  CONSUL_DATACENTER=dc1 \
  OUTPUT_FOLDER=/home/admin/assets/scenario/conf/
Export the following environment variables to configure the Consul CLI to interact with the Consul server.
$ export CONSUL_HTTP_ADDR="https://consul-server-0:8443" \
  CONSUL_HTTP_SSL=true \
  CONSUL_CACERT="${OUTPUT_FOLDER}secrets/consul-agent-ca.pem" \
  CONSUL_TLS_SERVER_NAME="server.${CONSUL_DATACENTER}.${CONSUL_DOMAIN}"
Bootstrap ACLs
Execute the `consul info` command to verify that the Consul CLI can reach your Consul server.
The output informs you that, while the Consul CLI can reach your Consul server, Consul's ACLs are blocking the request.
$ consul info
Error querying agent: Unexpected response code: 403 (Permission denied: anonymous token lacks permission 'agent:read' on "consul-server-0". The anonymous token is used implicitly when a request does not specify a token.)
Bootstrap the Consul ACL system and save the output in a file named `acl-token-bootstrap.json`.
$ consul acl bootstrap --format json | tee ${OUTPUT_FOLDER}secrets/acl-token-bootstrap.json
{
"CreateIndex": 21,
"ModifyIndex": 21,
"AccessorID": "c779a34c-f978-93f8-30e3-733f480821a0",
"SecretID": "a3301c84-e67a-dd2b-39e9-d746e21b6766",
"Description": "Bootstrap Token (Global Management)",
"Policies": [
{
"ID": "00000000-0000-0000-0000-000000000001",
"Name": "global-management"
}
],
"Local": false,
"CreateTime": "2025-01-21T11:31:32.671810716Z",
"Hash": "X2AgaFhnQGRhSSF/h0m6qpX1wj/HJWbyXcxkEM/5GrY="
}
The command generates a global management token with full permissions over your datacenter. The management token is the value associated with the `SecretID` key.
Extract the management token from the file and assign it to the `CONSUL_HTTP_TOKEN` environment variable.
$ export CONSUL_HTTP_TOKEN=`cat ${OUTPUT_FOLDER}secrets/acl-token-bootstrap.json | jq -r ".SecretID"`
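As a quick sanity check, you can ask Consul to describe the token the CLI is now presenting. The output should match the bootstrap output above, including the global-management policy:
$ consul acl token read -self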
Now that the ACL system is bootstrapped, execute the `consul info` command again to interact with the Consul server.
$ consul info
agent:
check_monitors = 0
check_ttls = 0
checks = 0
services = 0
build:
prerelease =
revision = 33e5727a
version = 1.20.2
version_metadata =
consul:
acl = enabled
bootstrap = true
known_datacenters = 1
leader = true
leader_addr = 172.18.0.3:8300
server = true
raft:
## ...
runtime:
## ...
serf_lan:
## ...
encrypted = true
## ...
members = 1
## ...
serf_wan:
## ...
encrypted = true
## ...
members = 1
## ...
Create server tokens
The Consul datacenter's ACL system is now fully bootstrapped, and the server agent is ready to receive requests. To complete the Consul server's configuration, create ACL tokens for the server agent to use.
In this section, the `generate_consul_server_tokens.sh` script automates the process of creating policies and tokens for your Consul server. The script generates two ACL tokens with different policies, one for the Consul DNS service and one for the server agent, and then applies them to the Consul server.
In the terminal connected to your bastion host, run the `generate_consul_server_tokens.sh` script to create the ACL policies and tokens for your Consul server.
$ ~/ops/scenarios/00_base_scenario_files/supporting_scripts/generate_consul_server_tokens.sh
[generate_consul_server_tokens.sh] - - Generate Consul server tokens
+ --------------------
| Parameter Check
+ --------------------
[WARN] Script is running with the following values
[WARN] ----------
[WARN] CONSUL_DATACENTER = dc1
[WARN] CONSUL_DOMAIN = consul
[WARN] CONSUL_SERVER_NUMBER = 1
[WARN] ----------
[WARN] Generated configuration will be placed under:
[WARN] OUTPUT_FOLDER = ~/assets/scenario/conf/
[WARN] ----------
+ --------------------
| Prepare folder
+ --------------------
+ --------------------
| Create Consul ACL policies and tokens
+ --------------------
- Define policies
- [ acl-policy-dns.hcl ]
- [ acl-policy-server-node ]
- Configure CLI to communicate with Consul
- Create Consul ACL policies
- Create Consul ACL tokens
- Set tokens for consul-server-0
ACL token "agent" set successfully
ACL token "default" set successfully
After you create the server tokens, your Consul logs show the updated ACL tokens.
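You can optionally verify the script's work by listing the ACL policies it created. Alongside the builtin global-management policy, you should see the two policies named in the script output above:
$ consul acl policy list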
Interact with the Consul server
Use the CLI, API, or UI to retrieve and review information about the Consul datacenter.
Use the Consul CLI to retrieve members in your Consul datacenter.
$ consul members
Node Address Status Type Build Protocol DC Partition Segment
consul-server-0 172.18.0.3:8301 alive server 1.20.2 2 dc1 default <all>
Refer to the Consul CLI commands reference for the full list of available commands.
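The same information is available over the HTTP API. The following curl call queries the catalog for registered nodes. The -k flag skips certificate hostname verification because the server certificate is issued for server.dc1.consul rather than the node name in CONSUL_HTTP_ADDR; a stricter alternative is curl's --connect-to option:
$ curl --silent -k \
    --header "X-Consul-Token: ${CONSUL_HTTP_TOKEN}" \
    "${CONSUL_HTTP_ADDR}/v1/catalog/nodes"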
Interact with Consul KV
Consul includes a key/value (KV) store that you can use to manage your services' configuration. You can interact with the KV store using the CLI, API, and UI; this tutorial covers the CLI and API methods.
Create a key named `db_port` with a value of `5432`.
$ consul kv put consul/configuration/db_port 5432
Success! Data written to: consul/configuration/db_port
Then, retrieve the value.
$ consul kv get consul/configuration/db_port
5432
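The same operations are available through Consul's KV HTTP API. Here is a curl sketch, with the same -k caveat as before; the ?raw query parameter returns the value without base64 encoding:
$ curl --silent -k \
    --header "X-Consul-Token: ${CONSUL_HTTP_TOKEN}" \
    --request PUT --data "5432" \
    "${CONSUL_HTTP_ADDR}/v1/kv/consul/configuration/db_port"
true
$ curl --silent -k \
    --header "X-Consul-Token: ${CONSUL_HTTP_TOKEN}" \
    "${CONSUL_HTTP_ADDR}/v1/kv/consul/configuration/db_port?raw"
5432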
Interact with Consul DNS
Consul also provides you with a fully featured DNS server that you can use to resolve your services.
By default, Consul's DNS service listens on port 8600. In this scenario, the server is configured to serve DNS on the standard port 53, as shown in the Client Addr line of the agent startup output.
$ dig @consul-server-0 -p 53 consul.service.consul
; <<>> DiG 9.18.28-1~deb12u2-Debian <<>> @consul-server-0 -p 53 consul.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58457
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;consul.service.consul. IN A
;; ANSWER SECTION:
consul.service.consul. 0 IN A 172.18.0.3
;; Query time: 0 msec
;; SERVER: 172.18.0.3#53(consul-server-0) (UDP)
;; WHEN: Tue Jan 21 11:31:33 UTC 2025
;; MSG SIZE rcvd: 66
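You can also ask Consul DNS for SRV records, which include the port a service is registered with in addition to its address:
$ dig @consul-server-0 -p 53 consul.service.consul SRV +short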
Next steps
In this tutorial, you deployed a Consul server on a VM. After deploying Consul, you interacted with Consul using the CLI, API, and UI.
This deployment does not have Consul client agents running. Even when deployed without client agents, you can still:
- Use Consul's KV store as a centralized configuration management tool. You can use it with consul-template to configure your services automatically, as shown in the sketch after this list.
- Use the Consul server as a DNS server. You can use it to register and resolve external services in your network.
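As an example of the first point, here is a minimal consul-template sketch that renders the db_port key you created earlier into a file. It assumes the consul-template binary is installed on the bastion host (the scenario ships a download_consul_template.sh helper) and that the CONSUL_HTTP_ADDR, CONSUL_CACERT, CONSUL_TLS_SERVER_NAME, and CONSUL_HTTP_TOKEN variables from this tutorial are still exported, because consul-template reads the same environment variables:
$ echo 'db_port = {{ key "consul/configuration/db_port" }}' > /tmp/db.tpl
$ consul-template -template "/tmp/db.tpl:/tmp/db.conf" -once
$ cat /tmp/db.conf
db_port = 5432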
If you want to stop at this tutorial, you can destroy the infrastructure now.
From the `./self-managed/infrastructure/aws` folder of the repository, use `terraform` to destroy the infrastructure.
$ terraform destroy --auto-approve
In the next tutorial, you will deploy Consul clients on the VMs hosting your application. Then, you will register the services running on each server and set up health checks for each service. This enables service discovery using Consul's distributed health check system and DNS.
For more information about the topics covered in this tutorial, refer to the following resources: