Automate your network configuration with Consul-Terraform-Sync
HashiCorp's Network Infrastructure Automation Integration Program allows your network administrators to build integrations that automatically apply network and security infrastructure changes in response to changes in the Consul service catalog.
Network Infrastructure Automation is carried out by Consul-Terraform-Sync, a multi-platform tool that connects to the Consul catalog and monitors changes in services' state and health. The tool leverages Terraform as the underlying automation engine and uses the Terraform provider ecosystem to drive relevant changes to your network infrastructure.
Consul-Terraform-Sync can be configured to execute one or more automation tasks that use variables based on the content of the Consul service catalog. Each task consists of a runbook automation written as a compatible Terraform module using resources and data sources for the underlying network infrastructure.
In this tutorial, you will learn how to configure Consul-Terraform-Sync to communicate with your Consul datacenter, react to service changes, and execute an example task.
Prerequisites
This tutorial features two learning paths:
- HashiCorp Cloud Platform (HCP) Consul deployment, AWS Elastic Kubernetes Service (EKS), and AWS EC2 instance
- Self-managed Kubernetes Consul deployment on AWS Elastic Kubernetes Service (EKS), and AWS EC2 instance
Select your learning path by clicking one of the following tabs.
- An HCP account configured for use with Terraform
- An AWS account configured for use with Terraform
- git >= 2.0
- aws-cli >= 2.0
- terraform >= 1.0
- kubectl >= 1.21
- consul-k8s = 0.44.0
- jq >= 1.6
Clone GitHub repository
Clone the GitHub repository containing the configuration files and resources.
Using the following GitHub repository, you will provision these resources:
- An HCP HashiCorp Virtual Network (HVN)
- An HCP Consul server Cluster
- An AWS VPC
- An AWS Elastic Kubernetes Service (EKS) cluster
- An AWS key pair
- An AWS EC2 instance with a Consul client and CTS installed
Change into the directory that contains the complete configuration files for this tutorial.
Deploy your infrastructure
The sub-directories inside the repository are responsible for the following:
- infrastructure: an HVN and Consul cluster on HCP, an AWS VPC, and an AWS EKS cluster.
- ec2-instance-cts: an EC2 instance running a Consul client and CTS.
- hashicups-v1.0.2: the HashiCups demo application used in this tutorial.
Deploy your HCP, VPC and EKS
You will now deploy the basic infrastructure, which consists of an HCP Consul cluster, a VPC, and an EKS cluster. The following command downloads the necessary providers and initializes the backend.
Deploy the resources using the `apply` command. Confirm the run by entering `yes`.
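If you are following along from the repository root, these two steps look roughly like the following sketch (the directory name comes from the repository layout described above).

```
# Initialize the working directory, then create the resources
$ cd infrastructure
$ terraform init
$ terraform apply
```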
Once you confirm, the deployment takes a few minutes to complete. If the deployment was successful, you should get the following output.
Your basic infrastructure is now deployed, which includes your AWS VPC, AWS EKS cluster, HCP HVN, and HCP Consul cluster. Next, configure the `kubectl` tool to communicate with your EKS Kubernetes cluster.
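One common way to do this is with the AWS CLI, as in the sketch below; the `region` and `kubernetes_cluster_id` output names are assumptions, so adjust them to the outputs exposed by the infrastructure configuration.

```
# Add or update the EKS cluster entry in your local kubeconfig
$ aws eks update-kubeconfig \
    --region $(terraform output -raw region) \
    --name $(terraform output -raw kubernetes_cluster_id)
```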
Prepare configuration to deploy Consul
Terraform has automatically retrieved the client configuration information from HCP that you need to connect your EKS cluster client agents to your Consul cluster. There are two files in your `infrastructure` directory - a default client configuration and a certificate. Both should be considered secrets, and in production environments they should be kept in a secure location.
Use `ls` to confirm that both the `client_config.json` and `ca.pem` files are available.
Next, export the Consul root token into a shell variable.
Create a consul namespace in your Kubernetes cluster. Your Consul secrets and resources will be created in this namespace.
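As a sketch, assuming the HCP Consul root token is exposed as a Terraform output (the output name below is hypothetical), these two steps might look like the following.

```
# Export the root token for use by later commands (output name is an assumption)
$ export CONSUL_HTTP_TOKEN=$(terraform output -raw consul_root_token)

# Create the namespace that will hold the Consul secrets and resources
$ kubectl create namespace consul
```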
Consul Service on HCP is secure by default. This means that you will need to configure client agents with the gossip encryption key, the Consul CA cert, and a root ACL token. You need to store these secrets in the Kubernetes secrets engine so that the Helm chart can reference and retrieve them during installation.
Use the `ca.pem` file in the current directory to create a Kubernetes secret to store the Consul CA certificate.
The Consul gossip encryption key is embedded in the `client_config.json` file that you downloaded and extracted into your current directory. Issue the following command to create a Kubernetes secret that stores the Consul gossip encryption key. The command uses `jq` to extract the value from the `client_config.json` file.
The last secret you need to add is an ACL bootstrap token. You can use the one you set to your `CONSUL_HTTP_TOKEN` environment variable earlier. Issue the following command to create a Kubernetes secret to store the bootstrap ACL token.
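A sketch of the three secret-creation commands follows. The secret and key names (`consul-ca-cert`/`tls.crt`, `consul-gossip-key`/`key`, `consul-bootstrap-token`/`token`) and the `.encrypt` jq path are assumptions; match them to the names referenced by your Helm values file and the structure of your client configuration.

```
# CA certificate from the ca.pem file
$ kubectl create secret generic consul-ca-cert -n consul \
    --from-file='tls.crt=./ca.pem'

# Gossip encryption key extracted from client_config.json with jq
$ kubectl create secret generic consul-gossip-key -n consul \
    --from-literal=key=$(jq -r '.encrypt' client_config.json)

# Bootstrap ACL token from the environment variable you exported earlier
$ kubectl create secret generic consul-bootstrap-token -n consul \
    --from-literal=token=$CONSUL_HTTP_TOKEN
```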
Note
If you are configuring a production environment, you should create a client token with a minimum set of privileges. For an in-depth review of how to configure ACLs for Consul, refer to the Secure Consul with Access Control Lists tutorial or the official documentation.
Next, extract some more configuration values from the `client_config.json` file and set them to environment variables that can be used to generate your Helm values file. Issue the following command to set the `DATACENTER` environment variable.
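A minimal sketch, assuming the client configuration exposes the datacenter name under a top-level `datacenter` key:

```
$ export DATACENTER=$(jq -r '.datacenter' client_config.json)
```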
Extract the private server URL from the client config so that it can be set in the Helm values file as the `externalServers:hosts` entry. This value will be passed as the `retry_join` option to the Consul clients.
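As a sketch, assuming the client configuration stores the join address in a top-level `retry_join` list (adjust the jq path and variable name to your setup):

```
# Private server address used for externalServers:hosts / retry_join
$ export RETRY_JOIN=$(jq -r '.retry_join[0]' client_config.json)
```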
Extract the public server URL from the client config so that it can be set in the Helm values file as the `k8sAuthMethodHost` entry.
Note
The following script relies on your cluster matching your current-context name. If you have created an alias for your context, or the current-context name does not match the cluster name for any other reason, you must manually set `KUBE_API_URL` to the API server URL of your EKS cluster. You can use `kubectl config view` to view your cluster and retrieve the API server URL.
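If you do need to set it manually, one way (a sketch, relying on the cluster name matching the current context) is to read the server URL from your kubeconfig:

```
$ export KUBE_API_URL=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$(kubectl config current-context)\")].cluster.server}")
```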
Validate that all of your environment variables have been set.
Example output:
Note
If any of these environment variables are not correctly set, the following script will generate an incomplete Helm values file, and the Consul Helm installation will not succeed.
Note
The value for the `global.name` configuration must be unique for each Kubernetes cluster where Consul clients are installed and configured to join Consul as a shared service, such as HCP Consul. You can change the global name through the `global.name` value in the Helm chart.
Generate the Consul Helm values file with the following command.
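The generation step is typically a shell heredoc that interpolates the environment variables you just set into a values file. The following is a sketch only: the exact file in the tutorial repository contains more settings, the `RETRY_JOIN` variable comes from the earlier sketch, and the secret names must match the Kubernetes secrets you created above.

```
$ cat > config.yaml <<EOF
global:
  name: consul
  enabled: false
  datacenter: ${DATACENTER}
  acls:
    manageSystemACLs: true
    bootstrapToken:
      secretName: consul-bootstrap-token
      secretKey: token
  gossipEncryption:
    secretName: consul-gossip-key
    secretKey: key
  tls:
    enabled: true
    caCert:
      secretName: consul-ca-cert
      secretKey: tls.crt
externalServers:
  enabled: true
  hosts: ["${RETRY_JOIN}"]
  httpsPort: 443
  k8sAuthMethodHost: ${KUBE_API_URL}
  useSystemRoots: false
client:
  enabled: true
connectInject:
  enabled: true
EOF
```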
Lastly, validate that the config file is populated correctly.
Deploy Consul in your EKS cluster
Make sure that the `consul-k8s` tool is the correct version for this deployment - `v0.44.0`.
Deploy Consul to your AWS EKS platform using the `consul-k8s` tool. Confirm the run by entering `y`.
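A sketch of the version check and the installation, assuming the generated values file is named `config.yaml` as in the sketch above:

```
# Confirm the CLI version matches the one required by this tutorial
$ consul-k8s version

# Install Consul on the EKS cluster using the generated values file
$ consul-k8s install -config-file=config.yaml
```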
Once you confirm, the deployment takes a few minutes to complete. If the deployment was successful, you should get similar output.
Verify that Consul is running by inspecting the output of the `consul members` command.
Next, you will update your existing coredns ConfigMap in the kube-system namespace to include a forward definition for Consul that points to the cluster IP of the Consul DNS service. The following command adds the correct section to the K8s configuration.
Trigger a restart of the CoreDNS deployment to apply your changes.
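A sketch of the supporting commands follows; the Consul DNS service name assumes a Helm release named `consul`, so adjust it to your `global.name` value.

```
# Look up the cluster IP of the Consul DNS service; this is the address
# that the added "forward" stanza in the coredns ConfigMap points to
$ kubectl get service consul-dns -n consul -o jsonpath='{.spec.clusterIP}'

# Restart CoreDNS so it picks up the updated ConfigMap
$ kubectl rollout restart deployment coredns -n kube-system
```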
Verify that DNS forwarding works by resolving the `consul.service.consul` domain.
Deploy the EC2 instance for Consul-Terraform-Sync
Deploy an EC2 instance that will run the Consul-Terraform-Sync service.
First, use the Terraform output values from the `infrastructure` deployment as input variables for the `ec2-instance-cts` deployment. The following command does that for you automatically.
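Conceptually, the command exports the infrastructure outputs as variable values for the next deployment. One way to do this is sketched below; the generated filename is arbitrary, and the output names must line up with the variables expected by the `ec2-instance-cts` configuration.

```
# Run from the infrastructure directory: flatten the Terraform outputs
# into an auto-loaded tfvars file for the ec2-instance-cts deployment
$ terraform output -json \
    | jq 'map_values(.value)' \
    > ../ec2-instance-cts/terraform.auto.tfvars.json
```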
You are now ready to deploy the EC2 instance running Consul and Consul-Terraform-Sync. Initialize Terraform, and then apply the Terraform configuration.
Deploy the resources using the `apply` command. Confirm the run by entering `yes`.
Once you confirm, the deployment takes a few minutes to complete. If the deployment was successful, you should get the following output.
To further interact with the Consul cluster, you will need an access token. Get the root token and make note of it for later.
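If the infrastructure configuration exposes the token as a Terraform output, retrieving it might look like the following sketch; the output name is hypothetical.

```
# Read the root token from the infrastructure deployment's outputs
$ terraform -chdir=../infrastructure output -raw consul_root_token
```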
Note
This tutorial uses the Consul root token which is not recommended for production use. Examine the Secure Consul with Access Control Lists (ACLs) tutorial to learn how to correctly create a token just for the Consul client agent on the EC2 instance.
Your EC2 instance is now deployed. The Consul client agent is currently being installed by the provisioning scripts. After a couple of minutes, make sure that the Consul service has started by running the `consul members` command.
Deploy HashiCups to Kubernetes
Deploy the example app HashiCups to your K8s cluster.
Forward a port to the K8s cluster to verify that HashiCups has been deployed correctly.
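For example, a port-forward to the HashiCups frontend might look like the sketch below; the deployment name and container port are assumptions, so match them to the manifests in `hashicups-v1.0.2`.

```
# Forward local port 8082 to the HashiCups frontend
$ kubectl port-forward deploy/frontend 8082:3000
```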
Open the HashiCups app by pointing your browser to http://localhost:8082. You should find the HashiCups welcome page.
Next, stop the K8s port forwarding by entering CTRL+C to return to your terminal session.
Finally, make sure that the Consul catalog contains the HashiCups services and their sidecar proxies.
Configure Consul-Terraform-Sync
The Consul-Terraform-Sync daemon is configured using configuration files and supports HashiCorp Configuration Language (HCL) and JSON file formats.
A configuration file for Consul-Terraform-Sync is composed of several blocks. This section will guide you through the different blocks and provide example values. For the full list of available options, check the documentation.
Access the EC2 instance that you previously deployed by running the following command.
Create a file named `cts-config.hcl`.
Paste the following content into the `cts-config.hcl` file.
Make sure to edit the token on line 15 to match your root token.
Global configs
Top level options are reserved for configuring the Consul-Terraform-Sync daemon.
In this section you can configure the log level to use for Consul-Terraform-Sync logging, as well as the port used by the daemon to serve the API interface.
Other notable sections are the following; a configuration sketch appears after this list.
- The `buffer_period` section configures the default buffer period for all tasks to mitigate the effects of flapping services on downstream network devices.
- The `syslog` section specifies the syslog server for logging. This section can be useful when Consul-Terraform-Sync is configured as a daemon, for example on Linux using systemd.
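A sketch of these top-level options in HCL; the values shown are illustrative, not requirements.

```
log_level = "INFO"   # verbosity of Consul-Terraform-Sync logging
port      = 8558     # port used to serve the CTS API

buffer_period {
  enabled = true
  min     = "5s"     # wait at least this long after a change before running tasks
  max     = "20s"    # but never delay a run longer than this
}

syslog {
  enabled = false    # set to true to send logs to the local syslog server
}
```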
Consul block
The consul block configures the Consul-Terraform-Sync connection with a Consul agent, which is used to perform queries to the Consul catalog and Consul KV pertaining to task execution.
You can use this block to configure connection parameters, such as the Consul address, and security parameters, such as TLS certificates or an ACL token, to secure the connection with Consul and ensure the Consul-Terraform-Sync daemon has the right privileges to perform the required operations.
Configure CTS to communicate with your HCP Consul cluster using the Consul root token that you retrieved earlier.
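As a sketch, with placeholder values that you must replace with your HCP Consul private address and the root token you retrieved:

```
consul {
  address = "<your-hcp-consul-private-address>:443"
  token   = "<your-consul-root-token>"

  tls {
    enabled = true   # HCP Consul is served over HTTPS
  }
}
```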
Note
In a fully secured environment, with mTLS and ACLs enabled, you can use this section to include the certificates, in addition to the token, required by Consul-Terraform-Sync to securely communicate with Consul. You can also specify the token using environment variables, such as `CONSUL_HTTP_TOKEN`, to avoid having sensitive data in your configuration file. Furthermore, the token used here should have only the minimum level of access needed. Refer to the Secure Consul-Terraform-Sync for Production guide for more information on how to do that.
Performance considerations
As shown in the architectural diagram above, it is recommended to run Consul-Terraform-Sync on a dedicated node running a Consul agent. This provides dedicated resources for network automation and allows you to fine-tune security and privilege separation between the network administrators and the other Consul agents.
Driver “terraform” block
The driver block configures the subprocess used by Consul-Terraform-Sync to propagate infrastructure change. The Terraform driver is a required configuration for Consul-Terraform-Sync to relay provider discovery and installation information to Terraform.
Use this section to pin the version of Terraform you want to use, in case you want to define strict requirements for it, to set the path where the Terraform binary should be found or installed, and to choose whether the Terraform logs should be included in the Consul-Terraform-Sync logs or persisted on disk. Finally, you can also define the working directory for Terraform to operate in.
Terraform state
By default, Consul-Terraform-Sync uses Consul to store Terraform state files. If no options are specified, the same Consul instance configured in the `consul` block is used. If you want to use a different backend for Terraform, or to specify a different Consul datacenter as the backend, you can use the `backend` section of the configuration to define it. All standard backends supported by Terraform are supported by Consul-Terraform-Sync. Check the Terraform backend documentation to learn about available options.
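A sketch of a driver block that keeps the default Consul backend and enables Terraform logging; all values shown are optional and for illustration only.

```
driver "terraform" {
  log = true   # include Terraform logs in the CTS log output

  # Optional: pin a Terraform version and choose where the binary lives
  # version = "1.0.0"
  # path    = "/usr/local/bin"

  backend "consul" {
    gzip = true   # compress state stored in the Consul KV store
  }
}
```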
Deprecated
The `driver.terraform.working_dir` option is marked for deprecation in Consul-Terraform-Sync 0.3.0. Use the global `working_dir` option instead.
Task block
A task is executed when any change is detected in the Consul catalog for the services the task is configured to monitor. A change could involve one or more service values, such as an IP address, an added or removed service instance, or tags.
You can check the full list of values that would cause a task to run in the Task Execution documentation.
Consul-Terraform-Sync will attempt to execute each task once upon startup to synchronize infrastructure with the current state of Consul. The daemon will stop and exit if any error occurs while preparing the automation environment or executing a task for the first time.
A task block configures the task to run as automation for the defined services.
The services can be explicitly defined in the task's `condition` block.
You can specify multiple task blocks in case you need to configure multiple tasks.
For example, in the code snippet above, you are defining a task block that will monitor and react to changes on the `frontend` and `public-api` services and run the module defined in the `module` parameter when any change is detected.
For this tutorial, you will be using the Consul Print Module to configure Consul-Terraform-Sync. This Terraform module creates text files on your local machine containing Consul service information. It is a useful module for familiarizing yourself with Consul-Terraform-Sync without requiring deployed infrastructure and credentials.
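Putting this together, a task block for this tutorial might look like the following sketch. The task name matches the workspace folder referenced later in this tutorial; the module source is a placeholder and must point to the print module used by the tutorial configuration.

```
task {
  name        = "learn-cts-example"
  description = "Writes the matched services' addresses to text files"
  module      = "path/to/print-module"   # placeholder: use the module source from your cts-config.hcl

  condition "services" {
    names = ["frontend", "public-api"]
  }
}
```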
Start Consul-Terraform-Sync
Once the configuration file is created, you can start Consul-Terraform-Sync using the `consul-terraform-sync` binary. The binary requires the configuration to be passed using either the `-config-file` or `-config-dir` flag.
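For example, starting the daemon with the configuration file you created earlier looks roughly like this; the `start` subcommand applies to CTS v0.6.0 and later, as noted in the warning below.

```
$ consul-terraform-sync start -config-file=cts-config.hcl
```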
Consul-Terraform-Sync provides different running modes, including some that can be useful to safely test your configuration and the changes that are going to be applied.
The default mode is named daemon mode. In this mode, Consul-Terraform-Sync first passes through a once-mode phase, in which it tries to run all the tasks once, and then turns into a long-running process. During the once-mode phase, the daemon will exit with a non-zero status if it encounters an error. After successfully passing through the once-mode phase, errors are logged and the process is not expected to exit.
You may also start Consul-Terraform-Sync as a systemd process. To learn how to configure Consul-Terraform-Sync as a systemd process, check out the Secure Consul-Terraform-Sync for Production tutorial.
Warning
If you are using a Consul-Terraform-Sync version older than v0.6.0, the `start` parameter is not supported and must be removed.
When running in daemon mode, Consul-Terraform-Sync keeps running in the foreground and prevents you from performing other operations in the same terminal. It will also be terminated if the terminal session is closed. Consider configuring it as a service that runs at system startup so that it survives machine reboots.
Review files created by Consul-Terraform-Sync daemon
After startup, Consul-Terraform-Sync will run Terraform inside the `working_dir` defined in the configuration. If no directory is defined, it creates a folder named `sync-tasks` in the directory from which the binary is started.
Inside that folder, Terraform will create a workspace for each task defined in the configuration.
Here are the files of particular interest in this folder:
- The `main.tf` file contains the Terraform block, provider blocks, and a module block calling the module configured for the task.
- The `variables.tf` file contains the services input variable, which determines module compatibility with Consul-Terraform-Sync, and optionally the intermediate variables used to dynamically configure providers.
- The `terraform.tfvars` file is where the services input variable is assigned values from the Consul catalog. It is periodically updated to reflect the current state of the configured set of services for the task.
- The `terraform.tfvars.tmpl` file is used to template the information retrieved from the Consul catalog into the `terraform.tfvars` file.
For example, running Consul-Terraform-Sync against the current deployment will produce the following `terraform.tfvars`:
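(The exact contents depend on your deployment; the snippet below is an illustrative sketch with placeholder addresses, and several generated fields are omitted for brevity.)

```
services = {
  "frontend.ip-10-0-1-42.dc1" : {
    id      = "frontend"
    name    = "frontend"
    address = "10.0.1.42"
    port    = 3000
    status  = "passing"
    tags    = []
    # ...node and metadata fields omitted...
  },
  "public-api.ip-10-0-2-17.dc1" : {
    id      = "public-api"
    name    = "public-api"
    address = "10.0.2.17"
    port    = 8080
    status  = "passing"
    tags    = []
    # ...node and metadata fields omitted...
  },
}
```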
Note
Generated template and Terraform configuration files are crucial to the automation of tasks. Any manual changes to the files may not be preserved and could be overwritten by a subsequent update.
Review automation results
In this example the module prints the addresses of the matched services into one file for each matched service. You will find two of those files - `frontend.txt` and `public-api.txt` - in the task workspace folder `learn-cts-example`.
In a real-life scenario, your module will react to service changes by triggering Terraform runs that deploy changes to your network infrastructure.
You will now scale up a deployment and observe the changes that CTS makes to the text files for the `frontend` and `public-api` services. First, inspect the current content of both files.
Keep the current session to the EC2 instance open (CTS is running in the background), and open a new terminal session on your own machine. Then scale up the K8s deployment of the `frontend` service to two instances.
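A sketch of the scale-up, assuming the Kubernetes deployment is named after the `frontend` service:

```
$ kubectl scale deployment frontend --replicas=2
```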
Return to your EC2 instance session and you will find that, while CTS was running in the background, it reacted to the scale-up of the `frontend` deployment.
Press enter to clear your terminal and then check out the content of the text files for changes.
By scaling up the `frontend` deployment, CTS reacted to the change and refreshed the content of the text file to include the IP address of the second instance of the service. Meanwhile, the `public-api` service is unchanged.
Clean-up
Exit the session to your EC2 instance.
Destroy resources
Use the `terraform destroy` command to clean up the resources you created.
First, destroy the CTS EC2 instance.
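Assuming you are back on your local machine at the repository root, the destroy step is a standard Terraform destroy run against the `ec2-instance-cts` directory:

```
# Or run `terraform destroy` from inside the ec2-instance-cts directory
$ terraform -chdir=ec2-instance-cts destroy
```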
Remember to confirm by entering `yes`.
Once you confirm, the removal takes a few minutes to complete. If the command was successful, you should get the following output.
Next, uninstall Consul with the help of the `consul-k8s` tool. Confirm the uninstall action and then the deletion of Consul data when prompted.
Next, destroy the AWS VPC and the EKS cluster.
Remember to confirm by entering `yes`.
Once you confirm, the removal takes a few more minutes to complete. Once the removal is successful, you should get the following output.
Next steps
In this tutorial you learned the basics of Network Infrastructure Automation with Consul-Terraform-Sync. You got an architectural overview of all the logical blocks involved in the process and the advantages of automating Day-2 operations for network administrative tasks. Finally, you tried the Consul-Terraform-Sync binary first hand with an example module that retrieved information from the Consul service catalog and printed it to a text file in your working directory.
Specifically you:
- Installed Consul-Terraform-Sync
- Created a working configuration file for the binary
- Started Consul-Terraform-Sync
- Checked the files and data pulled by Consul-Terraform-Sync from Consul into the Terraform workspace
In the next tutorial, you will learn about the different run modes for Consul-Terraform-Sync, how to inspect task status using the REST API, and how to inspect Terraform state when using Consul as the backend for Terraform.
For more information on topics covered in this tutorial, check out the following resources.
- Read more about the Network Infrastructure Automation (NIA) with CTS in the Network Infrastructure Automation documentation
- Configure CTS as a long running service by completing the Secure Consul-Terraform-Sync for Production tutorial
- Learn more about service mesh by completing the Kubernetes or virtual machine tutorials
To learn even more about operating, observing, and monitoring your Consul service mesh, check out the following tutorials and collections.