A Complete Cloud Ecosystem built with Terraform, HCP Waypoint, HCP Vault, HCP Consul, and Nomad
In this use case, we will spin up our entire infrastructure with terraform apply, then launch the application to a container on EC2 within our secured internal network with waypoint up.
Welcome to the HashiCorp stack magic.
The ingredients for this special dish are Waypoint, Terraform, Consul, Nomad, and Vault. We chose to use HashiCorp managed versions of Waypoint, Consul, and Vault for one convenient HCP UI and to showcase the HashiCorp Virtual Network (HVN). A similar experience is of course possible with self-managed versions, and equally possible with Terraform automation, although it may require more familiarity with Terraform modules.
Prerequisites
To follow along with this use case on your own machine, you will need:
- A HashiCorp Cloud Platform (HCP) account.
- An AWS account with AWS access credentials configured locally.
- A Docker registry, and credentials to push to it.
As well as the following binaries installed locally:
It will help to have some familiarity with HashiCorp Configuration Language (HCL), which is the syntax used to write configuration files for Terraform and Waypoint.
In this use case, we will configure our infrastructure as code, which Terraform will run to provision the following:
- An AWS Virtual Private Cloud (VPC), peered with our HashiCorp Virtual Network (HVN)
- An HCP Consul service mesh to manage services within the secure private network
- An HCP Vault cluster to securely store credentials
- A HashiCorp Nomad cluster on AWS EC2 instances to manage application containers
We will write application deployment configurations as code, which HCP Waypoint will run to build the container image and deploy the application to the infrastructure we set up with Terraform.
All it takes is two commands:
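For reference, these are the two commands (run each from the directory containing the corresponding configuration):

```shell
terraform apply   # provision the full infrastructure
waypoint up       # build, deploy, and release the application
```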
The Terraform and Waypoint configuration files and a sample application live in this repository. We will deploy a custom HashiCorp version of the popular game "2048". Please clone the repo if you'd like to follow along with the tutorial on your local machine.
Create service principal and key
To leverage the Terraform integration and deploy your HCP Consul using Terraform, you must create a Service Principal and a key associated with it.
From the HCP left menu, select Access Control (IAM). On the Access Control (IAM) page, click the Service Principals tab, then click the Create link.
Use any name you prefer for the Service Principal (we used xx-2048 for this use case), but remember to assign it the Contributor role.
Once the Service Principal is created, click its name to view its details. From the detail page, click + Generate key.
Note: Remember to copy the Client ID and secret; you won't be able to retrieve the secret once you close the pop-up.
Save the client ID and secret as the environment variables HCP_CLIENT_ID and HCP_CLIENT_SECRET. Alternatively, configure the provider by pasting the client ID and secret directly into the provider configuration.
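For example, to set the credentials as environment variables (substitute your own values):

```shell
export HCP_CLIENT_ID=<your-client-id>
export HCP_CLIENT_SECRET=<your-client-secret>
```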
You can view existing keys and generate new ones on this page:
When client credentials are set, they are always used by the HCP Provider client, regardless of an existing user session.
Define the Virtual Network
VPC
You can find the VPC configuration in vpc.tf. There is more than one way to create this VPC, including manually in AWS.
The Amazon VPC can be managed with the AWS provider.
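As a sketch of what such a configuration might look like with the AWS provider (the CIDR blocks and names here are illustrative assumptions, not necessarily the repo's actual values):

```hcl
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "xx-2048-vpc"
  }
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}
```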
HVN
In the hcp.tf configuration file, we define a HashiCorp Virtual Network (HVN) named "xx-2048" and peer it to the VPC.
HVNs enable you to deploy HashiCorp Cloud products without having to manage the networking details.
Creating peering connections from your HVN allows you to connect and launch AWS resources from your HCP account. Each peering connection requires an HVN route to set its CIDR block.
Once the VPC is peered with the HVN, the resources deployed inside the VPC will be able to reach each other.
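A minimal sketch of the HVN, peering connection, and route (resource names, region, and CIDR blocks are illustrative; see hcp.tf in the repo for the actual values):

```hcl
resource "hcp_hvn" "main" {
  hvn_id         = "xx-2048"
  cloud_provider = "aws"
  region         = "us-west-2"
  cidr_block     = "172.25.16.0/20"
}

resource "hcp_aws_network_peering" "peering" {
  hvn_id          = hcp_hvn.main.hvn_id
  peering_id      = "xx-2048-peering"
  peer_vpc_id     = aws_vpc.main.id
  peer_account_id = aws_vpc.main.owner_id
  peer_vpc_region = "us-west-2"
}

# Route HVN traffic destined for the VPC through the peering connection.
resource "hcp_hvn_route" "route" {
  hvn_link         = hcp_hvn.main.self_link
  hvn_route_id     = "xx-2048-route"
  destination_cidr = aws_vpc.main.cidr_block
  target_link      = hcp_aws_network_peering.peering.self_link
}

# The peering connection must also be accepted on the AWS side.
resource "aws_vpc_peering_connection_accepter" "peer" {
  vpc_peering_connection_id = hcp_aws_network_peering.peering.provider_peering_id
  auto_accept               = true
}
```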
Define HCP Consul
HCP Consul enables you to quickly deploy Consul servers in AWS while offloading the operations burden to the SRE experts at HashiCorp.
Provisioning HCP Consul with Terraform is incredibly easy.
You can define the cluster in Terraform using the hcp_consul_cluster resource, as shown in hcp.tf:
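A minimal example of the resource (the cluster ID and tier here are assumptions; check hcp.tf for the actual values):

```hcl
resource "hcp_consul_cluster" "main" {
  cluster_id      = "xx-2048-consul"
  hvn_id          = hcp_hvn.main.hvn_id
  tier            = "development"
  public_endpoint = true # demo only
}
```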
You can also provision a Consul cluster via the HCP UI.
Define HCP Vault
Defining an HCP Vault cluster via Terraform also requires minimal configuration, using the hcp_vault_cluster resource.
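A minimal sketch (cluster ID and tier are illustrative assumptions):

```hcl
resource "hcp_vault_cluster" "main" {
  cluster_id      = "xx-2048-vault"
  hvn_id          = hcp_hvn.main.hvn_id
  tier            = "dev"
  public_endpoint = true # demo only; avoid public endpoints in production
}
```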
Note: We enabled the public endpoint for our Vault cluster for ease of demo, but deploying Vault in a publicly accessible way should be avoided if possible to reduce security exposure.
Vault Cluster Configurations
We will further configure Vault to communicate with Nomad. The Vault provider allows Terraform to read from, write to, and configure HashiCorp Vault:
You can find the detailed Vault configuration in vault-config.tf.
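As a sketch, the Vault provider can be pointed at the HCP cluster using an admin token generated by the hcp_vault_cluster_admin_token resource (resource names here are illustrative):

```hcl
# Generate an admin token for the HCP Vault cluster,
# used by the Vault provider below.
resource "hcp_vault_cluster_admin_token" "main" {
  cluster_id = hcp_vault_cluster.main.cluster_id
}

provider "vault" {
  address   = hcp_vault_cluster.main.vault_public_endpoint_url
  token     = hcp_vault_cluster_admin_token.main.token
  namespace = "admin" # HCP Vault clusters root into the "admin" namespace
}
```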
Define Nomad Configurations
In nomad.tf, we configure the AWS Security Group, find a demo AMI, and configure the Nomad server, client, and auto-scaling.
The scripts user-data-client.sh and user-data-server.sh run directly on the EC2 instances spun up by Terraform to provision Nomad.
Note: Since this is a use-case demo, we are not following security best practices. For example, our AWS Security Group allows all ingress and egress traffic, and we provision only a single Nomad server and a single client, which does not insulate against downtime. For real production use cases, please configure your security group properly and run multiple server and client nodes.
The launch template looks like this:
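A sketch of what such a launch template might look like (the names, instance type, and referenced resources are illustrative assumptions, not the repo's actual values):

```hcl
resource "aws_launch_template" "nomad_server" {
  name_prefix   = "xx-2048-nomad-server-"
  image_id      = data.aws_ami.demo.id
  instance_type = "t2.micro"

  vpc_security_group_ids = [aws_security_group.allow_all.id]

  # Provision Nomad on boot; user data must be base64-encoded.
  user_data = base64encode(file("${path.module}/user-data-server.sh"))
}
```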
This is not the only way to define Nomad clusters in Terraform. For more practice deploying Nomad to EC2 with Terraform, follow this tutorial.
Deploy the Infrastructure!
Once you have finished defining the infrastructure configuration (or cloned it from the repo) and set the environment variables for HCP and AWS access, you are ready to deploy your infrastructure!
Once Terraform has been initialized, you can verify the resources that will be created using the terraform plan command.
Finally, this is the magic moment we promised:
Remember to confirm the run by entering yes!
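The full sequence, for reference:

```shell
terraform init   # initialize the working directory and download providers
terraform plan   # preview the resources that will be created
terraform apply  # provision everything; confirm with "yes"
```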
Set up HCP Waypoint Context
HCP Waypoint is currently in Beta, and a Terraform module to provision Waypoint is still under development.
Head into your HCP UI and follow this tutorial to set up your local environment.
Navigate to Waypoint in the HCP dashboard:
Then click "Install New Runner", which will generate the terminal commands to set up the Waypoint context and install a runner on your chosen platform.
In this case, we choose Nomad with a host volume named "wp_runner".
You will need to export the Nomad server address, which you can find on the AWS EC2 instance:
After installing the HCP Waypoint context, export the Nomad address to your local environment and install the runner; then you're all set to use HCP Waypoint.
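For example (the address is a placeholder; 4646 is Nomad's default HTTP port):

```shell
export NOMAD_ADDR=http://<ec2-public-ip>:4646
```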
The waypoint.hcl file defines application deployment configurations.
For this use case, we have a simple "build", "deploy", and "release" flow:
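A sketch of what such a waypoint.hcl might look like (the project, app, registry, and datacenter names are illustrative assumptions, not the repo's actual values):

```hcl
project = "hashicorp-2048"

app "game-2048" {
  build {
    # Build the container image with the Docker plugin,
    # then push it to a registry.
    use "docker" {}

    registry {
      use "docker" {
        image = "example-registry/hashicorp-2048"
        tag   = "latest"
      }
    }
  }

  deploy {
    # Schedule the container as a Nomad job.
    use "nomad" {
      datacenter = "dc1"
    }
  }
}
```

The release stage can be omitted, in which case Waypoint falls back to the platform's default behavior.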
You can read more about Waypoint HCL fundamentals, or explore advanced deployment workflows using custom pipelines.
Now, run waypoint init:
And the moment of magic...
waypoint up
Congratulations, your application is live!
HCP Waypoint does not provide a URL service.
You can also deploy the application by installing self-hosted Waypoint and running waypoint up with a different context.
Easy-to-Monitor Dashboards
After our terraform apply and waypoint up, you should have a full HashiCorp stack set up automatically and the application deployed.
HashiCorp Virtual Network
You can find "HashiCorp Virtual Networks" on the left sidebar and see that our HCP Consul and HCP Vault resources have been automatically provisioned via Terraform in the same VPC, and associated with the network. Magic!
Consul Dashboard
You can access the HCP Consul UI by navigating to the Consul tab in HCP, and clicking "Access Consul". For this use case, we configured a public IP address for the cluster, so you can access Consul with the public address. This is not recommended for a production use case.
As you can see, Consul service mesh keeps track of services in our VPC.
Nomad Dashboard
Nomad is a workload orchestrator, so services in the HVN are reflected as jobs. You can see the Nomad server, client, and Waypoint runner.
Have fun exploring! You can find more guidance in the Nomad UI documentation.
HashiCorp stack 2048
You can find a persistent version of the application we deployed here: HashiCorp Stack 2048 Game
Summary
Waypoint and Terraform
Cloud automation magic
Easy deployments!

Our engineers aren't paid to write haikus, but infrastructure automation inspires innovation.
HashiCorp customers already know how easy it is to provision and manage cloud infrastructure with HashiCorp tools, and Waypoint expands the HashiCorp stack by enabling deployment automation with custom pipelines. Hopefully, this use case showcases the synergy between HashiCorp products, and how easy they are to configure. Our mission is to help increase productivity, reduce risk and capital expenditure, and drive innovation within your organization. Thanks for taking the time to follow along this use case, and we are excited to see how you innovate with our tools!
Dive Deeper into the HashiCorp stack
We covered many HashiCorp tools in this use case. We focused on HCP product offerings, where HashiCorp manages the tooling for you. Provisioning is easy with the UI or Terraform, as shown in this demo. Of course, all our tools have Open Source versions and are available for self-management as well. We include below a few tutorials that dive deeper into pieces of the workflow shown in this use case, so you can explore this sweet HashiCorp stack synergy with self-managed Waypoint, Consul, Nomad, and Vault as well!