This tutorial will guide you through deploying a Nomad cluster with access control lists (ACLs) enabled on AWS. Consider checking out the cluster setup overview first as it covers the contents of the code repository used in this tutorial.
For this tutorial, you will need:
- Packer 1.7.7 or later installed locally
- Terraform 1.2.0 or later installed locally
- Nomad 1.3.3 or later installed locally
- An AWS account with credentials set as local environment variables and an AWS EC2 key pair (a CLI sketch for creating one follows this list)
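If you do not already have an EC2 key pair, the sketch below shows one way to create one with the AWS CLI. This is not part of the tutorial repository; it assumes the AWS CLI is installed and configured, and the key pair name nomad-cluster-key is a placeholder you can change. Create the key pair in the same region where you plan to build the cluster.

# Create a new EC2 key pair and save the private key locally
# (nomad-cluster-key is a placeholder name)
$ aws ec2 create-key-pair \
    --key-name nomad-cluster-key \
    --query 'KeyMaterial' \
    --output text > nomad-cluster-key.pem

# Restrict the permissions on the private key file
$ chmod 400 nomad-cluster-key.pem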
The cluster setup code repository contains configuration files for creating a Nomad cluster on AWS. It uses Consul for the initial setup of the Nomad servers and clients and enables ACLs for both Consul and Nomad.
Clone the code repository.
$ git clone https://github.com/hashicorp/learn-nomad-cluster-setup
Navigate to the cloned repository folder.
$ cd learn-nomad-cluster-setup
Check out the v0.2 tag of the repository as a local branch named nomad-cluster.
$ git checkout v0.2 -b nomad-cluster
Navigate to the aws folder.
$ cd aws
There are two main steps to creating the cluster: building an Amazon Machine Image (AMI) with Packer and provisioning the cluster infrastructure with Terraform. Both Packer and Terraform require some configuration before they run, and these configuration variables are defined in the variables.hcl.example file.
Rename variables.hcl.example to variables.hcl and open it in your text editor.
$ mv variables.hcl.example variables.hcl
The .gitignore file in the example repo is set to ignore variables.hcl, so your configuration will not be pushed if you choose to commit the repository to source control. Do not commit sensitive data like credentials to your source code repository.
Update the region variable with your AWS region of choice and save the file. This is the only variable that Packer requires. The remaining variables are for Terraform, and you will update them after building the AMI.
# Packer variables (all are required)
region = "us-east-1"
Make sure that your AWS access credentials are set as environment variables, as Packer uses them to build and register the AMI in AWS.
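As a minimal sketch, setting the credentials in your shell looks like the following; the values are placeholders, and AWS_SESSION_TOKEN is only needed if you use temporary credentials.

$ export AWS_ACCESS_KEY_ID="YOUR_AWS_ACCESS_KEY_ID"
$ export AWS_SECRET_ACCESS_KEY="YOUR_AWS_SECRET_ACCESS_KEY"
# Only required for temporary credentials (for example, from AWS STS or SSO)
$ export AWS_SESSION_TOKEN="YOUR_AWS_SESSION_TOKEN"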
Initialize Packer to download the required plugins.
packer init returns no output when it finishes successfully.
$ packer init image.pkr.hcl
Then, build the image and provide the variables file with the -var-file flag.
Packer will print out a Warning: Undefined variable message notifying you that some variables set in variables.hcl were not used. This is only a warning; the build will still complete successfully.
$ packer build -var-file=variables.hcl image.pkr.hcl
# ...
Build 'amazon-ebs' finished after 14 minutes 32 seconds.

==> Wait completed after 14 minutes 32 seconds

==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
us-east-1: ami-0445eeea5e1406960
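If you want to confirm that the AMI was registered, one option is to list the images owned by your account with the AWS CLI. This is an optional sketch and assumes the AWS CLI is installed and configured for the same region you built in.

$ aws ec2 describe-images \
    --owners self \
    --query 'Images[*].[ImageId,Name,CreationDate]' \
    --output table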
Open variables.hcl in your text editor again. Update key_name with your AWS SSH keypair name and ami with the AMI ID output from the Packer build, then save the file. In this example, the value for ami would be ami-0445eeea5e1406960.
Then, open your terminal and use the built-in uuid() function of the Terraform console to generate two new UUIDs for the Nomad Consul token's ID and secret.
$ terraform console
> uuid()
"a90a52ae-bcb7-e38a-5fe9-6ac084b37078"
> uuid()
"d14d6a73-a0f1-508d-6d64-6b0f79e5cb44"
> exit
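If you prefer not to use the Terraform console, a quick alternative is the uuidgen utility found on most Linux and macOS systems; this sketch pipes its output through tr to match the lowercase format shown above.

# Generate a lowercase UUID (uuidgen prints uppercase letters on some systems)
$ uuidgen | tr '[:upper:]' '[:lower:]'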
Copy these UUIDs and update the nomad_consul_token_id and nomad_consul_token_secret variables with the UUID values. Save the file.
In this example, the value for nomad_consul_token_id would be a90a52ae-bcb7-e38a-5fe9-6ac084b37078 and the value for nomad_consul_token_secret would be d14d6a73-a0f1-508d-6d64-6b0f79e5cb44.
# ...

# Terraform variables (all are required)
key_name                  = "AWS_SSH_KEY_NAME"
ami                       = "ami-0445eeea5e1406960"
nomad_consul_token_id     = "a90a52ae-bcb7-e38a-5fe9-6ac084b37078"
nomad_consul_token_secret = "d14d6a73-a0f1-508d-6d64-6b0f79e5cb44"

# These variables will default to the values shown
# and do not need to be updated unless you want to
# change them
# allowlist_ip         = "0.0.0.0/0"
# name                 = "nomad"
# server_instance_type = "t2.micro"
# server_count         = "3"
# client_instance_type = "t2.micro"
# client_count         = "3"
The remaining variables in variables.hcl are optional.
allowlist_ip is a CIDR range specifying which IP addresses are allowed to access the Consul and Nomad UIs on ports 8500 and 4646, as well as SSH on port 22. The default value of 0.0.0.0/0 allows traffic from everywhere.
We recommend that you update allowlist_ip to your machine's IP address or a range of trusted IPs.
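For example, one way to restrict access to only your machine is to look up your public IP address and use it as a /32 CIDR. This is a sketch; checkip.amazonaws.com is one service that returns your public IP, and 203.0.113.25 is a placeholder value.

# Look up your current public IP address
$ curl -s https://checkip.amazonaws.com
203.0.113.25

# Then set allowlist_ip in variables.hcl to that address as a /32 CIDR, for example:
# allowlist_ip = "203.0.113.25/32"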
name is a prefix for naming the AWS resources.
server_instance_type and client_instance_type are the virtual machine instance types for the cluster server and client nodes, respectively.
server_count and client_count are the number of server and client nodes to create, respectively.
Initialize Terraform to download required plugins and set up the workspace.
$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v4.40.0...
- Installed hashicorp/aws v4.40.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!
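Optionally, you can preview the resources Terraform will create before provisioning anything. terraform plan reads the same variables file and makes no changes to your infrastructure.

$ terraform plan -var-file=variables.hcl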
Provision the resources and provide the variables file with the -var-file flag. Respond yes to the prompt to confirm the operation. The provisioning takes several minutes. Once complete, the Consul and Nomad web interfaces will become available.
$ terraform apply -var-file=variables.hcl

# ...

Apply complete! Resources: 14 added, 0 changed, 0 destroyed.

Outputs:

IP_Addresses = <<EOT

Client public IPs: 126.96.36.199, 188.8.131.52, 184.108.40.206

Server public IPs: 220.127.116.11, 18.104.22.168, 22.214.171.124

The Consul UI can be accessed at http://126.96.36.199:8500/ui
with the bootstrap token: 8f94ee77-bc50-4ba1-bf75-132ed6b9366e

EOT
consul_bootstrap_token_secret = "8f94ee77-bc50-4ba1-bf75-132ed6b9366e"
lb_address_consul_nomad = "http://188.8.131.52"
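If you need the cluster addresses or the Consul bootstrap token again later, you can re-print these values at any time with terraform output.

# Show all outputs from the most recent apply
$ terraform output

# Or print a single value, such as the load balancer address used later in this tutorial
$ terraform output -raw lb_address_consul_nomad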
Verify the services are in a healthy state. Navigate to the Consul UI in your web browser with the link in the Terraform output.
Click on the Log in button and use the bootstrap token secret (consul_bootstrap_token_secret) from the Terraform output to log in.
Click on the Nodes page from the sidebar navigation. There are six healthy nodes, including three Consul servers and three Consul clients created with Terraform.
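If you also have the consul CLI installed locally, you can run a similar check from your terminal. This is an optional sketch, not part of the tutorial scripts; it points the CLI at the Consul address shown in the Terraform output (without the /ui path) and uses the bootstrap token.

# Use the Consul address from your own Terraform output (port 8500, no /ui path)
$ export CONSUL_HTTP_ADDR=http://126.96.36.199:8500
$ export CONSUL_HTTP_TOKEN=$(terraform output -raw consul_bootstrap_token_secret)

# List the cluster members and their status
$ consul members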
Run the post-setup.sh script. It may take some time for the setup scripts to complete and for the Nomad user token to become available in the Consul KV store, so if the post-setup.sh script doesn't work the first time, wait a couple of minutes and try again.
If a nomad.token file already exists from a previous run, the script won't work until the token file has been deleted. Delete the file manually and re-run the script, or use rm nomad.token && ./post-setup.sh.
$ ./post-setup.sh
The Nomad user token has been saved locally to nomad.token and deleted from the Consul KV store.

Set the following environment variables to access your Nomad cluster with the user token created during setup:

export NOMAD_ADDR=$(terraform output -raw lb_address_consul_nomad):4646
export NOMAD_TOKEN=$(cat nomad.token)

The Nomad UI can be accessed at http://184.108.40.206:4646/ui
with the bootstrap token: a3376a1d-58ef-b21a-14cd-da31b2c14292
Run the export commands from the output to set the NOMAD_ADDR and NOMAD_TOKEN environment variables.
$ export NOMAD_ADDR=$(terraform output -raw lb_address_consul_nomad):4646 && \
    export NOMAD_TOKEN=$(cat nomad.token)
Finally, verify connectivity to the cluster with nomad node status.
$ nomad node status
ID        DC   Name              Class   Drain  Eligibility  Status
a945787c  dc1  ip-172-31-94-155  <none>  false  eligible     ready
0582714a  dc1  ip-172-31-81-146  <none>  false  eligible     ready
c52c4f14  dc1  ip-172-31-94-66   <none>  false  eligible     ready
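You can also check the server nodes with nomad server members, which uses the same NOMAD_ADDR and NOMAD_TOKEN environment variables and shows which server is the current leader.

$ nomad server members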
You can navigate to the Nomad UI in your web browser with the link in the post-setup.sh script output. From there, log in with the bootstrap token saved in the NOMAD_TOKEN environment variable by setting the Secret ID to the token's value. Then, click on the Clients page from the sidebar navigation.
When you are finished, run terraform destroy to remove the provisioned infrastructure. Respond yes to the prompt to confirm removal.
$ terraform destroy -var-file=variables.hcl

# ...

aws_instance.server: Destruction complete after 30s
aws_instance.server: Still destroying... [id=i-017defd36b45408c1, 30s elapsed]
aws_instance.server: Destruction complete after 30s
aws_iam_instance_profile.instance_profile: Destroying... [id=nomad20220613201917520400000002]
aws_security_group.primary: Destroying... [id=sg-0ffdf8214d5fc85b2]
aws_iam_instance_profile.instance_profile: Destruction complete after 0s
aws_iam_role.instance_role: Destroying... [id=nomad20220613201916761200000001]
aws_iam_role.instance_role: Destruction complete after 0s
aws_security_group.primary: Destruction complete after 0s
aws_security_group.server_lb: Destroying... [id=sg-016a74cc79f3f2826]
aws_security_group.server_lb: Destruction complete after 1s

Destroy complete! Resources: 14 destroyed.
Your AWS account still has the AMI and its S3-stored snapshots, which you may be charged for depending on your other usage. Delete the AMI and snapshots stored in your S3 buckets.
Remember to delete the AMI images and snapshots in the region where you created them. If you did not update the region variable in the variables.hcl file, they are in the us-east-1 region.
In your AWS account, deregister the AMI:
- Select the AMI
- Click the Actions button
- Click the Deregister AMI option
- Click the Deregister AMI button to confirm that you want to deregister the AMI when prompted
Delete the snapshots:
- Select the snapshot
- Click the Actions button
- Click the Delete Snapshot option
- Click the Delete button to confirm that you want to delete the snapshot when prompted
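Alternatively, if you have the AWS CLI installed and configured, the following sketch performs the same cleanup from your terminal. The AMI ID is the example value from this tutorial, the snapshot ID is a placeholder, and the snapshot lookup assumes the snapshot descriptions reference the AMI ID.

# Deregister the AMI (use the ID from your own Packer build)
$ aws ec2 deregister-image --image-id ami-0445eeea5e1406960

# Find the snapshots created for the AMI ...
$ aws ec2 describe-snapshots \
    --owner-ids self \
    --filters "Name=description,Values=*ami-0445eeea5e1406960*" \
    --query 'Snapshots[*].SnapshotId' \
    --output text

# ... and delete each returned snapshot (the ID below is a placeholder)
$ aws ec2 delete-snapshot --snapshot-id snap-0123456789abcdef0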
In this tutorial, you created a Nomad cluster on AWS with Consul and ACLs enabled. From here, you may want to:
For more information, check out the following resources.