This tutorial will guide you through deploying a Nomad cluster with access control lists (ACLs) enabled on GCP. Consider checking out the cluster setup overview first as it covers the contents of the code repository used in this tutorial.
For this tutorial, you will need:
- Packer 1.7.7 or later installed locally
- Terraform 1.2.0 or later installed locally
- Nomad 1.3.3 or later installed locally
- A GCP account and the gcloud CLI tool installed locally
Note: This tutorial creates GCP resources that may not qualify for the GCP free tier. Be sure to follow the Cleanup process at the end of this tutorial so you don't incur unnecessary charges.
The cluster setup code repository contains configuration files for creating a Nomad cluster on GCP. It uses Consul for the initial setup of the Nomad servers and clients and enables ACLs for both Consul and Nomad.
Clone the code repository.
$ git clone https://github.com/hashicorp/learn-nomad-cluster-setup
Navigate to the cloned repository folder.
$ cd learn-nomad-cluster-setup
Check out the v0.2 tag of the repository as a local branch named nomad-cluster.
$ git checkout v0.2 -b nomad-cluster
Navigate to the gcp folder.
$ cd gcp
Log in to GCP with gcloud and follow the prompts to complete the login process.
$ gcloud auth login
Your browser has been opened to visit:

    https://accounts.google.com/o/oauth2/auth?response_type=code[...]

You are now logged in as [YOUR_GCP_ACCOUNT].
Your current project is [YOUR_CURRENT_PROJECT]. You can change this setting by running:
  $ gcloud config set project PROJECT_ID
Set the project, region, and zone configurations in gcloud.
Tip: If you already have a project in your GCP account, these configurations will be set for you as part of the login step. If not, set them with the gcloud config set command after creating a project.
Set project to the project ID of your preferred project.
$ gcloud config set project <GCP_PROJECT_ID>
Set region to the associated region.
$ gcloud config set compute/region <GCP_REGION>
Set zone to the associated zone. Note that the zone must be in the region set above.
$ gcloud config set compute/zone <GCP_ZONE>
There are two main steps to creating the cluster: building a Google Compute Engine image with Packer and provisioning the cluster infrastructure with Terraform. Both Packer and Terraform require that some configurations be set before they run, and these configuration variables are defined in the variables.hcl file.
Rename variables.hcl.example to variables.hcl and open it in your text editor.
$ mv variables.hcl.example variables.hcl
The .gitignore file in the example repo is set to ignore variables.hcl, so your configurations will not be pushed if you choose to commit your work to a source code repository. Do not commit sensitive data like credentials to your source code repository.
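If you want to confirm that Git will ignore the file before you commit any work, the git check-ignore command prints the ignore rule that matches it:
$ git check-ignore -v variables.hcl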
Update the project, region, and zone variables with the values from gcloud by first listing the configurations and then copying the values into variables.hcl. In this example, those values are hc-3ff63253e6a54756b207e4d4727, us-east1, and us-east1-b.
$ gcloud config list
[compute]
region = us-east1
zone = us-east1-b
[core]
account = [GCP_ACCOUNT]
disable_usage_reporting = True
project = hc-3ff63253e6a54756b207e4d4727
Update the retry_join variable with the project ID by replacing the GCP_PROJECT_ID placeholder in the value with the same project ID as the project variable above. Save the file.
# Packer variables (all are required)
project = "hc-3ff63253e6a54756b207e4d4727"
region  = "us-east1"
zone    = "us-east1-b"

# Terraform variables (all are required)
retry_join = "project_name=hc-3ff63253e6a54756b207e4d4727 provider=gce tag_value=auto-join"
# ...
Initialize Packer to download the required plugins.
packer init returns no output when it finishes successfully.
$ packer init image.pkr.hcl
Then, build the image and provide the variables file with the -var-file flag.
Tip: Packer will print out a Warning: Undefined variable message notifying you that some variables were set in variables.hcl but not used. This is only a warning and the build will still complete successfully.
$ packer build -var-file=variables.hcl image.pkr.hcl
googlecompute.hashistack: output will be in this color.

==> googlecompute.hashistack: Checking image does not exist...
==> googlecompute.hashistack: Creating temporary RSA SSH key for instance...
==> googlecompute.hashistack: Using image: ubuntu-minimal-1804-bionic-v20221026
==> googlecompute.hashistack: Creating instance...
    googlecompute.hashistack: Loading zone: us-east1-b
# ...
==> googlecompute.hashistack: Creating image...
==> googlecompute.hashistack: Deleting disk...
    googlecompute.hashistack: Disk has been deleted!
Build 'googlecompute.hashistack' finished after 4 minutes 31 seconds.

==> Wait completed after 4 minutes 31 seconds

==> Builds finished. The artifacts of successful builds are:
--> googlecompute.hashistack: A disk image was created: hashistack-20221121163551
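If you want to confirm that the image is now available in your project before moving on, one way (assuming the gcloud configuration from earlier in this tutorial) is to filter the project's images by the hashistack name prefix:
$ gcloud compute images list --filter="name~hashistack"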
Open variables.hcl in your text editor again.
Update the machine_image variable with the value output from the Packer build. In this example, the value would be hashistack-20221121163551.
Then, open your terminal and use the built-in uuid() function of the Terraform console to generate two new UUIDs for the nomad_consul_token_id and nomad_consul_token_secret values.
$ terraform console
> uuid()
"a90a52ae-bcb7-e38a-5fe9-6ac084b37078"
> uuid()
"d14d6a73-a0f1-508d-6d64-6b0f79e5cb44"
> exit
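Any UUID generator works here; for example, if the uuidgen utility is available on your system, you can use it instead of the Terraform console:
$ uuidgen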
Copy these UUIDs and update the nomad_consul_token_id and nomad_consul_token_secret variables with the UUID values. Save the file.
In this example, the value for nomad_consul_token_id would be a90a52ae-bcb7-e38a-5fe9-6ac084b37078 and the value for nomad_consul_token_secret would be d14d6a73-a0f1-508d-6d64-6b0f79e5cb44.
# Packer variables (all are required)
project = "hc-3ff63253e6a54756b207e4d4727"
region  = "us-east1"
zone    = "us-east1-b"

# Terraform variables (all are required)
retry_join = "project_name=hc-3ff63253e6a54756b207e4d4727 provider=gce tag_value=auto-join"
machine_image = "hashistack-20221121163551"
nomad_consul_token_id = "a90a52ae-bcb7-e38a-5fe9-6ac084b37078"
nomad_consul_token_secret = "d14d6a73-a0f1-508d-6d64-6b0f79e5cb44"
# ...
The remaining variables in variables.hcl are optional.
- allowlist_ip is a CIDR range specifying which IP addresses are allowed to access the Consul and Nomad UIs on ports 8500 and 4646 as well as SSH on port 22. The default value of 0.0.0.0/0 will allow traffic from everywhere.
Note: We recommend that you update allowlist_ip to your machine's IP address or a range of trusted IPs (see the example snippet after this list).
- name is a prefix for naming the GCP resources.
- server_instance_type and client_instance_type are the virtual machine instance types for the cluster server and client nodes, respectively.
- server_count and client_count are the number of server and client nodes to create, respectively.
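As an illustration, an override section in variables.hcl might look like the following; the IP address, name, and counts below are placeholder values, not the repository defaults.
# Optional variables (example values only)
allowlist_ip = "203.0.113.17/32"   # restrict UI and SSH access to your own IP
name         = "nomad-acl-demo"
server_count = 3
client_count = 3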
Initialize Terraform to download required plugins and set up the workspace.
$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/google...
- Installing hashicorp/google v4.43.1...
- Installed hashicorp/google v4.43.1 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!
Provision the resources and provide the variables file with the -var-file flag. Respond yes to the prompt to confirm the operation. The provisioning takes several minutes. Once complete, the Consul and Nomad web interfaces will become available.
$ terraform apply -var-file=variables.hcl
# ...
Plan: 10 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + IP_Addresses                  = (known after apply)
  + consul_bootstrap_token_secret = "123e4567-e89b-12d3-a456-426614174000"
  + lb_address_consul_nomad       = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

# ...

Apply complete! Resources: 10 added, 0 changed, 0 destroyed.

Outputs:

IP_Addresses = <<EOT
Client public IPs: 184.108.40.206, 220.127.116.11, 18.104.22.168

Server public IPs: 22.214.171.124, 126.96.36.199, 188.8.131.52

The Consul UI can be accessed at http://184.108.40.206:8500/ui
with the bootstrap token: d14d6a73-a0f1-508d-6d64-6b0f79e5cb44
EOT
consul_bootstrap_token_secret = "d14d6a73-a0f1-508d-6d64-6b0f79e5cb44"
lb_address_consul_nomad = "http://220.127.116.11"
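The apply output scrolls by quickly; if you need the load balancer address or the Consul bootstrap token again later, you can re-read them from the Terraform state at any time:
$ terraform output lb_address_consul_nomad
$ terraform output consul_bootstrap_token_secret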
Verify the services are in a healthy state. Navigate to the Consul UI in your web browser with the link in the Terraform output.
Click on the Log in button and use the bootstrap token secret consul_bootstrap_token_secret from the Terraform output to log in.
Click on the Nodes page from the sidebar navigation. There are six healthy nodes, including three Consul servers and three Consul clients created with Terraform.
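If you prefer the command line, you can run a similar check with the Consul CLI from your workstation. This sketch assumes the consul binary is installed locally and reuses the load balancer address and bootstrap token from the Terraform outputs:
$ export CONSUL_HTTP_ADDR=$(terraform output -raw lb_address_consul_nomad):8500
$ export CONSUL_HTTP_TOKEN=$(terraform output -raw consul_bootstrap_token_secret)
$ consul members
If everything is healthy, the output should list all six nodes with a status of alive.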
Warning: If the nomad.token file already exists from a previous run, the script won't work until the token file has been deleted. Delete the file manually and re-run the script, or use rm nomad.token && ./post-setup.sh.
Note: It may take some time for the setup scripts to complete and for the Nomad user token to become available in the Consul KV store. If the post-setup.sh script doesn't work the first time, wait a couple of minutes and try again.
Run the post-setup.sh script.
$ ./post-setup.sh
The Nomad user token has been saved locally to nomad.token and deleted from the Consul KV store.

Set the following environment variables to access your Nomad cluster with the user token created during setup:

export NOMAD_ADDR=$(terraform output -raw lb_address_consul_nomad):4646
export NOMAD_TOKEN=$(cat nomad.token)

The Nomad UI can be accessed at http://nomad-server-lb-2108864054.us-east-1.elb.amazonaws.com:4646/ui
with the bootstrap token: a3376a1d-58ef-b21a-14cd-da31b2c14292
Run the export commands from the output.
$ export NOMAD_ADDR=$(terraform output -raw lb_address_consul_nomad):4646 && \
    export NOMAD_TOKEN=$(cat nomad.token)
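As an optional sanity check, you can ask Nomad to describe the token you just exported; this assumes the nomad CLI from the prerequisites is on your PATH:
$ nomad acl token self
The output should include the token's accessor ID and policies, confirming that both NOMAD_ADDR and NOMAD_TOKEN are set correctly.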
Finally, verify connectivity to the cluster with nomad node status.
$ nomad node status
ID        DC   Name              Class   Drain  Eligibility  Status
a945787c  dc1  ip-172-31-94-155  <none>  false  eligible     ready
0582714a  dc1  ip-172-31-81-146  <none>  false  eligible     ready
c52c4f14  dc1  ip-172-31-94-66   <none>  false  eligible     ready
You can navigate to the Nomad UI in your web browser with the link in the post-setup.sh script output. From there, log in with the bootstrap token saved in the NOMAD_TOKEN environment variable by setting the Secret ID to the token's value and clicking on the Clients page from the sidebar navigation.
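The Clients page shows the client nodes; to verify the server side of the cluster from the CLI as well, you can list the Nomad servers and see which one is the current leader:
$ nomad server members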
Run terraform destroy to remove the provisioned infrastructure. Respond yes to the prompt to confirm removal.
$ terraform destroy -var-file=variables.hcl
# ...
Plan: 0 to add, 0 to change, 10 to destroy.

Changes to Outputs:
  - IP_Addresses                  = <<-EOT
        Client public IPs: 18.104.22.168, 22.214.171.124, 126.96.36.199

        Server public IPs: 188.8.131.52, 184.108.40.206, 220.127.116.11

        The Consul UI can be accessed at http://18.104.22.168:8500/ui
        with the bootstrap token: d14d6a73-a0f1-508d-6d64-6b0f79e5cb44
    EOT -> null
  - consul_bootstrap_token_secret = "d14d6a73-a0f1-508d-6d64-6b0f79e5cb44" -> null
  - lb_address_consul_nomad       = "http://22.214.171.124" -> null

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

# ...
google_compute_instance.server: Destruction complete after 51s
google_compute_instance.client: Destruction complete after 51s
google_compute_instance.server: Destruction complete after 51s
google_compute_instance.client: Destruction complete after 51s
google_compute_instance.server: Destruction complete after 51s
google_compute_instance.client: Destruction complete after 51s
google_compute_network.hashistack: Destruction complete after 52s

Destroy complete! Resources: 10 destroyed.
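If you want to double-check that nothing was left behind, you can list the Compute Engine instances in the project; assuming the project contains no other workloads, the cluster's server and client instances should no longer appear:
$ gcloud compute instances list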
In this tutorial you created a Nomad cluster on GCP with Consul and ACLs enabled. From here, you may want to:
For more information, check out the following resources.