Boundary
Dynamic host catalogs on GCP
- 20min
- |
- BoundaryBoundary
Deployment
Dynamic updates to host catalogs is an important feature that sets Boundary apart from traditional access methods that rely on manual target configuration. Dynamic host catalogs enable tight integrations with major cloud providers for seamlessly onboarding cloud tenant identities, roles, and targets.
Enabling automated discovery of target hosts and services ensures that hosts and host catalogs are consistently up-to-date. This critical workflow offers access-on-demand and eliminates the need to manually configure targets for dynamic, cloud-based infrastructure.
This tutorial demonstrates configuring a dynamic host catalog using Google Cloud Platform (GCP).
Dynamic hosts catalog overview
- Get setup
- Dynamic host catalogs background
- Set up the cloud VMs
- Build a GCP host catalog
- Verify catalog membership
Prerequisites
A Boundary binary greater than 0.19.0 in your
PATH
.This tutorial assumes you can connect to an HCP Boundary cluster, a Boundary Enterprise cluster, or launch Boundary in dev mode.
A Google Cloud Platform account. This tutorial requires the creation of new cloud resources and will incur costs associated with the deployment and management of these resources.
Installing the gcloud CLI is required for this tutorial.The executable must be available within your
PATH
.Installing Terraform 0.14.9 or greater is required to deploy the lab environment for this tutorial. The binary must be available in your
PATH
.
Get setup
In this tutorial, you will test dynamic host catalog integrations using HCP Boundary, a Boundary Enterprise cluster, or by running a Boundary controller locally using Boundary Community Edition and dev mode.
Select a Deployment model for the tutorial in the upper-right corner of the screen:
- HCP Boundary
- Dev mode
- Enterprise
The HCP Quickstart tutorials gives an overview of getting started with an HCP Boundary cluster.
If you have an HCP Boundary cluster deployed, you can review the Access HCP Boundary tutorial for an overview of configuring your local machine to authenticate with your HCP cluster.
This tutorial provides CLI, Admin UI, and Terraform workflows for setting up dynamic updates to host catalogs with GCP. The workflow you select is for configuring Boundary, after you deploy the lab environment with Terraform.
To proceed with the UI workflow:
Open the Admin UI
Enter the admin username and password you created when you deployed the new instance and click Authenticate.
You are now logged into your HCP Boundary instance's Global scope in the Admin UI. This is the default scope for all new Boundary clusters.
To proceed with the CLI workflow:
Log into the Boundary Admin UI using your admin credentials.
In the Boundary web UI, click Orgs in the left navigation menu to return to the global scope, and then click Auth Methods.
Click the copy icon for the Password auth method.
In your terminal set an environment variable named
BOUNDARY_AUTH_METHOD_ID
to the copied ID.$ export BOUNDARY_AUTH_METHOD_ID=<auth-method-id>
Setting the auth method ID as an environment variable defines the password auth method as the default when you log in with the CLI.
Close the Boundary web UI.
Return to the HCP web Portal Boundary page, then click the copy icon for the Cluster URL in the Getting started with Boundary section.
In your terminal, set the
BOUNDARY_ADDR
environment variable to the copied URL.$ export BOUNDARY_ADDR=<YOUR_BOUNDARY_ADDRESS>
Log in with the administrator credentials you created when you deployed the HCP Boundary instance. Ensure that the
BOUNDARY_ADDR
andBOUNDARY_AUTH_METHOD_ID
environment variables are set, then runboundary authenticate
. Enter your password at thePlease enter the password (it will be hidden):
prompt.$ boundary authenticate Please enter the login name (it will be hidden): Please enter the password (it will be hidden): Authentication information: Account ID: acctpw_VOeNSFX8pQ Auth Method ID: ampw_wxzojlKJLN Expiration Time: Mon, 13 Feb 2023 12:35:32 MST User ID: u_1vUkf5fPs9 The token was successfully stored in the chosen keyring and is not displayed here.
You are now logged into your HCP Boundary instance's Global scope via the CLI. This is the default scope for all new HCP Boundary clusters.
To proceed with the Terraform workflow you will need to set up the Boundary Terraform provider.
You will need the following configuration values:
addr
(from HCP portal)auth_method_id
(from Boundary Admin UI)password_auth_method_login_name
(from initial Boundary Cluster creation)password_auth_method_password
(from initial Boundary Cluster creation)scope_id
(global scope from Boundary Admin UI)
The password_auth_method_login_name
and password_auth_method_password
values are created when first setting up HCP Boundary, and you can gather the others from the HCP portal or the Boundary Admin UI.
Dynamic host catalogs background
In a cloud operating model, infrastructure resources are highly dynamic and ephemeral. Boundary does not require an on-target agent or daemon to discover target virtual machine hosts, which are challenging to maintain at scale. Instead, Boundary relies on an external entity, such as manual configuration by an administrator or IaC (infrastructure as code) application like Terraform, to ensure host definitions route to the appropriate network location. Many other secure access solutions follow this pattern.
Dynamic host catalog plugins are an alternative way to automate the discovery and configuration of Boundary hosts and targets by delegating the host registry and their connection information to a cloud infrastructure provider. Administrators supply credentials for the catalog provider and a set of tag-based rules for discovering resources in the catalog. For example, "this catalog contains VM instance types in GCP’s us-east1 region within the Engineering subscription". This model does not rely on IaC target discovery or agent-based target discovery.
Boundary uses Go-Plugin to implement a plugin model for expanding the dynamic host catalog ecosystem. Plugins enable a future ecosystem of partner and community contributed integrations across each step in the Boundary access workflow.
Host tag filtering
To maintain a dynamic host catalog, you should tag hosts in a logical way that enables sorting into host sets identifiable by filters.
For example, this tutorial configures hosts on GCP using the following tags:
Boundary hosts will be sorted into any host catalogs and host sets you configure using these filtering attributes.
GCP credential types
You can select from three types of credential configurations for setting up access to your GCP account for Boundary:
Select a credential type to continue.
Service accounts are special user accounts used to authenticate applications or services, rather than individual users. They allow automated access to GCP resources and APIs without requiring users to directly manage credentials.
To set up service account credentials for this tutorial, you will:
- Deploy the host VMs using the provided sample code.
- Enable the IAM and Service Account Credentials APIs in your project.
- Create a service account and download the private key.
- Configure a Boundary dynamic host catalog.
Service account impersonation allows an authenticated principal (like a user or another service account) to act on behalf of a service account. It grants temporary or elevated access to the permissions of a service account without permanently changing IAM roles.
To set up service account impersonation for this tutorial, you will:
- Deploy the host VMs using the provided sample code.
- Enable the IAM and Service Account Credentials APIs in your project.
- Create a base service account for Boundary.
- Create a target service account to impersonate.
- Configure a Boundary dynamic host catalog.
Note
To configure Application Default Credentials (ADC), you should run Boundary on a GCP VM. For this tutorial, a Boundary worker deployed on GCP connects to your Boundary controller to grant access to host VMs using ADC. In other environments, you can run a Boundary controller or worker on GCP to grant access using ADC.
Application Default Credentials (ADC) enables a Boundary worker to authenticate to GCP.
In this tutorial, you will configure ADC for a virtual machine (VM) running on Google Compute Engine that you configure as a Boundary worker.
To set up service account credentials for this tutorial, you will:
- Deploy the host VMs and worker VM using the provided sample code.
- Enable the IAM and Service Account Credentials APIs in your project.
- Configure the worker VM and connect it to Boundary.
- Assign the service account to your worker VM.
- Configure a Boundary dynamic host catalog.
Set up cloud VMs
Warning
This tutorial deploys cloud VMs to test host catalog plugin configuration. You are responsible for any costs incurred by following the steps in this tutorial. Recommendations for destroying the cloud resources created in this tutorial are in the Cleanup and teardown section.
You need an GCP account to set up the Boundary GCP host plugin.
This tutorial enables configuration of the test VM hosts using Terraform.
You need access to a GCP account and sample project to set up the GCP hosts plugin for Boundary. If you don't have an account, sign up for GCP. A free account is suitable for the steps outlined in this tutorial, but please note that you are responsible for any charges incurred by following the steps in this tutorial.
The prerequisites for setting up the learning environment are:
- Terraform 0.14.9 or greater or greater is installed
- An active GCP account
- The GCP CLI is installed and available in your
PATH
.
Terraform needs to perform the following tasks to set up the lab environment:
- Deploy and tag the host set Virtual Machines in GCP.
- Configure an SSH key for the host VMs (optional).
- Configure networking permissions for the host VMs.
Configure the gcloud CLI
Authenticate to your GCP account.
$ gcloud auth login
Check the configured project for the CLI:
$ gcloud config get-value project
hc-26fb1119fccb4f0081b121xxxxx
If the correct project is defined, take no action. To change the active project, execute gcloud config set project YOUR_PROJECT
.
This command may return your project name, such as test-project
.
If you have the project name but still need the ID, execute the following command to get the project ID:
$ gcloud projects list --filter="name:test-project" --format="value(projectId)"
hc-26fb1119fccb4f0081b121xxxxx
Export the project ID as the GCP_PROJECT_ID
environment variable:
$ export GCP_PROJECT_ID="hc-26fb1119fccb4f0081b121xxxxx"
Authenticate to your GCP account using the application-default login. This enables your shell session to interact with GCP using their SDK. You must authenticate using this method to deploy Terraform.
$ gcloud auth application-default login
Configure the lab environment
This tutorial assumes you are working out of the home directory ~/
, but you can use any working directory you want for the following steps.
Clone the example code for this tutorial into your working directory.
$ git clone https://github.com/hashicorp-education/learn-boundary-cloud-host-catalogs
Navigate into the gcp/terraform
directory.
$ cd learn-boundary-cloud-host-catalogs/gcp/terraform
Examine the Terraform configuration main.tf
file. It configures the google
provider and sets up the credentials Boundary will use to authenticate to GCP.
Configure an SSH credential (required for ADC)
You can optionally configure an SSH credential to enable authentication to the host VMs configured for this tutorial. This step is not required to set up a dynamic host catalog.
Note
You must configure an SSH credential for the ADC workflow.
If you want to log into the host VMs after provisioning them with Terraform, create an SSH credential using the following documentation. If you do not want to create a keypair, skip to the Configure Terraform section.
Refer to the Create SSH keys GCP documentation to create a new keypair.
Name the username gcpuser
. For example, on a Linux machine with user admin
:
$ ssh-keygen -t rsa -f /home/admin/.ssh/gcpuser -C gcpuser
Follow the steps in the GCP SSH keys documentation linked above to create a keyfile with another operating system, or using the GCP Console UI.
After you create the new keypair and have access to the private key locally, continue to the next section.
Configure Terraform
Open the main.tf
file in your code editor.
Locate the following resources and update the values for the desired GCP project
ID, GCP region
, GCP zone
, and ssh_pub_key_file
path.
Note
Providing the ssh_pub_key_file
is optional for this workflow.
variable "project_id" {
default = "hc-26fb1119fccb4f0081b121xxxxx"
}
variable "region" {
default = "us-central1"
}
variable "zone" {
default = "us-central1-a"
}
variable "ssh_pub_key_file" {
## Optional SSH public key file path for access to the VMs
## This is required if using GCP Application Default Credentials (ADC)
description = "Path to SSH public key for the VM"
default = "/Users/username/.ssh/gcpuser.pub"
}
Save the main.tf
file.
Now initialize the Terraform plan.
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/google...
- Installing hashicorp/google v6.38.0...
- Installed hashicorp/google v6.38.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Deploy the virtual machine hosts
Now you will configure and deploy the host VMs to test the dynamic host catalog integration.
Deploy the Terraform configuration using terraform apply
.
$ terraform apply --auto-approve
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# google_compute_address.vm_ip[0] will be created
+ resource "google_compute_address" "vm_ip" {
+ address = (known after apply)
+ address_type = "EXTERNAL"
+ creation_timestamp = (known after apply)
+ effective_labels = {
+ "goog-terraform-provisioned" = "true"
}
+ id = (known after apply)
+ label_fingerprint = (known after apply)
+ name = "boundary-vm-1-ip"
+ network_tier = (known after apply)
+ prefix_length = (known after apply)
+ project = "hc-26fb1119fccb4f0081b121xxxxx"
+ purpose = (known after apply)
+ region = "us-central1"
+ self_link = (known after apply)
+ subnetwork = (known after apply)
+ terraform_labels = {
+ "goog-terraform-provisioned" = "true"
}
+ users = (known after apply)
}
# google_compute_address.vm_ip[1] will be created
+ resource "google_compute_address" "vm_ip" {
+ address = (known after apply)
...
... snip ...
...
Plan: 11 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ vm_public_ips = [
+ (known after apply),
+ (known after apply),
+ (known after apply),
+ (known after apply),
]
google_compute_address.vm_ip[2]: Creating...
google_compute_network.network: Creating...
google_compute_address.vm_ip[0]: Creating...
google_compute_address.vm_ip[1]: Creating...
google_compute_address.vm_ip[3]: Creating...
google_compute_address.vm_ip[0]: Creation complete after 4s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-1-ip]
google_compute_address.vm_ip[1]: Creation complete after 4s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-2-ip]
google_compute_network.network: Still creating... [10s elapsed]
google_compute_address.vm_ip[2]: Still creating... [10s elapsed]
google_compute_address.vm_ip[3]: Still creating... [10s elapsed]
google_compute_network.network: Creation complete after 12s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/networks/boundary-vm-network]
google_compute_firewall.allow_ssh: Creating...
google_compute_subnetwork.subnet: Creating...
google_compute_address.vm_ip[3]: Creation complete after 12s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-4-ip]
google_compute_address.vm_ip[2]: Creation complete after 12s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-3-ip]
google_compute_subnetwork.subnet: Still creating... [10s elapsed]
google_compute_firewall.allow_ssh: Still creating... [10s elapsed]
google_compute_firewall.allow_ssh: Creation complete after 11s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/firewalls/boundary-vm-allow-ssh]
google_compute_subnetwork.subnet: Still creating... [20s elapsed]
google_compute_subnetwork.subnet: Creation complete after 21s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/subnetworks/boundary-vm-subnet]
google_compute_instance.vm[0]: Creating...
google_compute_instance.vm[1]: Creating...
google_compute_instance.vm[2]: Creating...
google_compute_instance.vm[3]: Creating...
google_compute_instance.vm[3]: Still creating... [10s elapsed]
google_compute_instance.vm[0]: Still creating... [10s elapsed]
google_compute_instance.vm[1]: Still creating... [10s elapsed]
google_compute_instance.vm[2]: Still creating... [10s elapsed]
google_compute_instance.vm[2]: Creation complete after 18s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-3-production]
google_compute_instance.vm[0]: Still creating... [20s elapsed]
google_compute_instance.vm[1]: Still creating... [20s elapsed]
google_compute_instance.vm[3]: Still creating... [20s elapsed]
google_compute_instance.vm[3]: Creation complete after 27s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-4-production]
google_compute_instance.vm[0]: Creation complete after 28s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-1-dev]
google_compute_instance.vm[1]: Creation complete after 29s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-2-dev]
Apply complete! Resources: 11 added, 0 changed, 0 destroyed.
Outputs:
vm_public_ips = [
"35.223.89.185",
"35.226.254.30",
"35.202.149.44",
"34.172.0.177",
]
You can reference the Terraform outputs at any time by executing terraform output
.
Locate the following resources and update the values for the desired GCP project
ID, GCP region
, GCP zone
, and ssh_pub_key_file
path.
Note
Providing the ssh_pub_key_file
is optional for this workflow.
variable "project_id" {
default = "hc-26fb1119fccb4f0081b121xxxxx"
}
variable "region" {
default = "us-central1"
}
variable "zone" {
default = "us-central1-a"
}
variable "ssh_pub_key_file" {
## Optional SSH public key file path for access to the VMs
## This is required if using GCP Application Default Credentials (ADC)
description = "Path to SSH public key for the VM"
default = "/Users/username/.ssh/gcpuser.pub"
}
Save the main.tf
file.
Now initialize the Terraform plan.
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/google...
- Installing hashicorp/google v6.38.0...
- Installed hashicorp/google v6.38.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Deploy the virtual machine hosts
Now you will configure and deploy the host VMs to test the dynamic host catalog integration.
Deploy the Terraform configuration using terraform apply
.
$ terraform apply --auto-approve
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# google_compute_address.vm_ip[0] will be created
+ resource "google_compute_address" "vm_ip" {
+ address = (known after apply)
+ address_type = "EXTERNAL"
+ creation_timestamp = (known after apply)
+ effective_labels = {
+ "goog-terraform-provisioned" = "true"
}
+ id = (known after apply)
+ label_fingerprint = (known after apply)
+ name = "boundary-vm-1-ip"
+ network_tier = (known after apply)
+ prefix_length = (known after apply)
+ project = "hc-26fb1119fccb4f0081b121xxxxx"
+ purpose = (known after apply)
+ region = "us-central1"
+ self_link = (known after apply)
+ subnetwork = (known after apply)
+ terraform_labels = {
+ "goog-terraform-provisioned" = "true"
}
+ users = (known after apply)
}
# google_compute_address.vm_ip[1] will be created
+ resource "google_compute_address" "vm_ip" {
+ address = (known after apply)
...
... snip ...
...
Plan: 11 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ vm_public_ips = [
+ (known after apply),
+ (known after apply),
+ (known after apply),
+ (known after apply),
]
google_compute_address.vm_ip[2]: Creating...
google_compute_network.network: Creating...
google_compute_address.vm_ip[0]: Creating...
google_compute_address.vm_ip[1]: Creating...
google_compute_address.vm_ip[3]: Creating...
google_compute_address.vm_ip[0]: Creation complete after 4s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-1-ip]
google_compute_address.vm_ip[1]: Creation complete after 4s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-2-ip]
google_compute_network.network: Still creating... [10s elapsed]
google_compute_address.vm_ip[2]: Still creating... [10s elapsed]
google_compute_address.vm_ip[3]: Still creating... [10s elapsed]
google_compute_network.network: Creation complete after 12s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/networks/boundary-vm-network]
google_compute_firewall.allow_ssh: Creating...
google_compute_subnetwork.subnet: Creating...
google_compute_address.vm_ip[3]: Creation complete after 12s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-4-ip]
google_compute_address.vm_ip[2]: Creation complete after 12s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-3-ip]
google_compute_subnetwork.subnet: Still creating... [10s elapsed]
google_compute_firewall.allow_ssh: Still creating... [10s elapsed]
google_compute_firewall.allow_ssh: Creation complete after 11s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/firewalls/boundary-vm-allow-ssh]
google_compute_subnetwork.subnet: Still creating... [20s elapsed]
google_compute_subnetwork.subnet: Creation complete after 21s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/subnetworks/boundary-vm-subnet]
google_compute_instance.vm[0]: Creating...
google_compute_instance.vm[1]: Creating...
google_compute_instance.vm[2]: Creating...
google_compute_instance.vm[3]: Creating...
google_compute_instance.vm[3]: Still creating... [10s elapsed]
google_compute_instance.vm[0]: Still creating... [10s elapsed]
google_compute_instance.vm[1]: Still creating... [10s elapsed]
google_compute_instance.vm[2]: Still creating... [10s elapsed]
google_compute_instance.vm[2]: Creation complete after 18s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-3-production]
google_compute_instance.vm[0]: Still creating... [20s elapsed]
google_compute_instance.vm[1]: Still creating... [20s elapsed]
google_compute_instance.vm[3]: Still creating... [20s elapsed]
google_compute_instance.vm[3]: Creation complete after 27s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-4-production]
google_compute_instance.vm[0]: Creation complete after 28s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-1-dev]
google_compute_instance.vm[1]: Creation complete after 29s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-2-dev]
Apply complete! Resources: 11 added, 0 changed, 0 destroyed.
Outputs:
vm_public_ips = [
"35.223.89.185",
"35.226.254.30",
"35.202.149.44",
"34.172.0.177",
]
You can reference the Terraform outputs at any time by executing terraform output
.
Locate the following resources and update the values for the desired region
, zone
, ssh_pub_key_file
path, and project
ID.
Note
You must provide an ssh_pub_key_file
for the ADC workflow. You will use the key to access a Boundary worker VM later on.
main.tf
variable "gcp_project_id" {
## Replace with your GCP project ID, such as "hc-26fb1119fccb4f0081b121xxxxx"
default = "hc-26fb1119fccb4f0081b121xxxxx"
}
variable "region" {
default = "us-central1"
}
variable "zone" {
default = "us-central1-a"
}
variable "ssh_pub_key_file" {
## Optional SSH public key file path for access to the VMs
## This is required if using GCP Application Default Credentials (ADC)
description = "Path to SSH public key for the VM"
default = "/Users/username/.ssh/gcpuser.pub"
}
Scroll down to the bottom of the configuration file.
Uncomment the following lines:
main.tf
resource "google_compute_address" "worker_ip" {
name = "boundary-worker-ip"
region = var.gcp_region
}
resource "google_compute_instance" "worker" {
name = "boundary-worker"
machine_type = "e2-standard-2"
zone = var.gcp_zone
allow_stopping_for_update = true
tags = [
"boundary-worker",
]
labels = {
name = "boundary-worker"
service-type = "worker"
}
boot_disk {
initialize_params {
image = var.vm_image
type = "pd-standard"
}
auto_delete = true
}
network_interface {
network = google_compute_network.network.id
subnetwork = google_compute_subnetwork.subnet.id
access_config {
nat_ip = google_compute_address.worker_ip.address
}
}
## The following block is only for ADC configuration, after setting up
## the application-default-credentials.tf file.
## Uncomment the following lines after configuring the application-default-credentials.tf file.
# service_account {
# email = google_service_account.boundary_service_account.email
# scopes = ["compute-ro"]
# }
metadata = {ssh-keys = "${var.ssh_username}:${file(var.ssh_pub_key_file)}"}
}
output "worker_public_ip" {
value = google_compute_instance.worker.network_interface[0].access_config[0].nat_ip
}
output "worker_ssh_command" {
value = "ssh ${var.ssh_username}@${google_compute_instance.worker.network_interface[0].access_config[0].nat_ip} -i /path/to/gcpuser/private/key"
}
Leave the service_account
attribute commented out, for now.
Save the main.tf
file.
Now initialize the Terraform plan.
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/google...
- Installing hashicorp/google v6.38.0...
- Installed hashicorp/google v6.38.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Deploy the virtual machine hosts
Now you will configure and deploy the host VMs to test the dynamic host catalog integration.
Deploy the Terraform configuration using terraform apply
.
$ terraform apply --auto-approve
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# google_compute_address.vm_ip[0] will be created
+ resource "google_compute_address" "vm_ip" {
+ address = (known after apply)
+ address_type = "EXTERNAL"
+ creation_timestamp = (known after apply)
+ effective_labels = {
+ "goog-terraform-provisioned" = "true"
}
+ id = (known after apply)
+ label_fingerprint = (known after apply)
+ name = "boundary-1-dev-ip"
+ network_tier = (known after apply)
+ prefix_length = (known after apply)
+ project = "hc-26fb1119fccb4f0081b121xxxxx"
+ purpose = (known after apply)
+ region = "us-central1"
+ self_link = (known after apply)
+ subnetwork = (known after apply)
+ terraform_labels = {
+ "goog-terraform-provisioned" = "true"
}
+ users = (known after apply)
}
# google_compute_address.vm_ip[1] will be created
+ resource "google_compute_address" "vm_ip" {
+ address = (known after apply)
...
... snip ...
...
Plan: 13 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ ssh_commands = [
+ (known after apply),
+ (known after apply),
+ (known after apply),
+ (known after apply),
]
+ vm_public_ips = [
+ (known after apply),
+ (known after apply),
+ (known after apply),
+ (known after apply),
]
+ worker_public_ip = (known after apply)
+ worker_ssh_commands = (known after apply)
google_compute_address.vm_ip[2]: Creating...
google_compute_address.worker_ip: Creating...
google_compute_address.vm_ip[1]: Creating...
google_compute_network.network: Creating...
google_compute_address.vm_ip[0]: Creating...
google_compute_address.vm_ip[3]: Creating...
google_compute_network.network: Still creating... [10s elapsed]
google_compute_address.worker_ip: Still creating... [10s elapsed]
google_compute_address.vm_ip[3]: Still creating... [10s elapsed]
google_compute_address.vm_ip[0]: Still creating... [10s elapsed]
google_compute_address.vm_ip[2]: Still creating... [10s elapsed]
google_compute_address.vm_ip[1]: Still creating... [10s elapsed]
google_compute_address.worker_ip: Creation complete after 11s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-worker-ip]
google_compute_address.vm_ip[2]: Creation complete after 11s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-3-production-ip]
google_compute_address.vm_ip[1]: Creation complete after 11s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-2-dev-ip]
google_compute_address.vm_ip[3]: Creation complete after 12s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-4-production-ip]
google_compute_address.vm_ip[0]: Creation complete after 12s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-1-dev-ip]
google_compute_network.network: Still creating... [20s elapsed]
google_compute_network.network: Creation complete after 22s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/networks/boundary-vm-network]
google_compute_firewall.allow_ssh: Creating...
google_compute_subnetwork.subnet: Creating...
google_compute_subnetwork.subnet: Still creating... [10s elapsed]
google_compute_firewall.allow_ssh: Still creating... [10s elapsed]
google_compute_firewall.allow_ssh: Creation complete after 11s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/firewalls/boundary-vm-allow-ssh]
google_compute_subnetwork.subnet: Still creating... [20s elapsed]
google_compute_subnetwork.subnet: Creation complete after 21s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/subnetworks/boundary-vm-subnet]
google_compute_instance.worker: Creating...
google_compute_instance.vm[3]: Creating...
google_compute_instance.vm[0]: Creating...
google_compute_instance.vm[2]: Creating...
google_compute_instance.vm[1]: Creating...
google_compute_instance.vm[3]: Still creating... [10s elapsed]
google_compute_instance.worker: Still creating... [10s elapsed]
google_compute_instance.vm[0]: Still creating... [10s elapsed]
google_compute_instance.vm[2]: Still creating... [10s elapsed]
google_compute_instance.vm[1]: Still creating... [10s elapsed]
google_compute_instance.vm[1]: Still creating... [20s elapsed]
google_compute_instance.vm[2]: Still creating... [20s elapsed]
google_compute_instance.vm[3]: Still creating... [20s elapsed]
google_compute_instance.vm[0]: Still creating... [20s elapsed]
google_compute_instance.worker: Still creating... [20s elapsed]
google_compute_instance.vm[1]: Still creating... [30s elapsed]
google_compute_instance.vm[0]: Still creating... [30s elapsed]
google_compute_instance.worker: Still creating... [30s elapsed]
google_compute_instance.vm[2]: Still creating... [30s elapsed]
google_compute_instance.vm[3]: Still creating... [30s elapsed]
google_compute_instance.vm[1]: Still creating... [40s elapsed]
google_compute_instance.vm[3]: Still creating... [40s elapsed]
google_compute_instance.worker: Still creating... [40s elapsed]
google_compute_instance.vm[0]: Still creating... [40s elapsed]
google_compute_instance.vm[2]: Still creating... [40s elapsed]
google_compute_instance.vm[2]: Creation complete after 49s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-3-production]
google_compute_instance.vm[1]: Creation complete after 49s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-2-dev]
google_compute_instance.vm[0]: Still creating... [50s elapsed]
google_compute_instance.vm[3]: Still creating... [50s elapsed]
google_compute_instance.worker: Still creating... [50s elapsed]
google_compute_instance.vm[3]: Creation complete after 59s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-4-production]
google_compute_instance.vm[0]: Creation complete after 59s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-1-dev]
google_compute_instance.worker: Creation complete after 59s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-worker]
Apply complete! Resources: 13 added, 0 changed, 0 destroyed.
Outputs:
vm_public_ips = [
"34.30.127.200",
"35.238.222.243",
"34.172.9.52",
"34.132.201.101",
]
worker_public_ip = "34.68.65.22"
worker_ssh_command = "ssh gcpuser@34.68.65.22 -i /path/to/gcpuser/private/key"
You can reference the Terraform outputs at any time by executing terraform output
.
Configure GCP credentials
Boundary uses dynamic host catalogs to automatically discover GCP Compute Engine VM instances and add them as hosts. Boundary needs GCP credentials to maintain an up-to-date catalog registry.
You need to enable the IAM and IAM Service Account Credentials APIs to set up a credential method for Boundary.
Enable the IAM API
Navigate to the IAM API page.
If the API is already enabled, take no action.
Click Enable.
Enable the IAM Service Account Credentials API
Navigate to the IAM Service Account Credentials API page.
If the API is already enabled, take no action.
Click Enable.
Configure a credential type
You can authenticate Boundary to GCP using a service account, service impersonation, or GCP Application Default Credentials (ADC).
Select a credential type to continue.
Service accounts are special user accounts used to authenticate applications or services, rather than individual users. They allow automated access to GCP resources and APIs without requiring users to directly manage credentials.
To set up service account credentials for this tutorial, you will:
- Create a service account and download the private key.
- Format the private key for Boundary.
- Configure a Boundary dynamic host catalog.
Create a service account
You can configure a service account using the GCP cloud console UI, the gcloud CLI, or using Terraform.
Select a workflow to continue.
Create a new service account:
- Navigate to the IAM & Admin Service Accounts page.
- Click the name of the project you deployed your Boundary hosts to.
- Click Create service account.
- Fill in a service account name, such as
Boundary service account
. The Service account ID should be automatically created. You can optionally add a service account description. - Click Create and continue.
- Under the Permissions section, click the Select a role dropdown. Enter
roles/compute.viewer
into the filter, and select the Compute Viewer role. - Click the + Add another role button. Click the Select a role dropdown. Enter
roles/iam.serviceAccountKeyAdmin
into the filter, and select the Service Account Key Admin role. - Click Done.
- Verify that
Boundary service account
exists on the Service Accounts page.
Create the service account private key:
- From the Service Account page, click on the
Boundary service account
. Navigate to the Keys page. - Click the Add key dropdown, and select Create new key.
- Select the JSON key type, then click Create.
- Copy the Key ID field (such as
b990f6a2246bd12fd08d0bd4f6e5bc294d98da1a
). Save this value to use when setting up the host catalog later on. - The private key file is automatically downloaded to your local machine (such as
hc-d0932372bdc04876af2bbe8561e-987893b7c2d7.json
). - Follow the instructions below to format the private key.
The private key file may contain extra /n
characters and fields that can cause an error later on. Boundary needs the private key file to only contain the private key entry. You can remove these extra characters yourself, or use a tool like jq.
Remove an extra /n
characters using jq
by opening your terminal session and navigating to the directory where the private key was downloaded, such as ~/Downloads/
. Execute the following command, replacing my-gcp-private-key
with the name of your private key file:
$ jq -r '.private_key' my-gcp-private-key.json
When finished, the private key should have the following format:
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCp5TKBnZv7S6MA
qngKyf0EPDHsAfmqdRTGFF+Da1otkzldUK7fIYirAvf//9ZtT53OG6OJrYIwEO1r
UH7xjxvtXgv2uQ1++FnNDFkFTw1QThwoWCM6vKE97/N27/yx9c6CxTi4HVQBN8Pp
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
QBySlHc740JarQoo6eatThwPmDJzVp3x3wfYOHoD/N8AHi0axJ0UHZnRjeWyBx8f
7A37FQB36llLzsiERQQ3LI9Z7VHDiryOGvuxAQvWovzoamfXgKFk0Q8IwncZYHTV
UhzcSnq/AgMBAAECggEARSz+AhmnA8yZy7EdXKM+4sURvJtXSWkPstFjzJJe7vSl
pFGwSkkQqTT1vqYwbGTBB8VoMqxTuHeD/DCT545SHDWxYF2b2amMgvl2m7tC3AJZ
47FzcryQWLRFaRWxSdKgqc1c2VaTuEU4/33HWTtKEAeyR+aViDP+slHau/Yg9qn0
pGoyzWwQ8+/qofmMiNTNfM2qbPa735WC7VpsusSaxNIaiGbneGWPPkuKYVVUmS2X
2qPaIinuUXLfWSl5DsNZc6Mw5m+Q4JyBjmY+y2dQ/9e9biMbw1C3fhkiGZHQgtqm
cJfhgu1r4yVu9xxjS9hN/VyLPMAjb05Dg1Z03pXGZQKBgQDcsUAbkxO0Ylkhzqel
0KbVXaXiSXsdmAsjLv98jpuV+OxlLoesP8hC8lQt0SNp7Hme6rBO4in6UazBuR82
j9TNeMA1pjXLbvfxEn3B+8iUur+kasEHoeb1DHQmBrWo7WnwPdyi+66jn7EiLq3+
5wKM/uyHJwCNEy43smCxaVa95QKBgQDFE3wjtRHu26ur3Xe8hUx/44H+NQNPVgow
jJpZJ8vVOOvKIQ5vQf3fk5jBa03eGkp0MnKVu73NdCUKhr4DK6AZwX4+gPF3qJlS
M7YOFWkhbigUgRzazo1UJEgnrdRjnfrpA97olcnW4rSH+Il4CF610qejrSvjTe9m
7NhZheSr0wKBgFtOEgHWha55ifrMrtuRSZS42+qVEBScVO9HgHgd4AzaIaNy7rq6
4LWh4GXcQtSN+3teCXd5Znij1d+IIXvHYfloXc1UaKkzzey1A8Z/zuqJoMP7TsVD
nHQBpQQefoXXQ58bWO8tRYF4jiZgPahaFtoSlfUMk9PJ/bMZX5vGwxZpAoGAfK7q
MFEjqmHyh8aTNYOENblDiggSMwR1Z+fc0yE5dYoQq44kasFulB/2WhDAcA9kIYW1
NwRTfgPIV5ON7cWRAhqH+5Vqr9DMR9SNjvV+0Pa3htl03v4lLiHSQMBaijfuAbRA
OBhkXX6Kxye4GWf6O8Ct7QDnrmSlXRHlgyYR2Z8CgYEAuP1x/2MTBZYfP9lRQLim
NAfUMhxXiCfEhFkahdpEIjJgK+ekaosF24EP/merZPqqsi5Su/0cedgXr2Nk7yWE
AXolFjCKuXr0ti7tILp5vj9wH0Dy9/GVoKmNsygAiOd/c2OEgtwToJd1XmqCBtus
h+q+6phCiGqPTVCvJa0xxrk=
-----END PRIVATE KEY-----
Copy this private key value and save it for setting up the credential store later.
Create a new service account:
Open a new shell session and navigate back to the
learn-boundary-cloud-host-catalogs
directory.Authenticate to GCP to use the gcloud CLI.
$ gcloud auth login
Export your GCP project ID as the
GCP_PROJECT_ID
environment variable.$ export GCP_PROJECT_ID="hc-26fb1119fccb4f0081b121xxxxx"
Create a new service account called
boundary-service-account
.$ gcloud iam service-accounts create boundary-service-account \ --display-name="boundary-service-account"
Add the
roles/compute.viewer
role to the new service account.$ gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \ --member="serviceAccount:boundary-service-account@$GCP_PROJECT_ID.iam.gserviceaccount.com" \ --role="roles/compute.viewer"
Create the service account private key:
Create a new key within your current directory.
$ gcloud iam service-accounts keys create boundary-service-account-key-raw.json \ --iam-account=boundary-service-account@$GCP_PROJECT_ID.iam.gserviceaccount.com
Format the json key for Boundary.
The private key file may contain extra
/n
characters and fields that can cause an error later on. Boundary needs the private key file to only contain the private key entry. You can remove these extra characters yourself, or use a tool like jq.Remove an extra
/n
characters usingjq
by opening your terminal session and navigating to the directory where the private key was downloaded. Execute the following command, replacingboundary-service-account-key-raw.json
with the name of your private key file:$ jq -r '.private_key' boundary-service-account-key-raw.json | tee boundary-service-account-key
When finished, the private key should have the following format:
$ cat boundary-service-account-key -----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCp5TKBnZv7S6MA qngKyf0EPDHsAfmqdRTGFF+Da1otkzldUK7fIYirAvf//9ZtT53OG6OJrYIwEO1r UH7xjxvtXgv2uQ1++FnNDFkFTw1QThwoWCM6vKE97/N27/yx9c6CxTi4HVQBN8Pp xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx QBySlHc740JarQoo6eatThwPmDJzVp3x3wfYOHoD/N8AHi0axJ0UHZnRjeWyBx8f 7A37FQB36llLzsiERQQ3LI9Z7VHDiryOGvuxAQvWovzoamfXgKFk0Q8IwncZYHTV UhzcSnq/AgMBAAECggEARSz+AhmnA8yZy7EdXKM+4sURvJtXSWkPstFjzJJe7vSl pFGwSkkQqTT1vqYwbGTBB8VoMqxTuHeD/DCT545SHDWxYF2b2amMgvl2m7tC3AJZ 47FzcryQWLRFaRWxSdKgqc1c2VaTuEU4/33HWTtKEAeyR+aViDP+slHau/Yg9qn0 pGoyzWwQ8+/qofmMiNTNfM2qbPa735WC7VpsusSaxNIaiGbneGWPPkuKYVVUmS2X 2qPaIinuUXLfWSl5DsNZc6Mw5m+Q4JyBjmY+y2dQ/9e9biMbw1C3fhkiGZHQgtqm cJfhgu1r4yVu9xxjS9hN/VyLPMAjb05Dg1Z03pXGZQKBgQDcsUAbkxO0Ylkhzqel 0KbVXaXiSXsdmAsjLv98jpuV+OxlLoesP8hC8lQt0SNp7Hme6rBO4in6UazBuR82 j9TNeMA1pjXLbvfxEn3B+8iUur+kasEHoeb1DHQmBrWo7WnwPdyi+66jn7EiLq3+ 5wKM/uyHJwCNEy43smCxaVa95QKBgQDFE3wjtRHu26ur3Xe8hUx/44H+NQNPVgow jJpZJ8vVOOvKIQ5vQf3fk5jBa03eGkp0MnKVu73NdCUKhr4DK6AZwX4+gPF3qJlS M7YOFWkhbigUgRzazo1UJEgnrdRjnfrpA97olcnW4rSH+Il4CF610qejrSvjTe9m 7NhZheSr0wKBgFtOEgHWha55ifrMrtuRSZS42+qVEBScVO9HgHgd4AzaIaNy7rq6 4LWh4GXcQtSN+3teCXd5Znij1d+IIXvHYfloXc1UaKkzzey1A8Z/zuqJoMP7TsVD nHQBpQQefoXXQ58bWO8tRYF4jiZgPahaFtoSlfUMk9PJ/bMZX5vGwxZpAoGAfK7q MFEjqmHyh8aTNYOENblDiggSMwR1Z+fc0yE5dYoQq44kasFulB/2WhDAcA9kIYW1 NwRTfgPIV5ON7cWRAhqH+5Vqr9DMR9SNjvV+0Pa3htl03v4lLiHSQMBaijfuAbRA OBhkXX6Kxye4GWf6O8Ct7QDnrmSlXRHlgyYR2Z8CgYEAuP1x/2MTBZYfP9lRQLim NAfUMhxXiCfEhFkahdpEIjJgK+ekaosF24EP/merZPqqsi5Su/0cedgXr2Nk7yWE AXolFjCKuXr0ti7tILp5vj9wH0Dy9/GVoKmNsygAiOd/c2OEgtwToJd1XmqCBtus h+q+6phCiGqPTVCvJa0xxrk= -----END PRIVATE KEY-----
Rename the service-account.tf.bak
file in the learn-boundary-cloud-host-catalogs/gcp/terraform/
directory to service-account.tf
.
Open the service-account.tf
file in your text editor:
service-account.tf
## Create a new service account for Boundary
resource "google_service_account" "boundary_service_account" {
account_id = "boundary-service-account"
display_name = "Boundary Service Account"
}
## Grant the compute viewer role to the service account
resource "google_project_iam_member" "boundary_compute_viewer" {
project = var.gcp_project_id
role = "roles/compute.viewer"
member = google_service_account.boundary_service_account.member
}
## Create a key for the service account and save it locally
resource "google_service_account_key" "boundary_service_account_key" {
service_account_id = google_service_account.boundary_service_account.id
public_key_type = "TYPE_X509_PEM_FILE"
private_key_type = "TYPE_GOOGLE_CREDENTIALS_FILE"
key_algorithm = "KEY_ALG_RSA_2048"
}
Ensure this file matches the above configuration.
These Terraform resources use the hashicorp/google
provider to configure a service account for Boundary with the roles/compute.viewer
role, and then save a copy of the service account key to your local machine.
Review the following resources to learn more:
Apply the new configuration to set up the service account.
$ terraform apply --auto-approve
google_compute_address.vm_ip[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-3-ip]
google_compute_address.vm_ip[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-4-ip]
google_compute_address.vm_ip[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-1-ip]
google_compute_address.vm_ip[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-2-ip]
google_compute_network.network: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/networks/boundary-vm-network]
google_compute_firewall.allow_ssh: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/firewalls/boundary-vm-allow-ssh]
google_compute_subnetwork.subnet: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/subnetworks/boundary-vm-subnet]
google_compute_instance.vm[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-4-production]
google_compute_instance.vm[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-2-dev]
google_compute_instance.vm[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-3-production]
google_compute_instance.vm[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-1-dev]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# google_project_iam_member.boundary_compute_viewer will be created
+ resource "google_project_iam_member" "boundary_compute_viewer" {
+ etag = (known after apply)
+ id = (known after apply)
+ member = "serviceAccount:boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com"
+ project = "hc-26fb1119fccb4f0081b121xxxxx"
+ role = "roles/compute.viewer"
}
# google_service_account.boundary_service_account will be created
+ resource "google_service_account" "boundary_service_account" {
+ account_id = "boundary-service-account"
+ disabled = false
+ display_name = "Boundary Service Account"
+ email = "boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com"
+ id = (known after apply)
+ member = "serviceAccount:boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com"
+ name = (known after apply)
+ project = "hc-26fb1119fccb4f0081b121xxxxx"
+ unique_id = (known after apply)
}
# google_service_account_key.boundary_service_account_key will be created
+ resource "google_service_account_key" "boundary_service_account_key" {
+ id = (known after apply)
+ key_algorithm = "KEY_ALG_RSA_2048"
+ name = (known after apply)
+ private_key = (sensitive value)
+ private_key_type = "TYPE_GOOGLE_CREDENTIALS_FILE"
+ public_key = (known after apply)
+ public_key_type = "TYPE_X509_PEM_FILE"
+ service_account_id = (known after apply)
+ valid_after = (known after apply)
+ valid_before = (known after apply)
}
Plan: 3 to add, 0 to change, 0 to destroy.
google_service_account.boundary_service_account: Creating...
google_service_account.boundary_service_account: Still creating... [10s elapsed]
google_service_account.boundary_service_account: Creation complete after 11s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_project_iam_member.boundary_compute_viewer: Creating...
google_service_account_key.boundary_service_account_key: Creating...
google_project_iam_member.boundary_compute_viewer: Creation complete after 8s [id=hc-26fb1119fccb4f0081b121xxxxx/roles/compute.viewer/serviceAccount:boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_service_account_key.boundary_service_account_key: Still creating... [10s elapsed]
google_service_account_key.boundary_service_account_key: Creation complete after 12s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com/keys/8792ecca68798d7c2dbd517ff3e0e16bd2e62319]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Outputs:
vm_public_ips = [
"35.223.89.185",
"35.226.254.30",
"35.202.149.44",
"34.172.0.177",
]
Service account impersonation allows an authenticated principal (like a user or another service account) to act on behalf of a service account. It grants temporary or elevated access to the permissions of a service account without permanently changing IAM roles.
To set up service account impersonation for this tutorial, you will:
- Create a target service account to impersonate.
- Create a base service account for Boundary.
- Configure a Boundary dynamic host catalog.
Create the target and base service accounts
The target service account is the account that the base Boundary service account impersonates later on.
The target service account queries Compute Engine using the roles/compute.viewer
role. The base service account uses the service.AccountTokenCreator
role to create tokens for the target service account. Boundary needs a key for the base service account, which requests a list of GCP VM hosts by impersonating the target service account.
You can configure the service accounts using the GCP cloud console UI, the gcloud CLI, or using Terraform.
Select a workflow to continue.
Create a new target service account:
- Navigate to the IAM & Admin Service Accounts page.
- Click the name of the project you deployed your Boundary hosts to.
- Click Create service account.
- Fill in a service account name, such as
Boundary target SA
. The Service account ID should be automatically created. You can optionally add a service account description. - Click Create and continue.
- Under the Permissions section, click the Select a role dropdown. Enter
roles/compute.viewer
into the filter, and select the Compute Viewer role. - Click Done.
- Verify that
Boundary target SA
exists on the Service Accounts page. - Copy the Email field of the Boundary target service account, such as
boundary-target-sa@hc-66fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com
.
Create a new base service account:
- Navigate to the IAM & Admin Service Accounts page.
- Click the name of the project you deployed your Boundary hosts to.
- Click Create service account.
- Fill in a service account name, such as
Boundary base SA
. The Service account ID should be automatically created. You can optionally add a service account description. - Click Create and continue.
- Under the Permissions section, click the Select a role dropdown. Enter
roles/iam.serviceAccountTokenCreator
into the filter, and select the Service Account Token Creator role. - (Optional) You can optionally rotate the service account key by adding the
roles/iam.serviceAccountKeyAdmin
role. This tutorial does not enable credential rotation. Refer to the GCP dynamic hosts documentation to learn more about enabling credential rotation. - Click Continue.
- Under the Principals with access section, click the Service account users role field. Enter in the email address of the Boundary target service account that you copied earlier.
- Click Done.
- Verify that
Boundary base SA
exists on the Service Accounts page. - Copy the Email field of the Boundary base service account, such as
boundary-base-sa@hc-66fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com
.
Obtain an access key for the base service account:
- Navigate to the Service accounts page.
- Select the name of the project you deployed your Boundary hosts to.
- Click on the Boundary base SA service account.
- Select the Keys tab.
- Click the Add key button, then select Create new key.
- Select the JSON key type, then click Create.
- Copy the Key ID value (such as
b990f6a2246bd12fd08d0bd4f6e5bc294d98da1a
). Save this value to use when setting up the host catalog later on. - The private key file is automatically downloaded to your local machine (such as
hc-d0932372bdc04876af2bbe8561e-987893b7c2d7.json
).
The private key file may contain extra /n
characters and fields that can cause an error later on. Boundary needs the private key file to only contain the private key entry. You can remove these extra characters yourself, or use a tool like jq.
Remove an extra /n
characters using jq
by opening your terminal session and navigating to the directory where the private key was downloaded, such as ~/Downloads/
. Execute the following command, replacing my-gcp-private-key
with the name of your private key file:
$ jq -r '.private_key' my-gcp-private-key.json
When finished, the private key should have the following format:
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCp5TKBnZv7S6MA
qngKyf0EPDHsAfmqdRTGFF+Da1otkzldUK7fIYirAvf//9ZtT53OG6OJrYIwEO1r
UH7xjxvtXgv2uQ1++FnNDFkFTw1QThwoWCM6vKE97/N27/yx9c6CxTi4HVQBN8Pp
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
QBySlHc740JarQoo6eatThwPmDJzVp3x3wfYOHoD/N8AHi0axJ0UHZnRjeWyBx8f
7A37FQB36llLzsiERQQ3LI9Z7VHDiryOGvuxAQvWovzoamfXgKFk0Q8IwncZYHTV
UhzcSnq/AgMBAAECggEARSz+AhmnA8yZy7EdXKM+4sURvJtXSWkPstFjzJJe7vSl
pFGwSkkQqTT1vqYwbGTBB8VoMqxTuHeD/DCT545SHDWxYF2b2amMgvl2m7tC3AJZ
47FzcryQWLRFaRWxSdKgqc1c2VaTuEU4/33HWTtKEAeyR+aViDP+slHau/Yg9qn0
pGoyzWwQ8+/qofmMiNTNfM2qbPa735WC7VpsusSaxNIaiGbneGWPPkuKYVVUmS2X
2qPaIinuUXLfWSl5DsNZc6Mw5m+Q4JyBjmY+y2dQ/9e9biMbw1C3fhkiGZHQgtqm
cJfhgu1r4yVu9xxjS9hN/VyLPMAjb05Dg1Z03pXGZQKBgQDcsUAbkxO0Ylkhzqel
0KbVXaXiSXsdmAsjLv98jpuV+OxlLoesP8hC8lQt0SNp7Hme6rBO4in6UazBuR82
j9TNeMA1pjXLbvfxEn3B+8iUur+kasEHoeb1DHQmBrWo7WnwPdyi+66jn7EiLq3+
5wKM/uyHJwCNEy43smCxaVa95QKBgQDFE3wjtRHu26ur3Xe8hUx/44H+NQNPVgow
jJpZJ8vVOOvKIQ5vQf3fk5jBa03eGkp0MnKVu73NdCUKhr4DK6AZwX4+gPF3qJlS
M7YOFWkhbigUgRzazo1UJEgnrdRjnfrpA97olcnW4rSH+Il4CF610qejrSvjTe9m
7NhZheSr0wKBgFtOEgHWha55ifrMrtuRSZS42+qVEBScVO9HgHgd4AzaIaNy7rq6
4LWh4GXcQtSN+3teCXd5Znij1d+IIXvHYfloXc1UaKkzzey1A8Z/zuqJoMP7TsVD
nHQBpQQefoXXQ58bWO8tRYF4jiZgPahaFtoSlfUMk9PJ/bMZX5vGwxZpAoGAfK7q
MFEjqmHyh8aTNYOENblDiggSMwR1Z+fc0yE5dYoQq44kasFulB/2WhDAcA9kIYW1
NwRTfgPIV5ON7cWRAhqH+5Vqr9DMR9SNjvV+0Pa3htl03v4lLiHSQMBaijfuAbRA
OBhkXX6Kxye4GWf6O8Ct7QDnrmSlXRHlgyYR2Z8CgYEAuP1x/2MTBZYfP9lRQLim
NAfUMhxXiCfEhFkahdpEIjJgK+ekaosF24EP/merZPqqsi5Su/0cedgXr2Nk7yWE
AXolFjCKuXr0ti7tILp5vj9wH0Dy9/GVoKmNsygAiOd/c2OEgtwToJd1XmqCBtus
h+q+6phCiGqPTVCvJa0xxrk=
-----END PRIVATE KEY-----
Create the target service account:
Open a new shell session and navigate back to the
learn-boundary-cloud-host-catalogs
directory.Authenticate to GCP to use the gcloud CLI.
$ gcloud auth login
Export your GCP project ID as the
GCP_PROJECT_ID
environment variable.$ export GCP_PROJECT_ID="hc-26fb1119fccb4f0081b121xxxxx"
Create a new service account called
boundary-target-sa
.$ gcloud iam service-accounts create boundary-target-sa \ --display-name="boundary-target-sa"
Add the
roles/compute.viewer
role to the target service account.$ gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \ --member="serviceAccount:boundary-target-sa@$GCP_PROJECT_ID.iam.gserviceaccount.com" \ --role="roles/compute.viewer"
Create the base service account:
Create a new service account called
boundary-base-sa
.$ gcloud iam service-accounts create boundary-base-sa \ --display-name="boundary-base-sa"
Add the
roles/iam.serviceAccountTokenCreator
role to the base service account.$ gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \ --member="serviceAccount:boundary-base-sa@$GCP_PROJECT_ID.iam.gserviceaccount.com" \ --role="roles/iam.serviceAccountTokenCreator"
Create the base service account private key:
Create a new key within your current directory.
$ gcloud iam service-accounts keys create boundary-base-sa-key-raw.json \ --iam-account=boundary-base-sa@$GCP_PROJECT_ID.iam.gserviceaccount.com
Format the json key for Boundary.
The private key file may contain extra
/n
characters and fields that can cause an error later on. Boundary needs the private key file to only contain the private key entry. You can remove these extra characters yourself, or use a tool like jq.Remove an extra
/n
characters usingjq
by opening your terminal session and navigating to the directory where the private key was downloaded. Execute the following command, replacingboundary-base-sa-key-raw.json
with the name of your private key file:$ jq -r '.private_key' boundary-base-sa-key-raw.json | tee boundary-base-sa-key
When finished, the private key should have the following format:
$ cat boundary-base-sa-key -----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCp5TKBnZv7S6MA qngKyf0EPDHsAfmqdRTGFF+Da1otkzldUK7fIYirAvf//9ZtT53OG6OJrYIwEO1r UH7xjxvtXgv2uQ1++FnNDFkFTw1QThwoWCM6vKE97/N27/yx9c6CxTi4HVQBN8Pp xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx QBySlHc740JarQoo6eatThwPmDJzVp3x3wfYOHoD/N8AHi0axJ0UHZnRjeWyBx8f 7A37FQB36llLzsiERQQ3LI9Z7VHDiryOGvuxAQvWovzoamfXgKFk0Q8IwncZYHTV UhzcSnq/AgMBAAECggEARSz+AhmnA8yZy7EdXKM+4sURvJtXSWkPstFjzJJe7vSl pFGwSkkQqTT1vqYwbGTBB8VoMqxTuHeD/DCT545SHDWxYF2b2amMgvl2m7tC3AJZ 47FzcryQWLRFaRWxSdKgqc1c2VaTuEU4/33HWTtKEAeyR+aViDP+slHau/Yg9qn0 pGoyzWwQ8+/qofmMiNTNfM2qbPa735WC7VpsusSaxNIaiGbneGWPPkuKYVVUmS2X 2qPaIinuUXLfWSl5DsNZc6Mw5m+Q4JyBjmY+y2dQ/9e9biMbw1C3fhkiGZHQgtqm cJfhgu1r4yVu9xxjS9hN/VyLPMAjb05Dg1Z03pXGZQKBgQDcsUAbkxO0Ylkhzqel 0KbVXaXiSXsdmAsjLv98jpuV+OxlLoesP8hC8lQt0SNp7Hme6rBO4in6UazBuR82 j9TNeMA1pjXLbvfxEn3B+8iUur+kasEHoeb1DHQmBrWo7WnwPdyi+66jn7EiLq3+ 5wKM/uyHJwCNEy43smCxaVa95QKBgQDFE3wjtRHu26ur3Xe8hUx/44H+NQNPVgow jJpZJ8vVOOvKIQ5vQf3fk5jBa03eGkp0MnKVu73NdCUKhr4DK6AZwX4+gPF3qJlS M7YOFWkhbigUgRzazo1UJEgnrdRjnfrpA97olcnW4rSH+Il4CF610qejrSvjTe9m 7NhZheSr0wKBgFtOEgHWha55ifrMrtuRSZS42+qVEBScVO9HgHgd4AzaIaNy7rq6 4LWh4GXcQtSN+3teCXd5Znij1d+IIXvHYfloXc1UaKkzzey1A8Z/zuqJoMP7TsVD nHQBpQQefoXXQ58bWO8tRYF4jiZgPahaFtoSlfUMk9PJ/bMZX5vGwxZpAoGAfK7q MFEjqmHyh8aTNYOENblDiggSMwR1Z+fc0yE5dYoQq44kasFulB/2WhDAcA9kIYW1 NwRTfgPIV5ON7cWRAhqH+5Vqr9DMR9SNjvV+0Pa3htl03v4lLiHSQMBaijfuAbRA OBhkXX6Kxye4GWf6O8Ct7QDnrmSlXRHlgyYR2Z8CgYEAuP1x/2MTBZYfP9lRQLim NAfUMhxXiCfEhFkahdpEIjJgK+ekaosF24EP/merZPqqsi5Su/0cedgXr2Nk7yWE AXolFjCKuXr0ti7tILp5vj9wH0Dy9/GVoKmNsygAiOd/c2OEgtwToJd1XmqCBtus h+q+6phCiGqPTVCvJa0xxrk= -----END PRIVATE KEY-----
Rename the service-account-impersonation.tf.bak
file in the learn-boundary-cloud-host-catalogs/gcp/terraform/
directory to service-account-impersonation.tf
.
Open the service-account-impersonation.tf
file in your text editor:
service-account-impersonation.tf
## Create a new service account for Boundary
resource "google_service_account" "boundary_target_service_account" {
account_id = "boundary-target-sa"
display_name = "Boundary Target Service Account"
}
resource "google_service_account" "boundary_base_service_account" {
account_id = "boundary-base-sa"
display_name = "Boundary Base Service Account"
}
## Grant the compute viewer role to the target service account
resource "google_project_iam_member" "boundary_compute_viewer" {
project = var.gcp_project_id
role = "roles/compute.viewer"
member = google_service_account.boundary_target_service_account.member
}
## Grant the iam.serviceAccountTokenCreator role to the base service account
resource "google_project_iam_member" "boundary_SA_token_creator" {
project = var.gcp_project_id
role = "roles/iam.serviceAccountTokenCreator"
member = google_service_account.boundary_base_service_account.member
}
## Create a key for the base service account
resource "google_service_account_key" "boundary_base_service_account_key" {
service_account_id = google_service_account.boundary_base_service_account.id
public_key_type = "TYPE_X509_PEM_FILE"
private_key_type = "TYPE_GOOGLE_CREDENTIALS_FILE"
key_algorithm = "KEY_ALG_RSA_2048"
}
Ensure this file matches the above configuration.
These Terraform resources use the hashicorp/google
provider to configure a service account for Boundary with the roles/compute.viewer
role, and then save a copy of the service account key to your local machine.
Review the following resources to learn more:
Apply the new configuration to set up the service accounts.
$ terraform apply --auto-approve
google_compute_network.network: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/networks/boundary-vm-network]
google_compute_address.vm_ip[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-4-ip]
google_compute_address.vm_ip[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-1-ip]
google_compute_address.vm_ip[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-3-ip]
google_compute_address.vm_ip[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-2-ip]
google_compute_subnetwork.subnet: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/subnetworks/boundary-vm-subnet]
google_compute_firewall.allow_ssh: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/firewalls/boundary-vm-allow-ssh]
google_compute_instance.vm[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-3-production]
google_compute_instance.vm[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-4-production]
google_compute_instance.vm[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-1-dev]
google_compute_instance.vm[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-2-dev]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# google_project_iam_member.boundary_SA_token_creator will be created
+ resource "google_project_iam_member" "boundary_SA_token_creator" {
+ etag = (known after apply)
+ id = (known after apply)
+ member = "serviceAccount:boundary-base-sa2@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com"
+ project = "hc-26fb1119fccb4f0081b121xxxxx"
+ role = "roles/iam.serviceAccountTokenCreator"
}
# google_project_iam_member.boundary_compute_viewer will be created
+ resource "google_project_iam_member" "boundary_compute_viewer" {
...
... snip ...
...
Plan: 5 to add, 0 to change, 0 to destroy.
google_service_account.boundary_base_service_account: Creating...
google_service_account.boundary_target_service_account: Creating...
google_service_account.boundary_base_service_account: Still creating... [10s elapsed]
google_service_account.boundary_target_service_account: Still creating... [10s elapsed]
google_service_account.boundary_base_service_account: Creation complete after 14s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-base-sa2@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_project_iam_member.boundary_SA_token_creator: Creating...
google_service_account_key.boundary_base_service_account_key: Creating...
google_service_account.boundary_target_service_account: Still creating... [20s elapsed]
google_service_account.boundary_target_service_account: Creation complete after 20s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-target-sa2@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_project_iam_member.boundary_compute_viewer: Creating...
google_project_iam_member.boundary_SA_token_creator: Creation complete after 8s [id=hc-26fb1119fccb4f0081b121xxxxx/roles/iam.serviceAccountTokenCreator/serviceAccount:boundary-base-sa2@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_service_account_key.boundary_base_service_account_key: Still creating... [10s elapsed]
google_service_account_key.boundary_base_service_account_key: Creation complete after 11s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-base-sa2@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com/keys/0a968341bbd0f25888dbd5fba459f13530b6ac1e]
google_project_iam_member.boundary_compute_viewer: Creation complete after 7s [id=hc-26fb1119fccb4f0081b121xxxxx/roles/compute.viewer/serviceAccount:boundary-target-sa2@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
Outputs:
vm_public_ips = [
"35.238.85.188",
"34.10.51.164",
"35.232.152.114",
"34.9.95.9",
]
Application Default Credentials (ADC) automatically retrieves and uses credentials for authenticating against GCP APIs and services. It simplifies the process of setting up authentication for applications running on GCP, allowing them to automatically discover and use credentials without direct configuration. You can also configure ADC to authenticate a local development environment using an Identity Provider (IdP) like Google or Okta.
In this tutorial, you configure ADC for a virtual machine (VM) running on Google Compute Engine.
To set up ADC for this tutorial, you will:
- Create a service account with the
roles/compute.viewer
role. - Attach the role to the Boundary worker VM.
- Configure the worker VM as a Boundary worker.
- Configure a Boundary dynamic host catalog.
Configure ADC
GCP Application Default Credentials can be configured in a number of ways, but Boundary expects that a VM configured for ADC has a service account attached that grants the roles/compute.viewer
role.
Service accounts are special user accounts used to authenticate applications or services, rather than individual users. They allow automated access to GCP resources and APIs without requiring users to directly manage credentials.
Create a service account
You can configure a service account using the GCP cloud console UI, the gcloud CLI, or using Terraform.
Select a workflow to continue.
Create a new service account:
- Navigate to the IAM & Admin Service Accounts page.
- Click the name of the project you deployed your Boundary hosts to.
- Click Create service account.
- Fill in a service account name, such as
Boundary worker service account
. The Service account ID should be automatically created. You can optionally add a service account description. - Click Create and continue.
- Under the Permissions section, click the Select a role dropdown. Enter
roles/compute.viewer
into the filter, and select the Compute Viewer role. - Click Done.
- Verify that
Boundary worker service account
exists on the Service Accounts page.
Attach the service account to the Boundary worker VM:
- Navigate to the Compute Engine instances page.
- Click on the
boundary-worker
instance. - Click the three dots in the upper-right of the page, labeled More actions with the mouse hovered on them.
- Click Stop in the More actions menu.
- Wait for the instance to stop. Refresh the page if necessary.
- Click the Edit button at the top of the boundary-worker page.
- Scroll down to the Identity and API access section.
- Click the Service accounts dropdown and select the Boundary worker service account.
- Under Access scopes, select Set access for each API. Click on Compute Engine and select Read Only.
- Click the Save button.
- Click the More actions three dots menu in the upper-right of the page.
- Select Start/Resume from the menu.
Create a new service account:
Open a new shell session and navigate back to the
learn-boundary-cloud-host-catalogs
directory.Authenticate to GCP to use the gcloud CLI.
$ gcloud auth login
Export your GCP project ID as the
GCP_PROJECT_ID
environment variable.$ export GCP_PROJECT_ID="hc-26fb1119fccb4f0081b121xxxxx"
Export the GCP zone you set in the lab environment's
main.tf
file as theGCP_ZONE
environment variable (us-central1-a
by default):$ export GCP_ZONE="us-central1-a"
Create a new service account called
boundary-service-account
.$ gcloud iam service-accounts create boundary-service-account \ --display-name="boundary-service-account"
Add the
roles/compute.viewer
role to the new service account.$ gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \ --member="serviceAccount:boundary-service-account@$GCP_PROJECT_ID.iam.gserviceaccount.com" \ --role="roles/compute.viewer"
Assign the service account to the Boundary worker VM:
Stop the
boundary-worker
VM instance:$ gcloud compute instances stop boundary-worker --zone=$GCP_ZONE Stopping instance(s) boundary-worker...done. Updated [https://compute.googleapis.com/compute/v1/projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-worker].
Attach
boundary-service-account
to the worker VM, and set its access scope to read-only access for Compute Engine methods:$ gcloud compute instances set-service-account boundary-worker \ --service-account="boundary-service-account@$GCP_PROJECT_ID.iam.gserviceaccount.com" \ --scopes="https://www.googleapis.com/auth/compute.readonly" \ --zone=$GCP_ZONE
Restart the
boundary-worker
VM instance:$ gcloud compute instances start boundary-worker --zone=$GCP_ZONE Starting instance(s) boundary-worker...done. Updated [https://compute.googleapis.com/compute/v1/projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-worker]. Instance internal IP is 10.1.0.3 Instance external IP is 34.68.65.22
While the worker external IP likely remained the same, you should check the output of the command and note the worker's IP address for configuring the worker in the next step.
Rename the application-default-credentials.tf.bak
file in the learn-boundary-cloud-host-catalogs/gcp/terraform/
directory to application-default-credentials.tf
.
Open the application-default-credentials.tf
file in your text editor:
application-default-credentials.tf
## Create a new service account for Boundary
resource "google_service_account" "boundary_service_account" {
account_id = "boundary-service-account"
display_name = "Boundary Service Account"
}
## Grant the compute viewer role to the service account
resource "google_project_iam_member" "boundary_compute_viewer" {
project = var.gcp_project_id
role = "roles/compute.viewer"
member = google_service_account.boundary_service_account.member
}
Ensure this file matches the above configuration.
These Terraform resources use the hashicorp/google
provider to configure a service account for Boundary with the roles/compute.viewer
role, and then save a copy of the service account key to your local machine.
Review the following resources to learn more:
Assign the service account to the worker
You need to assign the new service account to the worker instance.
Open the main.tf
file and locate the google_compute_instance.worker
resource.
Uncomment the service_account
attribute.
The worker configuration should match the following:
main.tf
resource "google_compute_instance" "worker" {
name = "boundary-worker"
machine_type = "e2-standard-2"
zone = var.gcp_zone
allow_stopping_for_update = true
tags = [
"boundary-worker",
]
labels = {
name = "boundary-worker"
service-type = "worker"
}
boot_disk {
initialize_params {
image = var.vm_image
type = "pd-standard"
}
auto_delete = true
}
network_interface {
network = google_compute_network.network.id
subnetwork = google_compute_subnetwork.subnet.id
access_config {
nat_ip = google_compute_address.worker_ip.address
}
}
service_account {
email = google_service_account.boundary_service_account.email
scopes = ["compute-ro"]
}
metadata = {ssh-keys = "${var.ssh_username}:${file(var.ssh_pub_key_file)}"}
}
Save this file.
The Terraform google_compute_instance
resource sets the service_account
attribute to the boundary-service-account
and defines the assignment scope to Compute Engine read-only, allowing Boundary to request a list of VMs matching the parameters you will define later. Because you are retroactively assigning the service account, the resource must have the allow_stopping_for_update
attribute set to true
to stop the instance and assign the new service account.
Review the google_service_account
resource to learn more:
Apply the new configuration to set up the service account and assign it to the worker.
$ terraform apply --auto-approve
google_compute_address.vm_ip[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-4-ip]
google_compute_address.vm_ip[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-3-ip]
google_compute_network.network: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/networks/boundary-vm-network]
google_compute_address.vm_ip[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-1-ip]
google_compute_address.worker_ip: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-worker-ip]
google_compute_address.vm_ip[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-2-ip]
google_compute_subnetwork.subnet: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/subnetworks/boundary-vm-subnet]
google_compute_firewall.allow_ssh: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/firewalls/boundary-vm-allow-ssh]
google_compute_instance.vm[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-1-dev]
google_compute_instance.worker: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-worker]
google_compute_instance.vm[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-4-production]
google_compute_instance.vm[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-3-production]
google_compute_instance.vm[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-2-dev]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
~ update in-place
Terraform will perform the following actions:
# google_compute_instance.worker will be updated in-place
~ resource "google_compute_instance" "worker" {
id = "projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-worker"
name = "boundary-worker"
tags = [
"boundary-worker",
]
# (24 unchanged attributes hidden)
+ service_account {
+ email = "boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com"
+ scopes = [
+ "https://www.googleapis.com/auth/compute.readonly",
]
}
# (4 unchanged blocks hidden)
}
# google_project_iam_member.boundary_compute_viewer will be created
+ resource "google_project_iam_member" "boundary_compute_viewer" {
+ etag = (known after apply)
+ id = (known after apply)
+ member = "serviceAccount:boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com"
+ project = "hc-26fb1119fccb4f0081b121xxxxx"
+ role = "roles/compute.viewer"
}
# google_service_account.boundary_service_account will be created
+ resource "google_service_account" "boundary_service_account" {
+ account_id = "boundary-service-account"
+ disabled = false
+ display_name = "Boundary Service Account"
+ email = "boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com"
+ id = (known after apply)
+ member = "serviceAccount:boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com"
+ name = (known after apply)
+ project = "hc-26fb1119fccb4f0081b121xxxxx"
+ unique_id = (known after apply)
}
Plan: 2 to add, 1 to change, 0 to destroy.
google_service_account.boundary_service_account: Creating...
google_service_account.boundary_service_account: Still creating... [10s elapsed]
google_service_account.boundary_service_account: Creation complete after 12s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_project_iam_member.boundary_compute_viewer: Creating...
google_compute_instance.worker: Modifying... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-worker]
google_project_iam_member.boundary_compute_viewer: Creation complete after 8s [id=hc-26fb1119fccb4f0081b121xxxxx/roles/compute.viewer/serviceAccount:boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_compute_instance.worker: Still modifying... [id=projects/hc-26fb1119fccb4f0081b121xxxxx...s-central1-a/instances/boundary-worker, 10s elapsed]
google_compute_instance.worker: Still modifying... [id=projects/hc-26fb1119fccb4f0081b121xxxxx...s-central1-a/instances/boundary-worker, 20s elapsed]
google_compute_instance.worker: Still modifying... [id=projects/hc-26fb1119fccb4f0081b121xxxxx...s-central1-a/instances/boundary-worker, 30s elapsed]
google_compute_instance.worker: Still modifying... [id=projects/hc-26fb1119fccb4f0081b121xxxxx...s-central1-a/instances/boundary-worker, 40s elapsed]
google_compute_instance.worker: Still modifying... [id=projects/hc-26fb1119fccb4f0081b121xxxxx...s-central1-a/instances/boundary-worker, 50s elapsed]
google_compute_instance.worker: Modifications complete after 54s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-worker]
Apply complete! Resources: 2 added, 1 changed, 0 destroyed.
Outputs:
vm_public_ips = [
"34.56.9.47",
"34.41.140.216",
"34.61.22.18",
"34.172.1.189",
]
worker_public_ip = "34.121.82.120"
worker_ssh_command = "ssh gcpuser@34.121.82.120 -i /path/to/gcpuser/private/key"
Configure the worker
A Boundary worker provides an ingress point for Boundary to query GCP for a list of VMs to include in the dynamic host catalog.
You should have deployed the worker already in the Deploy the hosts section.
Next, you will download and configure Boundary on the worker VM.
To configure a self-managed worker, you need the following details:
- Boundary address or cluster URL.
- Auth Method ID (from the Boundary Admin Console)
- Admin login name and password
Visit the Getting Started on HCP tutorial if you need to locate any of these values.
Log in and download the Boundary binary
Locate the boundary_worker VM instance IP address in the terminal session where you deployed the host VMs using Terraform. You can display the Terraform outputs using terraform output
:
$ terraform output
ssh_commands = [
"ssh gcpuser@108.59.80.54",
"ssh gcpuser@34.28.194.125",
"ssh gcpuser@34.63.75.6",
"ssh gcpuser@35.239.175.23",
]
vm_public_ips = [
"108.59.80.54",
"34.28.194.125",
"34.63.75.6",
"35.239.175.23",
]
worker_public_ip = "34.58.253.161"
worker_ssh_command = "ssh gcpuser@34.121.82.120 -i /path/to/gcpuser/private/key"
Locate the worker_public_ip
. Use this IP to log into the worker VM and passing the gcpuser
private key file.
For example, using SSH on a linux machine with username admin
:
$ ssh gcpuser@34.58.253.161 -i /home/admin/.ssh/gcpuser
The authenticity of host '34.71.34.59 (34.71.34.59)' can't be established.
ED25519 key fingerprint is SHA256:ihRAkLmgviB82Ul8eVaI/nk6/G3Y5/n+JzWM/v9a0Hs.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '34.71.34.59' (ED25519) to the list of known hosts.
$ gcpuser@boundary-worker ~/
Note
The above example is for demonstrative purposes. You will need to supply your worker instance ssh_username, public IP address, and private key to connect. You can check these values by executing terraform output
in the shell session where you deployed the lab environment. You can check this GCP docs page to learn more about connecting to a Linux instance using SSH.
Download and install the Boundary Enterprise binary.
Note
The binary version should match the version of the HCP control plane. Check the control plane's version in the HCP Boundary portal, and download the appropriate version using wget. The example below installs the 0.19.2 version of the boundary binary, versioned as 0.19.2+ent
.
Enter the following command to install the latest version of the Boundary Enterprise binary on the worker.
$ sudo yum install -y yum-utils ;\
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo ;\
sudo yum -y install boundary-enterprise
Once installed, verify the version of the boundary binary.
$ boundary version
Version information:
Build Date: 2025-01-24T17:59:34Z
Git Revision: 120b23b6f191075b7d01ab480340958007e6b023
Metadata: ent
Version Number: 0.19.0+ent
Ensure the Version Number matches the version of the HCP Boundary control plane. They should match to get the latest HCP Boundary features.
Write the worker config
Create a new folder to store your Boundary config file.
This tutorial creates the boundary/
directory in the gcpuser user home directory to store the worker config. If you do not have permission to create this directory, create the folder elsewhere.
$ mkdir /home/gcpuser/boundary/ && cd /home/gcpuser/boundary/
Create a new file named /home/gcpuser/boundary/worker.hcl
.
$ touch /home/gcpuser/boundary/worker.hcl
Open the file with a text editor, such as Vi.
Paste the following configuration into the worker config file:
/home/gcpuser/boundary/worker.hcl
disable_mlock = true
hcp_boundary_cluster_id = "<cluster_id>"
listener "tcp" {
address = "0.0.0.0:9202"
purpose = "proxy"
}
worker {
public_addr = "<worker_public_addr>"
auth_storage_path = "/home/gcpuser/boundary/worker"
tags {
type = ["gcp-worker"]
}
}
Update the following values in the worker.hcl
file:
<cluster_id>
on line 3 should be replaced with the HCP Boundary Cluster ID, such asc3a7a20a-f663-40f3-a8e3-1b2f69b36254
.<worker_public_addr>
on line 11 should be replaced with the public IP address of the worker, such as107.22.128.152
.
You can determine the <cluster-id>
on line 3 from the UUID in the HCP Boundary Cluster URL. For example, if you Cluster URL is:
https://c3a7a20a-f663-40f3-a8e3-1b2f69b36254.boundary.hashicorp.cloud
then the cluster ID is c3a7a20a-f663-40f3-a8e3-1b2f69b36254
.
The public_addr
should match the public IP or DNS name of your worker VM
instance.
Save this file.
To see all valid config options, refer to the worker configuration docs.
Start the worker
With the worker config defined, start the worker server. Provide the full path
to the worker config file (such as /home/gcpuser/boundary/worker.hcl
). The example below also backgrounds the process by adding an &
at the end of the boundary server
command.
$ boundary server -config="/home/gcpuser/boundary/worker.hcl" &
==> Boundary server configuration:
Cgo: disabled
Listener 1: tcp (addr: "0.0.0.0:9202", max_request_duration: "1m30s", purpose: "proxy")
Log Level: info
Mlock: supported: true, enabled: false
Version: Boundary v0.19.2+ent
Version Sha: 120b23b6f191075b7d01ab480340958007e6b023
Worker Auth Current Key Id: knoll-unengaged-twisting-kite-envelope-dock-liftoff-legend
Worker Auth Registration Request: GzusqckarbczHoLGQ4UA25uSR7RQJqCjDfxGSJZvEpwQpE7HzYvpDJ88a4QMP3cUUeBXhS5oTgck3ZvZ3nrZWD3HxXzgq4wNScpy7WE7JmNrrGNLNEFeqqMcyhjqGJVvg2PqiZA6arL6zYLNLNCEFtRhcvG5LLMeHc3bthkrbwLg7R7TNswTjDJWmwh4peYpnKuQ9qHEuTK9fapmw4fdvRTiTbrq78ju4asvLByFTCTR3nbk62Tc15iANYsUAn9JLSxjgRXTsuTBkp4QoqBqz89pEi258Wd1ywcACBHRT3
Worker Auth Storage Path: /home/gcpuser/boundary/worker
Worker Public Proxy Addr: 52.90.177.171:9202
==> Boundary server started! Log data will stream in below:
{"id":"l0UQKrAg7b","source":"https://hashicorp.com/boundary/ip-172-31-86-85/worker","specversion":"1.0","type":"system","data":{"version":"v0.1","op":"worker.(Worker).StartControllerConnections","data":{"msg":"Setting HCP Boundary cluster address 6f40d99c-ed7a-4f22-ae52-931a5bc79c03.proxy.boundary.hashicorp.cloud:9202 as upstream address"}},"datacontentype":"application/cloudevents","time":"2023-01-10T04:34:52.616180263Z"}
The worker will start and begin attempting to connect to the upstream Controller, printing a log message "worker is not authenticated to an upstream, not sending status".
The worker also outputs its authorization request as Worker Auth Registration
Request. This will also be saved to a file, auth_request_token
, defined by the
auth_storage_path
in the worker config.
If you scroll to the top of the worker log output you will find the Worker Auth Registration Request:
value on line 12. This value can
also be located in the /boundary/auth_request_token
file.
Copy the auth_request_token
value.
Leave the worker running, and exit the worker SSH session.
Register the worker
Authenticate to HCP Boundary as the admin user.
Log in to the HCP portal.
From the HCP Portal's Boundary page, click Open Admin UI - a new page will open.
Enter the admin username and password you created when you deployed the new instance and click Authenticate.
Once logged in, navigate to the Workers page.
Notice that only HCP workers are listed.
Click New.
You can use the new workers page to construct the contents of the worker.hcl
file.
Do not fill in any of the worker fields.
Providing the following details will construct the worker config file contents for you:
- Boundary cluster ID
- Worker public address
- Config file path
- Worker tags
The instructions on this page describe how to install the Boundary Enterprise binary and deploy the worker config file.
Because you already deployed the worker, you can ignore the worker config file builder.
Scroll down to the bottom of the New Worker page. Paste the Worker Auth Registration Request key you copied earlier.
Click Register Worker.
Click Done and notice the new worker on the Workers page.
Open a new terminal session on your local machine.
Set the BOUNDARY_ADDR
and BOUNDARY_AUTH_METHOD_ID
environment variables.
$ export BOUNDARY_ADDR="https://c3a7a20a-f663-40f3-a8e3-1b2f69b36254.boundary.hashicorp.cloud"
$ export BOUNDARY_AUTH_METHOD_ID="ampw_KfLAjMS2CG"
Log into the CLI as the admin user, using the admin login name and admin password when prompted.
$ boundary authenticate
Please enter the login name (it will be hidden):
Please enter the password (it will be hidden):
Authentication information:
Account ID: acctpw_VOeNSFX8pQ
Auth Method ID: ampw_ZbB6UXpW3B
Expiration Time: Thu, 15 Aug 2023 12:35:32 MST
User ID: u_ogz79sV4sT
The token was successfully stored in the chosen keyring and is not displayed here.
Export the Worker Auth Request Token value as an environment variable.
$ export WORKER_TOKEN="<Worker Auth Registration Request Value>"
You can use the token to issue a create worker request that authorizes the worker to Boundary.
Execute the following command to create a new worker:
$ boundary workers create worker-led -worker-generated-auth-token=$WORKER_TOKEN
Worker information:
Active Connection Count: 0
Created Time: Mon, 12 Aug 2024 19:40:57 MDT
ID: w_IPfR7jBVri
Local Storage State: unknown
Type: pki
Updated Time: Mon, 12 Aug 2024 19:40:57 MDT
Version: 1
Scope:
ID: global
Name: global
Type: global
Authorized Actions:
update
delete
add-worker-tags
set-worker-tags
remove-worker-tags
no-op
read
Host catalog plugins
For Boundary, the process for creating a dynamic host catalog has two steps:
- Create a plugin-type host catalog
- Create a host set that defines membership using filters
You set up a plugin-type host catalog using the cloud provider account details. Then you can configure a host set using a filter that selects hosts for membership based on the labels defined when you set up the hosts.
Host set filter expressions are defined by the GCP plugin provider. The GCP plugin uses simple filter queries to specify labels associated with hosts based on labels.name=value
.
For example, a host set filter that selects all hosts tagged with
"service-type": "database"
is written as:
labels.service-type=database
You can also filter for the VM status. Another common filter to return instances that are running is:
status=RUNNING
Resources within GCP can generally be filtered by label names and values, and filters can use either/or selectors for label values. This process is described in the Boundary GCP Host Plugin documentation.
To learn more about GCP filters for listing resources, visit the
instances.list
method documentation page.
Create a GCP host catalog
The details you need to set up a host catalog depend on the GCP credential type you are using.
Select a credential type to continue.
Authenticate to the Boundary Admin UI as the admin user.
Once logged in, select the org and project you want to create a dynamic host catalog for. If you are testing plugins, consider creating a new project for testing purposes.
Navigate to the Host Catalogs page. Click New Host Catalog.
Select Dynamic for the host catalog type. Select from the static or dynamic credential type tabs to learn how you should fill out the new catalog form.
Complete the following fields:
Name:
GCP Catalog
Description:
GCP host catalog
Type:
Dynamic
Provider:
GCP
Project ID:
hc-26fb1119fccb4f0081b121xxxxx
(Add your project ID)Zone:
us-central1-a
(or other zone used for this lab)Client Email:
boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com
(Replace this with your service account email)Private Key ID:
b990f6a2246bd12fd08d0bd4f6e5bc294d98da1a
(Replace this with your Private Key ID, saved when you Created the service account)Private Key:
-----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCp5TKBnZv7S6MA qngKyf0EPDHsAfmqdRTGFF+Da1otkzldUK7fIYirAvf//9ZtT53OG6OJrYIwEO1r UH7xjxvtXgv2uQ1++FnNDFkFTw1QThwoWCM6vKE97/N27/yx9c6CxTi4HVQBN8Pp xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx QBySlHc740JarQoo6eatThwPmDJzVp3x3wfYOHoD/N8AHi0axJ0UHZnRjeWyBx8f 7A37FQB36llLzsiERQQ3LI9Z7VHDiryOGvuxAQvWovzoamfXgKFk0Q8IwncZYHTV UhzcSnq/AgMBAAECggEARSz+AhmnA8yZy7EdXKM+4sURvJtXSWkPstFjzJJe7vSl pFGwSkkQqTT1vqYwbGTBB8VoMqxTuHeD/DCT545SHDWxYF2b2amMgvl2m7tC3AJZ 47FzcryQWLRFaRWxSdKgqc1c2VaTuEU4/33HWTtKEAeyR+aViDP+slHau/Yg9qn0 pGoyzWwQ8+/qofmMiNTNfM2qbPa735WC7VpsusSaxNIaiGbneGWPPkuKYVVUmS2X 2qPaIinuUXLfWSl5DsNZc6Mw5m+Q4JyBjmY+y2dQ/9e9biMbw1C3fhkiGZHQgtqm cJfhgu1r4yVu9xxjS9hN/VyLPMAjb05Dg1Z03pXGZQKBgQDcsUAbkxO0Ylkhzqel 0KbVXaXiSXsdmAsjLv98jpuV+OxlLoesP8hC8lQt0SNp7Hme6rBO4in6UazBuR82 j9TNeMA1pjXLbvfxEn3B+8iUur+kasEHoeb1DHQmBrWo7WnwPdyi+66jn7EiLq3+ 5wKM/uyHJwCNEy43smCxaVa95QKBgQDFE3wjtRHu26ur3Xe8hUx/44H+NQNPVgow jJpZJ8vVOOvKIQ5vQf3fk5jBa03eGkp0MnKVu73NdCUKhr4DK6AZwX4+gPF3qJlS M7YOFWkhbigUgRzazo1UJEgnrdRjnfrpA97olcnW4rSH+Il4CF610qejrSvjTe9m 7NhZheSr0wKBgFtOEgHWha55ifrMrtuRSZS42+qVEBScVO9HgHgd4AzaIaNy7rq6 4LWh4GXcQtSN+3teCXd5Znij1d+IIXvHYfloXc1UaKkzzey1A8Z/zuqJoMP7TsVD nHQBpQQefoXXQ58bWO8tRYF4jiZgPahaFtoSlfUMk9PJ/bMZX5vGwxZpAoGAfK7q MFEjqmHyh8aTNYOENblDiggSMwR1Z+fc0yE5dYoQq44kasFulB/2WhDAcA9kIYW1 NwRTfgPIV5ON7cWRAhqH+5Vqr9DMR9SNjvV+0Pa3htl03v4lLiHSQMBaijfuAbRA OBhkXX6Kxye4GWf6O8Ct7QDnrmSlXRHlgyYR2Z8CgYEAuP1x/2MTBZYfP9lRQLim NAfUMhxXiCfEhFkahdpEIjJgK+ekaosF24EP/merZPqqsi5Su/0cedgXr2Nk7yWE AXolFjCKuXr0ti7tILp5vj9wH0Dy9/GVoKmNsygAiOd/c2OEgtwToJd1XmqCBtus h+q+6phCiGqPTVCvJa0xxrk= -----END PRIVATE KEY-----
(make sure the key is formatted correctly using
jq
, as described in the Create a service account section)Disable credential rotation:
true
(Check this box)
Click Save.
Gather plugin details
To set up a dynamic host catalog using the Boundary GCP hosts plugin and a service account, you need the following:
- GCP Zone
- GCP Project ID
- Client email
- Private key ID
- Private key file path
This tutorial disables credential rotation for simplicity. Refer to the GCP plugin documentation to learn more about the permissions needed to enable service account key credential rotation.
You should already have the GCP zone, project ID, and path to your private key ready.
Locate the client email for the boundary-service-account
.
$ gcloud iam service-accounts list \
--filter="name:boundary-service-account" \
--format="value(email)"
This command returns the value of your service account email, such as boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com
.
Locate the private key ID for the boundary-service-account-key
associated with the client email. Replace the iam-account
parameter with the service account client email from the last step.
$ gcloud iam service-accounts keys list \
--iam-account=boundary-service-account@hc-26fb1119fccb4f0081b121xxxx.iam.gserviceaccount.com \
--format="value(name)" --filter="keyType=USER_MANAGED"
This command returns the value of your service account private key ID, such as c6c0c7ccd59021e282837bc06365f49892198c53
.
Set the plugin details as environment variables within your shell session. Use the path to the reformatted private key file, not the original json key.
$ export GCP_ZONE=us-central1-a \
export GCP_GCP_PROJECT_ID=hc-26fb1119fccb4f0081b121xxxxx \
export CLIENT_EMAIL="boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com" \
export PRIVATE_KEY_ID="c6c0c7ccd59021e282837bc06365f49892198c53" \
export PRIVATE_KEY_FILE_PATH="/Users/username/learn-boundary-cloud-host-catalogs/gcp/terraform/boundary-service-account-key"
Check that you set the values. If the zone or project ID are not defined, set them before moving on.
$ echo $GCP_ZONE; echo $GCP_GCP_PROJECT_ID; echo $CLIENT_EMAIL; echo $PRIVATE_KEY_ID; echo $PRIVATE_KEY_FILE_PATH
us-central1-a
hc-26fb1119fccb4f0081b121xxxxx
boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com
c6c0c7ccd59021e282837bc06365f49892198c53
/Users/username/learn-boundary-cloud-host-catalogs/gcp/terraform/boundary-service-account-key.json
Authenticate to Boundary as the admin user. This user must have permission to create and manage host catalogs within your cluster and project.
Export your HCP cluster address as the BOUDARY_ADDR
environment variable.
$ export BOUNDARY_ADDR="https://237bdcda-6f22-4ce3-b7b5-92b039exxxxx.boundary.hashicorp.cloud/"
Authenticate to Boundary using your admin credentials.
$ boundary authenticate
Please enter the login name (it will be hidden):
Please enter the password (it will be hidden):
Authentication information:
Account ID: acctpw_Nf8BhC64Up
Auth Method ID: ampw_YSXPfaQrOn
Expiration Time: Wed, 25 Jun 2025 16:12:08 MDT
User ID: u_SmbPEXyx7m
The token name "default" was successfully stored in the chosen keyring and is not displayed here.
Create a host catalog
Select a project to create the new host catalog in, or create a new org and project. If you need to list your existing projects, use the boundary scopes list -recursive
command.
To create a new org for testing host catalogs, use the following command:
$ boundary scopes create -name "GCP infrastructure"
Scope information:
Created Time: Wed, 18 Jun 2025 16:24:33 MDT
ID: o_EYIrQH0g3H
Name: GCP infrastructure
Updated Time: Wed, 18 Jun 2025 16:24:33 MDT
Version: 1
Scope (parent):
ID: global
Name: global
Type: global
Authorized Actions:
no-op
read
update
delete
attach-storage-policy
detach-storage-policy
To create a new project, pass the org ID for the GCP infrastructure org (such as o_EYIrQH0g3H
) to the following command:
$ boundary scopes create -name "GCP hosts" -scope-id o_EYIrQH0g3H
Scope information:
Created Time: Wed, 18 Jun 2025 16:27:26 MDT
ID: p_D5xQlbkvtL
Name: GCP hosts
Updated Time: Wed, 18 Jun 2025 16:27:26 MDT
Version: 1
Scope (parent):
ID: o_EYIrQH0g3H
Name: GCP infrastructure
Parent Scope ID: global
Type: org
Authorized Actions:
update
delete
no-op
read
Export the project ID as the BOUNDARY_PROJECT_ID
environment variable.
$ export BOUNDARY_PROJECT_ID="p_D5xQlbkvtL"
Create a new plugin-type host catalog with a -plugin-name
of gcp
,
providing the service account private key ID and private key path using the -secret
flag. These values should map to the environment variables defined above. Additionally, ensure that you set the disable_credential_rotation=true
, GCP zone
, and GCP project_id
attributes using the -attr
flag.
$ boundary host-catalogs create plugin \
-scope-id $BOUNDARY_PROJECT_ID \
-plugin-name gcp \
-attr disable_credential_rotation=true \
-attr zone=$GCP_ZONE \
-attr project_id=env://GCP_PROJECT_ID \
-attr client_email=env://CLIENT_EMAIL \
-secret private_key_id=env://PRIVATE_KEY_ID \
-secret private_key=file://$PRIVATE_KEY_FILE_PATH
Command flags:
-plugin-name
: This corresponds to the host catalog plugin's name, such asgcp
oraws
.disable_credential_rotation
: This tutorial uses a static secret by setting this value totrue
.zone
: The GCP zone of the instances that you want to add to the host catalog.project_id
: The project ID of any instances that you want to add to the host catalog.client_email
: The unique email address that is used to identify the service account. It is required when you authenticate using the service account.private_key_id
: The unique identifier of the private key. It is required when you authenticate using the service account.private_key
: The private key used to obtain an OAuth 2.0 access token. The key must be PEM encoded. It is required when you authenticate using the service account.
Note
Although credentials are stored encrypted within Boundary, by default this plugin attempts to rotate credentials supplied through the secrets
object during a create or update call to the host catalog resource. You should disable credential rotation for this tutorial.
Sample output:
$ boundary host-catalogs create plugin \
-scope-id $BOUNDARY_PROJECT_ID \
-plugin-name gcp \
-attr disable_credential_rotation=true \
-attr zone=$GCP_ZONE \
-attr project_id=env://GCP_PROJECT_ID \
-attr client_email=env://CLIENT_EMAIL \
-secret private_key_id=env://PRIVATE_KEY_ID \
-secret private_key=file://$PRIVATE_KEY_FILE_PATH
Host Catalog information:
Created Time: Wed, 18 Jun 2025 17:11:52 MDT
ID: hcplg_DFJUQeiL4i
Plugin ID: pl_Jm8LEmKXFt
Secrets HMAC: 8y3ses71KwaudA6ZW9ECKiUukQgacW3vy6cvQUeubwfZ
Type: plugin
Updated Time: Wed, 18 Jun 2025 17:11:52 MDT
Version: 1
Scope:
ID: p_D5xQlbkvtL
Name: GCP hosts
Parent Scope ID: o_EYIrQH0g3H
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Attributes:
client_email: boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com
disable_credential_rotation: true
project_id: hc-26fb1119fccb4f0081b121xxxxx
zone: us-central1-a
Authorized Actions:
delete
no-op
read
update
Authorized Actions on Host Catalog's Collections:
host-sets:
list
create
hosts:
list
Copy the host catalog ID from the output (hcplg_DFJUQeiL4i
in this example) and store it in the BOUNDARY_HOST_CATALOG_ID
environment variable.
$ export BOUNDARY_HOST_CATALOG_ID=hcplg_DFJUQeiL4i
Rename the boundary-service-account.tf.bak
file in the learn-boundary-cloud-host-catalogs/gcp/terraform/
directory to boundary-service-account.tf
.
Open the boundary-service-account.tf
file in your text editor and examine the following lines:
provider "boundary" {
addr = var.boundary_addr
auth_method_login_name = var.boundary_login_name
auth_method_password = var.boundary_login_password
}
variable "boundary_addr" {
type = string
}
variable "boundary_login_name" {
type = string
}
variable "boundary_login_password" {
type = string
}
These resources configure the boundary
provider to accept variables for the cluster's address, login name, and login password.
Now, export these values as Terraform variables in your current shell session. Replace the values prefixed with YOUR_
with your actual login credentials.
$ export TF_VAR_boundary_addr="YOUR_BOUNDARY_ADDR";
export TF_VAR_boundary_login_name="YOUR_BOUNDARY_LOGIN_NAME";
export TF_VAR_boundary_login_password="YOUR_BOUNDARY_LOGIN_PASSWORD"
For example:
$ export TF_VAR_boundary_addr="https://64d19cc2-2ef4-403a-b7f7-0529fd209c8f.boundary.hashicorp.cloud";
export TF_VAR_boundary_login_name="admin";
export TF_VAR_boundary_login_password="password"
Check that you defined these variables correctly in your shell session:
$ echo TF_VAR_boundary_addr; echo $TF_VAR_boundary_login_name; echo $TF_VAR_boundary_login_password
Examine the following resources that create a Boundary test org and scope for testing the GCP host catalog:
resource "boundary_scope" "gcp_test_org" {
name = "GCP Infrastructure"
description = "Test org for GCP resources"
scope_id = "global"
auto_create_admin_role = true
auto_create_default_role = true
}
resource "boundary_scope" "gcp_project" {
name = "GCP hosts"
description = "Test project for GCP host catalogs"
scope_id = boundary_scope.gcp_test_org.id
auto_create_admin_role = true
}
Gather plugin details
To use the Boundary GCP hosts plugin with a service account, you need the following details:
- GCP zone
- GCP project ID
- GCP client email (service account email)
- GCP private key ID (service account private key ID)
- GCP private key (service account private key)
These values are defined in the Terraform configuration in the service-account.tf
file, and do not need to be redefined.
In the boundary-service-account.tf
file, examine the boundary_host_catalog_plugin
resource. This creates a new plugin-type host catalog. Set the plugin_name
to gcp
, and provide the GCP service account key ID and service account key using the secrets_json
attribute. These values should map to the environment variables defined above. Set the GCP zone
, GCP project_id
, service account client_email
, and disable_credential_rotation
to true
using attributes_json
.
After defining the host catalog, an output resource prints the host catalog ID.
resource "boundary_host_catalog_plugin" "gcp_host_catalog" {
name = "GCP Catalog"
description = "GCP Host Catalog"
scope_id = boundary_scope.gcp_project.id
plugin_name = "gcp"
# recommended to pass in GCP secrets using a file() or using environment variables
attributes_json = jsonencode({
"zone" = var.gcp_zone,
"project_id" = var.gcp_project_id,
"client_email" = google_service_account.boundary_service_account.email,
"disable_credential_rotation" = true
})
secrets_json = jsonencode({
"private_key_id" = google_service_account.boundary_service_account.id,
"private_key" = jsondecode(base64decode(google_service_account_key.boundary_service_account_key.private_key)).private_key
})
}
output "gcp_host_catalog_id" {
value = boundary_host_catalog_plugin.gcp_host_catalog.id
}
To learn more about defining host catalogs plugins, refer to the boundary_host_catalog_plugin in the Terraform registry.
Note
Although credentials are stored encrypted within Boundary, by default this plugin attempts to rotate credentials during a create or update call to the host catalog resource. You should disable credential rotation explicitly if you don't configure credential rotation.
Upgrade the Terraform dependencies to add the Boundary provider:
$ terraform init -upgrade
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/google...
- Finding latest version of hashicorp/local...
- Finding latest version of hashicorp/boundary...
- Using previously-installed hashicorp/google v6.40.0
- Installing hashicorp/boundary v1.2.0...
- Installed hashicorp/boundary v1.2.0 (signed by HashiCorp)
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Apply the new Terraform configuration to create the host catalog.
$ terraform apply --auto-approve
google_service_account.boundary_service_account: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_compute_network.network: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/networks/boundary-vm-network]
google_compute_address.vm_ip[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-2-ip]
google_compute_address.vm_ip[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-3-ip]
google_compute_address.vm_ip[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-4-ip]
google_compute_address.vm_ip[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-1-ip]
google_project_iam_member.boundary_compute_viewer: Refreshing state... [id=hc-26fb1119fccb4f0081b121xxxxx/roles/compute.viewer/serviceAccount:boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_service_account_key.boundary_service_account_key: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com/keys/8792ecca68798d7c2dbd517ff3e0e16bd2e62319]
google_compute_firewall.allow_ssh: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/firewalls/boundary-vm-allow-ssh]
google_compute_subnetwork.subnet: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/subnetworks/boundary-vm-subnet]
google_compute_instance.vm[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-3-production]
google_compute_instance.vm[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-2-dev]
google_compute_instance.vm[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-4-production]
google_compute_instance.vm[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-1-dev]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# boundary_host_catalog_plugin.gcp_host_catalog will be created
+ resource "boundary_host_catalog_plugin" "gcp_host_catalog" {
+ attributes_json = jsonencode(
{
+ client_email = "boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com"
+ disable_credential_rotation = true
+ project_id = "hc-26fb1119fccb4f0081b121xxxxx"
+ zone = "us-central1-a"
}
)
+ description = "GCP Host Catalog"
+ id = (known after apply)
+ internal_force_update = (known after apply)
+ internal_hmac_used_for_secrets_config_hmac = (known after apply)
+ internal_secrets_config_hmac = (known after apply)
+ name = "GCP Catalog"
+ plugin_id = (known after apply)
+ plugin_name = "gcp"
+ scope_id = (known after apply)
+ secrets_hmac = (known after apply)
+ secrets_json = (sensitive value)
}
# boundary_scope.gcp_project will be created
+ resource "boundary_scope" "gcp_project" {
+ auto_create_admin_role = true
+ description = "Test project for GCP host catalogs"
+ id = (known after apply)
+ name = "GCP hosts"
+ scope_id = (known after apply)
}
# boundary_scope.gcp_test_org will be created
+ resource "boundary_scope" "gcp_test_org" {
+ auto_create_admin_role = true
+ auto_create_default_role = true
+ description = "Test org for GCP resources"
+ id = (known after apply)
+ name = "GCP Infrastructure"
+ scope_id = "global"
}
Plan: 3 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ gcp_host_catalog_id = (known after apply)
boundary_scope.gcp_test_org: Creating...
boundary_scope.gcp_test_org: Creation complete after 0s [id=o_Hf5FFlt4u4]
boundary_scope.gcp_project: Creating...
boundary_scope.gcp_project: Creation complete after 1s [id=p_oeF6xHS6Qt]
boundary_host_catalog_plugin.gcp_host_catalog: Creating...
boundary_host_catalog_plugin.gcp_host_catalog: Creation complete after 0s [id=hcplg_hm99CHB7wD]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Outputs:
gcp_host_catalog_id = "hcplg_hm99CHB7wD"
vm_public_ips = [
"35.223.89.185",
"35.226.254.30",
"35.202.149.44",
"34.172.0.177",
]
Copy the host catalog ID from the output (hcplg_hm99CHB7wD
in this example) and
store it in the BOUNDARY_HOST_CATALOG_ID
environment variable.
$ export BOUNDARY_HOST_CATALOG_ID=hcplg_hm99CHB7wD
Authenticate to the Boundary Admin UI as the admin user.
Once logged in, select the org and project you want to create a dynamic host catalog for. If you are testing plugins, consider creating a new project for testing purposes.
Navigate to the Host Catalogs page. Click New Host Catalog.
Select Dynamic for the host catalog type. Select from the static or dynamic credential type tabs to learn how you should fill out the new catalog form.
Complete the following fields:
Name:
GCP Catalog
Description:
GCP host catalog
Type:
Dynamic
Provider:
GCP
Project ID:
hc-26fb1119fccb4f0081b121xxxxx
(Add your project ID)Zone:
us-central1-a
(or other zone used for this lab)Client Email:
boundary-base-SA@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com
(Replace this with your base service account email)Target Service Account ID:
boundary-target-SA@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com
(Replace this with your target service account email)Private Key ID:
b990f6a2246bd12fd08d0bd4f6e5bc294d98da1a
(Replace this with your Private Key ID, saved when you Created the service account)Private Key:
-----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCp5TKBnZv7S6MA qngKyf0EPDHsAfmqdRTGFF+Da1otkzldUK7fIYirAvf//9ZtT53OG6OJrYIwEO1r UH7xjxvtXgv2uQ1++FnNDFkFTw1QThwoWCM6vKE97/N27/yx9c6CxTi4HVQBN8Pp xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx QBySlHc740JarQoo6eatThwPmDJzVp3x3wfYOHoD/N8AHi0axJ0UHZnRjeWyBx8f 7A37FQB36llLzsiERQQ3LI9Z7VHDiryOGvuxAQvWovzoamfXgKFk0Q8IwncZYHTV UhzcSnq/AgMBAAECggEARSz+AhmnA8yZy7EdXKM+4sURvJtXSWkPstFjzJJe7vSl pFGwSkkQqTT1vqYwbGTBB8VoMqxTuHeD/DCT545SHDWxYF2b2amMgvl2m7tC3AJZ 47FzcryQWLRFaRWxSdKgqc1c2VaTuEU4/33HWTtKEAeyR+aViDP+slHau/Yg9qn0 pGoyzWwQ8+/qofmMiNTNfM2qbPa735WC7VpsusSaxNIaiGbneGWPPkuKYVVUmS2X 2qPaIinuUXLfWSl5DsNZc6Mw5m+Q4JyBjmY+y2dQ/9e9biMbw1C3fhkiGZHQgtqm cJfhgu1r4yVu9xxjS9hN/VyLPMAjb05Dg1Z03pXGZQKBgQDcsUAbkxO0Ylkhzqel 0KbVXaXiSXsdmAsjLv98jpuV+OxlLoesP8hC8lQt0SNp7Hme6rBO4in6UazBuR82 j9TNeMA1pjXLbvfxEn3B+8iUur+kasEHoeb1DHQmBrWo7WnwPdyi+66jn7EiLq3+ 5wKM/uyHJwCNEy43smCxaVa95QKBgQDFE3wjtRHu26ur3Xe8hUx/44H+NQNPVgow jJpZJ8vVOOvKIQ5vQf3fk5jBa03eGkp0MnKVu73NdCUKhr4DK6AZwX4+gPF3qJlS M7YOFWkhbigUgRzazo1UJEgnrdRjnfrpA97olcnW4rSH+Il4CF610qejrSvjTe9m 7NhZheSr0wKBgFtOEgHWha55ifrMrtuRSZS42+qVEBScVO9HgHgd4AzaIaNy7rq6 4LWh4GXcQtSN+3teCXd5Znij1d+IIXvHYfloXc1UaKkzzey1A8Z/zuqJoMP7TsVD nHQBpQQefoXXQ58bWO8tRYF4jiZgPahaFtoSlfUMk9PJ/bMZX5vGwxZpAoGAfK7q MFEjqmHyh8aTNYOENblDiggSMwR1Z+fc0yE5dYoQq44kasFulB/2WhDAcA9kIYW1 NwRTfgPIV5ON7cWRAhqH+5Vqr9DMR9SNjvV+0Pa3htl03v4lLiHSQMBaijfuAbRA OBhkXX6Kxye4GWf6O8Ct7QDnrmSlXRHlgyYR2Z8CgYEAuP1x/2MTBZYfP9lRQLim NAfUMhxXiCfEhFkahdpEIjJgK+ekaosF24EP/merZPqqsi5Su/0cedgXr2Nk7yWE AXolFjCKuXr0ti7tILp5vj9wH0Dy9/GVoKmNsygAiOd/c2OEgtwToJd1XmqCBtus h+q+6phCiGqPTVCvJa0xxrk= -----END PRIVATE KEY-----
(make sure the key is formatted correctly using
jq
, as described in the Create a service account section)Disable credential rotation:
true
(Check this box)
Click Save.
Gather plugin details
To set up a dynamic host catalog using the Boundary GCP hosts plugin and a service account, you need the following:
- GCP Zone
- GCP Project ID
- Target service account ID (email)
- Client email
- Private key ID
- Private key file path
This tutorial disables credential rotation for simplicity. Refer to the GCP plugin documentation to learn more about the permissions needed to enable service account key credential rotation.
You should already have the GCP zone, project ID, and path to your private key ready.
Locate the target service account ID for the boundary-target-sa
, which is the email associated with the service account.
$ gcloud iam service-accounts list \
--filter="name:boundary-target-sa" \
--format="value(email)"
This command returns the value of your service account email, such as boundary-target-sa@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com
.
Locate the client email for the boundary-base-sa
.
$ gcloud iam service-accounts list \
--filter="name:boundary-base-sa" \
--format="value(email)"
This command returns the value of your service account email, such as boundary-base-sa@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com
.
Locate the private key ID for the boundary-base-sa-key
associated with the client email. Replace the iam-account
parameter with the base service account client email from the last step.
$ gcloud iam service-accounts keys list \
--iam-account=boundary-base-sa@hc-26fb1119fccb4f0081b121xxxx.iam.gserviceaccount.com \
--format="value(name)" --filter="keyType=USER_MANAGED"
This command returns the value of your service account private key ID, such as c6c0c7ccd59021e282837bc06365f49892198c53
.
Set the plugin details as environment variables within your shell session. Use the path to the reformatted private key file, not the original json key.
$ export GCP_ZONE=us-central1-a \
export GCP_PROJECT_ID=hc-26fb1119fccb4f0081b121xxxxx \
export CLIENT_EMAIL="boundary-base-sa@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com" \
export TARGET_EMAIL="boundary-target-sa@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com" \
export PRIVATE_KEY_ID="c6c0c7ccd59021e282837bc06365f49892198c53" \
export PRIVATE_KEY_FILE_PATH="/Users/username/learn-boundary-cloud-host-catalogs/gcp/terraform/boundary-base-sa-key"
Check that you set the values. If the zone or project ID are not defined, set them before moving on.
$ echo $GCP_ZONE; echo $GCP_PROJECT_ID; echo $CLIENT_EMAIL; echo $TARGET_EMAIL; echo $PRIVATE_KEY_ID; echo $PRIVATE_KEY_FILE_PATH
us-central1-a
hc-26fb1119fccb4f0081b121xxxxx
boundary-base-sa@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com
c6c0c7ccd59021e282837bc06365f49892198c53
boundary-target-sa@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com
c6c0c7ccd59021e282837bc06365f49892198c53
/Users/username/learn-boundary-cloud-host-catalogs/gcp/terraform/boundary-base-sa-key.json
Authenticate to Boundary as the admin user. This user must have permission to create and manage host catalogs within your cluster and project.
Export your HCP cluster address as the BOUDARY_ADDR
environment variable.
$ export BOUNDARY_ADDR="https://237bdcda-6f22-4ce3-b7b5-92b039exxxxx.boundary.hashicorp.cloud/"
Authenticate to Boundary using your admin credentials.
$ boundary authenticate
Please enter the login name (it will be hidden):
Please enter the password (it will be hidden):
Authentication information:
Account ID: acctpw_Nf8BhC64Up
Auth Method ID: ampw_YSXPfaQrOn
Expiration Time: Wed, 25 Jun 2025 16:12:08 MDT
User ID: u_SmbPEXyx7m
The token name "default" was successfully stored in the chosen keyring and is not displayed here.
Create a host catalog
Select a project to create the new host catalog in, or create a new org and project. If you need to list your existing projects, use the boundary scopes list -recursive
command.
To create a new org for testing host catalogs, use the following command:
$ boundary scopes create -name "GCP infrastructure"
Scope information:
Created Time: Wed, 18 Jun 2025 16:24:33 MDT
ID: o_EYIrQH0g3H
Name: GCP infrastructure
Updated Time: Wed, 18 Jun 2025 16:24:33 MDT
Version: 1
Scope (parent):
ID: global
Name: global
Type: global
Authorized Actions:
no-op
read
update
delete
attach-storage-policy
detach-storage-policy
To create a new project, pass the org ID for the GCP infrastructure org (such as o_EYIrQH0g3H
) to the following command:
$ boundary scopes create -name "GCP hosts" -scope-id o_EYIrQH0g3H
Scope information:
Created Time: Wed, 18 Jun 2025 16:27:26 MDT
ID: p_D5xQlbkvtL
Name: GCP hosts
Updated Time: Wed, 18 Jun 2025 16:27:26 MDT
Version: 1
Scope (parent):
ID: o_EYIrQH0g3H
Name: GCP infrastructure
Parent Scope ID: global
Type: org
Authorized Actions:
update
delete
no-op
read
Export the project ID as the BOUNDARY_PROJECT_ID
environment variable.
$ export BOUNDARY_PROJECT_ID="p_D5xQlbkvtL"
Create a new plugin-type host catalog with a -plugin-name
of gcp
,
providing the service account private key ID and private key path using the -secret
flag. These values should map to the environment variables defined above. Additionally, ensure that you set the disable_credential_rotation=true
, GCP zone
, and GCP project_id
attributes using the -attr
flag.
$ boundary host-catalogs create plugin \
-scope-id $BOUNDARY_PROJECT_ID \
-plugin-name gcp \
-attr disable_credential_rotation=true \
-attr zone=$GCP_ZONE \
-attr project_id=env://GCP_PROJECT_ID \
-attr client_email=env://CLIENT_EMAIL \
-attr target_service_account_id=env://TARGET_EMAIL \
-secret private_key_id=env://PRIVATE_KEY_ID \
-secret private_key=file://$PRIVATE_KEY_FILE_PATH
Command flags:
-plugin-name
: This corresponds to the host catalog plugin's name, such asgcp
oraws
.disable_credential_rotation
: This tutorial uses a static secret by setting this value totrue
.zone
: The GCP zone of the instances that you want to add to the host catalog.project_id
: The project ID of any instances that you want to add to the host catalog.client_email
: The unique email address that is used to identify the service account. It is required when you authenticate using the service account.target_email
: The email address of the service account to impersonate. It is required when you authenticate using service account impersonation.private_key_id
: The unique identifier of the private key. It is required when you authenticate using the service account.private_key
: The private key used to obtain an OAuth 2.0 access token. The key must be PEM encoded. It is required when you authenticate using the service account.
Note
Although credentials are stored encrypted within Boundary, by default this plugin attempts to rotate credentials supplied through the secrets
object during a create or update call to the host catalog resource. You should disable credential rotation for this tutorial.
Sample output:
$ boundary host-catalogs create plugin \
-scope-id $BOUNDARY_PROJECT_ID \
-plugin-name gcp \
-attr disable_credential_rotation=true \
-attr zone=$GCP_ZONE \
-attr project_id=env://GCP_PROJECT_ID \
-attr client_email=env://CLIENT_EMAIL \
-secret private_key_id=env://PRIVATE_KEY_ID \
-secret private_key=file://$PRIVATE_KEY_FILE_PATH
Host Catalog information:
Created Time: Mon, 23 Jun 2025 16:11:53 MDT
ID: hcplg_wEUiIgqT1U
Plugin ID: pl_Jm8LEmKXFt
Secrets HMAC: C4Xsn8jkKiAWxLCpjdeM9kcubzEgSFuTQXP6qXAe1ntQ
Type: plugin
Updated Time: Mon, 23 Jun 2025 16:11:53 MDT
Version: 1
Scope:
ID: p_2NFdNkSLCT
Name: GCP hosts
Parent Scope ID: o_uXFf3TMH8t
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Attributes:
client_email: boundary-base-sa@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com
disable_credential_rotation: true
project_id: hc-26fb1119fccb4f0081b121xxxxx
target_service_account_id: boundary-target-sa@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com
zone: us-central1-a
Authorized Actions:
no-op
read
update
delete
Authorized Actions on Host Catalog's Collections:
host-sets:
list
create
hosts:
list
Copy the host catalog ID from the output (hcplg_wEUiIgqT1U
in this example) and store it in the BOUNDARY_HOST_CATALOG_ID
environment variable.
$ export BOUNDARY_HOST_CATALOG_ID=hcplg_wEUiIgqT1U
Rename the boundary-service-account-impersonation.tf.bak
file in the learn-boundary-cloud-host-catalogs/gcp/terraform/
directory to boundary-service-account-impersonation.tf
.
Open the boundary-service-account-impersonation.tf
file in your text editor and examine the following lines:
provider "boundary" {
addr = var.boundary_addr
auth_method_login_name = var.boundary_login_name
auth_method_password = var.boundary_login_password
}
variable "boundary_addr" {
type = string
}
variable "boundary_login_name" {
type = string
}
variable "boundary_login_password" {
type = string
}
These resources configure the boundary
provider to accept variables for the cluster's address, login name, and login password.
Now, export these values as Terraform variables in your current shell session. Replace the values prefixed with YOUR_
with your actual login credentials.
$ export TF_VAR_boundary_addr="YOUR_BOUNDARY_ADDR";
export TF_VAR_boundary_login_name="YOUR_BOUNDARY_LOGIN_NAME";
export TF_VAR_boundary_login_password="YOUR_BOUNDARY_LOGIN_PASSWORD"
For example:
$ export TF_VAR_boundary_addr="https://64d19cc2-2ef4-403a-b7f7-0529fd209c8f.boundary.hashicorp.cloud";
export TF_VAR_boundary_login_name="admin";
export TF_VAR_boundary_login_password="password"
Check that you defined these variables correctly in your shell session:
$ echo TF_VAR_boundary_addr; echo $TF_VAR_boundary_login_name; echo $TF_VAR_boundary_login_password
The following resources to create a Boundary test org and scope for testing the GCP host catalog.
resource "boundary_scope" "gcp_test_org" {
name = "GCP Infrastructure"
description = "Test org for GCP resources"
scope_id = "global"
auto_create_admin_role = true
auto_create_default_role = true
}
resource "boundary_scope" "gcp_project" {
name = "GCP hosts"
description = "Test project for GCP host catalogs"
scope_id = boundary_scope.gcp_test_org.id
auto_create_admin_role = true
}
Gather plugin details
To use the Boundary GCP hosts plugin with service account impersonation, you need the following details:
- GCP zone
- GCP project ID
- GCP client email (base service account email)
- GCP target service account ID (target service account email)
- GCP private key ID (base service account private key ID)
- GCP private key (base service account private key)
These values are defined in the Terraform configuration in the service-account-impersonation.tf
file, and do not need to be redefined.
In the boundary-service-account-impersonation.tf
file, examine the boundary_host_catalog_plugin
resource. This creates a new plugin-type host catalog. Set the plugin_name
to gcp
, and provide the GCP base service account key ID and service account key using the secrets_json
attribute. These values should map to the environment variables defined above. Set the GCP zone
, GCP project_id
, base service account client_email
, target service account ID (target service account email), and disable_credential_rotation
to true
using attributes_json
.
After defining the host catalog, an output resource prints the host catalog ID.
resource "boundary_host_catalog_plugin" "gcp_host_catalog" {
name = "GCP Catalog"
description = "GCP Host Catalog"
scope_id = boundary_scope.gcp_project.id
plugin_name = "gcp"
# recommended to pass in GCP secrets using a file() or using environment variables
attributes_json = jsonencode({
"zone" = var.gcp_zone,
"project_id" = var.gcp_project_id,
"client_email" = google_service_account.boundary_base_service_account.email,
"target_service_account_id" = google_service_account.boundary_target_service_account.email,
"disable_credential_rotation" = true
})
secrets_json = jsonencode({
"private_key_id" = google_service_account.boundary_base_service_account.id,
"private_key" = jsondecode(base64decode(google_service_account_key.boundary_base_service_account_key.private_key)).private_key
})
}
output "gcp_host_catalog_id" {
value = boundary_host_catalog_plugin.gcp_host_catalog.id
}
To learn more about defining host catalogs plugins, refer to the boundary_host_catalog_plugin in the Terraform registry.
Note
Although credentials are stored encrypted within Boundary, by default this plugin attempts to rotate credentials during a create or update call to the host catalog resource. You should disable credential rotation explicitly if you don't configure credential rotation.
Upgrade the Terraform dependencies to add the Boundary provider:
$ terraform init -upgrade
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/google...
- Finding latest version of hashicorp/local...
- Finding latest version of hashicorp/boundary...
- Using previously-installed hashicorp/google v6.40.0
- Installing hashicorp/boundary v1.2.0...
- Installed hashicorp/boundary v1.2.0 (signed by HashiCorp)
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Apply the new Terraform configuration to create the host catalog.
$ terraform apply --auto-approve
google_compute_address.vm_ip[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-4-ip]
google_compute_address.vm_ip[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-1-ip]
google_compute_network.network: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/networks/boundary-vm-network]
google_service_account.boundary_target_service_account: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-target-sa2@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_compute_address.vm_ip[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-3-ip]
google_service_account.boundary_base_service_account: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-base-sa2@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_compute_address.vm_ip[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-2-ip]
google_project_iam_member.boundary_SA_token_creator: Refreshing state... [id=hc-26fb1119fccb4f0081b121xxxxx/roles/iam.serviceAccountTokenCreator/serviceAccount:boundary-base-sa2@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_service_account_key.boundary_base_service_account_key: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-base-sa2@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com/keys/0a968341bbd0f25888dbd5fba459f13530b6ac1e]
google_project_iam_member.boundary_compute_viewer: Refreshing state... [id=hc-26fb1119fccb4f0081b121xxxxx/roles/compute.viewer/serviceAccount:boundary-target-sa2@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_compute_firewall.allow_ssh: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/firewalls/boundary-vm-allow-ssh]
google_compute_subnetwork.subnet: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/subnetworks/boundary-vm-subnet]
google_compute_instance.vm[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-2-dev]
google_compute_instance.vm[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-1-dev]
google_compute_instance.vm[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-3-production]
google_compute_instance.vm[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-4-production]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# boundary_host_catalog_plugin.gcp_host_catalog will be created
+ resource "boundary_host_catalog_plugin" "gcp_host_catalog" {
+ attributes_json = jsonencode(
{
+ client_email = "boundary-base-sa2@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com"
+ disable_credential_rotation = true
+ project_id = "hc-26fb1119fccb4f0081b121xxxxx"
+ target_service_account_id = "boundary-target-sa2@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com"
+ zone = "us-central1-a"
}
)
+ description = "GCP Host Catalog"
+ id = (known after apply)
+ internal_force_update = (known after apply)
+ internal_hmac_used_for_secrets_config_hmac = (known after apply)
+ internal_secrets_config_hmac = (known after apply)
+ name = "GCP Catalog"
+ plugin_id = (known after apply)
+ plugin_name = "gcp"
+ scope_id = (known after apply)
+ secrets_hmac = (known after apply)
+ secrets_json = (sensitive value)
}
# boundary_scope.gcp_project will be created
+ resource "boundary_scope" "gcp_project" {
+ auto_create_admin_role = true
+ description = "Test project for GCP host catalogs"
+ id = (known after apply)
+ name = "GCP hosts"
+ scope_id = (known after apply)
}
# boundary_scope.gcp_test_org will be created
+ resource "boundary_scope" "gcp_test_org" {
+ auto_create_admin_role = true
+ auto_create_default_role = true
+ description = "Test org for GCP resources"
+ id = (known after apply)
+ name = "GCP Infrastructure"
+ scope_id = "global"
}
Plan: 3 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ gcp_host_catalog_id = (known after apply)
boundary_scope.gcp_test_org: Creating...
boundary_scope.gcp_test_org: Creation complete after 0s [id=o_ULxRiWUluK]
boundary_scope.gcp_project: Creating...
boundary_scope.gcp_project: Creation complete after 1s [id=p_wez6rEVyln]
boundary_host_catalog_plugin.gcp_host_catalog: Creating...
boundary_host_catalog_plugin.gcp_host_catalog: Creation complete after 0s [id=hcplg_sIWdcZElCu]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Outputs:
gcp_host_catalog_id = "hcplg_sIWdcZElCu"
vm_public_ips = [
"35.238.85.188",
"34.10.51.164",
"35.232.152.114",
"34.9.95.9",
]
Copy the host catalog ID from the output (hcplg_sIWdcZElCu
in this example) and
store it in the BOUNDARY_HOST_CATALOG_ID
environment variable.
$ export BOUNDARY_HOST_CATALOG_ID=hcplg_sIWdcZElCu
Gather plugin details
To set up a dynamic host catalog using the Boundary GCP hosts plugin and ADC, you need the following:
- GCP Zone
- GCP Project ID
- Worker filter
This tutorial disables credential rotation for simplicity. Refer to the GCP plugin documentation to learn more about the permissions needed to enable service account key credential rotation.
You should already have the zone and project ID ready.
The worker filter identifies which worker has permission to view the Compute Engine VMs using the boundary-service-account
assigned to it. When you configure the host catalog, by default Boundary uses any available worker to attempt to refresh the catalog, unless you specify which worker should be used. This means you need to specify which worker has access to your GCP hosts.
When you create a host catalog, it's important that the worker filter you define matches a filter expression for your worker. In the Configure the worker section, you deployed a worker config file with the following tag:
tags {
type = ["gcp-worker"]
}
An appropriate filter expression to select this worker can select its tags:
"gcp-worker" in "/tags/type"
You can also select the correct worker using other filter expressions. To learn more, refer to the Worker tags documentation.
Create a host catalog
Authenticate to the Boundary Admin UI as the admin user.
Once logged in, select the org and project you want to create a dynamic host catalog for. You can also create a new org and project, such as:
- Org:
GCP infrastructure
- Project:
GCP hosts
Navigate to the Host Catalogs page. Click New Host Catalog.
Select Dynamic for the host catalog type. Select from the static or dynamic credential type tabs to learn how you should fill out the new catalog form.
Complete the following fields:
- Name:
GCP Catalog
- Description:
GCP host catalog
- Type:
Dynamic
- Provider:
GCP
- Project ID:
hc-26fb1119fccb4f0081b121xxxxx
(Add your project ID) - Zone:
us-central1-a
(or other zone used for this lab) - Worker Filter:
"gcp-worker" in "/tags/type"
- Disable credential rotation:
true
(Check this box)
Click Save.
- Name:
Gather plugin details
To set up a dynamic host catalog using the Boundary GCP hosts plugin and ADC, you need the following:
- GCP Zone
- GCP Project ID
- Worker filter
This tutorial disables credential rotation for simplicity. Refer to the GCP plugin documentation to learn more about the permissions needed to enable service account key credential rotation.
You should already have the zone and GCP project ID ready.
Set these as environment variables within your shell session.
$ export GCP_ZONE="us-central1-a" \
export GCP_PROJECT_ID="hc-26fb1119fccb4f0081b121xxxxx"
Check that you set the values.
$ echo $GCP_ZONE; echo $GCP_PROJECT_ID
us-central1-a
hc-26fb1119fccb4f0081b121xxxxx
When you create a host catalog, you need to define a worker filter that matches a filter expression for your worker. In the Configure the worker section, you deployed a worker config file with the following tag:
tags {
type = ["gcp-worker"]
}
The worker filter identifies which worker has permission to view the Compute Engine VMs using the boundary-service-account
assigned to it. When you configure the host catalog, by default Boundary uses any available worker to attempt to refresh the catalog, unless you specify which worker should be used. This means you need to specify which worker has access to your GCP hosts.
An appropriate filter expression to select this worker can select its tags:
"gcp-worker" in "/tags/type"
You can also select the correct worker using other filter expressions. To learn more, refer to the Worker tags documentation.
Authenticate to Boundary as the admin user. This user must have permission to create and manage host catalogs within your cluster and project.
Export your HCP cluster address as the BOUDARY_ADDR
environment variable.
$ export BOUNDARY_ADDR="https://237bdcda-6f22-4ce3-b7b5-92b039exxxxx.boundary.hashicorp.cloud/"
$ boundary authenticate
Please enter the login name (it will be hidden):
Please enter the password (it will be hidden):
Authentication information:
Account ID: acctpw_Nf8BhC64Up
Auth Method ID: ampw_YSXPfaQrOn
Expiration Time: Wed, 25 Jun 2025 16:12:08 MDT
User ID: u_SmbPEXyx7m
The token name "default" was successfully stored in the chosen keyring and is not displayed here.
Create a host catalog
Select a project to create the new host catalog in, or create a new org and project. If you need to list your existing projects, use the boundary scopes list -recursive
command.
To create a new org for testing host catalogs, use the following command:
$ boundary scopes create -name "GCP infrastructure"
Scope information:
Created Time: Wed, 18 Jun 2025 16:24:33 MDT
ID: o_EYIrQH0g3H
Name: GCP infrastructure
Updated Time: Wed, 18 Jun 2025 16:24:33 MDT
Version: 1
Scope (parent):
ID: global
Name: global
Type: global
Authorized Actions:
no-op
read
update
delete
attach-storage-policy
detach-storage-policy
To create a new project, pass the org ID for the GCP infrastructure org (such as o_EYIrQH0g3H
) to the following command:
$ boundary scopes create -name "GCP hosts" -scope-id o_EYIrQH0g3H
Scope information:
Created Time: Wed, 18 Jun 2025 16:27:26 MDT
ID: p_D5xQlbkvtL
Name: GCP hosts
Updated Time: Wed, 18 Jun 2025 16:27:26 MDT
Version: 1
Scope (parent):
ID: o_EYIrQH0g3H
Name: GCP infrastructure
Parent Scope ID: global
Type: org
Authorized Actions:
update
delete
no-op
read
Export the project ID as the BOUNDARY_PROJECT_ID
environment variable.
$ export BOUNDARY_PROJECT_ID="p_D5xQlbkvtL"
Create a new plugin-type host catalog with a -plugin-name
of gcp
. Set the disable_credential_rotation=true
, zone
, and project-id
attributes using the -attr
flag. These values should map to the environment variables defined above. Set the worker-filter
parameter the filter expression for your worker.
$ boundary host-catalogs create plugin \
-scope-id $BOUNDARY_PROJECT_ID \
-plugin-name gcp \
-attr disable_credential_rotation=true \
-attr zone=$GCP_ZONE \
-attr project_id=$GCP_PROJECT_ID \
-worker-filter '"gcp-worker" in "/tags/type"'
Command flags:
-plugin-name
: This corresponds to the host catalog plugin's name,gcp
.disable_credential_rotation
: This tutorial disables credential rotation by setting this value totrue
.zone
: The deployment area within a GCP region. All host sets in this catalog are configured for this zone.project_id
: The GCP project ID associated with the service account.worker_filter
: A boolean expression to filter which workers can handle dynamic host catalog commands for this host catalog. This should match a valid filter expression for the worker deployed in GCP.
Sample output:
$ boundary host-catalogs create plugin \
-scope-id $BOUNDARY_PROJECT_ID \
-plugin-name gcp \
-attr disable_credential_rotation=true \
-attr zone=$GCP_ZONE \
-attr project_id=$GCP_PROJECT_ID \
-worker-filter '"gcp-worker" in "/tags/type"'
Host Catalog information:
Created Time: Tue, 24 Jun 2025 15:26:18 MDT
ID: hcplg_TTWMxKpPoD
Plugin ID: pl_Jm8LEmKXFt
Type: plugin
Updated Time: Tue, 24 Jun 2025 15:26:18 MDT
Version: 1
Worker Filter: "gcp-worker" in "/tags/type"
Scope:
ID: p_7WttEuXkTn
Name: GCP hosts
Parent Scope ID: o_KoLV9Z7DpF
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Attributes:
disable_credential_rotation: true
project_id: hc-26fb1119fccb4f0081b121xxxxx
zone: us-central1-a
Authorized Actions:
read
update
delete
no-op
Authorized Actions on Host Catalog's Collections:
host-sets:
create
list
hosts:
list
Copy the host catalog ID from the output (hcplg_TTWMxKpPoD
in this example) and store it in the BOUNDARY_HOST_CATALOG_ID
environment variable.
$ export BOUNDARY_HOST_CATALOG_ID=hcplg_TTWMxKpPoD
When you create a host catalog, you need to define a worker filter that matches a filter expression for your worker. In the Configure the worker section, you deployed a worker config file with the following tag:
tags {
type = ["gcp-worker"]
}
The worker filter identifies which worker has permission to view the Compute Engine VMs using the boundary-service-account
assigned to it. When you configure the host catalog, by default Boundary uses any available worker to attempt to refresh the catalog, unless you specify which worker should be used. This means you need to specify which worker has access to your GCP hosts.
An appropriate filter expression to select this worker can select its tags:
"gcp-worker" in "/tags/type"
In Terraform configuration files, the filter must be defined using escape syntax. This means the filter will look like the following:
worker_filter = "\"gcp-worker\" in \"/tags/type\""
You can also select the correct worker using other filter expressions. To learn more, refer to the Worker tags documentation.
Rename the boundary-application-default-credentials.tf.bak
file in the learn-boundary-cloud-host-catalogs/gcp/terraform/
directory to boundary-application-default-credentials.tf
.
Examine the following Terraform resources in this file:
provider "boundary" {
addr = var.boundary_addr
auth_method_login_name = var.boundary_login_name
auth_method_password = var.boundary_login_password
}
variable "boundary_addr" {
type = string
}
variable "boundary_login_name" {
type = string
}
variable "boundary_login_password" {
type = string
}
These resources configure the boundary
provider to accept variables for the cluster's address, login name, and login password.
Now, export these values as Terraform variables in your current shell session. Replace the values prefixed with YOUR_
with your actual login credentials.
$ export TF_VAR_boundary_addr="YOUR_BOUNDARY_ADDR";
export TF_VAR_boundary_login_name="YOUR_BOUNDARY_LOGIN_NAME";
export TF_VAR_boundary_login_password="YOUR_BOUNDARY_LOGIN_PASSWORD"
For example:
$ export TF_VAR_boundary_addr="https://64d19cc2-2ef4-403a-b7f7-0529fd209c8f.boundary.hashicorp.cloud";
export TF_VAR_boundary_login_name="admin";
export TF_VAR_boundary_login_password="password"
Check that you defined these variables correctly in your shell session:
$ echo TF_VAR_boundary_addr; echo $TF_VAR_boundary_login_name; echo $TF_VAR_boundary_login_password
The following resources to create a Boundary test org and scope for testing the GCP host catalog.
resource "boundary_scope" "gcp_test_org" {
name = "GCP Infrastructure"
description = "Test org for GCP resources"
scope_id = "global"
auto_create_admin_role = true
auto_create_default_role = true
}
resource "boundary_scope" "gcp_project" {
name = "GCP hosts"
description = "Test project for GCP host catalogs"
scope_id = boundary_scope.gcp_test_org.id
auto_create_admin_role = true
}
Gather plugin details
To use the Boundary GCP hosts plugin with service account impersonation, you need the following details:
- GCP zone
- GCP project ID
- Worker filter
These values are defined in the Terraform configuration in the main.tf
file, and do not need to be redefined.
In the boundary-application-default-credentials.tf
file, examine the boundary_host_catalog_plugin
resource. This creates a new plugin-type host catalog. Set the plugin_name
to gcp
, and provide the GCP base service account key ID and service account key using the secrets_json
attribute. These values should map to the environment variables defined above. Set the GCP zone
, GCP project_id
, and disable_credential_rotation
to true
using attributes_json
Define the worker_filter
attribute.
After defining the host catalog, an output resource prints the host catalog ID.
resource "boundary_host_catalog_plugin" "gcp_host_catalog" {
name = "GCP Catalog"
description = "GCP Host Catalog"
scope_id = boundary_scope.gcp_project.id
plugin_name = "gcp"
# recommended to pass in GCP secrets using a file() or using environment variables
attributes_json = jsonencode({
"zone" = var.gcp_zone,
"project_id" = var.gcp_project_id,
"disable_credential_rotation" = true
})
worker_filter = "\"gcp-worker\" in \"/tags/type\""
}
output "gcp_host_catalog_id" {
value = boundary_host_catalog_plugin.gcp_host_catalog.id
}
To learn more about defining host catalogs plugins, refer to the boundary_host_catalog_plugin in the Terraform registry.
Note
Although credentials are stored encrypted within Boundary, by default this plugin attempts to rotate credentials during a create or update call to the host catalog resource. You should disable credential rotation explicitly if you don't configure credential rotation.
Upgrade the Terraform dependencies to add the Boundary provider:
$ terraform init -upgrade
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/google...
- Finding latest version of hashicorp/local...
- Finding latest version of hashicorp/boundary...
- Using previously-installed hashicorp/google v6.40.0
- Installing hashicorp/boundary v1.2.0...
- Installed hashicorp/boundary v1.2.0 (signed by HashiCorp)
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Apply the new Terraform configuration to create the host catalog.
$ terraform apply --auto-approve
google_compute_address.vm_ip[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-2-ip]
google_service_account.boundary_service_account: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_compute_address.vm_ip[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-3-ip]
google_compute_address.worker_ip: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-worker-ip]
google_compute_address.vm_ip[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-4-ip]
google_compute_network.network: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/networks/boundary-vm-network]
google_compute_address.vm_ip[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-1-ip]
google_project_iam_member.boundary_compute_viewer: Refreshing state... [id=hc-26fb1119fccb4f0081b121xxxxx/roles/compute.viewer/serviceAccount:boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_compute_firewall.allow_ssh: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/firewalls/boundary-vm-allow-ssh]
google_compute_subnetwork.subnet: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/subnetworks/boundary-vm-subnet]
google_compute_instance.worker: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-worker]
google_compute_instance.vm[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-2-dev]
google_compute_instance.vm[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-1-dev]
google_compute_instance.vm[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-4-production]
google_compute_instance.vm[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-3-production]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# boundary_host_catalog_plugin.gcp_host_catalog will be created
+ resource "boundary_host_catalog_plugin" "gcp_host_catalog" {
+ attributes_json = jsonencode(
{
+ disable_credential_rotation = true
+ project_id = "hc-26fb1119fccb4f0081b121xxxxx"
+ zone = "us-central1-a"
}
)
+ description = "GCP Host Catalog"
+ id = (known after apply)
+ internal_force_update = (known after apply)
+ internal_hmac_used_for_secrets_config_hmac = (known after apply)
+ internal_secrets_config_hmac = (known after apply)
+ name = "GCP Catalog"
+ plugin_id = (known after apply)
+ plugin_name = "gcp"
+ scope_id = (known after apply)
+ secrets_hmac = (known after apply)
+ worker_filter = "\"gcp-worker\" in \"/tags/type\""
}
# boundary_scope.gcp_project will be created
+ resource "boundary_scope" "gcp_project" {
+ auto_create_admin_role = true
+ description = "Test project for GCP host catalogs"
+ id = (known after apply)
+ name = "GCP hosts"
+ scope_id = (known after apply)
}
# boundary_scope.gcp_test_org will be created
+ resource "boundary_scope" "gcp_test_org" {
+ auto_create_admin_role = true
+ auto_create_default_role = true
+ description = "Test org for GCP resources"
+ id = (known after apply)
+ name = "GCP Infrastructure"
+ scope_id = "global"
}
Plan: 3 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ gcp_host_catalog_id = (known after apply)
boundary_scope.gcp_test_org: Creating...
boundary_scope.gcp_test_org: Creation complete after 0s [id=o_QSseE1ZcBe]
boundary_scope.gcp_project: Creating...
boundary_scope.gcp_project: Creation complete after 0s [id=p_FVhFUNq0jV]
boundary_host_catalog_plugin.gcp_host_catalog: Creating...
boundary_host_catalog_plugin.gcp_host_catalog: Creation complete after 1s [id=hcplg_yl7d2khxl2]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Outputs:
gcp_host_catalog_id = "hcplg_yl7d2khxl2"
vm_public_ips = [
"34.56.9.47",
"34.41.140.216",
"34.61.22.18",
"34.172.1.189",
]
worker_public_ip = "34.121.82.120"
worker_ssh_command = "ssh gcpuser@34.121.82.120 -i /path/to/gcpuser/private/key"
Copy the host catalog ID from the output (hcplg_yl7d2khxl2
in this example) and
store it in the BOUNDARY_HOST_CATALOG_ID
environment variable.
$ export BOUNDARY_HOST_CATALOG_ID=hcplg_yl7d2khxl2
Create the host sets
With the dynamic host catalog created, you can now create host sets that correspond to the service-type
and application
labels added to the VMs.
Recall the three host sets you want to create:
- All hosts with a
service-type
tag ofdatabase
- All hosts with an
application
tag ofdev
- All hosts with an
application
tag ofproduction
The corresponding host set filters are:
You should also add a filter to only show running hosts:
Authenticate to the Boundary Admin UI as the admin user.
Once logged in, select the org and project that contains your dynamic host catalog.
Navigate to the Host Catalogs page. Click on the GCP Catalog.
Click the Host Sets tab. Click the New button to create a new host set.
Complete the following fields:
- Name:
database
- Filter:
labels.service-type:database
Click Add beside the filter field to add the filter.
Add another filter to only show running hosts:
- Filter:
status:RUNNING
Click Add beside the filter field to add the filter.
- Name:
Click Save.
Wait a moment, then click on the Hosts tab, which should contain the following hosts:
- boundary-1-dev
- boundary-2-dev
- boundary-3-production
- boundary-4-production
Note
It may take up to five minutes for the host catalog to sync with the cloud provider. Refresh the page if the hosts do not initially appear in the catalog.
Now follow the same steps to create two more host sets for the following host set filters:
The
dev
host set.- Name:
dev
- Filter:
labels.application:dev
- Filter:
status:RUNNING
- Name:
The
production
host set.- Name:
production
- Filter:
labels.application:production
- Filter:
status:RUNNING
- Name:
Check the hosts included in the dev
and production
host sets, and then move on to the next section.
Create the first plugin host set containing hosts tagged with a service-type
of database
, supplying the host catalog ID copied above and the needed filter using the -attr
flag. Add another filter for running VMs use status=RUNNING
.
$ boundary host-sets create plugin \
-name database \
-host-catalog-id $BOUNDARY_HOST_CATALOG_ID \
-attr filters=labels.service-type:database \
-attr filters=status=RUNNING
Sample output:
$ boundary host-sets create plugin \
-name database \
-host-catalog-id $BOUNDARY_HOST_CATALOG_ID \
-attr filters=labels.service-type:database \
-attr filters=status=RUNNING
Host Set information:
Created Time: Thu, 19 Jun 2025 14:57:44 MDT
Host Catalog ID: hcplg_DFJUQeiL4i
ID: hsplg_HweIHdTM8s
Name: database
Type: plugin
Updated Time: Thu, 19 Jun 2025 14:57:44 MDT
Version: 1
Scope:
ID: p_D5xQlbkvtL
Name: GCP hosts
Parent Scope ID: o_EYIrQH0g3H
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Attributes:
filters: [labels.service-type:database status=RUNNING]
Authorized Actions:
no-op
read
update
delete
Copy the database host set ID from the output (hsplg_sSCs67KYGD
in this example) and store it in the DATABASE_HOST_SET_ID
environment variable.
$ export DATABASE_HOST_SET_ID=hsplg_HweIHdTM8s
Wait a moment, then list all available hosts within the GCP hosts
host catalog, which should contain the newly created database
host set.
Note
It may take up to five minutes for the host catalog to sync with the cloud provider.
$ boundary hosts list -host-catalog-id $BOUNDARY_HOST_CATALOG_ID
Host information:
ID: hplg_ULhxTIg3QU
External ID: 6598389127844415597
External Name: boundary-1-dev
Version: 1
Type: plugin
Authorized Actions:
no-op
read
ID: hplg_yABgXT6KjD
External ID: 816652686823213165
External Name: boundary-4-production
Version: 1
Type: plugin
Authorized Actions:
no-op
read
ID: hplg_Bo9y5yZ0ha
External ID: 1073414352993208429
External Name: boundary-3-production
Version: 1
Type: plugin
Authorized Actions:
no-op
read
ID: hplg_Ojy49YJbKV
External ID: 7942699297193543790
External Name: boundary-2-dev
Version: 1
Type: plugin
Authorized Actions:
read
no-op
Troubleshooting
If the boundary hosts list
command returns No hosts found
, expand the accordion below to check your work.
If the host catalog is misconfigured, hosts will not be discoverable by Boundary. There are four potential issues to check:
- The host set filter is correct.
- The host catalog and host set IDs are exported as environment variables.
- The service account is assigned the
roles/compute.viewer
role. - The GCP zone is defined correctly.
Note
Depending on the type of configuration issue, you may need to wait 5 - 10 minutes for the existing host catalog or host sets to sync with the provider and refresh their values. If you do not want to wait, you can create a new host catalog and host set from scratch, or update the sync_interval attribute
for the host catalog.
First, check the environment variables defined when creating a host catalog plugin. Make sure these are the correct ones gathered when setting up the cloud hosts.
If these are incorrectly defined, you should set the environment variables again, and update the host catalog:
$ boundary host-catalogs update plugin \
-id $BOUNDARY_HOST_CATALOG_ID \
-plugin-name gcp \
-attr disable_credential_rotation=true \
-attr zone=$GCP_ZONE \
-attr project_id=env://GCP_PROJECT_ID \
-attr client_email=env://CLIENT_EMAIL \
-secret private_key_id=env://PRIVATE_KEY_ID \
-secret private_key=file://$PRIVATE_KEY_FILE_PATH
Second, check is the roles/compute.viewer
role is assigned to the boundary-service-account
service account. Boundary will not be able to view the hosts if incorrect permissions are assigned.
Review the steps for creating a service account.
After correcting the role assignment, give Boundary up to five minutes to refresh the connection to GCP, and list the available hosts again.
Now create a host set that correspond to the application
tag of dev
. Add another filter for running VMs use status=RUNNING
.
$ boundary host-sets create plugin \
-name dev \
-host-catalog-id $BOUNDARY_HOST_CATALOG_ID \
-attr filters=labels.application:dev \
-attr filters=status=RUNNING
Sample output:
$ boundary host-sets create plugin \
-name dev \
-host-catalog-id $BOUNDARY_HOST_CATALOG_ID \
-attr filters=labels.application:dev \
-attr filters=status=RUNNING
Host Set information:
Created Time: Thu, 19 Jun 2025 14:56:27 MDT
Host Catalog ID: hcplg_DFJUQeiL4i
ID: hsplg_f4HdZisPyX
Name: dev
Type: plugin
Updated Time: Thu, 19 Jun 2025 14:56:27 MDT
Version: 1
Scope:
ID: p_D5xQlbkvtL
Name: test project
Parent Scope ID: o_EYIrQH0g3H
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Attributes:
filters: [labels.application:dev status=RUNNING]
Authorized Actions:
no-op
read
update
delete
Copy the dev host set ID from the output (hsplg_f4HdZisPyX
in this example) and store it in the DEV_HOST_SET_ID
environment variable.
$ export DEV_HOST_SET_ID=hsplg_f4HdZisPyX
Lastly, create a host set that correspond to the application
tag of production
.
$ boundary host-sets create plugin \
-name production \
-host-catalog-id $BOUNDARY_HOST_CATALOG_ID \
-attr filters=labels.application:production \
-attr filters=status=RUNNING
Sample output:
$ boundary host-sets create plugin \
-name production \
-host-catalog-id $BOUNDARY_HOST_CATALOG_ID \
-attr filters=labels.application:production \
-attr filters=status=RUNNING
Host Set information:
Created Time: Thu, 19 Jun 2025 15:11:28 MDT
Host Catalog ID: hcplg_DFJUQeiL4i
ID: hsplg_5TAYdIAXSQ
Name: production
Type: plugin
Updated Time: Thu, 19 Jun 2025 15:11:28 MDT
Version: 1
Scope:
ID: p_D5xQlbkvtL
Name: test project
Parent Scope ID: o_EYIrQH0g3H
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Attributes:
filters: [labels.application:production status=RUNNING]
Authorized Actions:
no-op
read
update
delete
Copy the production host set ID from the output (hsplg_5TAYdIAXSQ
in this example) and store it in the PRODUCTION_HOST_SET_ID
environment variable.
$ export PRODUCTION_HOST_SET_ID=hsplg_5TAYdIAXSQ
Open the boundary-service-account.tf
file and uncomment the following resources:
resource "boundary_host_set_plugin" "database_host_set" {
name = "Database Host Set"
description = "GCP database host set"
host_catalog_id = boundary_host_catalog_plugin.gcp_host_catalog.id
attributes_json = jsonencode({
"filters" = ["labels.service-type:database"]
})
}
output "database_host_set_id" {
value = boundary_host_set_plugin.database_host_set.id
}
The boundary_host_set_plugin
resource creates a new plugin-type host set. The scope_id
is the same scope that contains the host catalog created earlier, which matches the host_catalog_id
.
The members of the host set are defined using the filters
attribute. To include all host VMs labeled as service-type:database
, the host set filter is set to labels.service-type:database
.
An output is included for the host set ID.
To learn more about defining host set plugins, refer to the boundary_host_set_plugin in the Terraform registry.
Now, uncommcent the two additional host sets for the following label filters and their host set ID outputs:
resource "boundary_host_set_plugin" "dev_host_set" {
name = "Dev Host Set"
description = "GCP dev host set"
host_catalog_id = boundary_host_catalog_plugin.gcp_host_catalog.id
attributes_json = jsonencode({
"filters" = ["labels.application:dev"]
})
}
output "dev_host_set_id" {
value = boundary_host_set_plugin.dev_host_set.id
}
resource "boundary_host_set_plugin" "production_host_set" {
name = "Production Host Set"
description = "GCP Production host set"
host_catalog_id = boundary_host_catalog_plugin.gcp_host_catalog.id
attributes_json = jsonencode({
"filters" = ["labels.application:production"]
})
}
output "production_host_set_id" {
value = boundary_host_set_plugin.production_host_set.id
}
Apply the new Terraform configuration to create the host sets.
$ terraform apply --auto-approve
google_service_account.boundary_service_account: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_compute_address.vm_ip[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-4-ip]
google_compute_address.vm_ip[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-3-ip]
google_compute_address.vm_ip[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-1-ip]
google_compute_network.network: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/networks/boundary-vm-network]
google_compute_address.vm_ip[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-2-ip]
boundary_scope.gcp_test_org: Refreshing state... [id=o_Hf5FFlt4u4]
boundary_scope.gcp_project: Refreshing state... [id=p_oeF6xHS6Qt]
google_project_iam_member.boundary_compute_viewer: Refreshing state... [id=hc-26fb1119fccb4f0081b121xxxxx/roles/compute.viewer/serviceAccount:boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_service_account_key.boundary_service_account_key: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com/keys/8792ecca68798d7c2dbd517ff3e0e16bd2e62319]
google_compute_firewall.allow_ssh: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/firewalls/boundary-vm-allow-ssh]
google_compute_subnetwork.subnet: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/subnetworks/boundary-vm-subnet]
boundary_host_catalog_plugin.gcp_host_catalog: Refreshing state... [id=hcplg_hm99CHB7wD]
google_compute_instance.vm[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-4-production]
google_compute_instance.vm[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-1-dev]
google_compute_instance.vm[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-3-production]
google_compute_instance.vm[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-2-dev]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
~ update in-place
Terraform will perform the following actions:
# boundary_host_catalog_plugin.gcp_host_catalog will be updated in-place
~ resource "boundary_host_catalog_plugin" "gcp_host_catalog" {
id = "hcplg_hm99CHB7wD"
~ internal_force_update = "5467475300603179901" -> (known after apply)
name = "GCP Catalog"
# (10 unchanged attributes hidden)
}
# boundary_host_set_plugin.database_host_set will be created
+ resource "boundary_host_set_plugin" "database_host_set" {
+ attributes_json = jsonencode(
{
+ filters = [
+ "labels.service-type:database",
]
}
)
+ description = "GCP database host set"
+ host_catalog_id = "hcplg_hm99CHB7wD"
+ id = (known after apply)
+ name = "Database Host Set"
+ type = "plugin"
}
# boundary_host_set_plugin.dev_host_set will be created
+ resource "boundary_host_set_plugin" "dev_host_set" {
+ attributes_json = jsonencode(
{
+ filters = [
+ "labels.application:dev",
]
}
)
+ description = "GCP dev host set"
+ host_catalog_id = "hcplg_hm99CHB7wD"
+ id = (known after apply)
+ name = "Dev Host Set"
+ type = "plugin"
}
# boundary_host_set_plugin.production_host_set will be created
+ resource "boundary_host_set_plugin" "production_host_set" {
+ attributes_json = jsonencode(
{
+ filters = [
+ "labels.application:production",
]
}
)
+ description = "GCP Production host set"
+ host_catalog_id = "hcplg_hm99CHB7wD"
+ id = (known after apply)
+ name = "Production Host Set"
+ type = "plugin"
}
Plan: 3 to add, 1 to change, 0 to destroy.
Changes to Outputs:
+ database_host_set_id = (known after apply)
+ dev_host_set_id = (known after apply)
+ production_host_set_id = (known after apply)
boundary_host_catalog_plugin.gcp_host_catalog: Modifying... [id=hcplg_hm99CHB7wD]
boundary_host_catalog_plugin.gcp_host_catalog: Modifications complete after 0s [id=hcplg_hm99CHB7wD]
boundary_host_set_plugin.database_host_set: Creating...
boundary_host_set_plugin.dev_host_set: Creating...
boundary_host_set_plugin.production_host_set: Creating...
boundary_host_set_plugin.production_host_set: Creation complete after 0s [id=hsplg_s5LZAC1Uo5]
boundary_host_set_plugin.dev_host_set: Creation complete after 0s [id=hsplg_FZCz6LEfsU]
boundary_host_set_plugin.database_host_set: Creation complete after 0s [id=hsplg_aw6Qiz5Dlk]
Apply complete! Resources: 3 added, 1 changed, 0 destroyed.
Outputs:
database_host_set_id = "hsplg_aw6Qiz5Dlk"
dev_host_set_id = "hsplg_FZCz6LEfsU"
gcp_host_catalog_id = "hcplg_hm99CHB7wD"
production_host_set_id = "hsplg_s5LZAC1Uo5"
vm_public_ips = [
"35.223.89.185",
"35.226.254.30",
"35.202.149.44",
"34.172.0.177",
]
Export the host catalog ID and the three host set IDs as environment variables:
$ export HOST_CATALOG_ID=hcplg_hm99CHB7wD;
export DATABASE_HOST_SET_ID=hsplg_aw6Qiz5Dlk;
export DEV_HOST_SET_ID=hsplg_FZCz6LEfsU;
export PRODUCTION_HOST_SET_ID=hsplg_s5LZAC1Uo5
Authenticate to the Boundary Admin UI as the admin user.
Once logged in, select the org and project that contains your dynamic host catalog.
Navigate to the Host Catalogs page. Click on the GCP Catalog.
Click the Host Sets tab. Click the New button to create a new host set.
Complete the following fields:
- Name:
database
- Filter:
labels.service-type:database
Click Add beside the filter field to add the filter.
Add another filter to only show running hosts:
- Filter:
status:RUNNING
Click Add beside the filter field to add the filter.
- Name:
Click Save.
Wait a moment, then click on the Hosts tab, which should contain the following hosts:
- boundary-1-dev
- boundary-2-dev
- boundary-3-production
- boundary-4-production
Note
It may take up to five minutes for the host catalog to sync with the cloud provider. Refresh the page if the hosts do not initially appear in the catalog.
Now follow the same steps to create two more host sets for the following host set filters:
The
dev
host set.- Name:
dev
- Filter:
labels.application:dev
- Filter:
status:RUNNING
- Name:
The
production
host set.- Name:
production
- Filter:
labels.application:production
- Filter:
status:RUNNING
- Name:
Check the hosts included in the dev
and production
host sets, and then move on to the next section.
Create the first plugin host set containing hosts tagged with a service-type
of database
, supplying the host catalog ID copied above and the needed filter using the -attr
flag. Add another filter for running VMs use status=RUNNING
.
$ boundary host-sets create plugin \
-name database \
-host-catalog-id $BOUNDARY_HOST_CATALOG_ID \
-attr filters=labels.service-type:database \
-attr filters=status=RUNNING
Sample output:
$ boundary host-sets create plugin \
-name database \
-host-catalog-id $BOUNDARY_HOST_CATALOG_ID \
-attr filters=labels.service-type:database \
-attr filters=status=RUNNING
Host Set information:
Created Time: Thu, 19 Jun 2025 14:57:44 MDT
Host Catalog ID: hcplg_DFJUQeiL4i
ID: hsplg_HweIHdTM8s
Name: database
Type: plugin
Updated Time: Thu, 19 Jun 2025 14:57:44 MDT
Version: 1
Scope:
ID: p_D5xQlbkvtL
Name: GCP hosts
Parent Scope ID: o_EYIrQH0g3H
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Attributes:
filters: [labels.service-type:database status=RUNNING]
Authorized Actions:
no-op
read
update
delete
Copy the database host set ID from the output (hsplg_sSCs67KYGD
in this example) and store it in the DATABASE_HOST_SET_ID
environment variable.
$ export DATABASE_HOST_SET_ID=hsplg_HweIHdTM8s
Wait a moment, then list all available hosts within the GCP hosts
host catalog, which should contain the newly created database
host set.
Note
It may take up to five minutes for the host catalog to sync with the cloud provider.
$ boundary hosts list -host-catalog-id $BOUNDARY_HOST_CATALOG_ID
Host information:
ID: hplg_ULhxTIg3QU
External ID: 6598389127844415597
External Name: boundary-1-dev
Version: 1
Type: plugin
Authorized Actions:
no-op
read
ID: hplg_yABgXT6KjD
External ID: 816652686823213165
External Name: boundary-4-production
Version: 1
Type: plugin
Authorized Actions:
no-op
read
ID: hplg_Bo9y5yZ0ha
External ID: 1073414352993208429
External Name: boundary-3-production
Version: 1
Type: plugin
Authorized Actions:
no-op
read
ID: hplg_Ojy49YJbKV
External ID: 7942699297193543790
External Name: boundary-2-dev
Version: 1
Type: plugin
Authorized Actions:
read
no-op
Troubleshooting
If the boundary hosts list
command returns No hosts found
, expand the accordion below to check your work.
If the host catalog is misconfigured, hosts will not be discoverable by Boundary. There are four potential issues to check:
- The host set filter is correct.
- The host catalog and host set IDs are exported as environment variables.
- The target service account is assigned the
roles/compute.viewer
role. - The base service account is assigned the
roles/iam.serviceAccountTokenCreator
role. - The GCP zone is defined correctly.
Note
Depending on the type of configuration issue, you may need to wait 5 - 10 minutes for the existing host catalog or host sets to sync with the provider and refresh their values. If you do not want to wait, you can create a new host catalog and host set from scratch, or update the sync_interval attribute
for the host catalog.
First, check the environment variables defined when creating a host catalog plugin. Make sure these are the correct ones gathered when setting up the cloud hosts.
If these are incorrectly defined, you should set the environment variables again, and update the host catalog:
$ boundary host-catalogs update plugin \
-id $BOUNDARY_HOST_CATALOG_ID \
-plugin-name gcp \
-attr disable_credential_rotation=true \
-attr zone=$GCP_ZONE \
-attr project_id=env://GCP_PROJECT_ID \
-attr client_email=env://CLIENT_EMAIL \
-attr target_service_account_id=env://TARGET_EMAIL \
-secret private_key_id=env://PRIVATE_KEY_ID \
-secret private_key=file://$PRIVATE_KEY_FILE_PATH
Check that the roles/compute.viewer
role is assigned to the boundary-target-sa
service account. Boundary will not be able to view the hosts if incorrect permissions are assigned.
Check that the roles/iam.serviceAccountTokenCreator
role is assigned to the boundary-base-sa
service account. Boundary will not be able to impersonate the target service account without it.
Review the steps for creating a service account if needed.
After correcting the role assignment, give Boundary up to five minutes to refresh the connection to GCP, and list the available hosts again.
Now create a host set that correspond to the application
tag of dev
. Add another filter for running VMs use status=RUNNING
.
$ boundary host-sets create plugin \
-name dev \
-host-catalog-id $BOUNDARY_HOST_CATALOG_ID \
-attr filters=labels.application:dev \
-attr filters=status=RUNNING
Sample output:
$ boundary host-sets create plugin \
-name dev \
-host-catalog-id $BOUNDARY_HOST_CATALOG_ID \
-attr filters=labels.application:dev \
-attr filters=status=RUNNING
Host Set information:
Created Time: Thu, 19 Jun 2025 14:56:27 MDT
Host Catalog ID: hcplg_DFJUQeiL4i
ID: hsplg_f4HdZisPyX
Name: dev
Type: plugin
Updated Time: Thu, 19 Jun 2025 14:56:27 MDT
Version: 1
Scope:
ID: p_D5xQlbkvtL
Name: test project
Parent Scope ID: o_EYIrQH0g3H
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Attributes:
filters: [labels.application:dev status=RUNNING]
Authorized Actions:
no-op
read
update
delete
Copy the dev host set ID from the output (hsplg_f4HdZisPyX
in this example) and store it in the DEV_HOST_SET_ID
environment variable.
$ export DEV_HOST_SET_ID=hsplg_f4HdZisPyX
Lastly, create a host set that correspond to the application
tag of production
.
$ boundary host-sets create plugin \
-name production \
-host-catalog-id $BOUNDARY_HOST_CATALOG_ID \
-attr filters=labels.application:production \
-attr filters=status=RUNNING
Sample output:
$ boundary host-sets create plugin \
-name production \
-host-catalog-id $BOUNDARY_HOST_CATALOG_ID \
-attr filters=labels.application:production \
-attr filters=status=RUNNING
Host Set information:
Created Time: Thu, 19 Jun 2025 15:11:28 MDT
Host Catalog ID: hcplg_DFJUQeiL4i
ID: hsplg_5TAYdIAXSQ
Name: production
Type: plugin
Updated Time: Thu, 19 Jun 2025 15:11:28 MDT
Version: 1
Scope:
ID: p_D5xQlbkvtL
Name: test project
Parent Scope ID: o_EYIrQH0g3H
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Attributes:
filters: [labels.application:production status=RUNNING]
Authorized Actions:
no-op
read
update
delete
Copy the production host set ID from the output (hsplg_5TAYdIAXSQ
in this example) and store it in the PRODUCTION_HOST_SET_ID
environment variable.
$ export PRODUCTION_HOST_SET_ID=hsplg_5TAYdIAXSQ
Open the boundary-service-account-impersonation.tf
file and uncomment the following resources:
resource "boundary_host_set_plugin" "database_host_set" {
name = "Database Host Set"
description = "GCP database host set"
host_catalog_id = boundary_host_catalog_plugin.gcp_host_catalog.id
attributes_json = jsonencode({
"filters" = ["labels.service-type:database"]
})
}
output "database_host_set_id" {
value = boundary_host_set_plugin.database_host_set.id
}
The boundary_host_set_plugin
resource creates a new plugin-type host set. The scope_id
is the same scope that contains the host catalog created earlier, which matches the host_catalog_id
.
The members of the host set are defined using the filters
attribute. To include all host VMs labeled as service-type:database
, the host set filter is set to labels.service-type:database
.
An output is included for the host set ID.
To learn more about defining host set plugins, refer to the boundary_host_set_plugin in the Terraform registry.
Now, uncommcent the two additional host sets for the following label filters and their host set ID outputs:
resource "boundary_host_set_plugin" "dev_host_set" {
name = "Dev Host Set"
description = "GCP dev host set"
host_catalog_id = boundary_host_catalog_plugin.gcp_host_catalog.id
attributes_json = jsonencode({
"filters" = ["labels.application:dev"]
})
}
output "dev_host_set_id" {
value = boundary_host_set_plugin.dev_host_set.id
}
resource "boundary_host_set_plugin" "production_host_set" {
name = "Production Host Set"
description = "GCP Production host set"
host_catalog_id = boundary_host_catalog_plugin.gcp_host_catalog.id
attributes_json = jsonencode({
"filters" = ["labels.application:production"]
})
}
output "production_host_set_id" {
value = boundary_host_set_plugin.production_host_set.id
}
Apply the new Terraform configuration to create the host sets.
$ terraform apply --auto-approve
google_service_account.boundary_base_service_account: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-base-sa2@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_compute_network.network: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/networks/boundary-vm-network]
google_compute_address.vm_ip[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-4-ip]
google_compute_address.vm_ip[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-1-ip]
google_service_account.boundary_target_service_account: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-target-sa2@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_compute_address.vm_ip[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-3-ip]
google_compute_address.vm_ip[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-2-ip]
google_project_iam_member.boundary_SA_token_creator: Refreshing state... [id=hc-26fb1119fccb4f0081b121xxxxx/roles/iam.serviceAccountTokenCreator/serviceAccount:boundary-base-sa2@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_service_account_key.boundary_base_service_account_key: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-base-sa2@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com/keys/0a968341bbd0f25888dbd5fba459f13530b6ac1e]
google_project_iam_member.boundary_compute_viewer: Refreshing state... [id=hc-26fb1119fccb4f0081b121xxxxx/roles/compute.viewer/serviceAccount:boundary-target-sa2@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_compute_subnetwork.subnet: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/subnetworks/boundary-vm-subnet]
google_compute_firewall.allow_ssh: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/firewalls/boundary-vm-allow-ssh]
boundary_scope.gcp_test_org: Refreshing state... [id=o_ULxRiWUluK]
google_compute_instance.vm[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-2-dev]
google_compute_instance.vm[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-1-dev]
google_compute_instance.vm[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-4-production]
google_compute_instance.vm[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-3-production]
boundary_scope.gcp_project: Refreshing state... [id=p_wez6rEVyln]
boundary_host_catalog_plugin.gcp_host_catalog: Refreshing state... [id=hcplg_sIWdcZElCu]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
~ update in-place
Terraform will perform the following actions:
# boundary_host_catalog_plugin.gcp_host_catalog will be updated in-place
~ resource "boundary_host_catalog_plugin" "gcp_host_catalog" {
id = "hcplg_sIWdcZElCu"
~ internal_force_update = "9071317700538914654" -> (known after apply)
name = "GCP Catalog"
# (10 unchanged attributes hidden)
}
# boundary_host_set_plugin.database_host_set will be created
+ resource "boundary_host_set_plugin" "database_host_set" {
+ attributes_json = jsonencode(
{
+ filters = [
+ "labels.service-type:database",
]
}
)
+ description = "GCP database host set"
+ host_catalog_id = "hcplg_sIWdcZElCu"
+ id = (known after apply)
+ name = "Database Host Set"
+ type = "plugin"
}
# boundary_host_set_plugin.dev_host_set will be created
+ resource "boundary_host_set_plugin" "dev_host_set" {
+ attributes_json = jsonencode(
{
+ filters = [
+ "labels.application:dev",
]
}
)
+ description = "GCP dev host set"
+ host_catalog_id = "hcplg_sIWdcZElCu"
+ id = (known after apply)
+ name = "Dev Host Set"
+ type = "plugin"
}
# boundary_host_set_plugin.production_host_set will be created
+ resource "boundary_host_set_plugin" "production_host_set" {
+ attributes_json = jsonencode(
{
+ filters = [
+ "labels.application:production",
]
}
)
+ description = "GCP Production host set"
+ host_catalog_id = "hcplg_sIWdcZElCu"
+ id = (known after apply)
+ name = "Production Host Set"
+ type = "plugin"
}
Plan: 3 to add, 1 to change, 0 to destroy.
Changes to Outputs:
+ database_host_set_id = (known after apply)
+ dev_host_set_id = (known after apply)
+ production_host_set_id = (known after apply)
boundary_host_catalog_plugin.gcp_host_catalog: Modifying... [id=hcplg_sIWdcZElCu]
boundary_host_catalog_plugin.gcp_host_catalog: Modifications complete after 0s [id=hcplg_sIWdcZElCu]
boundary_host_set_plugin.production_host_set: Creating...
boundary_host_set_plugin.dev_host_set: Creating...
boundary_host_set_plugin.database_host_set: Creating...
boundary_host_set_plugin.production_host_set: Creation complete after 0s [id=hsplg_fLV9JZXssE]
boundary_host_set_plugin.dev_host_set: Creation complete after 0s [id=hsplg_Y8OEHtc6sk]
boundary_host_set_plugin.database_host_set: Creation complete after 0s [id=hsplg_qMPwed9IU0]
Apply complete! Resources: 3 added, 1 changed, 0 destroyed.
Outputs:
database_host_set_id = "hsplg_qMPwed9IU0"
dev_host_set_id = "hsplg_Y8OEHtc6sk"
gcp_host_catalog_id = "hcplg_sIWdcZElCu"
production_host_set_id = "hsplg_fLV9JZXssE"
vm_public_ips = [
"35.238.85.188",
"34.10.51.164",
"35.232.152.114",
"34.9.95.9",
]
Export the host catalog ID and the three host set IDs as environment variables:
$ export HOST_CATALOG_ID=hcplg_sIWdcZElCu;
export DATABASE_HOST_SET_ID=hsplg_qMPwed9IU0;
export DEV_HOST_SET_ID=hsplg_Y8OEHtc6sk;
export PRODUCTION_HOST_SET_ID=hsplg_fLV9JZXssE
Authenticate to the Boundary Admin UI as the admin user.
Once logged in, select the org and project that contains your dynamic host catalog.
Navigate to the Host Catalogs page. Click on the GCP Catalog.
Click the Host Sets tab. Click the New button to create a new host set.
Complete the following fields:
- Name:
database
- Filter:
labels.service-type:database
Click Add beside the filter field to add the filter.
Add another filter to only show running hosts:
- Filter:
status:RUNNING
Click Add beside the filter field to add the filter.
- Name:
Click Save.
Wait a moment, then click on the Hosts tab, which should contain the following hosts:
- boundary-1-dev
- boundary-2-dev
- boundary-3-production
- boundary-4-production
Note
It may take up to five minutes for the host catalog to sync with the cloud provider. Refresh the page if the hosts do not initially appear in the catalog.
Now follow the same steps to create two more host sets for the following host set filters:
The
dev
host set.- Name:
dev
- Filter:
labels.application:dev
- Filter:
status:RUNNING
- Name:
The
production
host set.- Name:
production
- Filter:
labels.application:production
- Filter:
status:RUNNING
- Name:
Check the hosts included in the dev
and production
host sets, and then move on to the next section.
Create the first plugin host set containing hosts tagged with a service-type
of database
, supplying the host catalog ID copied above and the needed filter using the -attr
flag. Add another filter for running VMs use status=RUNNING
.
$ boundary host-sets create plugin \
-name database \
-host-catalog-id $BOUNDARY_HOST_CATALOG_ID \
-attr filters=labels.service-type:database \
-attr filters=status=RUNNING
Sample output:
$ boundary host-sets create plugin \
-name database \
-host-catalog-id $BOUNDARY_HOST_CATALOG_ID \
-attr filters=labels.service-type:database \
-attr filters=status=RUNNING
Host Set information:
Created Time: Thu, 19 Jun 2025 14:57:44 MDT
Host Catalog ID: hcplg_DFJUQeiL4i
ID: hsplg_HweIHdTM8s
Name: database
Type: plugin
Updated Time: Thu, 19 Jun 2025 14:57:44 MDT
Version: 1
Scope:
ID: p_D5xQlbkvtL
Name: GCP hosts
Parent Scope ID: o_EYIrQH0g3H
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Attributes:
filters: [labels.service-type:database status=RUNNING]
Authorized Actions:
no-op
read
update
delete
Copy the database host set ID from the output (hsplg_sSCs67KYGD
in this example) and store it in the DATABASE_HOST_SET_ID
environment variable.
$ export DATABASE_HOST_SET_ID=hsplg_HweIHdTM8s
Wait a moment, then list all available hosts within the GCP hosts
host catalog, which should contain the newly created database
host set.
Note
It may take up to five minutes for the host catalog to sync with the cloud provider.
$ boundary hosts list -host-catalog-id $BOUNDARY_HOST_CATALOG_ID
Host information:
ID: hplg_ULhxTIg3QU
External ID: 6598389127844415597
External Name: boundary-1-dev
Version: 1
Type: plugin
Authorized Actions:
no-op
read
ID: hplg_yABgXT6KjD
External ID: 816652686823213165
External Name: boundary-4-production
Version: 1
Type: plugin
Authorized Actions:
no-op
read
ID: hplg_Bo9y5yZ0ha
External ID: 1073414352993208429
External Name: boundary-3-production
Version: 1
Type: plugin
Authorized Actions:
no-op
read
ID: hplg_Ojy49YJbKV
External ID: 7942699297193543790
External Name: boundary-2-dev
Version: 1
Type: plugin
Authorized Actions:
read
no-op
Troubleshooting
If the boundary hosts list
command returns No hosts found
, expand the accordion below to check your work.
If the host catalog is misconfigured, hosts will not be discoverable by Boundary. There are a few things to check:
- The host catalog worker filter is correct.
- The host set filter is correct.
- The host catalog and host set IDs are exported as environment variables.
- The service account is assigned the
roles/compute.viewer
role. - The service account is assigned to the worker VM.
- The GCP zone is defined correctly.
Note
Depending on the type of configuration issue, you may need to wait 5 - 10 minutes for the existing host catalog or host sets to sync with the provider and refresh their values. If you do not want to wait, you can create a new host catalog and host set from scratch, or update the sync_interval attribute
for the host catalog.
First, check the environment variables defined when creating a host catalog plugin. Make sure these are the correct ones gathered when setting up the cloud hosts.
Second, check the host catalog worker filter. It should be set as "gcp-worker" in "/tags/type"
.
If these are incorrectly defined, you should set the environment variables again, and update the host catalog:
$ boundary host-catalogs create plugin \
-scope-id $BOUNDARY_PROJECT_ID \
-plugin-name gcp \
-attr disable_credential_rotation=true \
-attr zone=$GCP_ZONE \
-attr project_id=$GCP_PROJECT_ID \
-worker-filter '"gcp-worker" in "/tags/type"
Check that the roles/compute.viewer
role is assigned to the boundary-service-account
service account. Boundary will not be able to view the hosts if incorrect permissions are assigned.
Check that the service account has been assigned to the worker VM:
$ gcloud compute instances list --format="table(name,zone,status,serviceAccounts[].email:label=SERVICE_ACCOUNT)"
NAME ZONE STATUS SERVICE_ACCOUNT
boundary-1-dev us-central1-a RUNNING
boundary-2-dev us-central1-a RUNNING
boundary-3-production us-central1-a RUNNING
boundary-4-production us-central1-a RUNNING
boundary-worker us-central1-a RUNNING ['boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com']
If the service account or worker is misconfigured, review the steps for creating a service account.
After correcting the role assignment, give Boundary up to five minutes to refresh the connection to GCP, and list the available hosts again.
Now create a host set that correspond to the application
tag of dev
. Add another filter for running VMs use status=RUNNING
.
$ boundary host-sets create plugin \
-name dev \
-host-catalog-id $BOUNDARY_HOST_CATALOG_ID \
-attr filters=labels.application:dev \
-attr filters=status=RUNNING
Sample output:
$ boundary host-sets create plugin \
-name dev \
-host-catalog-id $BOUNDARY_HOST_CATALOG_ID \
-attr filters=labels.application:dev \
-attr filters=status=RUNNING
Host Set information:
Created Time: Thu, 19 Jun 2025 14:56:27 MDT
Host Catalog ID: hcplg_DFJUQeiL4i
ID: hsplg_f4HdZisPyX
Name: dev
Type: plugin
Updated Time: Thu, 19 Jun 2025 14:56:27 MDT
Version: 1
Scope:
ID: p_D5xQlbkvtL
Name: test project
Parent Scope ID: o_EYIrQH0g3H
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Attributes:
filters: [labels.application:dev status=RUNNING]
Authorized Actions:
no-op
read
update
delete
Copy the dev host set ID from the output (hsplg_f4HdZisPyX
in this example) and store it in the DEV_HOST_SET_ID
environment variable.
$ export DEV_HOST_SET_ID=hsplg_f4HdZisPyX
Lastly, create a host set that correspond to the application
tag of production
.
$ boundary host-sets create plugin \
-name production \
-host-catalog-id $BOUNDARY_HOST_CATALOG_ID \
-attr filters=labels.application:production \
-attr filters=status=RUNNING
Sample output:
$ boundary host-sets create plugin \
-name production \
-host-catalog-id $BOUNDARY_HOST_CATALOG_ID \
-attr filters=labels.application:production \
-attr filters=status=RUNNING
Host Set information:
Created Time: Thu, 19 Jun 2025 15:11:28 MDT
Host Catalog ID: hcplg_DFJUQeiL4i
ID: hsplg_5TAYdIAXSQ
Name: production
Type: plugin
Updated Time: Thu, 19 Jun 2025 15:11:28 MDT
Version: 1
Scope:
ID: p_D5xQlbkvtL
Name: test project
Parent Scope ID: o_EYIrQH0g3H
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Attributes:
filters: [labels.application:production status=RUNNING]
Authorized Actions:
no-op
read
update
delete
Copy the production host set ID from the output (hsplg_5TAYdIAXSQ
in this example) and store it in the PRODUCTION_HOST_SET_ID
environment variable.
$ export PRODUCTION_HOST_SET_ID=hsplg_5TAYdIAXSQ
Open the boundary-application-default-credentials.tf
file and uncomment the following resources:
resource "boundary_host_set_plugin" "database_host_set" {
name = "Database Host Set"
description = "GCP database host set"
host_catalog_id = boundary_host_catalog_plugin.gcp_host_catalog.id
attributes_json = jsonencode({
"filters" = ["labels.service-type:database"]
})
}
output "database_host_set_id" {
value = boundary_host_set_plugin.database_host_set.id
}
The boundary_host_set_plugin
resource creates a new plugin-type host set. The scope_id
is the same scope that contains the host catalog created earlier, which matches the host_catalog_id
.
The members of the host set are defined using the filters
attribute. To include all host VMs labeled as service-type:database
, the host set filter is set to labels.service-type:database
.
An output is included for the host set ID.
To learn more about defining host set plugins, refer to the boundary_host_set_plugin in the Terraform registry.
Now, uncommcent the two additional host sets for the following label filters and their host set ID outputs:
resource "boundary_host_set_plugin" "dev_host_set" {
name = "Dev Host Set"
description = "GCP dev host set"
host_catalog_id = boundary_host_catalog_plugin.gcp_host_catalog.id
attributes_json = jsonencode({
"filters" = ["labels.application:dev"]
})
}
output "dev_host_set_id" {
value = boundary_host_set_plugin.dev_host_set.id
}
resource "boundary_host_set_plugin" "production_host_set" {
name = "Production Host Set"
description = "GCP Production host set"
host_catalog_id = boundary_host_catalog_plugin.gcp_host_catalog.id
attributes_json = jsonencode({
"filters" = ["labels.application:production"]
})
}
output "production_host_set_id" {
value = boundary_host_set_plugin.production_host_set.id
}
Apply the new Terraform configuration to create the host sets.
$ terraform apply --auto-approve
google_compute_address.vm_ip[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-3-ip]
google_compute_address.vm_ip[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-1-ip]
google_service_account.boundary_service_account: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_compute_address.vm_ip[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-4-ip]
google_compute_address.worker_ip: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-worker-ip]
google_compute_address.vm_ip[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-2-ip]
google_compute_network.network: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/networks/boundary-vm-network]
google_project_iam_member.boundary_compute_viewer: Refreshing state... [id=hc-26fb1119fccb4f0081b121xxxxx/roles/compute.viewer/serviceAccount:boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_compute_firewall.allow_ssh: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/firewalls/boundary-vm-allow-ssh]
google_compute_subnetwork.subnet: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/subnetworks/boundary-vm-subnet]
boundary_scope.gcp_test_org: Refreshing state... [id=o_QSseE1ZcBe]
google_compute_instance.worker: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-worker]
google_compute_instance.vm[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-1-dev]
google_compute_instance.vm[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-4-production]
google_compute_instance.vm[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-3-production]
google_compute_instance.vm[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-2-dev]
boundary_scope.gcp_project: Refreshing state... [id=p_FVhFUNq0jV]
boundary_host_catalog_plugin.gcp_host_catalog: Refreshing state... [id=hcplg_yl7d2khxl2]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
~ update in-place
Terraform will perform the following actions:
# boundary_host_catalog_plugin.gcp_host_catalog will be updated in-place
~ resource "boundary_host_catalog_plugin" "gcp_host_catalog" {
id = "hcplg_yl7d2khxl2"
~ internal_force_update = "3758449168327593267" -> (known after apply)
name = "GCP Catalog"
# (7 unchanged attributes hidden)
}
# boundary_host_set_plugin.database_host_set will be created
+ resource "boundary_host_set_plugin" "database_host_set" {
+ attributes_json = jsonencode(
{
+ filters = [
+ "labels.service-type:database",
]
}
)
+ description = "GCP database host set"
+ host_catalog_id = "hcplg_yl7d2khxl2"
+ id = (known after apply)
+ name = "Database Host Set"
+ type = "plugin"
}
# boundary_host_set_plugin.dev_host_set will be created
+ resource "boundary_host_set_plugin" "dev_host_set" {
+ attributes_json = jsonencode(
{
+ filters = [
+ "labels.application:dev",
]
}
)
+ description = "GCP dev host set"
+ host_catalog_id = "hcplg_yl7d2khxl2"
+ id = (known after apply)
+ name = "Dev Host Set"
+ type = "plugin"
}
# boundary_host_set_plugin.production_host_set will be created
+ resource "boundary_host_set_plugin" "production_host_set" {
+ attributes_json = jsonencode(
{
+ filters = [
+ "labels.application:production",
]
}
)
+ description = "GCP Production host set"
+ host_catalog_id = "hcplg_yl7d2khxl2"
+ id = (known after apply)
+ name = "Production Host Set"
+ type = "plugin"
}
Plan: 3 to add, 1 to change, 0 to destroy.
Changes to Outputs:
+ database_host_set_id = (known after apply)
+ dev_host_set_id = (known after apply)
+ production_host_set_id = (known after apply)
boundary_host_catalog_plugin.gcp_host_catalog: Modifying... [id=hcplg_yl7d2khxl2]
boundary_host_catalog_plugin.gcp_host_catalog: Modifications complete after 0s [id=hcplg_yl7d2khxl2]
boundary_host_set_plugin.dev_host_set: Creating...
boundary_host_set_plugin.production_host_set: Creating...
boundary_host_set_plugin.database_host_set: Creating...
boundary_host_set_plugin.dev_host_set: Creation complete after 1s [id=hsplg_5aY2c0cPF3]
boundary_host_set_plugin.database_host_set: Creation complete after 1s [id=hsplg_2ZHlOcxSxm]
boundary_host_set_plugin.production_host_set: Creation complete after 1s [id=hsplg_uwxxpczjMi]
Apply complete! Resources: 3 added, 1 changed, 0 destroyed.
Outputs:
database_host_set_id = "hsplg_2ZHlOcxSxm"
dev_host_set_id = "hsplg_5aY2c0cPF3"
gcp_host_catalog_id = "hcplg_yl7d2khxl2"
production_host_set_id = "hsplg_uwxxpczjMi"
vm_public_ips = [
"34.56.9.47",
"34.41.140.216",
"34.61.22.18",
"34.172.1.189",
]
worker_public_ip = "34.121.82.120"
worker_ssh_command = "ssh gcpuser@34.121.82.120 -i /path/to/gcpuser/private/key"
Export the host catalog ID and the three host set IDs as environment variables:
$ export HOST_CATALOG_ID=hcplg_yl7d2khxl2;
export DATABASE_HOST_SET_ID=hsplg_2ZHlOcxSxm;
export DEV_HOST_SET_ID=hsplg_5aY2c0cPF3;
export PRODUCTION_HOST_SET_ID=hsplg_uwxxpczjMi
Verify catalog membership
With the database
, dev
, and production
host sets defined within the GCP host catalog, the next step is to verify that the four instances listed as members of the catalog are dynamically included in the correct host sets.
Host membership can be verified by reading the host set details and verifying its membership IDs.
Check the database host set
First, verify that the database
host set contains all four members of the GCP host catalog.
Navigate to the Host Catalogs page and click on the GCP Catalog.
Click on the Host Sets tab.
Click on the Database Host Set host set, then click on the Hosts tab.
Verify that the
database
host set contains the following hosts:
- boundary-1-dev
- boundary-2-dev
- boundary-3-production
- boundary-4-production
If any of these hosts are missing, expand the troubleshooting accordion to diagnose what could be wrong.
If the host catalog is misconfigured, hosts will not be discoverable by Boundary.
At this point in the tutorial hosts are contained within the host catalog, but not appearing in one or more host sets. This means that the host set itself is likely misconfigured.
Navigate to the database
host set Details page. Check the Filter
section, and verify it matches the correctly defined filter:
labels.service-type:database
If the tag is incorrectly assigned, you should update the affected host set to fix the filter by clicking the Edit button.
After you update the filter, click Save. Boundary will automatically refresh the host set.
Note
Depending on the type of configuration issue, you may need to wait 5 - 10 minutes for the existing host catalog or host sets to sync with the provider and refresh their values. If you do not want to wait, you can create a new host catalog and host set from scratch, or update the sync_interval attribute
for the host catalog.
Check that the updated filter is working by navigating back to the Hosts tab.
If the dev
or production
host sets have incorrect filters, follow the same procedure to update their filters too.
Check the dev host set
Check the dev
host set members.
Navigate to the Host Catalogs page and click on the GCP Catalog.
Click on the Host Sets tab.
Click on the Dev Host Set host set, then click on the Hosts tab.
Verify the host set contains the following hosts:
- boundary-1-dev
- boundary-2-dev
If any of these hosts are missing, expand the troubleshooting accordion above to diagnose what could be wrong.
Check the production host set
Lastly, check the production host set members.
Navigate to the Host Catalogs page and click on the GCP Catalog.
Click on the Host Sets tab.
Click on the Production Host Set host set, then click on the Hosts tab.
Verify the host set contains the following host:
- boundary-3-production
Notice the Host IDs
section of this output. Even though there are two production instances, only one exists in the host set.
To figure out what could be wrong, compare the members of the production
host set to the members of the database
host set. Remember, members of the production
and dev
host sets are a subset of the database
host set.
For example, by comparing the Host IDs
of the dev
host catalog to the production
catalog, a host ID such as hplg_s2qrXNCm5p
should be missing from the production
host set, although it is in the database
host set.
Check the database host set
If you haven't already, export your Boundary cluster address as the BOUNDARY_ADDR
environment variable and authenticate to Boundary.
$ export BOUNDARY_ADDR="https://237bdcda-6f22-4ce3-b7b5-92b039exxxxx.boundary.hashicorp.cloud/"
Authenticate to Boundary using your admin credentials.
$ boundary authenticate
Please enter the login name (it will be hidden):
Please enter the password (it will be hidden):
Authentication information:
Account ID: acctpw_Nf8BhC64Up
Auth Method ID: ampw_YSXPfaQrOn
Expiration Time: Wed, 25 Jun 2025 16:12:08 MDT
User ID: u_SmbPEXyx7m
The token name "default" was successfully stored in the chosen keyring and is not displayed here.
Perform a read on the host set named database
to view its members.
$ boundary host-sets read -id $DATABASE_HOST_SET_ID
Host Set information:
Created Time: Thu, 19 Jun 2025 14:57:44 MDT
Host Catalog ID: hcplg_DFJUQeiL4i
ID: hsplg_HweIHdTM8s
Name: database
Type: plugin
Updated Time: Thu, 19 Jun 2025 15:08:04 MDT
Version: 3
Scope:
ID: p_D5xQlbkvtL
Name: test project
Parent Scope ID: o_EYIrQH0g3H
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Attributes:
filters: [labels.service-type:database status=RUNNING]
Authorized Actions:
no-op
read
update
delete
Host IDs:
hplg_55nBj7Mj2d
hplg_i0ylmz4EQI
hplg_s2qrXNCm5p
hplg_wi9evLRlRc
If the Host IDs
section is missing, expand the troubleshooting accordion to diagnose what could be wrong.
If the host catalog is misconfigured, hosts will not be discoverable by Boundary.
At this point in the tutorial hosts are contained within the host catalog, but not appearing in one or more host sets. This means that the host set itself is likely misconfigured.
Earlier, you performed a read
on the database host set. Check the Attributes
section, and verify it matches the correctly defined filter:
Attributes:
filters: labels.service-type:database
If the tag is incorrectly assigned, you should update the affected host set to fix the filter:
$ boundary host-sets update plugin \
-id $DATABASE_HOST_SET_ID \
-name production \
-attr filters=labels.application:production
After you update the filter, Boundary will automatically refresh the host set.
Note
Depending on the type of configuration issue, you may need to wait 5 - 10 minutes for the existing host catalog or host sets to sync with the provider and refresh their values. If you do not want to wait, you can create a new host catalog and host set from scratch, or update the sync_interval attribute
for the host catalog.
Check that the updated filter is working by performing another read
on the database
host set.
$ boundary host-sets read -id $DATABASE_HOST_SET_ID
If the dev
or production
host sets are affected by incorrect filters, follow the same procedure to update their filters accordingly.
Check the dev host set
Read the dev
host set details. Verify the Host IDs are the correctly tagged hosts from the cloud provider.
$ boundary host-sets read -id $DEV_HOST_SET_ID
Host Set information:
Created Time: Thu, 19 Jun 2025 14:56:27 MDT
Host Catalog ID: hcplg_DFJUQeiL4i
ID: hsplg_f4HdZisPyX
Name: dev
Type: plugin
Updated Time: Thu, 19 Jun 2025 15:21:45 MDT
Version: 4
Scope:
ID: p_D5xQlbkvtL
Name: test project
Parent Scope ID: o_EYIrQH0g3H
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Attributes:
filters: [labels.application:dev]
Authorized Actions:
no-op
read
update
delete
Host IDs:
hplg_55nBj7Mj2d
hplg_i0ylmz4EQI
Notice the Host IDs
section of the output, which returns the two dev instances configured in GCP.
Check the production host set
Lastly, read the production host set and verify its Host IDs.
$ boundary host-sets read -id $PRODUCTION_HOST_SET_ID
Host Set information:
Created Time: Thu, 19 Jun 2025 15:11:28 MDT
Host Catalog ID: hcplg_DFJUQeiL4i
ID: hsplg_5TAYdIAXSQ
Name: production
Type: plugin
Updated Time: Thu, 19 Jun 2025 15:21:45 MDT
Version: 3
Scope:
ID: p_D5xQlbkvtL
Name: test project
Parent Scope ID: o_EYIrQH0g3H
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Attributes:
filters: [labels.application:production status=RUNNING]
Authorized Actions:
update
delete
no-op
read
Host IDs:
hplg_wi9evLRlRc
Notice the Host IDs
section of this output. Even though there are two production instances, only one is listed in the host set.
To figure out what could be wrong, compare the members of the production
host set to the members of the database
host set. Remember, members of the production
and dev
host sets are a sub-set of the database
host set.
$ boundary host-sets read -id $DATABASE_HOST_SET_ID
Host Set information:
Created Time: Thu, 19 Jun 2025 14:57:44 MDT
Host Catalog ID: hcplg_DFJUQeiL4i
ID: hsplg_HweIHdTM8s
Name: database
Type: plugin
Updated Time: Thu, 19 Jun 2025 15:21:45 MDT
Version: 4
Scope:
ID: p_D5xQlbkvtL
Name: test project
Parent Scope ID: o_EYIrQH0g3H
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Attributes:
filters: [labels.service-type:database status=RUNNING]
Authorized Actions:
update
delete
no-op
read
Host IDs:
hplg_55nBj7Mj2d
hplg_i0ylmz4EQI
hplg_s2qrXNCm5p
hplg_wi9evLRlRc
By comparing the Host IDs
of the dev
host catalog to the production
catalog, notice that host hplg_s2qrXNCm5p
is missing from the production
host set, but it is included in database
.
Update the misconfigured host
Check the details for the missing host using the CLI or the GCP cloud console.
Find the host's name
Check the misconfigured host's details in the Boundary Admin UI.
Navigate to the Host Catalogs page and click on the GCP Catalog.
Click on the Host Sets tab.
Click on the production host set, then click on the Hosts tab.
Click on the boundary-4-production host to view its details.
The External Name:
field shows the ID of the misconfigured host (boundary-4-production
in this example). Copy this value.
Update the host's labels
Recall that host set membership is defined based on the instance tags.
Open the GCP Compute Engine dashboard and navigate to the VM Instances page.
Select the boundary-4-production
VM. Check the Labels section and the value of the application
label.
Notice that the application
label is misconfigured as prod
, instead of production
. An easy mistake to make!
Remember the filter defined for the production
host set:
labels.application:production
The label's value must equal production
exactly to match this host set.
Click Edit and then click Manage labels. Update the application
label to production
for the misconfigured instance. Click Save when finished, then Save again to finish editing the VM.
Boundary will update the production
host set automatically the next time it refreshes. This process can take up to ten minutes. For reference, the refresh interval can be manually configured on the host catalog itself.
After waiting, navigate back to the production
host set and verify that its Hosts tab now contains the boundary-4-production
host as a member.
Note
You may need to wait approximately 5 - 10 minutes for the existing host catalog or host sets to sync with the provider and refresh their values. If you do not want to wait, you can create a new host catalog and host set from scratch, or update the sync_interval attribute
for the host catalog.
$ boundary hosts read -id hplg_s2qrXNCm5p
Host information:
Created Time: Thu, 19 Jun 2025 14:57:44 MDT
External ID: 3747471569987782172
External Name: boundary-4-production
Host Catalog ID: hcplg_DFJUQeiL4i
ID: hplg_s2qrXNCm5p
Type: plugin
Updated Time: Thu, 19 Jun 2025 14:57:44 MDT
Version: 1
Scope:
ID: p_D5xQlbkvtL
Name: test project
Parent Scope ID: o_EYIrQH0g3H
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Authorized Actions:
no-op
read
Host Set IDs:
hsplg_HweIHdTM8s
IP Addresses:
10.1.0.3
34.59.133.205
The External Name:
field shows the name of the misconfigured host (boundary-4-production
in this example). Copy the this value.
Recall that host set membership is defined based on the VM's labels.
Use gcloud
to query the VM's details and its label values.
$ gcloud compute instances list --filter="name=boundary-4-production" --format="table(name,labels.list():label=LABELS)"
NAME LABELS
boundary-4-production application=prod,goog-terraform-provisioned=true,name=boundary-4-production,service-type=database
Notice that the application
tag is misconfigured as prod
, instead of production
. An easy mistake to make!
Remember the filter defined for the production
host set:
labels.application:production
The label's value must equal production
exactly to be included in this host set.
Update the application
label to production
for the misconfigured VM using the gcloud compute instances add-labels
command, which will overwrite the existing tag value.
$ gcloud compute instances add-labels boundary-4-production --labels=application=production --zone=us-central1-a
Updating labels of instance [boundary-vm-4]...done.
Re-run the gcloud compute instances list
command to directly query for the updated label values.
$ gcloud compute instances list --filter="name=boundary-4-production" --format="table(name,labels.list():label=LABELS)"
NAME LABELS
boundary-4-production application=production,goog-terraform-provisioned=true,name=boundary-4-production,service-type=database
Finally, verify that the production
host set now contains both hosts.
$ boundary host-sets read -id $PRODUCTION_HOST_SET_ID
Host Set information:
Created Time: Thu, 19 Jun 2025 15:11:28 MDT
Host Catalog ID: hcplg_DFJUQeiL4i
ID: hsplg_5TAYdIAXSQ
Name: production
Type: plugin
Updated Time: Thu, 19 Jun 2025 15:21:45 MDT
Version: 3
Scope:
ID: p_D5xQlbkvtL
Name: test project
Parent Scope ID: o_EYIrQH0g3H
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Attributes:
filters: [labels.application:production status=RUNNING]
Authorized Actions:
update
delete
no-op
read
Host IDs:
hplg_s2qrXNCm5p
hplg_wi9evLRlRc
Perform a read on the missing host.
$ boundary hosts read -id hplg_9PKGDlEciD
Host information:
Created Time: Fri, 20 Jun 2025 14:53:09 MDT
External ID: 6955039402468342334
External Name: boundary-4-production
Host Catalog ID: hcplg_hm99CHB7wD
ID: hplg_9PKGDlEciD
Type: plugin
Updated Time: Fri, 20 Jun 2025 14:53:09 MDT
Version: 1
Scope:
ID: p_oeF6xHS6Qt
Name: GCP hosts
Parent Scope ID: o_Hf5FFlt4u4
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Authorized Actions:
no-op
read
Host Set IDs:
hsplg_aw6Qiz5Dlk
IP Addresses:
10.1.0.2
34.172.0.177
The External Name:
field shows the name of the misconfigured host (boundary-4-production
in this example). Copy the this value.
Recall that host set membership is defined based on the VM's labels.
Use gcloud
to query the VM's details and its label values.
$ gcloud compute instances list --filter="name=boundary-4-production" --format="table(name,labels.list():label=LABELS)"
NAME LABELS
boundary-4-production application=prod,goog-terraform-provisioned=true,name=boundary-4-production,service-type=database
Notice that the application
tag is misconfigured as prod
, instead of production
. An easy mistake to make!
Remember the filter defined for the production
host set:
labels.application:production
The label's value must equal production
exactly to be included in this host set.
Open the main.tf
file and check the labels configured for the VMs.
main.tf
variable "vm_labels" {
default = [
{"Name":"boundary-1-dev","service-type":"database", "application":"dev"},
{"Name":"boundary-2-dev","service-type":"database", "application":"dev"},
{"Name":"boundary-3-production","service-type":"database", "application":"production"},
{"Name":"boundary-4-production","service-type":"database", "application":"prod"}
]
}
The vm_labels
variable defines the labels for each VM. Notice that the label for boundary-4-production
is misconfigured as application:prod
.
Update the application label for boundary-4-production to "application":"production"
.
variable "vm_labels" {
default = [
{"Name":"boundary-1-dev","service-type":"database", "application":"dev"},
{"Name":"boundary-2-dev","service-type":"database", "application":"dev"},
{"Name":"boundary-3-production","service-type":"database", "application":"production"},
{"Name":"boundary-4-production","service-type":"database", "application":"production"}
]
}
Save the main.tf
file.
Execute terraform apply
to apply the configuration.
$ terraform apply --auto-approve
google_compute_address.vm_ip[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-2-ip]
google_compute_address.vm_ip[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-1-ip]
google_service_account.boundary_service_account: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_compute_address.vm_ip[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-4-ip]
google_compute_network.network: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/networks/boundary-vm-network]
google_compute_address.vm_ip[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-3-ip]
boundary_scope.gcp_test_org: Refreshing state... [id=o_Hf5FFlt4u4]
boundary_scope.gcp_project: Refreshing state... [id=p_oeF6xHS6Qt]
google_project_iam_member.boundary_compute_viewer: Refreshing state... [id=hc-26fb1119fccb4f0081b121xxxxx/roles/compute.viewer/serviceAccount:boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
google_service_account_key.boundary_service_account_key: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/serviceAccounts/boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com/keys/8792ecca68798d7c2dbd517ff3e0e16bd2e62319]
google_compute_firewall.allow_ssh: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/firewalls/boundary-vm-allow-ssh]
google_compute_subnetwork.subnet: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/subnetworks/boundary-vm-subnet]
boundary_host_catalog_plugin.gcp_host_catalog: Refreshing state... [id=hcplg_hm99CHB7wD]
boundary_host_set_plugin.database_host_set: Refreshing state... [id=hsplg_aw6Qiz5Dlk]
boundary_host_set_plugin.dev_host_set: Refreshing state... [id=hsplg_FZCz6LEfsU]
boundary_host_set_plugin.production_host_set: Refreshing state... [id=hsplg_s5LZAC1Uo5]
google_compute_instance.vm[2]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-3-production]
google_compute_instance.vm[0]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-1-dev]
google_compute_instance.vm[3]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-4-production]
google_compute_instance.vm[1]: Refreshing state... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-2-dev]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
~ update in-place
Terraform will perform the following actions:
# boundary_host_catalog_plugin.gcp_host_catalog will be updated in-place
~ resource "boundary_host_catalog_plugin" "gcp_host_catalog" {
id = "hcplg_hm99CHB7wD"
~ internal_force_update = "682062092769732930" -> (known after apply)
name = "GCP Catalog"
# (10 unchanged attributes hidden)
}
# google_compute_instance.vm[3] will be updated in-place
~ resource "google_compute_instance" "vm" {
~ effective_labels = {
~ "application" = "prod" -> "production"
# (3 unchanged elements hidden)
}
id = "projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-4-production"
~ labels = {
~ "application" = "prod" -> "production"
# (2 unchanged elements hidden)
}
name = "boundary-4-production"
tags = [
"boundary-vm",
]
~ terraform_labels = {
~ "application" = "prod" -> "production"
# (3 unchanged elements hidden)
}
# (20 unchanged attributes hidden)
# (4 unchanged blocks hidden)
}
Plan: 0 to add, 2 to change, 0 to destroy.
google_compute_instance.vm[3]: Modifying... [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-4-production]
boundary_host_catalog_plugin.gcp_host_catalog: Modifying... [id=hcplg_hm99CHB7wD]
boundary_host_catalog_plugin.gcp_host_catalog: Modifications complete after 0s [id=hcplg_hm99CHB7wD]
google_compute_instance.vm[3]: Still modifying... [id=projects/hc-26fb1119fccb4f0081b121xxxxx...ral1-a/instances/boundary-4-production, 10s elapsed]
google_compute_instance.vm[3]: Modifications complete after 12s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-4-production]
Apply complete! Resources: 0 added, 2 changed, 0 destroyed.
Outputs:
database_host_set_id = "hsplg_aw6Qiz5Dlk"
dev_host_set_id = "hsplg_FZCz6LEfsU"
gcp_host_catalog_id = "hcplg_hm99CHB7wD"
production_host_set_id = "hsplg_s5LZAC1Uo5"
vm_public_ips = [
"35.223.89.185",
"35.226.254.30",
"35.202.149.44",
"34.172.0.177",
]
The output displays the updated labels, but you can check your work using gcloud
too.
$ gcloud compute instances list --filter="name=boundary-4-production" --format="table(name,labels.list():label=LABELS)"
NAME LABELS
boundary-4-production application=production,goog-terraform-provisioned=true,name=boundary-4-production,service-type=database
Finally, verify that the production
host set now contains both hosts.
You can do this using the Boundary Admin UI, or the CLI:
$ boundary host-sets read -id $PRODUCTION_HOST_SET_ID
Host Set information:
Created Time: Fri, 20 Jun 2025 14:52:13 MDT
Description: GCP Production host set
Host Catalog ID: hcplg_hm99CHB7wD
ID: hsplg_s5LZAC1Uo5
Name: Production Host Set
Type: plugin
Updated Time: Fri, 20 Jun 2025 15:14:10 MDT
Version: 4
Scope:
ID: p_oeF6xHS6Qt
Name: GCP hosts
Parent Scope ID: o_Hf5FFlt4u4
Type: project
Plugin:
ID: pl_Jm8LEmKXFt
Name: gcp
Attributes:
filters: [labels.application:production]
Authorized Actions:
no-op
read
update
delete
Host IDs:
hplg_8G6M1gCViI
hplg_9PKGDlEciD
Cleanup and teardown
Destroy the Terraform resources.
Remove all GCP and Boundary resources using
terraform apply -destroy
. Enteryes
when prompted to confirm the operation.Note
Terraform 0.15.2+ uses
terraform apply -destroy
to cleanup resources. If using an earlier version of Terraform, you may need to executeterraform destroy
.Open the terminal session where you deployed Terraform.
$ terraform apply -destroy
Remove the Terraform state files.
$ rm *.tfstate*
Delete any service accounts you created for this tutorial.
Navigate to the Service Accounts page. Select the name of your project, and then locate the service account you created for this tutorial, such as
boundary-service-account
.Click the three dots Actions button next to the name of the service account and select Delete. Enter in the email of the service account (such as
boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com
) and confirm by clicking Delete.Repeat this process for any other service accounts you created for this tutorial.
Delete the
gcpuser
keypair, if you created one.This keypair should only be stored on your machine.
Disable any APIs enabled for this project.
If you enabled any APIs, you can disable them by clicking the Manage button on the following API pages. This will take you to a page where you can click the Disable API button for that API.
Clean up Boundary.
If you created a test cluster for this tutorial, log in to the HCP portal and delete the HCP Boundary instance.
If you keep your HCP cluster, clean up any test orgs, projects, or host catalogs you created for this tutorial.
An an example, deleting the org created for this tutorial also removes the project, host catalog, and host sets it contains. This action cannot be undone.
In the Global scope, click Orgs.
Click the name of your test org, such as GCP infrastructure. Click on Org settings.
Click the Manage dropdown and select Delete Org. Click OK to confim the org deletion.
Destroy the Terraform resources.
Remove all GCP and Boundary resources using
terraform apply -destroy
. Enteryes
when prompted to confirm the operation.Note
Terraform 0.15.2+ uses
terraform apply -destroy
to cleanup resources. If using an earlier version of Terraform, you may need to executeterraform destroy
.Open the terminal session where you deployed Terraform.
$ terraform apply -destroy
Remove the Terraform state files.
$ rm *.tfstate*
Delete any service accounts created for this tutorial.
$ gcloud iam service-accounts delete boundary-service-account@$GCP_PROJECT_ID.iam.gserviceaccount.com You are about to delete service account [boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com] Do you want to continue (Y/n)? Y deleted service account [boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com]
Delete the
gcpuser
keypair, if you created one.This keypair should only be stored on your machine.
Disable any APIs enabled for this project.
If you enabled any APIs, you can disable them by clicking the Manage button on the following API pages. This will take you to a page where you can click the Disable API button for that API.
Clean up Boundary.
If you created a test cluster for this tutorial, log in to the HCP portal and delete the HCP Boundary instance.
If you keep your HCP cluster, clean up any test orgs, projects, or host catalogs you created for this tutorial.
An an example, deleting the org created for this tutorial also removes the project, host catalog, and host sets it contains. This action cannot be undone.
$ boundary scopes delete -id o_KoLV9Z7DpF The delete operation completed successfully.
Destroy the Terraform resources.
Remove all GCP and Boundary resources using
terraform apply -destroy
. Enteryes
when prompted to confirm the operation.Note
Terraform 0.15.2+ uses
terraform apply -destroy
to cleanup resources. If using an earlier version of Terraform, you may need to executeterraform destroy
.$ terraform apply -destroy
Remove the Terraform state files.
$ rm *.tfstate*
Disable any APIs enabled for this project.
If you enabled any APIs, you can disable them by clicking the Manage button on the following API pages. This will take you to a page where you can click the Disable API button for that API.
Clean up Boundary.
If you created a test cluster for this tutorial, log in to the HCP portal and delete the HCP Boundary instance.
If you keep your HCP cluster, check that Terraform cleaned up all your resources successfully.
Next steps
This tutorial demonstrated the steps to set up a dynamic host catalog using the GCP host plugin. You deployed the hosts on GCP use Terraform, configured a plugin-type host catalog within Boundary, and created three host sets that filtered for hosts based on their label values.
To learn more about integrating Boundary with cloud providers like GCP, AWS, and Azure, check out the Access management tutorials.