Boundary
Dynamic host catalogs on GCP
Dynamic updates to host catalogs are an important feature that sets Boundary apart from traditional access methods, which rely on manual target configuration. Dynamic host catalogs enable tight integrations with major cloud providers to seamlessly onboard cloud tenant identities, roles, and targets.
Enabling automated discovery of target hosts and services ensures that hosts and host catalogs are consistently up-to-date. This critical workflow offers access-on-demand and eliminates the need to manually configure targets for dynamic, cloud-based infrastructure.
This tutorial demonstrates configuring a dynamic host catalog using Google Cloud Platform (GCP).
Dynamic host catalog overview
- Get set up
- Dynamic host catalogs background
- Set up the cloud VMs
- Build a GCP host catalog
- Verify catalog membership
Prerequisites
- A Boundary binary greater than 0.19.0 in your `PATH`. This tutorial assumes you can connect to an HCP Boundary cluster, a Boundary Enterprise cluster, or launch Boundary in dev mode.
- A Google Cloud Platform account. This tutorial requires the creation of new cloud resources and will incur costs associated with the deployment and management of these resources.
- The gcloud CLI installed and available in your `PATH`.
- Terraform 0.14.9 or greater installed and available in your `PATH`, to deploy the lab environment for this tutorial.
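Before continuing, you can confirm that each binary is installed and available in your `PATH`:

$ boundary version
$ gcloud version
$ terraform version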
Get set up
In this tutorial, you will test dynamic host catalog integrations using HCP Boundary, a Boundary Enterprise cluster, or by running a Boundary controller locally using Boundary Community Edition and dev mode.
Select a Deployment model for the tutorial in the upper-right corner of the screen:
- HCP Boundary
- Dev mode
- Enterprise
Open a new terminal session and start Boundary in dev mode:
$ boundary dev
==> Boundary server configuration:
[Bsr] Aead Key Bytes: Ra89HGiuufj6VCXe/HV9LgQ+SjQWq3fqo/C1uC3AAYk=
[Recovery] Aead Key Bytes: rHK1dIEPqCVmLPVdvOIdc+fNqRt5KYQIWuFnWVu0J0c=
[Root] Aead Key Bytes: 349LwA5KXqS7emj+OPICLs2ZCCVsbtaypIvUY8xqCvg=
[Worker-Auth-Storage] Aead Key Bytes: lSIzhJlHrup4dQqF6zJkHcIFF4kdTZzvUYVNN2oYA7Q=
[Worker-Auth] Aead Key Bytes: rQC5cAtlgy4hAYgd9BSho2kxmgWNnc4rfwGAGjekmxs=
[Bsr] Aead Type: aes-gcm
[Recovery] Aead Type: aes-gcm
[Root] Aead Type: aes-gcm
[Worker-Auth-Storage] Aead Type: aes-gcm
[Worker-Auth] Aead Type: aes-gcm
Cgo: disabled
Controller Public Cluster Addr: 127.0.0.1:9201
Dev Database Container: magical_snyder
Dev Database Url: postgres://postgres:password@localhost:55000/boundary?sslmode=disable
Generated Admin Login Name: admin
Generated Admin Password: password
Generated Host Catalog Id: hcst_1234567890
Generated Host Id: hst_1234567890
Generated Host Set Id: hsst_1234567890
Generated Ldap Auth Method Base Search Dns: users="ou=people,dc=example,dc=org" groups="ou=groups,dc=example,dc=org"
Generated Ldap Auth Method Host:port: [127.0.0.1]:50044 (does not have a root DSE; use simple bind)
Generated Ldap Auth Method Id: amldap_1234567890
Generated Oidc Auth Method Id: amoidc_1234567890
Generated Org Scope Id: o_1234567890
Generated Password Auth Method Id: ampw_1234567890
Generated Postgres Target With Alias: postgres.boundary.dev
Generated Project Scope Id: p_1234567890
Generated Ssh Target With Alias: ssh.boundary.dev
Generated Target With Address Id: ttcp_1234567890
Generated Target With Host Source Id: ttcp_0987654321
Generated Unprivileged Login Name: user
Generated Unprivileged Password: password
Generated Web Target With Alias: www.hashicorp.com
Listener 1: tcp (addr: "127.0.0.1:9200", cors_allowed_headers: "[]", cors_allowed_origins: "[*]", cors_enabled: "true", max_request_duration: "1m30s", purpose: "api")
Listener 2: tcp (addr: "127.0.0.1:9201", max_request_duration: "1m30s", purpose: "cluster")
Listener 3: tcp (addr: "127.0.0.1:9203", max_request_duration: "1m30s", purpose: "ops")
Listener 4: tcp (addr: "127.0.0.1:9202", max_request_duration: "1m30s", purpose: "proxy")
Log Level: info
Mlock: supported: false, enabled: false
Version: Boundary v0.19.1
Version Sha: d363c002e743cf11b8aa8d45b51148c51006dee9
Worker Auth Current Key Id: valid-graves-pungent-gestate-valuables-gigabyte-dodgy-grudge
Worker Auth Storage Path: /var/folders/_c/pgdmjpwj24bgc5knrf5plgcw0000gn/T/nodeenrollment1418653705
Worker Public Proxy Addr: 127.0.0.1:9202
==> Boundary server started! Log data will stream in below:
{
"id": "J2GwC3Tn3w",
"source": "https://hashicorp.com/boundary/robinbeck-CWY760DPQ7/controller+worker",
"specversion": "1.0",
"type": "system",
"data": {
"version": "v0.1",
"op": "controller.(rateLimiterConfig).writeSysEvent",
"data": {
"limits": {
"account": {
"change-password": [
{
"resource": "account",
"action": "change-password",
"per": "ip-address",
"unlimited": false,
"limit": 30000,
"period": "30s"
},
{
"resource": "account",
"action": "change-password",
"per": "total",
"unlimited": false,
"limit": 30000,
"period": "30s"
},
...
...More output...
...
Leave dev mode running in the current session, and open a new terminal window or tab to continue the tutorial.
If you intend to use Terraform to configure Boundary, copy the `[Recovery] Aead Key Bytes` field from the output of `boundary dev`.
Ensure you are able to authenticate to Boundary as the admin user, using the default login name `admin` and password `password`:
$ boundary authenticate
Please enter the login name (it will be hidden):
Please enter the password (it will be hidden):
Authentication information:
Account ID: acctpw_VOeNSFX8pQ
Auth Method ID: ampw_wxzojlKJLN
Expiration Time: Mon, 13 Feb 2023 12:35:32 MST
User ID: u_1vUkf5fPs9
The token was successfully stored in the chosen keyring and is not displayed here.
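You can also authenticate non-interactively by passing the auth method ID from the dev mode output. A minimal sketch using the generated dev-mode values above; the `BOUNDARY_PASSWORD` environment variable is an example name you set yourself:

$ export BOUNDARY_PASSWORD=password
$ boundary authenticate password \
    -auth-method-id ampw_1234567890 \
    -login-name admin \
    -password env://BOUNDARY_PASSWORD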
Configure ngrok (Application Default Credentials only)
If you use GCP's Application Default Credentials (ADC) to connect to your local Boundary dev cluster, you will need to expose your dev cluster using ngrok.
In this tutorial, connecting to GCP using ADC requires you to deploy a self-managed Boundary worker in your GCP account. The self-managed worker needs to connect to the Boundary control plane. When Boundary is running in dev mode, you must expose it on a public port so the worker can resolve its address. You can use ngrok to expose the worker to GCP. If you do not want to use ngrok, you can follow along with the HCP Boundary Deployment version of this tutorial instead.
Set up ngrok
This example is for demonstration purposes only. Do not use dev mode in conjunction with ngrok to configure production workloads.
Note
Be aware that using ngrok creates a publicly routable address for the Boundary cluster running locally on your machine. If you do not want to use ngrok, consider the HCP deployment method for this tutorial instead.
Follow the official instructions to download ngrok for your OS distribution and set up a free account:
After setting up your account and obtaining an authtoken, configure ngrok with the authtoken:
$ ngrok config add-authtoken <token>
Now you can expose Boundary's dev controller using ngrok.
In another terminal, create a publicly routable address using ngrok by passing in the localhost port mapping on 9200:
$ ngrok tcp 127.0.0.1:9200
ngrok (Ctrl+C to quit)
Visit http://localhost:4040/ to inspect, replay, and modify your requests
Session Status online
Account Jane Doe (Plan: Free)
Version 3.1.0
Region United States (us)
Latency 103ms
Web Interface http://127.0.0.1:4040
Forwarding                    tcp://4.tcp.us-cal-1.ngrok.io:13713 -> 127.0.0.1:9200
Connections ttl opn rt1 rt5 p50 p90
3 0 0.00 0.00 9.44 502.94
Locate the Forwarding output and copy the address, such as `4.tcp.us-cal-1.ngrok.io:13713`.
Now that Boundary has a publicly accessible address, you can continue the tutorial and deploy the lab environment. You will deploy a self-managed worker in GCP later on.
Dynamic host catalogs background
In a cloud operating model, infrastructure resources are highly dynamic and ephemeral. Boundary does not require an on-target agent or daemon to discover target virtual machine hosts, which are challenging to maintain at scale. Instead, Boundary relies on an external entity, such as manual configuration by an administrator or IaC (infrastructure as code) application like Terraform, to ensure host definitions route to the appropriate network location. Many other secure access solutions follow this pattern.
Dynamic host catalog plugins are an alternative way to automate the discovery and configuration of Boundary hosts and targets by delegating the host registry and their connection information to a cloud infrastructure provider. Administrators supply credentials for the catalog provider and a set of tag-based rules for discovering resources in the catalog. For example, "this catalog contains VM instances in GCP’s us-east1 region within the Engineering project". This model does not rely on IaC target discovery or agent-based target discovery.
Boundary uses Go-Plugin to implement a plugin model for expanding the dynamic host catalog ecosystem. Plugins enable a future ecosystem of partner and community contributed integrations across each step in the Boundary access workflow.
Host tag filtering
To maintain a dynamic host catalog, you should tag hosts in a logical way that enables sorting into host sets identifiable by filters.
For example, this tutorial configures hosts on GCP using the following labels:

| Label key | Label values |
|-----------|--------------|
| `service-type` | `database` |
| `application` | `dev`, `production` |

Boundary hosts are sorted into any host catalogs and host sets you configure using these filtering attributes.
GCP credential types
You can select from three types of credential configurations for setting up access to your GCP account for Boundary:

- Service account
- Service account impersonation
- Application Default Credentials (ADC)

Select a credential type to continue.
Service accounts are special user accounts used to authenticate applications or services, rather than individual users. They allow automated access to GCP resources and APIs without requiring users to directly manage credentials.
To set up service account credentials for this tutorial, you will:
- Deploy the host VMs using the provided sample code.
- Enable the IAM and Service Account Credentials APIs in your project.
- Create a service account and download the private key.
- Configure a Boundary dynamic host catalog.
Set up cloud VMs
Warning
This tutorial deploys cloud VMs to test host catalog plugin configuration. You are responsible for any costs incurred by following the steps in this tutorial. Recommendations for destroying the cloud resources created in this tutorial are in the Cleanup and teardown section.
You need access to a GCP account and a sample project to set up the Boundary GCP hosts plugin. If you don't have an account, sign up for GCP. A free account is suitable for the steps outlined in this tutorial, but please note that you are responsible for any charges incurred by following the steps in this tutorial.

This tutorial enables configuration of the test VM hosts using Terraform.
The prerequisites for setting up the learning environment are:
- Terraform 0.14.9 or greater is installed
- An active GCP account
- The gcloud CLI is installed and available in your `PATH`
Terraform needs to perform the following tasks to set up the lab environment:
- Deploy and label the host set virtual machines in GCP.
- Configure an SSH key for the host VMs (optional).
- Configure networking permissions for the host VMs.
Configure the gcloud CLI
Authenticate to your GCP account.
$ gcloud auth login
Check the configured project for the CLI:
$ gcloud config get-value project
hc-26fb1119fccb4f0081b121xxxxx
If the correct project is defined, take no action. To change the active project, execute `gcloud config set project YOUR_PROJECT`.

This command may return your project name rather than its ID, such as `test-project`.
If you have the project name but still need the ID, execute the following command to get the project ID:
$ gcloud projects list --filter="name:test-project" --format="value(projectId)"
hc-26fb1119fccb4f0081b121xxxxx
Export the project ID as the `GCP_PROJECT_ID` environment variable:
$ export GCP_PROJECT_ID="hc-26fb1119fccb4f0081b121xxxxx"
Authenticate to your GCP account using the application-default login. This enables your shell session to interact with GCP using their SDK. You must authenticate using this method to deploy Terraform.
$ gcloud auth application-default login
Configure the lab environment
This tutorial assumes you are working out of the home directory `~/`, but you can use any working directory you want for the following steps.
Clone the example code for this tutorial into your working directory.
$ git clone https://github.com/hashicorp-education/learn-boundary-cloud-host-catalogs
Navigate into the `gcp/terraform` directory.
$ cd learn-boundary-cloud-host-catalogs/gcp/terraform
Examine the Terraform configuration file `main.tf`. It configures the `google` provider and sets up the credentials Boundary will use to authenticate to GCP.
Configure an SSH credential (required for ADC)
You can optionally configure an SSH credential to enable authentication to the host VMs configured for this tutorial. This step is not required to set up a dynamic host catalog.
Note
You must configure an SSH credential for the ADC workflow.
If you want to log into the host VMs after provisioning them with Terraform, create an SSH credential using the following documentation. If you do not want to create a keypair, skip to the Configure Terraform section.
Refer to the Create SSH keys GCP documentation to create a new keypair.
Use the username `gcpuser`. For example, on a Linux machine with the user `admin`:
$ ssh-keygen -t rsa -f /home/admin/.ssh/gcpuser -C gcpuser
Follow the steps in the GCP SSH keys documentation linked above to create a keyfile with another operating system, or using the GCP Console UI.
After you create the new keypair and have access to the private key locally, continue to the next section.
Configure Terraform
Open the `main.tf` file in your code editor.

Locate the following variables and update the values for the desired GCP `project_id`, `region`, `zone`, and `ssh_pub_key_file` path.

Note

Providing the `ssh_pub_key_file` is optional for this workflow.
variable "project_id" {
default = "hc-26fb1119fccb4f0081b121xxxxx"
}
variable "region" {
default = "us-central1"
}
variable "zone" {
default = "us-central1-a"
}
variable "ssh_pub_key_file" {
## Optional SSH public key file path for access to the VMs
## This is required if using GCP Application Default Credentials (ADC)
description = "Path to SSH public key for the VM"
default = "/Users/username/.ssh/gcpuser.pub"
}
Save the `main.tf` file.
Now initialize the Terraform plan.
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/google...
- Installing hashicorp/google v6.38.0...
- Installed hashicorp/google v6.38.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Deploy the virtual machine hosts
Now you will configure and deploy the host VMs to test the dynamic host catalog integration.
Deploy the Terraform configuration using `terraform apply`.
$ terraform apply --auto-approve
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# google_compute_address.vm_ip[0] will be created
+ resource "google_compute_address" "vm_ip" {
+ address = (known after apply)
+ address_type = "EXTERNAL"
+ creation_timestamp = (known after apply)
+ effective_labels = {
+ "goog-terraform-provisioned" = "true"
}
+ id = (known after apply)
+ label_fingerprint = (known after apply)
+ name = "boundary-vm-1-ip"
+ network_tier = (known after apply)
+ prefix_length = (known after apply)
+ project = "hc-26fb1119fccb4f0081b121xxxxx"
+ purpose = (known after apply)
+ region = "us-central1"
+ self_link = (known after apply)
+ subnetwork = (known after apply)
+ terraform_labels = {
+ "goog-terraform-provisioned" = "true"
}
+ users = (known after apply)
}
# google_compute_address.vm_ip[1] will be created
+ resource "google_compute_address" "vm_ip" {
+ address = (known after apply)
...
... snip ...
...
Plan: 11 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ vm_public_ips = [
+ (known after apply),
+ (known after apply),
+ (known after apply),
+ (known after apply),
]
google_compute_address.vm_ip[2]: Creating...
google_compute_network.network: Creating...
google_compute_address.vm_ip[0]: Creating...
google_compute_address.vm_ip[1]: Creating...
google_compute_address.vm_ip[3]: Creating...
google_compute_address.vm_ip[0]: Creation complete after 4s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-1-ip]
google_compute_address.vm_ip[1]: Creation complete after 4s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-2-ip]
google_compute_network.network: Still creating... [10s elapsed]
google_compute_address.vm_ip[2]: Still creating... [10s elapsed]
google_compute_address.vm_ip[3]: Still creating... [10s elapsed]
google_compute_network.network: Creation complete after 12s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/networks/boundary-vm-network]
google_compute_firewall.allow_ssh: Creating...
google_compute_subnetwork.subnet: Creating...
google_compute_address.vm_ip[3]: Creation complete after 12s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-4-ip]
google_compute_address.vm_ip[2]: Creation complete after 12s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/addresses/boundary-vm-3-ip]
google_compute_subnetwork.subnet: Still creating... [10s elapsed]
google_compute_firewall.allow_ssh: Still creating... [10s elapsed]
google_compute_firewall.allow_ssh: Creation complete after 11s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/global/firewalls/boundary-vm-allow-ssh]
google_compute_subnetwork.subnet: Still creating... [20s elapsed]
google_compute_subnetwork.subnet: Creation complete after 21s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/regions/us-central1/subnetworks/boundary-vm-subnet]
google_compute_instance.vm[0]: Creating...
google_compute_instance.vm[1]: Creating...
google_compute_instance.vm[2]: Creating...
google_compute_instance.vm[3]: Creating...
google_compute_instance.vm[3]: Still creating... [10s elapsed]
google_compute_instance.vm[0]: Still creating... [10s elapsed]
google_compute_instance.vm[1]: Still creating... [10s elapsed]
google_compute_instance.vm[2]: Still creating... [10s elapsed]
google_compute_instance.vm[2]: Creation complete after 18s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-3-production]
google_compute_instance.vm[0]: Still creating... [20s elapsed]
google_compute_instance.vm[1]: Still creating... [20s elapsed]
google_compute_instance.vm[3]: Still creating... [20s elapsed]
google_compute_instance.vm[3]: Creation complete after 27s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-4-production]
google_compute_instance.vm[0]: Creation complete after 28s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-1-dev]
google_compute_instance.vm[1]: Creation complete after 29s [id=projects/hc-26fb1119fccb4f0081b121xxxxx/zones/us-central1-a/instances/boundary-2-dev]
Apply complete! Resources: 11 added, 0 changed, 0 destroyed.
Outputs:
vm_public_ips = [
"35.223.89.185",
"35.226.254.30",
"35.202.149.44",
"34.172.0.177",
]
You can reference the Terraform outputs at any time by executing `terraform output`.
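To double-check that the VMs were created with the expected labels, you can list them with the gcloud CLI. This is a sketch; the exact columns shown depend on your project:

$ gcloud compute instances list \
    --filter="name~'^boundary-'" \
    --format="table(name,status,labels.list())"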
Configure GCP credentials
Boundary uses dynamic host catalogs to automatically discover GCP Compute Engine VM instances and add them as hosts. Boundary needs GCP credentials to maintain an up-to-date catalog registry.
You need to enable the IAM and IAM Service Account Credentials APIs to set up a credential method for Boundary.
Enable the IAM API
Navigate to the IAM API page.

If the API is not already enabled, click Enable.
Enable the IAM Service Account Credentials API
Navigate to the IAM Service Account Credentials API page.

If the API is not already enabled, click Enable.
Configure a credential type
You can authenticate Boundary to GCP using a service account, service impersonation, or GCP Application Default Credentials (ADC).
Select a credential type to continue.
Service accounts are special user accounts used to authenticate applications or services, rather than individual users. They allow automated access to GCP resources and APIs without requiring users to directly manage credentials.
To set up service account credentials for this tutorial, you will:
- Create a service account and download the private key.
- Format the private key for Boundary.
- Configure a Boundary dynamic host catalog.
Create a service account
You can configure a service account using the GCP cloud console UI, the gcloud CLI, or using Terraform.
Select a workflow to continue.
Create a new service account:
- Navigate to the IAM & Admin Service Accounts page.
- Click the name of the project you deployed your Boundary hosts to.
- Click Create service account.
- Fill in a service account name, such as `Boundary service account`. The Service account ID should be created automatically. You can optionally add a service account description.
- Click Create and continue.
- Under the Permissions section, click the Select a role dropdown. Enter `roles/compute.viewer` into the filter, and select the Compute Viewer role.
- Click the + Add another role button. Click the Select a role dropdown. Enter `roles/iam.serviceAccountKeyAdmin` into the filter, and select the Service Account Key Admin role.
- Click Done.
- Verify that `Boundary service account` exists on the Service Accounts page.
Create the service account private key:
- From the Service Accounts page, click on the `Boundary service account`. Navigate to the Keys page.
- Click the Add key dropdown, and select Create new key.
- Select the JSON key type, then click Create.
- Copy the Key ID field (such as `b990f6a2246bd12fd08d0bd4f6e5bc294d98da1a`). Save this value to use when setting up the host catalog later on.
- The private key file is automatically downloaded to your local machine (such as `hc-d0932372bdc04876af2bbe8561e-987893b7c2d7.json`).
- Follow the instructions below to format the private key.
The private key file may contain extra `\n` characters and fields that can cause an error later on. Boundary needs the private key file to contain only the private key entry. You can remove these extra characters yourself, or use a tool like jq.

Remove the extra `\n` characters using `jq` by opening your terminal session and navigating to the directory where the private key was downloaded, such as `~/Downloads/`. Execute the following command, replacing `my-gcp-private-key` with the name of your private key file:
$ jq -r '.private_key' my-gcp-private-key.json
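To write the cleaned key straight to a file instead of printing it, redirect the output. The `boundary-gcp-key.pem` filename here is just an example:

$ jq -r '.private_key' my-gcp-private-key.json > boundary-gcp-key.pem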
When finished, the private key should have the following format:
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCp5TKBnZv7S6MA
qngKyf0EPDHsAfmqdRTGFF+Da1otkzldUK7fIYirAvf//9ZtT53OG6OJrYIwEO1r
UH7xjxvtXgv2uQ1++FnNDFkFTw1QThwoWCM6vKE97/N27/yx9c6CxTi4HVQBN8Pp
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
QBySlHc740JarQoo6eatThwPmDJzVp3x3wfYOHoD/N8AHi0axJ0UHZnRjeWyBx8f
7A37FQB36llLzsiERQQ3LI9Z7VHDiryOGvuxAQvWovzoamfXgKFk0Q8IwncZYHTV
UhzcSnq/AgMBAAECggEARSz+AhmnA8yZy7EdXKM+4sURvJtXSWkPstFjzJJe7vSl
pFGwSkkQqTT1vqYwbGTBB8VoMqxTuHeD/DCT545SHDWxYF2b2amMgvl2m7tC3AJZ
47FzcryQWLRFaRWxSdKgqc1c2VaTuEU4/33HWTtKEAeyR+aViDP+slHau/Yg9qn0
pGoyzWwQ8+/qofmMiNTNfM2qbPa735WC7VpsusSaxNIaiGbneGWPPkuKYVVUmS2X
2qPaIinuUXLfWSl5DsNZc6Mw5m+Q4JyBjmY+y2dQ/9e9biMbw1C3fhkiGZHQgtqm
cJfhgu1r4yVu9xxjS9hN/VyLPMAjb05Dg1Z03pXGZQKBgQDcsUAbkxO0Ylkhzqel
0KbVXaXiSXsdmAsjLv98jpuV+OxlLoesP8hC8lQt0SNp7Hme6rBO4in6UazBuR82
j9TNeMA1pjXLbvfxEn3B+8iUur+kasEHoeb1DHQmBrWo7WnwPdyi+66jn7EiLq3+
5wKM/uyHJwCNEy43smCxaVa95QKBgQDFE3wjtRHu26ur3Xe8hUx/44H+NQNPVgow
jJpZJ8vVOOvKIQ5vQf3fk5jBa03eGkp0MnKVu73NdCUKhr4DK6AZwX4+gPF3qJlS
M7YOFWkhbigUgRzazo1UJEgnrdRjnfrpA97olcnW4rSH+Il4CF610qejrSvjTe9m
7NhZheSr0wKBgFtOEgHWha55ifrMrtuRSZS42+qVEBScVO9HgHgd4AzaIaNy7rq6
4LWh4GXcQtSN+3teCXd5Znij1d+IIXvHYfloXc1UaKkzzey1A8Z/zuqJoMP7TsVD
nHQBpQQefoXXQ58bWO8tRYF4jiZgPahaFtoSlfUMk9PJ/bMZX5vGwxZpAoGAfK7q
MFEjqmHyh8aTNYOENblDiggSMwR1Z+fc0yE5dYoQq44kasFulB/2WhDAcA9kIYW1
NwRTfgPIV5ON7cWRAhqH+5Vqr9DMR9SNjvV+0Pa3htl03v4lLiHSQMBaijfuAbRA
OBhkXX6Kxye4GWf6O8Ct7QDnrmSlXRHlgyYR2Z8CgYEAuP1x/2MTBZYfP9lRQLim
NAfUMhxXiCfEhFkahdpEIjJgK+ekaosF24EP/merZPqqsi5Su/0cedgXr2Nk7yWE
AXolFjCKuXr0ti7tILp5vj9wH0Dy9/GVoKmNsygAiOd/c2OEgtwToJd1XmqCBtus
h+q+6phCiGqPTVCvJa0xxrk=
-----END PRIVATE KEY-----
Copy this private key value and save it for setting up the host catalog later.
Host catalog plugins
For Boundary, the process for creating a dynamic host catalog has two steps:
- Create a plugin-type host catalog
- Create a host set that defines membership using filters
You set up a plugin-type host catalog using the cloud provider account details. Then you can configure a host set using a filter that selects hosts for membership based on the labels defined when you set up the hosts.
Host set filter expressions are defined by the GCP plugin provider. The GCP plugin uses simple filter queries that match labels associated with hosts, in the form `labels.name=value`.
For example, a host set filter that selects all hosts labeled with `"service-type": "database"` is written as `labels.service-type=database`.
You can also filter on the VM status. Another common filter, which returns only running instances, is `status=RUNNING`.
Resources within GCP can generally be filtered by label names and values, and filters can use either/or selectors for label values. This process is described in the Boundary GCP Host Plugin documentation.
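You can preview which instances a filter will match before creating a host set by passing the same expression to the gcloud CLI. A quick sanity check using the labels from this tutorial:

$ gcloud compute instances list \
    --filter="labels.service-type=database AND status=RUNNING"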
To learn more about GCP filters for listing resources, visit the `instances.list` method documentation page.
Create a GCP host catalog
The details you need to set up a host catalog depend on the GCP credential type you are using.
Select a credential type to continue.
Authenticate to the Boundary Admin UI as the admin user.
Once logged in, select the org and project you want to create a dynamic host catalog for. If you are testing plugins, consider creating a new project for testing purposes.
Navigate to the Host Catalogs page. Click New Host Catalog.
Select Dynamic for the host catalog type. Select from the static or dynamic credential type tabs to learn how you should fill out the new catalog form.
Complete the following fields:

- Name: `GCP Catalog`
- Description: `GCP host catalog`
- Type: Dynamic
- Provider: GCP
- Project ID: `hc-26fb1119fccb4f0081b121xxxxx` (add your project ID)
- Zone: `us-central1-a` (or other zone used for this lab)
- Client Email: `boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com` (replace this with your service account email)
- Private Key ID: `b990f6a2246bd12fd08d0bd4f6e5bc294d98da1a` (replace this with your Private Key ID, saved when you created the service account)
- Private Key: the formatted private key value, beginning with `-----BEGIN PRIVATE KEY-----` (make sure the key is formatted correctly using `jq`, as described in the Create a service account section)
- Disable credential rotation: `true` (check this box)
Click Save.
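If you prefer the CLI, you can create an equivalent catalog with `boundary host-catalogs create plugin`. This is a sketch rather than the exact form used by this tutorial: the attribute and secret names (`project_id`, `zone`, `client_email`, `private_key_id`, `private_key`, `disable_credential_rotation`) follow the Boundary GCP host plugin documentation, and the scope ID, environment variables, and key file path are placeholders:

$ boundary host-catalogs create plugin \
    -scope-id p_1234567890 \
    -plugin-name gcp \
    -name "GCP Catalog" \
    -description "GCP host catalog" \
    -attr disable_credential_rotation=true \
    -attr project_id=$GCP_PROJECT_ID \
    -attr zone=us-central1-a \
    -attr client_email=boundary-service-account@$GCP_PROJECT_ID.iam.gserviceaccount.com \
    -secret private_key_id=env://GCP_PRIVATE_KEY_ID \
    -secret private_key=file://boundary-gcp-key.pem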
Create the host sets
With the dynamic host catalog created, you can now create host sets that correspond to the `service-type` and `application` labels added to the VMs.
Recall the three host sets you want to create:

- All hosts with a `service-type` label of `database`
- All hosts with an `application` label of `dev`
- All hosts with an `application` label of `production`
The corresponding host set filters are:

- `labels.service-type:database`
- `labels.application:dev`
- `labels.application:production`

You should also add a filter to only show running hosts:

- `status:RUNNING`
Authenticate to the Boundary Admin UI as the admin user.
Once logged in, select the org and project that contains your dynamic host catalog.
Navigate to the Host Catalogs page. Click on the GCP Catalog.
Click the Host Sets tab. Click the New button to create a new host set.
Complete the following fields:

- Name: `database`
- Filter: `labels.service-type:database`

Click Add beside the filter field to add the filter.

Add another filter to only show running hosts:

- Filter: `status:RUNNING`

Click Add beside the filter field to add the filter.

Click Save.
Wait a moment, then click on the Hosts tab, which should contain the following hosts:
- boundary-1-dev
- boundary-2-dev
- boundary-3-production
- boundary-4-production
Note
It may take up to five minutes for the host catalog to sync with the cloud provider. Refresh the page if the hosts do not initially appear in the catalog.
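You can also create an equivalent host set from the CLI with `boundary host-sets create plugin`. A sketch, assuming the `filters` attribute name from the GCP plugin documentation and a placeholder host catalog ID:

$ boundary host-sets create plugin \
    -host-catalog-id hcplg_1234567890 \
    -name database \
    -attr filters='["labels.service-type=database", "status=RUNNING"]'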
Now follow the same steps to create two more host sets using the following filters.

The `dev` host set:

- Name: `dev`
- Filter: `labels.application:dev`
- Filter: `status:RUNNING`

The `production` host set:

- Name: `production`
- Filter: `labels.application:production`
- Filter: `status:RUNNING`
Check the hosts included in the `dev` and `production` host sets, and then move on to the next section.
Verify catalog membership
With the `database`, `dev`, and `production` host sets defined within the GCP host catalog, the next step is to verify that the four instances listed as members of the catalog are dynamically included in the correct host sets.
You can verify host membership by reading the host set details and checking its member IDs.
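For example, you can read a host set's details with the CLI and inspect its `Host IDs` field. The ID below is a placeholder; use the ID of your own host set:

$ boundary host-sets read -id hsplg_1234567890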
Check the database host set
First, verify that the `database` host set contains all four members of the GCP host catalog.
Navigate to the Host Catalogs page and click on the GCP Catalog.
Click on the Host Sets tab.
Click on the `database` host set, then click on the Hosts tab.

Verify that the `database` host set contains the following hosts:
- boundary-1-dev
- boundary-2-dev
- boundary-3-production
- boundary-4-production
If any of these hosts are missing, expand the troubleshooting accordion to diagnose what could be wrong.
If the host catalog is misconfigured, hosts will not be discoverable by Boundary.
At this point in the tutorial, hosts are contained within the host catalog but do not appear in one or more host sets. This means that the host set itself is likely misconfigured.
Navigate to the `database` host set Details page. Check the Filter section, and verify that it matches the correctly defined filter: `labels.service-type:database`
If the filter is incorrectly defined, click the Edit button and update the affected host set to fix the filter.
After you update the filter, click Save. Boundary will automatically refresh the host set.
Note
Depending on the type of configuration issue, you may need to wait 5 - 10 minutes for the existing host catalog or host sets to sync with the provider and refresh their values. If you do not want to wait, you can create a new host catalog and host set from scratch, or update the `sync_interval` attribute for the host catalog.
Check that the updated filter is working by navigating back to the Hosts tab.
If the `dev` or `production` host sets have incorrect filters, follow the same procedure to update their filters too.
Check the dev host set
Check the `dev` host set members.
Navigate to the Host Catalogs page and click on the GCP Catalog.
Click on the Host Sets tab.
Click on the `dev` host set, then click on the Hosts tab.
Verify the host set contains the following hosts:
- boundary-1-dev
- boundary-2-dev
If any of these hosts are missing, expand the troubleshooting accordion above to diagnose what could be wrong.
Check the production host set
Lastly, check the production host set members.
Navigate to the Host Catalogs page and click on the GCP Catalog.
Click on the Host Sets tab.
Click on the `production` host set, then click on the Hosts tab.
Verify the host set contains the following host:
- boundary-3-production
Notice the list of member hosts. Even though there are two production instances, only one exists in the host set.
To figure out what could be wrong, compare the members of the `production` host set to the members of the `database` host set. Remember, members of the `production` and `dev` host sets are a subset of the `database` host set.
For example, by comparing the host IDs of the `database` host set to the `production` host set, a host ID such as `hplg_s2qrXNCm5p` should be missing from the `production` host set, although it is in the `database` host set.
Update the misconfigured host
Check the details for the missing host using the CLI or the GCP cloud console.
Find the host's name
Check the misconfigured host's details in the Boundary Admin UI.
Navigate to the Host Catalogs page and click on the GCP Catalog.

Click on the Hosts tab, then click on the boundary-4-production host to view its details.

The External Name field shows the name of the misconfigured host (`boundary-4-production` in this example). Copy this value.
Update the host's labels
Recall that host set membership is defined based on the instance labels.
Open the GCP Compute Engine dashboard and navigate to the VM Instances page.
Select the `boundary-4-production` VM. Check the Labels section and the value of the `application` label.

Notice that the `application` label is misconfigured as `prod`, instead of `production`. An easy mistake to make!
Remember the filter defined for the `production` host set: `labels.application:production`

The label's value must equal `production` exactly to match this host set.
Click Edit and then click Manage labels. Update the `application` label to `production` for the misconfigured instance. Click Save when finished, then Save again to finish editing the VM.
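If you prefer the CLI, you can apply the same label fix with gcloud:

$ gcloud compute instances update boundary-4-production \
    --zone us-central1-a \
    --update-labels application=production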
Boundary will update the `production` host set automatically the next time it refreshes. This process can take up to ten minutes. For reference, you can manually configure the refresh interval on the host catalog itself.

After waiting, navigate back to the `production` host set and verify that its Hosts tab now contains the `boundary-4-production` host as a member.
Note
You may need to wait approximately 5 - 10 minutes for the existing host catalog or host sets to sync with the provider and refresh their values. If you do not want to wait, you can create a new host catalog and host set from scratch, or update the `sync_interval` attribute for the host catalog.
Cleanup and teardown
Destroy the Terraform resources.
Remove all GCP and Boundary resources using `terraform apply -destroy`. Enter `yes` when prompted to confirm the operation.

Note

Terraform 0.15.2+ uses `terraform apply -destroy` to clean up resources. If you are using an earlier version of Terraform, you may need to execute `terraform destroy`.

Open the terminal session where you deployed Terraform.
$ terraform apply -destroy
Remove the Terraform state files.
$ rm *.tfstate*
Delete any service accounts you created for this tutorial.
Navigate to the Service Accounts page. Select the name of your project, and then locate the service account you created for this tutorial, such as `boundary-service-account`.

Click the three dots Actions button next to the name of the service account and select Delete. Enter the email of the service account (such as `boundary-service-account@hc-26fb1119fccb4f0081b121xxxxx.iam.gserviceaccount.com`) and confirm by clicking Delete.

Repeat this process for any other service accounts you created for this tutorial.
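Alternatively, you can delete the service account with the gcloud CLI:

$ gcloud iam service-accounts delete \
    boundary-service-account@$GCP_PROJECT_ID.iam.gserviceaccount.com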
Delete the `gcpuser` keypair, if you created one. This keypair should only be stored on your machine.
Disable any APIs enabled for this project.
If you enabled any APIs, you can disable them by clicking the Manage button on the following API pages. This will take you to a page where you can click the Disable API button for that API.
Clean up Boundary.
Locate the shell where you executed `boundary dev` and enter `ctrl+c` to stop dev mode.

^C==> Boundary dev environment shutdown triggered, interrupt again to force
...
{
  "id": "lOp2Pa9JKe",
  "source": "https://hashicorp.com/boundary/dev-controller/boundary-dev",
  "specversion": "1.0",
  "type": "system",
  "data": {
    "version": "v0.1",
    "op": "github.com/hashicorp/cap/oidc.(*TestProvider).startCachedCodesCleanupTicking.func1",
    "data": {
      "msg": "cleanup of cached codes shutting down"
    }
  },
  "datacontentype": "text/plain",
  "time": "2021-08-16T17:06:36.275678-06:00"
}
Next steps
This tutorial demonstrated the steps to set up a dynamic host catalog using the GCP host plugin. You deployed the hosts on GCP using Terraform, configured a plugin-type host catalog within Boundary, and created three host sets that filtered for hosts based on their label values.
To learn more about integrating Boundary with cloud providers like GCP, AWS, and Azure, check out the Access management tutorials.