Deploying Boundary Enterprise using Terraform
HashiCorp provides a set of official HVD modules to make it easier to deploy a Boundary Enterprise environment that adheres to the requirements and standards laid out in this HashiCorp Validated Design.
Platform-specific guidance
AWS
HashiCorp provides official HVD modules to deploy Boundary Enterprise controllers and workers on AWS EC2.
Before deploying Boundary, you will need to provision the prerequisite infrastructure in AWS; an example of staging the secret material follows the list.
- A functional VPC with the required subnets.
- A Boundary Enterprise license that has been uploaded to AWS Secrets Manager.
- A TLS private key and certificate, valid for the fully qualified domain name you plan to use with Boundary, that have been base64-encoded and uploaded to AWS Secrets Manager.
- The ARN of the Boundary database password secret in AWS Secrets Manager.
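For example, a minimal sketch of staging this secret material with the AWS CLI, assuming GNU coreutils base64, hypothetical secret names (boundary-license, boundary-tls-cert, boundary-tls-privkey), and the certificate file names from the Preparation section below; adjust names and flags for your environment:
# Upload the Boundary license file to AWS Secrets Manager
$ aws secretsmanager create-secret --name boundary-license --secret-string file://boundary.hclic
# Base64-encode the TLS certificate and private key before uploading them
$ aws secretsmanager create-secret --name boundary-tls-cert --secret-string "$(base64 -w 0 tls-cert-secret.pub)"
$ aws secretsmanager create-secret --name boundary-tls-privkey --secret-string "$(base64 -w 0 tls-cert-private.key)"
Keep the ARNs these commands return handy, as the module variables reference your prerequisite secrets.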
Azure
HashiCorp provides official HVD modules to deploy Boundary Enterprise controllers and workers on Azure VMs.
Before deploying Boundary, you will need to provision the prerequisite infrastructure in Azure; an example of staging the secret material follows the list.
- An Azure resource group.
- An Azure Key Vault to store the prerequisite secret material.
- A Boundary Enterprise license that has been uploaded to Azure Key Vault.
- A TLS private key and certificate, valid for the fully qualified domain name you plan to use with Boundary, that have been base64-encoded and uploaded to Azure Key Vault.
- The name of the Boundary database password secret in Azure Key Vault.
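For example, a minimal sketch of staging this secret material with the Azure CLI, assuming GNU coreutils base64, a hypothetical Key Vault named boundary-kv, hypothetical secret names, and the certificate file names from the Preparation section below:
# Upload the Boundary license file to Azure Key Vault
$ az keyvault secret set --vault-name boundary-kv --name boundary-license --file boundary.hclic
# Base64-encode the TLS certificate and private key before uploading them
$ az keyvault secret set --vault-name boundary-kv --name boundary-tls-cert --value "$(base64 -w 0 tls-cert-secret.pub)"
$ az keyvault secret set --vault-name boundary-kv --name boundary-tls-privkey --value "$(base64 -w 0 tls-cert-private.key)"
Note the secret names you choose, as the module variables reference them.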
GCP
HashiCorp provides official HVD modules to deploy Boundary Enterprise controllers and workers on Google Compute Engine (GCE) instances in GCP.
Before deploying Boundary, you will need to provision the prerequisite infrastructure in GCP; an example of staging the secret material follows the list.
- A functional VPC with a public and a private subnet.
- Google Secret Manager enabled in the project to store the prerequisite secret material.
- A Boundary Enterprise license that has been uploaded to Google Secret Manager.
- A TLS private key and certificate, valid for the fully qualified domain name you plan to use with Boundary, that have been base64-encoded and uploaded to Google Secret Manager.
- The version of the Boundary database password secret in Google Secret Manager.
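For example, a minimal sketch of staging this secret material with the gcloud CLI, assuming GNU coreutils base64, hypothetical secret names, and the certificate file names from the Preparation section below:
# Upload the Boundary license file to Google Secret Manager
$ gcloud secrets create boundary-license --data-file=boundary.hclic
# Base64-encode the TLS certificate and private key before uploading them
$ base64 -w 0 tls-cert-secret.pub | gcloud secrets create boundary-tls-cert --data-file=-
$ base64 -w 0 tls-cert-private.key | gcloud secrets create boundary-tls-privkey --data-file=-
Note the secret names and versions you create, as the module variables reference them.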
Using these modules, you can use Terraform to deploy a complete, end-to-end Boundary Enterprise environment inside your own cloud account.
While we have made efforts throughout this document to provide prescriptive best practices, we recognize that each organization has its own unique requirements and constraints when it comes to deploying infrastructure. Wherever possible, we have attempted to capture, within the context of this Terraform module, the considerations you will need to weigh when deploying Boundary in your cloud environment. The module contains additional capabilities that you may wish to review if the variables covered here do not suit your specific needs.
Deployment sequence overview
- Ensure prerequisites are satisfied.
- Obtain the license file.
- Download the Boundary CLI (and, optionally, the Boundary Desktop client).
- Download the Terraform CLI.
- Deploy your prerequisite resources.
- Obtain the HashiCorp Validated Design Terraform module for deploying Boundary Enterprise controllers and workers.
- Configure your cloud credentials.
- Initialize your Terraform workspace.
- Input your variables, including the values from your prerequisite deployment, into the module.
- Create a Terraform plan for the controller.
- Apply the plan.
- Bootstrap the Boundary controller.
- Create a Terraform plan for the worker(s).
- Apply the plan.
- Begin creating targets and using Boundary.
Preparation
Create the certificate files
Create a standard X.509 certificate that will be installed on the Boundary servers. Refer to your organization's process for creating a new certificate that matches the DNS record you intend to direct users to when accessing Boundary.
You will need a total of three files:
- The certificate (tls-cert-secret.pub).
- The certificate's private key (tls-cert-private.key).
- The bundle file from the certificate authority used to issue the certificate (tls-ca-bundle.pub).
Keep these files handy, as you will need them later in the installation process.
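Before you upload these files, a quick sanity check with openssl (version 1.1.1 or newer for the -ext flag) can confirm that the certificate covers your Boundary FQDN and that the private key matches the certificate; this sketch uses the file names above:
# Show the subject and SAN entries; they should cover the FQDN you will use for Boundary
$ openssl x509 -in tls-cert-secret.pub -noout -subject -ext subjectAltName
# The two digests below should match if the certificate and private key belong together
$ openssl x509 -in tls-cert-secret.pub -noout -pubkey | openssl sha256
$ openssl pkey -in tls-cert-private.key -pubout | openssl sha256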
Obtain the Boundary Enterprise license file
Obtain the Boundary Enterprise license file from your HashiCorp account team. This file contains a license key unique to your environment. The file will be named something like boundary.hclic.
Keep this file handy, as you will need it later in the installation process.
Download and install the Boundary CLI
- Download the appropriate package for your operating system from the HashiCorp Releases site.
- Unzip the package.
- Move the boundary binary (boundary.exe for Windows) to a directory in your system’s PATH.
- Optional: Install the Boundary Desktop client.
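You can confirm the Boundary CLI is installed and on your PATH by checking its version:
$ boundary version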
Download and install the Terraform CLI
- Download the appropriate package for your operating system from the HashiCorp Releases site.
- Unzip the package.
- Move the terraform binary (terraform.exe for Windows) to a directory in your system's PATH.
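Likewise, confirm the Terraform CLI is installed and on your PATH:
$ terraform version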
Download the Terraform module(s)
HashiCorp and HashiCorp partners provide private Terraform modules to customize and support your automated deployment.
Once you have downloaded the module, navigate to the examples/default/ directory. Use this as the base working directory during the installation process.
AWS
Configure AWS credentials
Ensure the correct AWS credentials are in place and accessible to Terraform. Terraform can read credentials from:
- Credentials file: typically located at $HOME/.aws/credentials (%UserProfile%\.aws\credentials on Windows).
- Environment variables:
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_SESSION_TOKEN (if using an IAM role or other expiring credentials)
- AWS_DEFAULT_REGION
For complete details on how to configure AWS credentials for Terraform, see the HashiCorp Terraform AWS provider documentation.
Ensure that the credentials you will be using have sufficient permissions for the actions that Terraform will perform.
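For example, a minimal sketch using environment variables with placeholder values (static keys are shown only for illustration; prefer short-lived credentials where your organization supports them):
$ export AWS_ACCESS_KEY_ID="<your-access-key-id>"
$ export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
$ export AWS_DEFAULT_REGION="<your-region>"
# Optional sanity check that the credentials resolve to the expected identity
$ aws sts get-caller-identity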
Azure
Configure Azure credentials
Ensure the correct Azure credentials are in place and accessible to Terraform by running az login with the appropriate credentials. For complete details on how to configure Azure credentials for Terraform, see the HashiCorp Terraform Azure provider documentation.
Ensure that the credentials you will be using have sufficient permissions for the actions that Terraform will perform.
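For example, a minimal sketch using the Azure CLI with a placeholder subscription ID:
$ az login
# Select the subscription that Terraform should deploy into
$ az account set --subscription "<your-subscription-id>"
# Optional sanity check of the active account
$ az account show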
GCP
Configure GCP credentials
Ensure the correct GCP credentials are in place and accessible to Terraform. Terraform can read credentials from:
- Credentials file: typically located at $HOME/.config/gcloud/application_default_credentials.json (%APPDATA%\gcloud\application_default_credentials.json on Windows).
- Environment variables:
- GOOGLE_CREDENTIALS
For complete details on how to configure GCP credentials for Terraform, see the HashiCorp Terraform GCP provider documentation.
Ensure that the credentials you will be using have sufficient permissions for the actions that Terraform will perform.
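For example, a minimal sketch using application default credentials and a placeholder project ID:
$ gcloud auth application-default login
# Point gcloud (and the Google provider) at the intended project
$ gcloud config set project <your-project-id>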
Installation
Initialize Terraform
Run terraform init to initialize your Terraform workspace. Inspect the output to ensure that all providers and modules are successfully downloaded, and that there are no outstanding errors before continuing.
Configure variables for deployment
Warning
You can only configure variables in the installation module's terraform.tfvars file after all the prerequisite resources are available. You will need to supply values from the prerequisites to the Boundary module.
Review the terraform.tfvars.example file HashiCorp maintains in the examples/default/ directory for explanations of each relevant variable. There is a terraform.tfvars.example file in the respective module for each public cloud provider. Copy this file to a file called terraform.tfvars, and then fill in the values for each declared variable with the applicable values for your environment.
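For example, from the examples/default/ directory:
$ cp terraform.tfvars.example terraform.tfvars
# Edit terraform.tfvars and fill in the values captured from your prerequisite deployment
$ vi terraform.tfvars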
Create and apply Terraform plan
From the examples/default/ directory, generate a Terraform plan with the following command:
terraform plan -out plan.out
Review the plan output to see the changes that will be applied, then apply the changes with this command:
terraform apply plan.out
Because the plan was saved to plan.out, Terraform applies it without prompting for additional confirmation.
Validate installation
After your terraform apply finishes successfully, you can monitor the installation progress by connecting to your Boundary controller VM instance shell via SSH, AWS SSM, or Google IAP and observing the cloud-init (user_data) logs:
Higher-level logs:
$ tail -f /var/log/boundary-cloud-init.log
Lower-level logs:
$ journalctl -xu cloud-final -f
Note
The -f argument follows the logs as they are appended in real time and is optional. You may remove the -f for a static view.
The log files should display the following message after the cloud-init (user_data) script finishes successfully:
[INFO] boundary_custom_data script finished successfully!
Once the cloud-init script finishes successfully, while still connected to the VM via SSH, you can check the status of the boundary service:
$ sudo systemctl status boundary
From the terminal where you performed the terraform apply, run the following command:
terraform output
Using the terraform output value that references the load balancer name or IP address, create a new DNS record that matches your TLS certificate and points to the load balancer for the Boundary cluster. Then set the following environment variable:
$ export BOUNDARY_ADDR="https://boundary.example.com"
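Before continuing, you can confirm that the new DNS record resolves to the load balancer; the example below assumes the placeholder hostname used above:
$ dig +short boundary.example.com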
Bootstrapping Boundary
After deploying a Boundary controller, the system is in a partially initialized state. To complete initialization and configure initial authentication, use the bootstrapping module.
Repeat the steps starting from “Initialize Terraform” for this new module.
After bootstrapping is complete, you can authenticate to the Boundary cluster via the CLI or the admin UI.
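For example, a sketch of authenticating with the CLI, assuming BOUNDARY_ADDR is still set and using a placeholder password auth method ID and login name (obtain the actual values from your bootstrapping configuration or the admin UI):
$ boundary authenticate password -auth-method-id <ampw_auth_method_id> -login-name <admin-login-name>
The CLI prompts for the password and, by default, caches the resulting token for subsequent commands.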
Install Boundary workers
Note
All Boundary workers require network access to either a controller or an upstream worker. See the Network Connectivity page for more information.
Use the Boundary Enterprise worker HVD module for AWS, Azure, or GCP and repeat the steps starting from “Initialize Terraform” for this new module.
After your terraform apply finishes successfully, you can monitor the installation progress by connecting to your Boundary worker VM instance shell via SSH, AWS SSM, or Google IAP and observing the cloud-init (user_data) logs:
Higher-level logs:
$ tail -f /var/log/boundary-cloud-init.log
Lower-level logs:
$ journalctl -xu cloud-final -f
Note
The -f argument follows the logs as they are appended in real time and is optional. You may remove the -f for a static view.
The log files should display the following message after the cloud-init (user_data) script finishes successfully:
[INFO] boundary_custom_data script finished successfully!
Once the cloud-init script finishes successfully, while still connected to the VM via SSH, you can check the status of the boundary service:
$ sudo systemctl status boundary
After the Boundary worker has successfully deployed, it will show up in the Boundary cluster's workers list.
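You can verify this from the CLI as well; for example, while authenticated as an administrator:
$ boundary workers list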