Production Installation
Installing Boundary in a production setting requires some infrastructure prerequisites. At the most basic level, Boundary operators should run a minimum of 3 controllers and 3 workers. Running 3 of each server type gives a fundamental level of high availability for the control plane (controller), as well as bandwidth for the number of sessions on the data plane (worker). Both server types should be run in a fault-tolerant setting, that is, in a self-healing environment such as an auto-scaling group. The documentation here does not cover self-healing infrastructure and assumes the operator has their preferred scheduling methods for these environments.
Network Requirements
- Client -> Controller port is :9200
- Worker -> Controller port is :9201
- Client -> Worker port is :9202
- Workers must have a route and port access to the targets which they service
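On AWS, for example, these requirements could be captured as security group rules. The Terraform sketch below is illustrative only; the group names, variables, and CIDR ranges are placeholders and should be tightened to your actual client networks:

```hcl
# Sketch: security group rules matching the port requirements above.
variable "vpc_id" {}

resource "aws_security_group" "controller" {
  name   = "boundary-controller"
  vpc_id = var.vpc_id
}

resource "aws_security_group" "worker" {
  name   = "boundary-worker"
  vpc_id = var.vpc_id
}

# Client -> Controller API (:9200)
resource "aws_security_group_rule" "client_to_controller" {
  type              = "ingress"
  from_port         = 9200
  to_port           = 9200
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"] # tighten to client networks in practice
  security_group_id = aws_security_group.controller.id
}

# Worker -> Controller cluster port (:9201)
resource "aws_security_group_rule" "worker_to_controller" {
  type                     = "ingress"
  from_port                = 9201
  to_port                  = 9201
  protocol                 = "tcp"
  source_security_group_id = aws_security_group.worker.id
  security_group_id        = aws_security_group.controller.id
}

# Client -> Worker proxy (:9202)
resource "aws_security_group_rule" "client_to_worker" {
  type              = "ingress"
  from_port         = 9202
  to_port           = 9202
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"] # tighten to client networks in practice
  security_group_id = aws_security_group.worker.id
}
```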
Architecture
The general architecture for the server infrastructure requires 3 controllers and 3 workers. The documentation here uses virtual machines running on Amazon EC2 as the example environment, but this use case can be extrapolated to almost any cloud platform to suit operator needs.
In this architecture, Boundary is broken up into its controller and worker server components across 3 EC2 instances, in 3 separate subnets, in 3 separate availability zones, with the controller API and UI publicly exposed by an application load balancer (ALB). The worker and controller VMs are in independent auto-scaling groups, allowing them to maintain their exact capacity.
Boundary requires an external Postgres and KMS. In the example above, we're using AWS managed services for these components. For Postgres, we're using RDS and for KMS we're using Amazon's Key Management Service.
Architecture Breakdown
API and Console Load Balancer
Load balancing the controller allows operators to secure the ingress to the Boundary system. We recommend placing all Boundary servers in private networks and using load balancing techniques to expose services such as the API and administrative console to public networks. In the production architecture, we recommend load balancing using a layer 7 load balancer and further constraining ingress to that load balancer with layer 4 constraints such as security groups or iptables.
For general configuration, we recommend the following:
- HTTPS listener with a valid TLS certificate for the domain it's serving, or TLS passthrough
- Health checks should target port :9200 using the TCP protocol
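As one possible shape for this, the following Terraform sketch wires an AWS ALB to the controller API; the variables, names, and certificate ARN are placeholders. Note that ALB health checks are layer 7, so the sketch uses an HTTPS check against :9200; a raw TCP check as recommended above would require a network load balancer instead:

```hcl
# Sketch: ALB in front of the controller API (:9200). Placeholder values throughout.
variable "vpc_id" {}
variable "public_subnet_ids" { type = list(string) }
variable "certificate_arn" {}

resource "aws_lb" "boundary" {
  name               = "boundary-controller"
  load_balancer_type = "application"
  subnets            = var.public_subnet_ids
}

resource "aws_lb_target_group" "controller_api" {
  name     = "boundary-controller-api"
  port     = 9200
  protocol = "HTTPS"
  vpc_id   = var.vpc_id

  health_check {
    port     = "9200"
    protocol = "HTTPS" # ALB health checks are layer 7; use an NLB for a raw TCP check
  }
}

resource "aws_lb_listener" "api" {
  load_balancer_arn = aws_lb.boundary.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = var.certificate_arn # valid TLS cert for the served domain

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.controller_api.arn
  }
}
```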
Controller Configuration
When running the Boundary controller as a service, we recommend storing the configuration file at /etc/boundary-controller.hcl. A boundary user and group should exist to manage this configuration file and to further restrict who can read and modify it.
Example controller configuration:
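This is a minimal sketch rather than a complete production configuration; the database URL, TLS file paths, region, and KMS key aliases are illustrative placeholders, and the kms stanzas assume AWS KMS as in the architecture above:

```hcl
# /etc/boundary-controller.hcl
# Placeholder values throughout; replace with your own.

controller {
  # name must be unique per controller
  name        = "boundary-controller-1"
  description = "Production Boundary controller"

  database {
    # External Postgres (e.g. RDS) connection string
    url = "postgresql://boundary:boundary@postgres.example.com:5432/boundary"
  }
}

# API listener for clients and the admin console (:9200)
listener "tcp" {
  address       = "0.0.0.0:9200"
  purpose       = "api"
  tls_disable   = false
  tls_cert_file = "/etc/boundary/tls/cert.pem"
  tls_key_file  = "/etc/boundary/tls/key.pem"
}

# Cluster listener for worker connections (:9201)
listener "tcp" {
  address = "0.0.0.0:9201"
  purpose = "cluster"
}

# Root key protecting Boundary's key hierarchy
kms "awskms" {
  purpose    = "root"
  region     = "us-east-1"
  kms_key_id = "alias/boundary-root" # illustrative alias
}

# Key used to authenticate workers to controllers
kms "awskms" {
  purpose    = "worker-auth"
  region     = "us-east-1"
  kms_key_id = "alias/boundary-worker-auth" # illustrative alias
}

# Recovery key for break-glass operations
kms "awskms" {
  purpose    = "recovery"
  region     = "us-east-1"
  kms_key_id = "alias/boundary-recovery" # illustrative alias
}
```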
Worker Configuration
Example worker configuration. Note that the worker's name must be unique!
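Again a minimal sketch: the controller addresses, public address, and key alias are placeholders, and the worker-auth key must match the one configured on the controllers. The controllers field shown here matches older Boundary releases; newer releases name this field initial_upstreams:

```hcl
# /etc/boundary-worker.hcl
# Placeholder values throughout; replace with your own.

# Proxy listener for client sessions (:9202)
listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

worker {
  # name must be unique across all workers!
  name        = "boundary-worker-1"
  description = "Production Boundary worker"

  # Controllers reachable on the cluster port (:9201); an internal
  # load balancer address can be used instead of individual IPs
  controllers = [
    "10.0.0.1",
    "10.0.0.2",
    "10.0.0.3",
  ]

  # Address clients use to reach this worker's proxy (:9202)
  public_addr = "worker1.example.com"
}

# Must reference the same key as the controllers' worker-auth stanza
kms "awskms" {
  purpose    = "worker-auth"
  region     = "us-east-1"
  kms_key_id = "alias/boundary-worker-auth" # illustrative alias
}
```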
Installation
TYPE below can be either worker or controller.
- /etc/boundary-${TYPE}.hcl: Configuration file for the Boundary service. See the example configurations above.
- /usr/local/bin/boundary: The Boundary binary. Can be built from https://github.com/hashicorp/boundary or downloaded from our release pages.
- /etc/systemd/system/boundary-${TYPE}.service: Systemd unit file for the Boundary service. Example:
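A sketch of such a unit file; ${TYPE} is substituted when the file is written (for example by the install script below), and the hardening directives are illustrative:

```ini
# /etc/systemd/system/boundary-${TYPE}.service
[Unit]
Description=boundary ${TYPE}

[Service]
# ${TYPE} is controller or worker, substituted when this file is written
ExecStart=/usr/local/bin/boundary server -config /etc/boundary-${TYPE}.hcl
User=boundary
Group=boundary
LimitMEMLOCK=infinity
LimitNOFILE=65536
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK

[Install]
WantedBy=multi-user.target
```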
Here's a simple install script that creates the boundary group and user, installs the systemd unit file and enables it at startup:
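A sketch under the layout above, assuming a Debian/Ubuntu-style adduser; adjust paths and user management for your distribution:

```bash
#!/bin/bash
# Install Boundary as a systemd service on Linux.
# Usage: ./install.sh <controller|worker>
# Assumes the binary is at /usr/local/bin/boundary and the config at
# /etc/boundary-${TYPE}.hcl, matching the layout above.

set -euo pipefail

TYPE=$1
NAME=boundary

# Write the unit file, substituting ${TYPE} at install time
sudo tee /etc/systemd/system/${NAME}-${TYPE}.service > /dev/null << EOF
[Unit]
Description=${NAME} ${TYPE}

[Service]
ExecStart=/usr/local/bin/${NAME} server -config /etc/${NAME}-${TYPE}.hcl
User=${NAME}
Group=${NAME}
LimitMEMLOCK=infinity
LimitNOFILE=65536
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK

[Install]
WantedBy=multi-user.target
EOF

# Create a no-login system user and group to own and run Boundary
# (adduser flags below are Debian/Ubuntu style)
sudo adduser --system --group ${NAME} || true
sudo chown ${NAME}:${NAME} /etc/${NAME}-${TYPE}.hcl

# Register the service and enable it at startup
sudo chmod 664 /etc/systemd/system/${NAME}-${TYPE}.service
sudo systemctl daemon-reload
sudo systemctl enable ${NAME}-${TYPE}
sudo systemctl start ${NAME}-${TYPE}
```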
Postgres Configuration
TBD
KMS Configuration
TBD