Architecture
Recommended deployment architecture
This section explores the recommended Boundary Enterprise architecture, which is designed to provide a highly available, scalable, and secure deployment suitable for production workloads.
The primary components that make up a Boundary Enterprise cluster are:
- Controller nodes
- Worker nodes
- Load balancer
- PostgreSQL database
- KMS (Key management service)
The following diagram shows the recommended architecture for deploying Boundary Enterprise within a single region.
Controllers
A minimum deployment consists of three controllers in three separate private subnets, distributed across three availability zones to ensure high availability. Configure the controllers to run in a fault-tolerant setup, such as an auto-scaling group, for a self-healing environment. Users authenticate with the controllers when using Boundary. We recommend exposing the controller API and UI to your users through a layer 4 load balancer. See the Load balancing section for more information.
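The following sketch illustrates what a controller's base configuration could look like. The node name, addresses, and ports are illustrative assumptions rather than values prescribed by this design; the database and KMS stanzas are covered in the sections below.

```hcl
# controller.hcl -- illustrative sketch only; names, addresses, and ports are placeholders.
controller {
  name        = "controller-1"
  description = "Boundary controller in availability zone a"

  database {
    # External PostgreSQL connection string (see the Database section)
    url = "env://BOUNDARY_PG_URL"
  }
}

# API listener fronted by the public layer 4 load balancer
listener "tcp" {
  address = "0.0.0.0:9200"
  purpose = "api"
  # TLS settings for this listener are shown in the Client-to-controller TLS section
}

# Cluster listener that workers connect to, typically through a private load balancer
listener "tcp" {
  address = "0.0.0.0:9201"
  purpose = "cluster"
}

# KMS stanzas (root, worker-auth, recovery) are shown in the KMS section
```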
Database
Controllers are stateless, and all configuration is managed through an external PostgreSQL database. We recommend configuring the PostgreSQL database for high availability. Please refer to the PostgreSQL high availability, load balancing, and replication documentation. If you use a managed service, refer to your provider's PostgreSQL high availability documentation.
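As a hedged illustration, the database stanza below shows the controller pointing at a single connection string; the hostname and credentials are placeholders and would normally resolve to the highly available endpoint of your PostgreSQL cluster or managed service.

```hcl
controller {
  database {
    # Placeholder URL; point this at the HA endpoint of your PostgreSQL
    # cluster or managed service (for example, a cluster or writer endpoint).
    url = "postgresql://boundary:REPLACE_ME@postgres-ha.internal:5432/boundary"

    # Optional, illustrative: cap the connections each controller opens.
    max_open_connections = 5
  }
}
```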
Workers
A minimum deployment consists of three workers per network boundary, distributed across different availability zones to ensure high availability. In environments where inbound connections are restricted, both ingress and egress workers can be deployed to enable multi-hop session proxying, which requires only outbound connectivity. Please refer to the recommended architecture for more details.
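To make the multi-hop pattern concrete, the sketch below shows what an egress worker's configuration might look like. The names, tags, and upstream address are assumptions for illustration; the key point is that initial_upstreams points at an ingress worker (or its load balancer), so only outbound connectivity is required.

```hcl
# worker.hcl -- illustrative egress worker; names and addresses are placeholders.
worker {
  name = "egress-worker-1"

  # In a multi-hop deployment, an egress worker's upstream is an ingress
  # worker (or the load balancer in front of the ingress workers), not the controllers.
  initial_upstreams = ["ingress-workers.internal:9202"]

  # Illustrative tags that targets can use to select this worker.
  tags {
    type = ["egress", "private-subnet"]
  }
}

# Proxy listener used for session traffic
listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

# The KMS stanza used for worker registration is shown in the KMS section.
```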
Key management service (KMS)
Boundary controllers use KMS keys to encrypt data at rest and in transit. Before a Boundary worker can proxy user sessions to targets, it must authenticate and register with the Boundary control plane. We recommend using a KMS-led authorization and authentication flow to auto-register the worker. This method requires the controller and worker to share a KMS key to authenticate the worker with the controller. The HashiCorp validated design leverages Vault’s Transit secrets engine for key management. If you use a managed service, refer to your provider’s key management documentation for guidance.
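As an illustration of the KMS-led flow with Vault's Transit secrets engine, the stanzas below sketch the shared worker-auth key: the controller declares root, worker-auth, and recovery keys, and each worker declares the same worker-auth key so it can authenticate with the control plane. The Vault address, mount path, and key names are placeholder assumptions.

```hcl
# Controller KMS stanzas (illustrative values; the Vault token can be supplied
# via the VAULT_TOKEN environment variable).
kms "transit" {
  purpose    = "root"
  address    = "https://vault.internal:8200"
  key_name   = "boundary-root"
  mount_path = "transit/"
}

kms "transit" {
  purpose    = "worker-auth"
  address    = "https://vault.internal:8200"
  key_name   = "boundary-worker-auth"
  mount_path = "transit/"
}

kms "transit" {
  purpose    = "recovery"
  address    = "https://vault.internal:8200"
  key_name   = "boundary-recovery"
  mount_path = "transit/"
}

# Worker KMS stanza -- must reference the same worker-auth key as the controllers.
kms "transit" {
  purpose    = "worker-auth"
  address    = "https://vault.internal:8200"
  key_name   = "boundary-worker-auth"
  mount_path = "transit/"
}
```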
Transport layer security (TLS)
Client-to-controller TLS
We recommend configuring TLS (i.e., a public certificate, private key, and certificate authority bundle from a trusted certificate authority) on the Boundary controller nodes. The load balancer should be configured to pass TLS connections through to the controller nodes; do not manage the TLS certificate on the load balancer. Terminating TLS connections at the controller nodes ensures that a client request remains encrypted end to end, reducing the attack surface for potential eavesdropping or data tampering.
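A minimal sketch of TLS termination on the controller's API listener follows; the file paths are placeholders, and the certificate file is assumed to bundle the server certificate with its issuing chain.

```hcl
# API listener with TLS terminated on the controller node (paths are placeholders).
listener "tcp" {
  address       = "0.0.0.0:9200"
  purpose       = "api"
  tls_disable   = false
  # The certificate file is assumed to include the server certificate and CA chain.
  tls_cert_file = "/etc/boundary/tls/boundary-cert.pem"
  tls_key_file  = "/etc/boundary/tls/boundary-key.pem"
}
```

With TLS terminated here, the layer 4 load balancer simply forwards TCP connections and holds no certificate material.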
Client-to-worker TLS
Workers do not require any TLS configuration for their client-facing listeners. Instead, the TLS configuration is determined dynamically via SNI during session authorization, and the session is then mutually authenticated.
Worker-to-upstream TLS
Workers establish TLS connections to their upstreams (controllers or other workers). The TLS stack is configured dynamically during worker registration. For more information, refer to the Boundary TLS documentation.
Load balancing
The control plane components of Boundary’s architecture benefit from using load balancers in three scenarios:
- Client access to the controller API and admin UI
- Worker-to-controller connectivity
- Downstream-to-upstream (worker-to-worker) connectivity
The load balancers help secure Boundary’s components and increase reliability and stability.
For load balancing from clients to workers, e.g., when clients initiate sessions to a Boundary target, the Boundary control plane manages the distribution of sessions among the available workers. As such, Boundary workers do not require a load balancer.
The design of this architecture requires layer 4 or layer 7 load balancers that can poll the /health API endpoint to detect each controller node's status and direct traffic accordingly.
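The /health endpoint is served by the controller's ops listener. The sketch below shows one way to enable it; the port is the assumed default rather than a value mandated by this design.

```hcl
# Ops listener on each controller; the load balancer polls /health on this port.
listener "tcp" {
  address = "0.0.0.0:9203"
  purpose = "ops"
}
```

A health check would then target /health on that port and treat an HTTP 200 response as healthy, allowing the load balancer to stop routing traffic to a controller that is unhealthy or shutting down.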
Cluster architecture summary
In summary, the HashiCorp validated design for a Boundary Enterprise cluster architecture includes the following components:
| Component | Configuration Details |
|---|---|
| Controllers | Three controllers distributed across three availability zones within a geographic region |
| Database | External PostgreSQL database |
| Workers | Three workers in each network boundary, also deployed across three availability zones |
| Worker registration | KMS-driven authentication and authorization flow for automatic worker registration |
| Controller access (Load balancer) | Layer 4 (TCP) load balancer in a public network to access the controller API and admin UI from the Boundary client |
| Worker-to-controller connectivity | Layer 4 (TCP) load balancer in a private network to establish connectivity from workers to controllers |
| Worker-to-worker connectivity | Layer 4 (TCP) load balancer(s) for downstream-to-upstream worker connectivity. The number depends on the number of network boundaries and the workers within each boundary that connect to the target. This load balancer is unnecessary if the cluster has only ingress workers connecting directly to the target. |