Self-Managed Worker Operations
This page outlines operational guidance for running self-managed workers with HCP Boundary in production. Self-managed workers allow Boundary users to securely connect to private endpoints without exposing an organization's networks to the public or to HashiCorp-managed resources. All session activity is proxied by the organization's worker nodes. To learn more about self-managed workers, see the self-managed workers tutorial.
Boundary workers proxy connections to remote endpoints. Workers can proxy connections to target endpoints, proxy Boundary control plane traffic to private Vault environments and other peer services, or both. The following is a breakdown of worker network connectivity requirements depending on how the worker is used.
There are three network connectivity requirements for workers that proxy connections to targets:
- Outbound access to an existing trusted Boundary control point (either another trusted worker or the Boundary control plane, i.e., the origin URL)
- Outbound access to the target
- Inbound access from the clients trying to establish sessions

Note: The third requirement does not necessitate exposure to the public internet, only inbound access from clients. Consider the case of Boundary being accessed by clients from a private corporate network (not the public internet) to facilitate connections to a separate private datacenter network (a sample worker configuration follows the list):
- The worker would need outbound connectivity to a trusted Boundary control point (either another trusted worker or the Boundary control plane, i.e., the origin URL)
- The worker would need outbound connectivity to the host network (e.g., the datacenter network or cloud VPC) so that it can make outbound (worker->host) calls to hosts
- The worker would need to allow inbound (client->worker) connections from the client's network (in this scenario, the corporate network, not the public internet)
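The sketch below illustrates this scenario with a minimal worker configuration. It is a hedged example, not a definitive setup: the cluster ID, addresses, paths, ports, and tag values are placeholders, and the exact registration flow depends on your worker authentication method.

```shell
# Minimal self-managed worker setup for HCP Boundary (placeholder values).
# Outbound: the worker dials the HCP Boundary control plane and the hosts;
# inbound: clients connect to the proxy listener to establish sessions.
cat > /etc/boundary.d/worker.hcl <<'EOF'
disable_mlock = true

# Outbound connection to the HCP Boundary control plane (the origin).
hcp_boundary_cluster_id = "<your-hcp-boundary-cluster-id>"

# Inbound: clients (on the corporate network) connect here.
listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

worker {
  # Address clients use to reach this worker.
  public_addr = "worker1.corp.example.com"
  # Where the worker stores its authentication credentials.
  auth_storage_path = "/var/lib/boundary/worker"
  tags {
    type = ["datacenter-a"]
  }
}
EOF

# Start the worker; on first start it prints an authorization request
# token that must be registered with the control plane.
boundary server -config=/etc/boundary.d/worker.hcl
```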
When proxying connections to private Vault clusters, workers have two network connectivity requirements (a sample credential store configuration follows the list):
- Outbound access to an existing trusted Boundary control point (either another trusted worker or the Boundary control plane, i.e., the origin URL)
- Outbound access to the destination private Vault cluster
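To route Vault requests through such a worker, the Vault credential store can be scoped to matching workers with a worker filter. The following is a minimal sketch with placeholder IDs, address, and token; it assumes a Boundary version where Vault credential stores support the `-worker-filter` attribute and a worker tagged with type `vault`:

```shell
# Hypothetical example: create a Vault credential store whose requests are
# proxied only through workers tagged for the private Vault network.
boundary credential-stores create vault \
  -scope-id "p_1234567890" \
  -vault-address "https://vault.private.example.com:8200" \
  -vault-token "<periodic-renewable-vault-token>" \
  -worker-filter '"vault" in "/tags/type"'
```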
The following diagram illustrates the directionality of worker connections for HCP Boundary with self-managed workers, based on the requirements above.
Each network enclave that Boundary accesses needs at least one worker to provide access. To ensure high availability for production use cases, we recommend at least three workers per network enclave.
Worker performance is most affected by the number of concurrent sessions the worker is proxying and the rates of data transfer within those sessions.
Worker session assignment is intelligently dictated by the Boundary control plane based on:
- Which workers are candidates to proxy a session, based on the worker's tags and the target's worker filter (an example filter follows this list), and
- The health and connectivity of the candidate workers

You do not need a load balancer to manage worker traffic.
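For example, a target can be pinned to workers in a specific enclave with a worker filter expression that matches worker tags. A minimal sketch with a placeholder target ID, assuming a recent Boundary version that supports egress worker filters and the `datacenter-a` tag from the earlier configuration:

```shell
# Hypothetical example: restrict sessions for this target to workers whose
# "type" tag contains "datacenter-a".
boundary targets update tcp \
  -id "ttcp_1234567890" \
  -egress-worker-filter '"datacenter-a" in "/tags/type"'
```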
Ultimately, the constraints of your access use case and the sensitivity of the workloads in each network enclave will dictate the level of redundancy and sizing you require for your workers.
Sizing recommendations have been divided into two common cluster sizes.
Small clusters would be appropriate for most initial production deployments or for development and testing environments.
Large clusters are production environments with a large number of Boundary clients.
The size of your workers will depend on your usage of Boundary. For example, if you use Boundary for SSH connections and HTTP access to hosts, your instance selection and performance may differ significantly from a deployment that consistently performs large data transfers.
Below are some general guidelines. However, we recommend that as you use Boundary, you continue to monitor your cloud provider's network throughput limits for your machine types, observe relevant Boundary metrics where possible alongside other host metrics, and scale horizontally or vertically as needed (an example metrics listener follows the list below).
Some examples of relevant documentation might include:
- AWS: EC2 Network Performance and Monitoring EC2 Network Performance
- Azure: Azure Virtual Machine Throughput and Accelerated Network for Azure VMs
- GCP: Network Bandwidth and About Machine Families
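Boundary can expose operational metrics in Prometheus format through an ops listener, which is one way to observe worker load directly. A minimal sketch, assuming the default ops port and the config file path used earlier:

```shell
# Hypothetical addition to the worker config: an ops listener that serves
# health and Prometheus metrics endpoints for monitoring.
cat >> /etc/boundary.d/worker.hcl <<'EOF'

listener "tcp" {
  address = "127.0.0.1:9203"
  purpose = "ops"
}
EOF

# After restarting the worker, scrape the metrics endpoint locally:
curl -s http://127.0.0.1:9203/metrics | head
```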
HCP Boundary environments function optimally when workers and controllers are running the same version. The following section outlines API compatibility and hot-fix policies for self-managed workers of HCP Boundary environments.
HCP Boundary only supports API backwards compatibility between HCP Boundary and self-managed workers of the prior “major release”. A major release is identified by a change in the first (X) or second (Y) digit in the following versioning nomenclature: Version X.Y.Z. All self-managed workers within an environment must be on the same version as each other.
For example, self-managed workers running Boundary version 0.11.0 are compatible with HCP Boundary environments running Boundary 0.12.0. However, they will no longer be compatible once the HCP Boundary control plane is updated to version 0.13.0 or above.
Eligible code-fixes and hot-fixes for HCP Boundary self-managed workers are only provided via a new minor release (Z) on top of the latest “major release” branch.
HashiCorp will be responsible for keeping the customers’ HCP Boundary control plane versions up to date with the latest release of Boundary software. Customers are expected to maintain their self-managed workers and ensure that they are running the same versions as the control plane. It is important to note that all self-managed workers need to be on the same major and minor version without any exceptions.
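One practical way to meet this expectation is to pin worker binaries to the control plane's version during upgrades. The sketch below is assumption-laden: the `boundary-worker` artifact name, URL pattern, and systemd unit name are placeholders you should verify against the HashiCorp releases page for your platform:

```shell
# Hypothetical upgrade sketch: move every worker to the control plane's
# version so all self-managed workers share the same major and minor release.
# NOTE: confirm the exact artifact name and URL on releases.hashicorp.com.
VERSION="0.12.0+hcp"
curl -fsSLO "https://releases.hashicorp.com/boundary-worker/${VERSION}/boundary-worker_${VERSION}_linux_amd64.zip"
unzip -o "boundary-worker_${VERSION}_linux_amd64.zip" -d /usr/local/bin
systemctl restart boundary-worker   # assumes a systemd unit named "boundary-worker"
```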