Boundary
Deploy workers
Before you deploy workers, you should have completed the following steps:
- Installed Boundary on at least three controller nodes.
- Prepared three network boundaries, either new or existing:
- Public/DMZ network
- Intermediary network
- Private network
- Prepared three virtual machines for Boundary workers, one in each network boundary with the Boundary binary installed on it.
In the following configuration files, there are common configuration components as well as some unique components depending on the role the Boundary worker performs. There are three files, one for each worker in a unique network boundary. Additionally, Boundary Enterprise supports a multi-hop configuration in which the Boundary workers can serve one of three purposes: an ingress worker, an ingress/egress worker, or an egress worker.
Prepare the environment files
HashiCorp recommends using either the env:// or file:// notation within the configuration files to securely provide sensitive configuration values to the Boundary worker binary.
The following configuration example uses env:// to secure AWS KMS configuration items.
When you install the Boundary binary using a package manager, it includes a unit file that configures an environment file at /etc/boundary.d/boundary.env.
You can use this file to set sensitive values in the Boundary worker configuration file.
The following file is an example of how this environment file could be configured:
/etc/boundary.d/boundary.env
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
In the example above, the proper IAM roles and permissions for the given AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY must be in place so that Boundary can use them to access the different KMS keys.
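With these variables in place, the worker's kms block can reference them using the env:// notation instead of relying on the AWS SDK's default environment lookup. The following is a sketch only; the access_key and secret_key parameter names are assumptions to verify against the KMS configuration documentation for your Boundary release:

```hcl
# Sketch: awskms block reading credentials via env://
kms "awskms" {
  purpose    = "worker-auth-storage"
  region     = "us-east-1"
  kms_key_id = "19ec80b0-dfdd-4d97-8164-c6examplekey3"

  # env:// resolves the value from the named environment
  # variable at startup, keeping the secret out of the file.
  access_key = "env://AWS_ACCESS_KEY_ID"
  secret_key = "env://AWS_SECRET_ACCESS_KEY"
}
```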
Prepare the worker KMS keys
The worker-auth storage KMS key is used by a worker for the encrypted storage of authentication keys.
Encrypted storage is recommended for workers that use the controller-led or worker-led registration method. If you do not specify a key, the authentication keys are not encrypted on disk. If you deploy Boundary workers that authenticate using KMS instead, you must generate an additional KMS key to authenticate the worker with the controller.
HashiCorp strongly recommends using the Key Management System (KMS) of the cloud provider where you deploy your Boundary workers.
Boundary workers must have the correct level of permissions for interacting with the cloud provider's KMS.
Refer to your cloud provider's documentation for more information.
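For AWS KMS, the policy attached to the worker's credentials typically needs at least encrypt, decrypt, and describe permissions on the key. The following IAM policy is an illustrative sketch with a placeholder key ARN, not a definitive statement of the permissions your deployment requires:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BoundaryWorkerKms",
      "Effect": "Allow",
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:DescribeKey"
      ],
      "Resource": "arn:aws:kms:us-east-1:111122223333:key/19ec80b0-dfdd-4d97-8164-c6examplekey3"
    }
  ]
}
```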
Create the worker configurations
After you create the requisite key or keys in the cloud provider of your choice, you can begin configuring the workers.
The following configuration examples all employ the worker-led authorization flow. For more information about configuring KMS authentication for Boundary workers, refer to the KMS authentication configuration documentation.
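For contrast, a worker that authenticates with KMS replaces the worker-led flow with a kms block whose purpose is worker-auth. This is a minimal sketch; the key ID is a placeholder, and the same key must be configured on the controller:

```hcl
# Sketch: KMS-led worker authentication. The controller must be
# configured with a matching kms block of purpose "worker-auth".
kms "awskms" {
  purpose    = "worker-auth"
  region     = "us-east-1"
  kms_key_id = "<worker_auth_kms_key_id>"
}
```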
If you use Boundary Enterprise, you can configure multiple workers to act in three different roles: ingress, intermediary, and egress. For Community Edition, workers only serve one role, acting as both the point of ingress and egress. Select your Boundary edition, and complete the following steps to configure workers.
For Boundary Enterprise, you can configure ingress, intermediary, and egress workers to take advantage of multi-hop worker capabilities.
Note that "ingress," "intermediary," and "egress" are general ways to describe how the respective worker interacts with resources. A worker can serve more than one of those roles at a time. Refer to Multi-hop sessions for more information.
Complete the steps below to configure workers for Boundary Enterprise.
If you are configuring your workers to support session recording, you will need to add an auth_storage_path and configure a storage backend. Refer to the Configure workers for storage documentation to learn more.
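As a sketch of what that looks like, a worker that supports session recording typically adds a local recording storage path to its worker block. The recording_storage_path parameter and the values below are assumptions to verify against the Configure workers for storage documentation:

```hcl
worker {
  public_addr       = "<worker_public_addr>"
  initial_upstreams = ["<controller_lb_address>:9201"]
  auth_storage_path = "/var/lib/boundary"

  # Local storage for session recordings before they are
  # synced to the configured storage bucket.
  recording_storage_path = "/var/lib/boundary/recordings"
}
```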
Ingress worker configuration
Create the ingress-worker.hcl file with the relevant configuration information:
/etc/boundary.d/ingress-worker.hcl
# disable memory from being swapped to disk
disable_mlock = true
# listener denoting this is a worker proxy
listener "tcp" {
address = "0.0.0.0:9202"
purpose = "proxy"
}
# worker block for configuring the specifics of the
# worker service
worker {
public_addr = "<worker_public_addr>"
initial_upstreams = ["<controller_lb_address>:9201"]
auth_storage_path = "/var/lib/boundary"
tags {
type = ["worker1", "upstream"]
}
}
# Events (logging) configuration. This
# configures logging for ALL events to both
# stderr and a file at /var/log/boundary/<boundary_use>.log
events {
audit_enabled = true
sysevents_enabled = true
observations_enabled = true
sink "stderr" {
name = "all-events"
description = "All events sent to stderr"
event_types = ["*"]
format = "cloudevents-json"
}
sink {
name = "file-sink"
description = "All events sent to a file"
event_types = ["*"]
format = "cloudevents-json"
file {
path = "/var/log/boundary"
file_name = "ingress-worker.log"
}
audit_config {
audit_filter_overrides {
sensitive = "redact"
secret = "redact"
}
}
}
}
# kms block for encrypting the authentication PKI material
kms "awskms" {
purpose = "worker-auth-storage"
region = "us-east-1"
kms_key_id = "19ec80b0-dfdd-4d97-8164-c6examplekey3"
endpoint = "https://vpce-0e1bb1852241f8cc6-pzi0do8n.kms.us-east-1.vpce.amazonaws.com"
}
Intermediate worker configuration
Create the intermediate-worker.hcl file with the relevant configuration information:
/etc/boundary.d/intermediate-worker.hcl
# disable memory from being swapped to disk
disable_mlock = true
# listener denoting this is a worker proxy
listener "tcp" {
address = "0.0.0.0:9202"
purpose = "proxy"
}
# worker block for configuring the specifics of the
# worker service
worker {
public_addr = "<worker_public_addr>"
initial_upstreams = ["<ingress_worker_address>:9202"]
auth_storage_path = "/var/lib/boundary"
tags {
type = ["worker2", "intermediate"]
}
}
# Events (logging) configuration. This
# configures logging for ALL events to both
# stderr and a file at /var/log/boundary/<boundary_use>.log
events {
audit_enabled = true
sysevents_enabled = true
observations_enabled = true
sink "stderr" {
name = "all-events"
description = "All events sent to stderr"
event_types = ["*"]
format = "cloudevents-json"
}
sink {
name = "file-sink"
description = "All events sent to a file"
event_types = ["*"]
format = "cloudevents-json"
file {
path = "/var/log/boundary"
file_name = "intermediate-worker.log"
}
audit_config {
audit_filter_overrides {
sensitive = "redact"
secret = "redact"
}
}
}
}
# kms block for encrypting the authentication PKI material
kms "awskms" {
purpose = "worker-auth-storage"
region = "us-east-1"
kms_key_id = "19ec80b0-dfdd-4d97-8164-c6examplekey4"
endpoint = "https://vpce-0e1bb1852241f8cc6-pzi0do8n.kms.us-east-1.vpce.amazonaws.com"
}
Egress worker configuration
Create the egress-worker.hcl file with the relevant configuration information:
/etc/boundary.d/egress-worker.hcl
# disable memory from being swapped to disk
disable_mlock = true
# listener denoting this is a worker proxy
listener "tcp" {
address = "0.0.0.0:9202"
purpose = "proxy"
}
# worker block for configuring the specifics of the
# worker service
worker {
public_addr = "<worker_public_addr>"
initial_upstreams = ["<intermediate_worker_address>:9202"]
auth_storage_path = "/var/lib/boundary"
tags {
type = ["worker3", "egress"]
}
}
# Events (logging) configuration. This
# configures logging for ALL events to both
# stderr and a file at /var/log/boundary/<boundary_use>.log
events {
audit_enabled = true
sysevents_enabled = true
observations_enabled = true
sink "stderr" {
name = "all-events"
description = "All events sent to stderr"
event_types = ["*"]
format = "cloudevents-json"
}
sink {
name = "file-sink"
description = "All events sent to a file"
event_types = ["*"]
format = "cloudevents-json"
file {
path = "/var/log/boundary"
file_name = "egress-worker.log"
}
audit_config {
audit_filter_overrides {
sensitive = "redact"
secret = "redact"
}
}
}
}
# kms block for encrypting the authentication PKI material
kms "awskms" {
purpose = "worker-auth-storage"
region = "us-east-1"
kms_key_id = "19ec80b0-dfdd-4d97-8164-c6examplekey5"
endpoint = "https://vpce-0e1bb1852241f8cc6-pzi0do8n.kms.us-east-1.vpce.amazonaws.com"
}
Refer to the list below for explanations of the parameters used in the example above:
- disable_mlock (bool: false) - Disables the server from executing the mlock syscall, which prevents memory from being swapped to disk. Disabling mlock is fine for local development and testing. However, it is not recommended for production unless the systems running Boundary use only encrypted swap or do not use swap at all. Boundary only supports memory locking on UNIX-like systems that support the mlock() syscall, such as Linux and FreeBSD.
  On Linux, to give the Boundary executable the ability to use the mlock syscall without running the process as root, run the following command:
  sudo setcap cap_ipc_lock=+ep $(readlink -f $(which boundary))
  If you use a Linux distribution with a modern version of systemd, you can add the following directive to the "[Service]" configuration section:
  LimitMEMLOCK=infinity
- listener - Configures the listeners on which Boundary serves traffic (API, cluster, and proxy).
- worker - Configures the worker. If present, boundary server starts a worker subprocess.
- events - Configures event-specific parameters. The example events configuration above is exhaustive and writes all events to both stderr and a file. This configuration may or may not work for your organization's logging solution.
- kms - Configures KMS blocks for various purposes. Refer to the Boundary documentation for configuration information for the different cloud KMS blocks.
Refer to the documentation for additional top-level configuration options and additional worker-specific options.
Start the Boundary service
When the configuration files are in place on each Boundary worker node, you can enable and start the binary using systemd.
Run the following commands to enable and start the service:
$ sudo systemctl enable boundary
$ sudo systemctl start boundary
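To confirm the worker started cleanly, you can check the unit's status and follow its logs. The unit name and log path below match the examples on this page; adjust them if yours differ:

```shell
# Check that the unit is active and inspect recent output
sudo systemctl status boundary

# Follow the service journal for startup errors
journalctl -u boundary -f

# The file sink configured earlier also captures all events
tail -f /var/log/boundary/ingress-worker.log
```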
Manually configure systemd (optional)
If you installed Boundary manually, you can configure Boundary to run as a service under systemd.
To do this, you should:
- Check the location of your worker configuration file on disk, such as /etc/boundary.d/egress-worker.hcl in the example on this page. You will need to reference the location of the .hcl configuration file when you set up the unit file in the next steps.
- Configure the user and group the Boundary service runs under.
- Set up the systemd unit file.
- Start the Boundary service.
Configure the user and group
HashiCorp recommends running Boundary as a non-root user and managing the Boundary process running under systemd with this user.
Add the boundary system user and group to ensure you have a no-login user that owns and runs Boundary:
$ sudo adduser --system --group boundary || true ;
$ sudo chown boundary:boundary /etc/boundary.d/worker.hcl ;
$ sudo chown boundary:boundary /usr/local/bin/boundary
Set up the unit file
Create a new unit file, such as /etc/systemd/system/boundary-worker.service, and add the following code to it. Update the path to your worker .hcl config file on the ExecStart line, the path to the boundary binary if yours is not installed at /usr/bin/boundary, and the user and group as needed.
/etc/systemd/system/boundary-worker.service
[Unit]
Description="HashiCorp Boundary worker"
Documentation=https://developer.hashicorp.com/boundary/docs
StartLimitIntervalSec=60
StartLimitBurst=3
[Service]
EnvironmentFile=-/etc/boundary.d/boundary.env
User=boundary
Group=boundary
ProtectSystem=full
ProtectHome=read-only
ExecStart=/usr/bin/boundary server -config=/etc/boundary.d/egress-worker.hcl
ExecReload=/bin/kill --signal HUP $MAINPID
KillMode=process
KillSignal=SIGINT
Restart=on-failure
RestartSec=5
TimeoutStopSec=30
LimitMEMLOCK=infinity
[Install]
WantedBy=multi-user.target
Start the Boundary service
Set the appropriate permissions on the unit file:
$ sudo chmod 664 /etc/systemd/system/boundary-worker.service
Reload the systemd daemon, and enable and start the Boundary service:
$ sudo systemctl daemon-reload ;
$ sudo systemctl enable boundary-worker ;
$ sudo systemctl start boundary-worker
Register the workers (optional)
If you deploy a worker using the worker-led method described above, you must register the worker with a controller.
Complete the following steps to register the worker using the UI:
Log in to Boundary as the admin user.
Select Workers in the navigation pane.
Click New.
(Optional) You can use the workers page to construct the contents of the worker.hcl file, if you did not create the configuration file as part of the installation process above. Provide the following details, and Boundary constructs the worker configuration file for you:
- Boundary Cluster ID
- Worker Public Address
- Config file path
- Worker Tags
Scroll to the bottom of the New Worker page, and paste the Worker Auth Registration Request key. Boundary provides the Worker Auth Registration Request key in the CLI output when you start the worker. You can also locate this value in the auth_request_token file.
Click Register Worker.
Click Done.
The new worker appears on the Workers page.
Repeat the registration process for any other workers, such as the intermediate and egress workers.
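If you prefer the CLI to the UI, worker-led registration can also be completed with the boundary CLI after authenticating as an admin. This is a sketch; the worker name flag is illustrative, and the token is the Worker Auth Registration Request value from the worker's startup output:

```shell
# Register a worker using the token printed at worker startup
# (also stored in the auth_request_token file under the
# worker's auth_storage_path).
boundary workers create worker-led \
  -name ingress-worker \
  -worker-generated-auth-token <registration_token>
```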
Next steps
After you configure workers, you should: