Deploying Boundary enterprise
Overview
This section details the steps to create a Boundary cluster manually in a private datacenter. It assumes that you have already read the Recommended Deployment Architecture and Detailed Design sections of this guide and have a basic understanding of the Boundary architecture and the steps required to deploy Boundary in a private datacenter.
Throughout this section we will use the following names and IP addresses for the Boundary Controllers and Workers:
| DNS Name | IP Address | Node Type | Location |
|---|---|---|---|
| controller-api-lb.boundary.domain | controller_api_lb_address | Internet-facing controller API load balancer (TCP:443) | all zones |
| controller-cluster-lb.boundary.domain | controller_cluster_lb_address | Internal controller cluster load balancer (TCP:9201) | all zones |
| controller1.boundary.domain | 10.0.253.11 | Controller VM | zone1 |
| controller2.boundary.domain | 10.0.254.12 | Controller VM | zone2 |
| controller3.boundary.domain | 10.0.255.13 | Controller VM | zone3 |
| ingressworker-lb.boundary.domain | ingress_lb_address | Internal ingress worker load balancer (TCP:9202) | all zones |
| ingressworker1.boundary.domain | 10.0.253.101 | Ingress worker VM | zone1 |
| ingressworker2.boundary.domain | 10.0.254.102 | Ingress worker VM | zone2 |
| ingressworker3.boundary.domain | 10.0.255.103 | Ingress worker VM | zone3 |
| egressworker1.boundary.domain | 10.0.253.201 | Egress worker VM | zone1 |
| egressworker2.boundary.domain | 10.0.254.202 | Egress worker VM | zone2 |
| egressworker3.boundary.domain | 10.0.255.203 | Egress worker VM | zone3 |
Prepare
License
Obtain your active Boundary Enterprise license file. If you do not have this file, please contact your HashiCorp account team.
Servers
- Refer to the Detailed Design section of this guide for guidance on sizing the servers for your environment.
- Identify the availability zones within your datacenter where your Boundary controllers and ingress/egress workers will live. For the rest of this document, we refer to these as zone1, zone2, and zone3.
- Build the servers. Ensure there is one Boundary controller, one ingress worker, and one egress worker in each of the three availability zones, for a total of nine servers. For the rest of this document, we refer to these servers as controller1, controller2, controller3, ingressworker1, ingressworker2, ingressworker3, egressworker1, egressworker2, and egressworker3.
- Ensure you can log in to each server as a user with sudo or root privileges via SSH or equivalent.
Load balancer
- A layer 4 load balancer exposes the controller API and admin UI via HTTPS (port 443) to Boundary clients. The load balancer distributes Boundary client requests to the controllers' API port (default TCP 9200).
- Another layer 4 load balancer exposes the controllers' cluster port (default TCP 9201), which workers use for session authorization, credential handling, and other coordination with the controllers.
- Refer to the Detailed Design section of the guide for more information on how to configure the load balancer.
PostgreSQL
Controllers are stateless, and all configurations are managed through an external PostgreSQL database. We recommend configuring the PostgreSQL database for high availability. Please refer to the PostgreSQL high availability, load balancing, and replication documentation. If you use a managed service, refer to your provider's PostgreSQL high availability documentation.
Storage for session recording
We recommend using S3-compatible object storage for audit logging and session recording. Refer to the Detailed Design section of the guide for more information on how to configure storage for session recording.
Boundary controllers configuration
TLS
Create an X.509 certificate to install on each of the Boundary controllers. Follow your organization's process for creating a new certificate that matches the DNS record you intend to direct users to when accessing Boundary. In this case, that is the DNS record pointing at the API load balancer: boundary.domain. (Note: replace boundary.domain with your actual domain name.)
Three files will be needed:
- The certificate (cert.pem).
- The certificate's private key (key.pem).
- The certificate authority bundle from a trusted certificate authority (bundle.pem).
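If your organization does not yet have a certificate workflow for this, the sketch below shows one way to generate the private key and a certificate signing request with openssl. The DNS names are taken from the table above; the exact process for getting the CSR signed and for obtaining bundle.pem depends on your CA.

$ openssl genrsa -out key.pem 2048
$ openssl req -new -key key.pem -out boundary.csr \
    -subj "/CN=boundary.domain" \
    -addext "subjectAltName=DNS:boundary.domain,DNS:controller-api-lb.boundary.domain"
# Submit boundary.csr to your certificate authority, then save the issued
# certificate as cert.pem and the CA chain as bundle.pem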
You may need to create a new directory at /etc/boundary.d/tls to store the certificate material:
$ ls -l /etc/boundary.d/tls
drwxr-x--- 2 boundary boundary 4096 Oct 17 03:47 .
drwxr-x--- 3 boundary boundary 4096 Oct 17 03:47 ..
-rw-r----- 1 boundary boundary 1801 Oct 17 03:47 bundle.pem
-rw-r----- 1 boundary boundary 1842 Oct 17 03:47 cert.pem
-rw-r----- 1 boundary boundary 1679 Oct 17 03:47 key.pem
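Before wiring these files into the configuration, it is worth confirming that the certificate validates against the CA bundle and that the private key matches the certificate. A quick check, assuming the paths above:

# Verify the certificate chain
$ openssl verify -CAfile /etc/boundary.d/tls/bundle.pem /etc/boundary.d/tls/cert.pem

# The two digests below should match if the key belongs to the certificate
$ openssl x509 -noout -pubkey -in /etc/boundary.d/tls/cert.pem | sha256sum
$ openssl pkey -pubout -in /etc/boundary.d/tls/key.pem | sha256sum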
Add the following TLS listener configuration to the /etc/boundary.d/controller.hcl file:
# API listener configuration block
listener "tcp" {
  address              = "0.0.0.0:9200"
  purpose              = "api"
  tls_disable          = false
  tls_cert_file        = "/etc/boundary.d/tls/cert.pem"
  tls_key_file         = "/etc/boundary.d/tls/key.pem"
  tls_client_ca_file   = "/etc/boundary.d/tls/bundle.pem"
  cors_enabled         = true
  cors_allowed_origins = ["*"]
}

# Ops listener for operations like health checks for load balancers
listener "tcp" {
  address            = "0.0.0.0:9203"
  purpose            = "ops"
  tls_disable        = false
  tls_cert_file      = "/etc/boundary.d/tls/cert.pem"
  tls_key_file       = "/etc/boundary.d/tls/key.pem"
  tls_client_ca_file = "/etc/boundary.d/tls/bundle.pem"
}
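The ops listener is the endpoint your load balancers should probe for health checks. Once the controller service is running (see Starting Boundary controller service below), you can also exercise it manually; this example assumes you run it on the controller itself:

# The /health endpoint returns HTTP 200 while the controller is healthy
$ curl -sk -o /dev/null -w '%{http_code}\n' https://127.0.0.1:9203/health
200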
Note
We recommend storing the public certificate, private key, certificate authority bundle from a trusted certificate authority, and license key for Boundary controllers in HashiCorp Vault or cloud provider services such as AWS Secrets Manager, Azure Key Vault, or GCP Secret Manager.
KMS for controllers
Four KMS keys will be needed:
- root: The primary encryption key used by Boundary.
- worker-auth: Shared by the controller and worker in order to authenticate a worker to the controller.
- recovery: Used for rescue/recovery operations in case of system issues or when normal authentication methods are unavailable.
- bsr: Used by the session recording feature; it encrypts recorded session data and ensures the integrity of those recordings.
Add the following KMS configuration to the /etc/boundary.d/controller.hcl file:
# Root KMS key (managed by AWS KMS in this example)
kms "awskms" {
  purpose    = "root"
  region     = "ap-southeast-1"
  kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey1"
}

# Recovery KMS key (managed by AWS KMS in this example)
kms "awskms" {
  purpose    = "recovery"
  region     = "ap-southeast-1"
  kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey2"
}

# Worker-auth KMS key (managed by AWS KMS in this example)
kms "awskms" {
  purpose    = "worker-auth"
  region     = "ap-southeast-1"
  kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey3"
}

# BSR KMS key (managed by AWS KMS in this example)
kms "awskms" {
  purpose    = "bsr"
  region     = "ap-southeast-1"
  kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey4"
}
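If you manage these keys in AWS KMS as in this example, you can create them ahead of time and substitute the resulting IDs into the kms blocks. A hypothetical sketch using the AWS CLI; the key IDs in your output will differ:

# Create one key per purpose and print its ID for use in controller.hcl
$ for purpose in root recovery worker-auth bsr; do
    aws kms create-key \
      --description "boundary-${purpose}" \
      --region ap-southeast-1 \
      --query 'KeyMetadata.KeyId' \
      --output text
  done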
For KMS key management for Boundary, we recommend the HashiCorp Validated Design, which leverages the Vault Transit secrets engine. Refer to the Vault Transit section of this guide for more information on configuring Boundary to use Vault's Transit secrets engine for key management. If you use a managed service, refer to our KMS documentation for guidance on all available KMS types.
Prepare controller configuration to initialize a PostgreSQL database
Add the following database configuration to the /etc/boundary.d/controller.hcl file:
# Controller configuration block
controller {
  name        = "<controller1>"           # update for each controller
  description = "<Boundary Controller 1>" # update for each controller
  database {
    url = "postgresql://POSTGRESQL_CONNECTION_STRING"
  }
  license = "file:////opt/boundary/license/license.hclic"
}
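Before initializing the database, it is worth confirming that the controller host can actually reach PostgreSQL with the connection string you plan to use. A quick check with psql, assuming the placeholder URL above:

# Prints the server version if the URL, credentials, and network path are correct
$ psql "postgresql://POSTGRESQL_CONNECTION_STRING" -c 'SELECT version();'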
Prepare Boundary controllers configuration
Populate the /etc/boundary.d/controller.hcl file with the configuration below. The configuration is identical on all three controllers except for the controller stanza and the cluster listener stanza:
# Disable the mlock syscall (allows memory to be swapped to disk)
disable_mlock = true

telemetry {
  prometheus_retention_time = "24h"
  disable_hostname          = true
}

# Controller configuration block
controller {
  name        = "<controller1>"           # update for each controller
  description = "<Boundary Controller 1>" # update for each controller
  database {
    url = "postgresql://POSTGRESQL_CONNECTION_STRING"
  }
  license = "file:////opt/boundary/license/license.hclic"
}

# API listener configuration block
listener "tcp" {
  address              = "0.0.0.0:9200"
  purpose              = "api"
  tls_disable          = false
  tls_cert_file        = "/etc/boundary.d/tls/cert.pem"
  tls_key_file         = "/etc/boundary.d/tls/key.pem"
  tls_client_ca_file   = "/etc/boundary.d/tls/bundle.pem"
  cors_enabled         = true
  cors_allowed_origins = ["*"]
}

# Data-plane listener configuration block (used for worker coordination)
listener "tcp" {
  address = "<10.0.253.11>:9201" # update for each controller's IP address
  purpose = "cluster"
}

# Ops listener for operations like health checks for load balancers
listener "tcp" {
  address            = "0.0.0.0:9203"
  purpose            = "ops"
  tls_disable        = false
  tls_cert_file      = "/etc/boundary.d/tls/cert.pem"
  tls_key_file       = "/etc/boundary.d/tls/key.pem"
  tls_client_ca_file = "/etc/boundary.d/tls/bundle.pem"
}

# Events (logging) configuration. This configures logging for ALL events
# to both stderr and a file at /var/log/boundary/controller.log
events {
  audit_enabled        = true
  sysevents_enabled    = true
  observations_enabled = true
  sink "stderr" {
    name        = "all-events"
    description = "All events sent to stderr"
    event_types = ["*"]
    format      = "cloudevents-json"
  }
  sink {
    name        = "file-sink"
    description = "All events sent to a file"
    event_types = ["*"]
    format      = "cloudevents-json"
    file {
      path      = "/var/log/boundary"
      file_name = "controller.log"
    }
    audit_config {
      audit_filter_overrides {
        sensitive = "redact"
        secret    = "redact"
      }
    }
  }
}

# Root KMS key (managed by AWS KMS in this example)
kms "awskms" {
  purpose    = "root"
  region     = "ap-southeast-1"
  kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey1"
}

# Recovery KMS key
kms "awskms" {
  purpose    = "recovery"
  region     = "ap-southeast-1"
  kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey2"
}

# Worker-auth KMS key (this example uses KMS-authenticated workers)
kms "awskms" {
  purpose    = "worker-auth"
  region     = "ap-southeast-1"
  kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey3"
}

# BSR KMS key (this example uses KMS for the session recording feature)
kms "awskms" {
  purpose    = "bsr"
  region     = "ap-southeast-1"
  kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey4"
}
Initialize a PostgreSQL database
Before you can start Boundary, you must initialize the database from one Boundary controller.
The following command initializes the Boundary database with the configuration specified in the /etc/boundary.d/controller.hcl file:
$ boundary database init -config=/etc/boundary.d/controller.hcl
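The init command prints generated resources, including the initial auth method and admin credentials, exactly once, so capture the output somewhere safe. One possible approach, assuming your Boundary version supports JSON output for this command:

# Save the one-time output (initial auth method ID, admin login name and password)
$ boundary database init -config=/etc/boundary.d/controller.hcl -format=json > /root/boundary-init.json
$ chmod 600 /root/boundary-init.json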
Starting Boundary controller service
When the configuration files are in place on each Boundary controller, you can proceed to enable and start the binary via systemd
on each of the Boundary controller nodes.
Perform these steps on all Boundary controllers:
1. Create a boundary user, and create directories for Boundary configuration owned by this user.

$ adduser --system --group boundary || true
$ mkdir -p /etc/boundary.d /etc/boundary.d/tls /opt/boundary/license /var/log/boundary
$ chown -R boundary:boundary /etc/boundary.d /var/log/boundary
2. Download the Boundary Enterprise package from HashiCorp. Unzip the package and move the boundary binary to a shared PATH location such as /usr/local/bin, owned by the boundary user. Please note: at the time of writing, the current version of Boundary Enterprise is 0.17.1+ent.

$ curl -O https://releases.hashicorp.com/boundary/0.17.1+ent/boundary_0.17.1+ent_linux_amd64.zip
$ unzip boundary_0.17.1+ent_linux_amd64.zip
$ mv boundary /usr/local/bin/
$ chown boundary:boundary /usr/local/bin/boundary
3. Add the license file, certificate files, and the controller.hcl config file to /etc/boundary.d. In the end, the directories should look like this:

$ chown boundary:boundary /etc/boundary.d/*
$ chmod 640 /etc/boundary.d/*
$ ls -l /etc/boundary.d/
drwxr-x---  3 boundary boundary 4096 Oct 17 03:47 .
drwxr-xr-x 94 root     root     4096 Oct 17 05:51 ..
-rw-r----- 1 boundary boundary 1652 Oct 17 03:47 controller.hcl
drwxr-x--- 2 boundary boundary 4096 Oct 17 03:47 tls
$ ls -l /etc/boundary.d/tls
drwxr-x--- 2 boundary boundary 4096 Oct 17 03:47 .
drwxr-x--- 3 boundary boundary 4096 Oct 17 03:47 ..
-rw-r----- 1 boundary boundary 1801 Oct 17 03:47 bundle.pem
-rw-r----- 1 boundary boundary 1842 Oct 17 03:47 cert.pem
-rw-r----- 1 boundary boundary 1679 Oct 17 03:47 key.pem
$ ls -l /opt/boundary/license
-rw-rw-r-- 1 root root 3514 Aug 15 18:21 EULA.txt
-rw-rw-r-- 1 root root 4922 Aug 15 18:21 LICENSE.txt
-rw-rw-r-- 1 root root 9518 Aug 15 18:21 TermsOfEvaluation.txt
-rw-r--r-- 1 root root 1163 Oct 17 03:47 license.hclic
$ ls -la /var/log/boundary
total 8
drwxr-x---  2 boundary boundary 4096 Oct 17 06:23 .
drwxrwxr-x 11 root     syslog   4096 Oct 17 06:23 ..
4. Create a systemd unit file for the Boundary service, then load it into systemd. Note that the ExecStart line runs the boundary binary pointing at your controller.hcl file:

$ cat << 'EOF' > /etc/systemd/system/boundary.service
[Unit]
Description="HashiCorp Boundary"
Documentation=https://www.boundaryproject.io/docs/
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/boundary.d/controller.hcl
StartLimitIntervalSec=60
StartLimitBurst=3

[Service]
User=boundary
Group=boundary
ProtectSystem=full
ProtectHome=read-only
PrivateTmp=yes
PrivateDevices=yes
SecureBits=keep-caps
AmbientCapabilities=CAP_IPC_LOCK
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
NoNewPrivileges=yes
ExecStart=/usr/local/bin/boundary server -config=/etc/boundary.d/controller.hcl
ExecReload=/bin/kill --signal HUP $MAINPID
KillMode=process
KillSignal=SIGINT
Restart=on-failure
RestartSec=5
TimeoutStopSec=30
LimitNOFILE=65536
LimitMEMLOCK=infinity

[Install]
WantedBy=multi-user.target
EOF

$ systemctl daemon-reload
$ systemctl enable boundary
5. Start the Boundary controller service.

$ systemctl start boundary
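After starting the service, confirm the controller came up cleanly before moving on to the workers. A minimal check using systemd and the logs, plus the /health probe shown in the TLS section:

$ systemctl status boundary --no-pager
$ journalctl -u boundary --no-pager | tail -n 20
$ curl -sk -o /dev/null -w '%{http_code}\n' https://127.0.0.1:9203/health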
Boundary ingress workers configuration
KMS for ingress workers
Get the worker-auth KMS key from the KMS for controllers section. This key enables secure communication between workers and controllers, ensuring that only authorized workers can connect.
Add the following KMS configuration to the /etc/boundary.d/worker.hcl file:
# Worker-auth KMS key (managed by AWS KMS in this example)
kms "awskms" {
  purpose    = "worker-auth"
  region     = "ap-southeast-1"
  kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey3"
}
Prepare ingress workers configuration
Populate the /etc/boundary.d/worker.hcl file with the configuration below. The configuration is identical on all three ingress workers except for the worker stanza:
# Disable the mlock syscall (allows memory to be swapped to disk)
disable_mlock = true

telemetry {
  prometheus_retention_time = "24h"
  disable_hostname          = true
}

# Worker block for configuring the specifics of the worker service
worker {
  public_addr       = "<10.0.253.101>"   # update for each ingress worker's IP address
  name              = "<ingressworker1>" # update for each ingress worker's name
  initial_upstreams = ["<controller_cluster_lb_address>:9201"]

  recording_storage_path                       = "/opt/boundary/bsr"
  recording_storage_minimum_available_capacity = "500MB"

  tags {
    app         = ["worker"]
    env         = ["uat"]
    bsr         = ["enabled"]
    worker-type = ["ingress"]
  }
}

# Listener denoting this is a worker proxy
listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

# Ops listener for operations like health checks for ingress workers
listener "tcp" {
  address     = "0.0.0.0:9203"
  purpose     = "ops"
  tls_disable = true
}

# Events (logging) configuration. This configures logging for ALL events
# to both stderr and a file at /var/log/boundary/ingress-worker.log
events {
  audit_enabled        = true
  sysevents_enabled    = true
  observations_enabled = true
  sink "stderr" {
    name        = "all-events"
    description = "All events sent to stderr"
    event_types = ["*"]
    format      = "cloudevents-json"
  }
  sink {
    name        = "file-sink"
    description = "All events sent to a file"
    event_types = ["*"]
    format      = "cloudevents-json"
    file {
      path      = "/var/log/boundary"
      file_name = "ingress-worker.log"
    }
    audit_config {
      audit_filter_overrides {
        sensitive = "redact"
        secret    = "redact"
      }
    }
  }
}

# KMS block for encrypting the worker authentication PKI material
# Worker-auth KMS key (managed by AWS KMS in this example)
kms "awskms" {
  purpose    = "worker-auth"
  region     = "ap-southeast-1"
  kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey3"
}
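The tags in the worker stanza above become useful later when you scope targets to particular workers. As an illustration only (the target ID is hypothetical), a target's worker filters could match on these tags like this:

# Route a target's ingress and egress traffic through workers tagged by worker-type
$ boundary targets update tcp -id ttcp_1234567890 \
    -ingress-worker-filter '"ingress" in "/tags/worker-type"' \
    -egress-worker-filter '"egress" in "/tags/worker-type"'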
Starting Boundary ingress worker service
When the configuration files are in place on each Boundary ingress worker, you can proceed to enable and start the binary via systemd
on each of the Boundary ingress worker nodes.
Perform these steps on all Boundary ingress workers:
1. Create a boundary user, and create directories for Boundary configuration owned by this user.

$ adduser --system --group boundary || true
$ mkdir -p /etc/boundary.d /opt/boundary/bsr /var/log/boundary
$ chown -R boundary:boundary /etc/boundary.d /opt/boundary/bsr /var/log/boundary
2. Download the Boundary Enterprise package from HashiCorp. Unzip the package and move the boundary binary to a shared PATH location such as /usr/local/bin, owned by the boundary user. Please note: at the time of writing, the current version of Boundary Enterprise is 0.17.1+ent; install the same version on the workers as on the controllers.

$ curl -O https://releases.hashicorp.com/boundary/0.17.1+ent/boundary_0.17.1+ent_linux_amd64.zip
$ unzip boundary_0.17.1+ent_linux_amd64.zip
$ mv boundary /usr/local/bin/
$ chown boundary:boundary /usr/local/bin/boundary
3. Add the relevant config file to /etc/boundary.d. In the end, the directories should look like this:

$ chown boundary:boundary /etc/boundary.d/*
$ chmod 640 /etc/boundary.d/*
$ ls -l /etc/boundary.d/
-rw-r----- 1 boundary boundary 704 Oct 17 06:23 worker.hcl
$ ls -l /opt/boundary/
drwxr-x--- 3 boundary boundary 4096 Oct 17 06:23 bsr
drwxr-x--- 2 boundary boundary 4096 Oct 17 06:23 data
$ ls -la /var/log/boundary
total 8
drwxr-x---  2 boundary boundary 4096 Oct 17 06:23 .
drwxrwxr-x 11 root     syslog   4096 Oct 17 06:23 ..
4. Create a systemd unit file for the Boundary service, then load it into systemd. Note that the ExecStart line runs the boundary binary pointing at your worker.hcl file:

$ cat << 'EOF' > /etc/systemd/system/boundary.service
[Unit]
Description="HashiCorp Boundary"
Documentation=https://www.boundaryproject.io/docs/
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/boundary.d/worker.hcl
StartLimitIntervalSec=60
StartLimitBurst=3

[Service]
User=boundary
Group=boundary
ProtectSystem=full
ProtectHome=read-only
PrivateTmp=yes
PrivateDevices=yes
SecureBits=keep-caps
AmbientCapabilities=CAP_IPC_LOCK
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
NoNewPrivileges=yes
ExecStart=/usr/local/bin/boundary server -config=/etc/boundary.d/worker.hcl
ExecReload=/bin/kill --signal HUP $MAINPID
KillMode=process
KillSignal=SIGINT
Restart=on-failure
RestartSec=5
TimeoutStopSec=30
LimitNOFILE=65536
LimitMEMLOCK=infinity

[Install]
WantedBy=multi-user.target
EOF

$ systemctl daemon-reload
$ systemctl enable boundary
5. Start the Boundary ingress worker service.

$ systemctl start boundary
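A worker that starts cleanly should connect and authenticate to its upstream within a few seconds. Check the service state and tail the logs, looking for a successful connection to the controller cluster load balancer:

$ systemctl status boundary --no-pager
$ journalctl -u boundary --no-pager | tail -n 20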
Boundary egress workers configuration
KMS for egress workers
Get the worker-auth KMS key from the KMS for controllers section. This key enables secure communication between workers and controllers, ensuring that only authorized workers can connect.
Add the following KMS configuration to the /etc/boundary.d/worker.hcl file:
# Worker-auth KMS key (managed by AWS KMS in this example)
kms "awskms" {
  purpose    = "worker-auth"
  region     = "ap-southeast-1"
  kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey3"
}
Prepare egress workers configuration
Populate the /etc/boundary.d/worker.hcl file with the configuration below. The configuration is identical on all three egress workers except for the worker stanza:
# Disable the mlock syscall (allows memory to be swapped to disk)
disable_mlock = true

telemetry {
  prometheus_retention_time = "24h"
  disable_hostname          = true
}

# Worker block for configuring the specifics of the worker service
worker {
  public_addr       = "<10.0.253.201>"  # update for each egress worker's IP address
  name              = "<egressworker1>" # update for each egress worker's name
  initial_upstreams = ["<ingress_lb_address>:9202"]

  recording_storage_path                       = "/opt/boundary/bsr"
  recording_storage_minimum_available_capacity = "500MB"

  tags {
    app         = ["worker"]
    env         = ["uat"]
    bsr         = ["enabled"]
    worker-type = ["egress"]
  }
}

# Listener denoting this is a worker proxy
listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

# Ops listener for operations like health checks for egress workers
listener "tcp" {
  address     = "0.0.0.0:9203"
  purpose     = "ops"
  tls_disable = true
}

# Events (logging) configuration. This configures logging for ALL events
# to both stderr and a file at /var/log/boundary/egress-worker.log
events {
  audit_enabled        = true
  sysevents_enabled    = true
  observations_enabled = true
  sink "stderr" {
    name        = "all-events"
    description = "All events sent to stderr"
    event_types = ["*"]
    format      = "cloudevents-json"
  }
  sink {
    name        = "file-sink"
    description = "All events sent to a file"
    event_types = ["*"]
    format      = "cloudevents-json"
    file {
      path      = "/var/log/boundary"
      file_name = "egress-worker.log"
    }
    audit_config {
      audit_filter_overrides {
        sensitive = "redact"
        secret    = "redact"
      }
    }
  }
}

# KMS block for encrypting the worker authentication PKI material
# Worker-auth KMS key (managed by AWS KMS in this example)
kms "awskms" {
  purpose    = "worker-auth"
  region     = "ap-southeast-1"
  kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey3"
}
Starting Boundary egress worker service
When the configuration files are in place on each Boundary egress worker, you can proceed to enable and start the binary via systemd
on each of the Boundary egress worker nodes.
Perform these steps on all Boundary egress workers:
1. Create a boundary user, and create directories for Boundary configuration owned by this user.

$ adduser --system --group boundary || true
$ mkdir -p /etc/boundary.d /opt/boundary/bsr /var/log/boundary
$ chown -R boundary:boundary /etc/boundary.d /opt/boundary/bsr /var/log/boundary
2. Download the Boundary Enterprise package from HashiCorp. Unzip the package and move the boundary binary to a shared PATH location such as /usr/local/bin, owned by the boundary user. Please note: at the time of writing, the current version of Boundary Enterprise is 0.17.1+ent; install the same version on the workers as on the controllers.

$ curl -O https://releases.hashicorp.com/boundary/0.17.1+ent/boundary_0.17.1+ent_linux_amd64.zip
$ unzip boundary_0.17.1+ent_linux_amd64.zip
$ mv boundary /usr/local/bin/
$ chown boundary:boundary /usr/local/bin/boundary
3. Add the relevant config file to /etc/boundary.d. In the end, the directories should look like this:

$ chown boundary:boundary /etc/boundary.d/*
$ chmod 640 /etc/boundary.d/*
$ ls -l /etc/boundary.d/
-rw-r----- 1 boundary boundary 704 Oct 17 06:23 worker.hcl
$ ls -l /opt/boundary/
drwxr-x--- 3 boundary boundary 4096 Oct 17 06:23 bsr
drwxr-x--- 2 boundary boundary 4096 Oct 17 06:23 data
$ ls -la /var/log/boundary
total 8
drwxr-x---  2 boundary boundary 4096 Oct 17 06:23 .
drwxrwxr-x 11 root     syslog   4096 Oct 17 06:23 ..
4. Create a systemd unit file for the Boundary service, then load it into systemd. Note that the ExecStart line runs the boundary binary pointing at your worker.hcl file:

$ cat << 'EOF' > /etc/systemd/system/boundary.service
[Unit]
Description="HashiCorp Boundary"
Documentation=https://www.boundaryproject.io/docs/
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/boundary.d/worker.hcl
StartLimitIntervalSec=60
StartLimitBurst=3

[Service]
User=boundary
Group=boundary
ProtectSystem=full
ProtectHome=read-only
PrivateTmp=yes
PrivateDevices=yes
SecureBits=keep-caps
AmbientCapabilities=CAP_IPC_LOCK
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
NoNewPrivileges=yes
ExecStart=/usr/local/bin/boundary server -config=/etc/boundary.d/worker.hcl
ExecReload=/bin/kill --signal HUP $MAINPID
KillMode=process
KillSignal=SIGINT
Restart=on-failure
RestartSec=5
TimeoutStopSec=30
LimitNOFILE=65536
LimitMEMLOCK=infinity

[Install]
WantedBy=multi-user.target
EOF

$ systemctl daemon-reload
$ systemctl enable boundary
5. Start the Boundary egress worker service.

$ systemctl start boundary
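With the controllers and both worker tiers running, you can confirm that all six workers registered by listing them from an authenticated client. The login name below assumes the admin user generated by boundary database init:

$ export BOUNDARY_ADDR=https://controller-api-lb.boundary.domain
$ boundary authenticate password -login-name admin
$ boundary workers list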
Next steps
After setting up a Boundary cluster, it's essential to perform initial configuration steps to ensure the environment is secure, functional, and ready for use. Please refer to the Initial Configuration section of the operating guide for Boundary Adoption.