# KMS worker configuration
This page describes configuration for workers that authenticate to upstreams using a shared KMS. This mechanism auto-registers the worker in addition to authenticating it, and does not require on-disk credential storage, since the worker reauthenticates with the trusted KMS each time it connects.
If you use 0.13+ workers with pre-0.13 controllers, you must set the `use_deprecated_kms_auth_method` value to `true`. Additionally, a worker registered using this method against a pre-0.13 controller can only register directly to a controller, and cannot be used as part of multi-hop or Vault private access capabilities.
KMS workers require a `name` field. This specifies the name of the worker and must be unique across workers in the Boundary cluster. The `name` value can be:

- a direct name string (must be all lowercase)
- a reference to a file on disk (`file://`) from which the name is read
- an environment variable (`env://`) from which the name is read
KMS workers accept an optional `description` field. The `description` value can be:

- a direct description string
- a reference to a file on disk (`file://`) from which the description is read
- an environment variable (`env://`) from which the description is read
```hcl
worker {
  name        = "example-worker"
  description = "An example worker"
  public_addr = "5.1.23.198"

  # Uncomment if using a 0.13+ worker against a pre-0.13 controller
  # use_deprecated_kms_auth_method = true
}
```
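The `name` and `description` values do not have to be hard-coded. Below is a minimal sketch of sourcing them from an environment variable and a file instead, assuming a `BOUNDARY_WORKER_NAME` environment variable is set and a file exists at the hypothetical path `/etc/boundary/worker-description`:

```hcl
worker {
  # Name is read from the BOUNDARY_WORKER_NAME environment variable (assumed to be set)
  name = "env://BOUNDARY_WORKER_NAME"

  # Description is read from a file on disk (hypothetical path)
  description = "file:///etc/boundary/worker-description"

  public_addr = "5.1.23.198"
}
```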
KMS workers also require a KMS block designated for `worker-auth`. This is the KMS configuration for authentication between the workers and controllers and must be present. Example (not safe for production!):
kms "aead" {
purpose = "worker-auth"
aead_type = "aes-gcm"
key = "X+IJMVT6OnsrIR6G/9OTcJSX+lM9FSPN"
key_id = "global_worker-auth"
}
The upstream controller or worker must have a `kms` block that references the same key and purpose. (If both a controller and worker are running as the same server process, only one stanza is needed.) It is also possible to specify a `kms` block with the `downstream-worker-auth` purpose. If specified, this will be a separate KMS that can be used for authenticating new downstream nodes. Blocks with this purpose can be specified multiple times. This allows a single upstream node to authenticate with one key to its own upstream (via the `worker-auth` purpose) and then serve as an authenticating upstream to nodes across various networks, each with their own separate KMS system or key:
kms "aead" {
purpose = "downstream-worker-auth"
aead_type = "aes-gcm"
key = "XthZVtFtBD1Bw1XwAWhZKVrIwRhR7HcZ"
key_id = "iot-nodes-auth"
}
kms "aead" {
purpose = "downstream-worker-auth"
aead_type = "aes-gcm"
key = "OLFhJNbEb3umRjdhY15QKNEmNXokY1Iq"
key_id = "production-nodes-auth"
}
In the examples above we are encoding key bytes directly in the configuration file. This is because we are using the `aead` method, where you directly supply a key; in production you would want to use a KMS such as AWS KMS, GCP CKMS, Azure KeyVault, or HashiCorp Vault. For a complete guide to all available KMS types, see our KMS documentation.
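As one production-oriented alternative, the `aead` stanza can be swapped for a cloud KMS. Below is a sketch using the `awskms` type, assuming a key with the hypothetical ID shown exists in your AWS account and credentials are supplied through the usual AWS mechanisms (environment variables, instance profile, and so on); refer to the KMS documentation for the full set of parameters:

```hcl
kms "awskms" {
  purpose    = "worker-auth"
  region     = "us-east-1"
  # Hypothetical key ID; replace with your own
  kms_key_id = "19ec80b0-dfdd-4d97-8164-c6examplekey"
}
```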
## Complete configuration example
listener "tcp" {
purpose = "proxy"
tls_disable = true
address = "127.0.0.1"
}
worker {
# Name attr must be unique across workers
name = "demo-worker-1"
description = "A default worker created for demonstration"
# Workers must be able to reach upstreams on :9201
initial_upstreams = [
"10.0.0.1",
"10.0.0.2",
"10.0.0.3",
]
public_addr = "myhost.mycompany.com"
tags {
type = ["prod", "webservers"]
region = ["us-east-1"]
}
# use_deprecated_kms_auth_method = true
}
# must be same key as used on controller config
kms "aead" {
purpose = "worker-auth"
aead_type = "aes-gcm"
key = "X+IJMVT6OnsrIR6G/9OTcJSX+lM9FSPN"
key_id = "global_worker-auth"
}
The `initial_upstreams` addresses are used to connect to upstream Boundary clusters.
## Resources
For more on how the `tags {}` in the configuration above are used to facilitate routing to the correct target, refer to the Worker Tags page.