Initial configuration
This section describes how to configure the product after installation (for self-hosted deployments) and how to configure it once the initial admin account is established.
Overview
Before you start populating Vault with secrets, there are initial configuration tasks that you should complete. This document covers configuration for both HCP Vault and Vault Enterprise and we make a note if any of the configuration items do not apply to a certain product type.
Tasks in this section:
- The process of configuring the audit logs - For Vault Enterprise Only
- Namespace design and recommended structure
- Configuring an authentication method for Vault users in your organization
Prerequisites
- You have reviewed and implemented the HashiCorp Validated Document (HVD) for Vault Solution Design
- You have a running Vault cluster that is initialized and unsealed
- You have a valid root token for your Vault cluster
Configure audit logs
Note
You do not need to configure audit logs for HCP Vault. Audit logs are available to production tier clusters and are stored in an encrypted Amazon S3 bucket in the same region as the cluster. HCP Vault supports streaming audit logs to a variety of destinations.
Audit logs are critical for Vault administrators to ensure proper usage, access, and compliance with established security policy. In this section, you will learn about the different types of audit devices and how to enable audit logging in your Vault cluster.
Each line in the audit log is a JSON object containing all of the information for any given request and corresponding response. By default, sensitive information is hashed before it is logged. Audit logs can be used by administrators to monitor the health of the service and to troubleshoot issues, or by compliance auditors to ensure secrets are being accessed and used securely. Each audit log entry contains data such as client IP address, the time of the request, the requested action, and the resulting data from Vault.
Note
Audit device logs are separate from and unrelated to Vault operational logs. Operational logs are typically gathered by the operating system journal from standard output and standard error while Vault is running, and hold a different set of information.
When you enable an audit device in Vault, most strings contained within requests and responses are hashed with a salt using HMAC-SHA256, so that secrets do not appear in plaintext in your audit logs. However, you can still check the value of a secret by generating the HMAC yourself with the audit device's hash function and salt, using the /sys/audit-hash API endpoint (see the documentation for more details).
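The audit-hash lookup described above can be sketched as follows. The mount path file/ and the input value are assumptions for illustration; the salt never leaves Vault, so the HMAC key in the runnable line below is purely hypothetical and shows only the shape of the computation Vault performs:

```shell
# Ask Vault to hash a candidate value with the salt of the audit device
# mounted at "file/" (requires a valid token):
#
#   vault write sys/audit-hash/file input="my-secret-value"
#
# The returned hash has the form hmac-sha256:<64 hex characters> and can
# be grepped for in the audit log. Conceptually, Vault computes an
# HMAC-SHA256 keyed with the device's salt; the key below is purely
# illustrative:
printf '%s' "my-secret-value" | openssl dgst -sha256 -hmac "illustrative-salt" -r
```

Searching the audit log for the returned hash confirms whether (and when) a given secret value was read or written.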
Audit logs are enabled in Vault by configuring an Audit device. Audit devices define the destination for audit log data. There are three types of audit devices: file, syslog, and socket.
Types of audit devices
File audit device
The file audit device writes logs to a file. New logs are appended to the log file. The device does not support log rotation. It is up to the operator to use third-party tools such as logrotate to manage log rotation. Sending a SIGHUP to the Vault process will cause file audit devices to close and re-open their underlying file, which can assist with log rotation needs.
Note
It is important to rotate and archive audit log files to prevent them from growing to a size that consumes the entire disk. Vault will not respond to any API requests if there is a blocked file audit device.
Configuration details for this option can be found here.
Log file rotation:
Logrotate is a common Linux system utility designed to ease administration of systems that generate large numbers of log files. It allows automatic rotation, compression, removal, and mailing of log files. Each log file may be handled daily, weekly, monthly, or when it grows too large. Logrotate is available on many Linux distributions, although the default configuration may vary between distributions. Normally, logrotate is run as a daily cron job. It will not modify a log more than once in one day unless the criterion for that log is based on the log's size and logrotate is being run more than once each day, or unless the -f or --force option is used.
Below is an example of logrotate configuration (adjust the retention and path to systemctl for your environment):
/opt/vault/log/vault_audit.log {
  daily
  rotate 7
  notifempty
  missingok
  compress
  delaycompress
  postrotate
    # The systemd unit file should be set to send SIGHUP to the Vault
    # process on reload, i.e. ExecReload=/bin/kill --signal HUP $MAINPID
    /bin/systemctl reload vault 2> /dev/null || true
  endscript
  create 0644 vault vault
}
Since the audit log is verbose, we recommend that you only keep a few days of audit logs locally and export old logs to archive storage.
Syslog audit device
The syslog audit device writes audit logs to syslog. It does not support remote syslog destinations; it always sends audit logs to the local syslog agent. The syslog agent, in turn, is usually configured to write to a local file, either for log collection or as a secondary audit destination.
Configuration details for this option can be found here
Socket audit device
The socket audit device writes to a TCP, UDP, or UNIX socket. Due to the unreliable nature of the underlying protocol, we do not recommend enabling the socket audit device unless it is absolutely necessary. If you do enable the socket audit device, always enable a secondary “non-socket” audit device to ensure accuracy and to guarantee that audit logs will not be lost.
Configuration details for this option can be found here
Multiple audit devices
An otherwise-successful request will fail if it cannot be logged to at least one configured audit device. Failure to log to at least one audit device will prevent Vault from servicing requests (see blocked audit device). This is by design to ensure that all requests and responses are captured correctly.
We strongly recommend that you enable at least two audit devices of different types for two reasons:
Improved Availability
There are two types of audit device failures: blocking and non-blocking.
A blocking failure is one where an attempt to write to the audit device stalls without returning an error. This is unlikely with a local disk device, but could occur with a network-based audit device.
A non-blocking failure is one where an attempt to write to the audit device returns an error and no audit log is written.
When multiple audit devices are enabled, if any of them fail in a non-blocking fashion, Vault requests can still complete successfully provided at least one audit device successfully writes the audit record. If any of the audit devices fail in a blocking fashion however, Vault requests will hang until the blocking is resolved.
Checking and verification
Configuring multiple audit devices provides you not only with redundant copies, but also a way to check for data tampering in the logs themselves. We recommend that you set up one audit log for analysis, and another for secure storage and archiving. If there are concerns about the integrity of the analysis logs, you can refer to the archived logs for verification. For archival purposes, a second audit device should be enabled to write to a filesystem or syslog destination configured with strict access control permissions. Read-only access to these logs can be granted where there is a need to reconcile with the main audit log, but otherwise these logs can remain untouched. This ensures there is an unaltered version of the audit log for security review.
When writing to a file audit backend, it is important to monitor disk space on the disk where the logs are being written. If disk space fills up, it will result in a blocked audit device, preventing Vault from responding to requests.
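The archival pattern described above can be sketched with a second file audit device enabled at a dedicated mount path. The directory, mount path name, and vault process user are assumptions; adjust them for your environment:

```shell
# Restrict the archive directory to the Vault process user only
# (hypothetical path; adjust ownership and location as needed).
sudo install -d -o vault -g vault -m 0700 /opt/vault/audit-archive

# Enable a second file audit device at a distinct mount path so it is
# independent of the primary analysis device.
vault audit enable -path=file-archive file \
  file_path=/opt/vault/audit-archive/vault_audit_archive.log
```

With two devices enabled, a non-blocking failure of one device no longer prevents Vault from servicing requests.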
Enabling an audit device
When a Vault server is first initialized, no auditing is enabled. Audit devices must be enabled by a root user using the CLI, API, or Terraform.
Note
Audit device configuration is replicated to all nodes within a cluster by default, and to performance/DR secondaries for Vault Enterprise clusters. Each node in the cluster writes to its own audit log, in the same locations as the active node. Before enabling an audit device, ensure that all nodes within the cluster(s), including your DR and Performance Secondary clusters, will be able to successfully log to the audit device to avoid Vault being blocked from serving requests. Audit logs from all nodes in a Vault cluster need to be analyzed to audit any event, so it is best practice to use a centralized logging solution. An audit device can also be limited to only the nodes within the cluster using the local parameter. This is useful if you want to have different audit device configurations on replicated clusters.
We recommend that you enable a file audit device as well as a syslog audit device (see Multiple Audit Devices).
Step 1: Ensure you have the correct system permissions
It is important to ensure that the Vault process user has the correct system permissions to write to the configured audit device.
- For the file audit device, the Vault process user must have write access to the location of the file.
- For the syslog audit device, the Vault process user must have the correct capabilities (such as CAP_SYSLOG) and permissions where required to write to the system log.
Step 2: Enable the audit device
First, set your root token using the Vault CLI.
$ export VAULT_ADDR=https://vault:8200
$ export VAULT_TOKEN=<your root token>
Note
The VAULT_TOKEN environment variable sets the authentication token that the Vault CLI will use for all subsequent requests. The VAULT_ADDR environment variable tells the Vault CLI where to send all subsequent requests. See our documentation for a list of valid environment variables.
The following command enables a file audit device. The output logs are stored in the /vault/vault-audit.log file.
$ vault audit enable file file_path=/vault/vault-audit.log
If the Vault process user does not have permission to write to the file provided in the file_path parameter, you will see an error like the one below.
Error enabling audit device: Error making API request.
URL: PUT http://localhost:8200/v1/sys/audit/file
Code: 400. Errors:
* sanity check failed; unable to open "/vault/vault-audit.log" for writing: open /vault/vault-audit.log: permission denied
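One way to resolve this error is to create the log directory ahead of time and grant the Vault process user write access. A sketch, assuming Vault runs as the vault user and group (adjust the path and ownership for your environment):

```shell
# Create the audit log directory owned by the Vault process user so the
# file audit device can open its log file for writing.
sudo install -d -o vault -g vault -m 0750 /vault

# Then re-run the enable command:
#   vault audit enable file file_path=/vault/vault-audit.log
```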
The following command enables a syslog audit device, specifying the syslog facility and tag to use.
$ vault audit enable syslog tag="vault" facility="AUTH"
If syslog is not accessible on the system, you will see errors like the following in the operational log, both when you first try to enable the device and when Vault tries to write to it.
[ERROR] enable audit mount failed: path=syslog/ error="Unix syslog delivery error"
[ERROR] core: failed to audit response: request_path=sys/audit/syslog error=1 error occurred:
* no audit backend succeeded in logging the response
The error Unix syslog delivery error can mean that the syslog service is not enabled on the host or that Vault is not able to access it. This can often be due to restrictions imposed by SELinux configuration on the host, for example. To check whether SELinux is actively prohibiting access to a resource, you can temporarily change the operating mode to permissive using the setenforce utility. A more permanent solution would involve enabling SELinux debugging and using packages such as setools and setroubleshoot to obtain information about specific operation denials.
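A troubleshooting sketch for the SELinux case described above. These commands require root privileges on an SELinux-enabled host, and the ausearch tool may need the audit package installed; adjust for your distribution:

```shell
# Check the current SELinux mode.
getenforce

# Temporarily switch to permissive mode and retry enabling the device;
# if it now succeeds, SELinux was blocking syslog access.
sudo setenforce 0
vault audit enable syslog tag="vault" facility="AUTH"

# Restore enforcing mode and inspect the specific AVC denials.
sudo setenforce 1
sudo ausearch -m avc -ts recent | grep -i vault
```

Remember to restore enforcing mode; running production hosts in permissive mode is only appropriate as a short diagnostic step.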
Warning
Audit messages generated for some operations can be quite large, larger than a maximum-size single UDP packet. Because UDP is a connectionless protocol, if a log message exceeds the maximum UDP packet size, that audit log message will fail silently and Vault will have no knowledge that the message was too large. If possible with your syslog daemon, configure a TCP listener. Because TCP is connection-oriented, Vault will know whether syslog messages have been successfully received; however, this can result in a blocked audit device if the TCP connections are unsuccessful. To avoid this possibility, consider using a file backend and having syslog configured to read entries from the file.
Note
A typical audit log entry can be 1kb-3kb, meaning a node servicing 10,000 requests an hour can write 10-30mb of data. We recommend using a log rotation solution like logrotate to keep the local file system logs from filling the disk, and transferring the logs to external storage in case the node loses a disk or is compromised. We also recommend that you configure the audit logs to write to a separate logical volume to avoid any disk IO contention with Vault's internal storage when using integrated storage.
For more information on how to use the audit log, please refer to the Audit Usage section of the document.
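The separate-logical-volume recommendation can be sketched as follows. The device path, filesystem, and mount point are assumptions; substitute the values for your environment:

```shell
# Format a dedicated logical volume and mount it at the audit log path
# so audit I/O does not contend with Vault's integrated storage
# (hypothetical device name).
sudo mkfs.xfs /dev/mapper/vg_data-lv_vault_audit
sudo mkdir -p /opt/vault/log
echo '/dev/mapper/vg_data-lv_vault_audit /opt/vault/log xfs defaults,noatime 0 2' | sudo tee -a /etc/fstab
sudo mount /opt/vault/log
sudo chown vault:vault /opt/vault/log
```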
Configure initial namespaces
After configuring the audit log, the next step is to think about how to organize the data within Vault. Many organizations opt to implement Vault as a service, where a central platform team is responsible for the day-to-day operations of Vault, while development teams simply utilize Vault's capabilities. To enable this model, you need a mechanism to isolate groups of resources within a single cluster, and this is where Vault namespaces fit in.
What are Vault namespaces
Namespaces are a method by which a single Vault cluster can be divided into multiple sub-clusters and managed individually. Each namespace can be assigned different login paths and support creating and managing data isolated to their namespace.
When to use Vault namespaces
Namespaces are designed to address two use cases: tenant isolation and self-management.
Tenant isolation
There is a need for strong isolation between the users in policies, secrets, and identities, typically as a result of compliance regulations such as GDPR or internal security policy.
Self-Management
There is a need to provide delegated administrative privileges for a team to author their own policies and manage their own namespace. For example, a development team that is comfortable with Vault may wish to self-manage and operate in its own namespace.
Namespace antipatterns
Vault namespaces are subject to limits and maximums within Vault's backend storage. The effective storage limit on the number of Vault namespaces results from the fact that each namespace must have at least two secret engine mounts (for sys and identity), one local secret engine (cubbyhole), and one auth method mount (token). Depending on your organization's namespace structure (i.e., how many auth methods and secrets engines you mount under each namespace), the effective storage maximum will vary.
Additionally, administering a large number of namespaces can become difficult and any restructuring can be problematic. There are a few anti-patterns that should be avoided when planning Vault Enterprise namespaces:
- Not clearly defining criteria for a namespace - If namespaces are created ad-hoc without a clear plan or criteria, the namespace structure in an enterprise deployment will quickly become difficult to manage.
- Strong misalignment to organizational structure - Vault namespaces should ideally be roughly aligned to the level of granularity across lines of businesses (LOBs), divisions, teams, services, apps that needs to be reflected in Vault's end-state design. If the namespace layout bears no resemblance to the level of self-service required in the organization, management can become difficult.
- Non-scalable model or over-segmentation - If an organization plans on provisioning a namespace for each application, but the organization has 10,000 applications to onboard, then Vault's storage limits will be hit and management of a large number of namespaces will be difficult.
Namespace performance impacts
In addition to the storage limits listed above, large numbers of namespaces can also have performance impacts on specific Vault workflows. For example, if the number of leases is also growing linearly or exponentially with the number of namespaces and mounts, then API requests might become slower if the growth is unbounded.
Additionally, a large list of namespaces can impact the time to complete a leader election. In order for a node to become an active leader in a Vault cluster, it has to load up all the namespaces and mounts. A large number of namespaces will potentially impact the amount of time it takes to complete this process.
Configure a namespace for the organization
Regardless of your specific use case, it is important to consider how to set up your initial namespace structure as it informs where your primary auth method is mounted and the path where users will login.
The Vault API includes system backend endpoints, which are mounted under the sys/ path. System endpoints let you interact with the internal features of your Vault instance. For security reasons, some of the system backend endpoints are restricted, and can only be called from the root namespace or using a token in the root namespace with elevated permissions. For this reason, we recommend that you create an organization-level namespace under the root namespace to protect these sensitive endpoints from accidental exposure.
The root namespace is reserved for operators with super admin access. This is where cluster level configurations such as replication settings and rotation of the root encryption key are made.
The “org” level namespace is where regular administrators and all other users login to Vault. Regular administrators are granted lesser administrative privileges and are responsible for general day to day operations such as managing secret engines and creating/updating policies.
In order to provide access to the namespaces, you will mount a human auth method in each namespace and configure separate access policies.
Note
Namespace names cannot be changed after creation. You can create a new namespace and/or delete the existing namespace.
Note
An HCP Vault cluster is pre-configured with an admin namespace by default. This is equivalent to the org level namespace mentioned above. You do not have access to the root namespace in HCP Vault; only a few Vault system APIs are available. To initially access the admin namespace, you will need to generate an admin token via HCP.
Steps to configure the org namespace
To set up the org level namespace, login with your root token and execute the command below (substitute demo with your org name).
$ vault namespace create demo
Key                Value
---                -----
custom_metadata    map[]
id                 RKYop
path               demo/
$ vault namespace list
Keys
----
demo/
Configure user authentication
Authentication in Vault is the process by which user- or machine-supplied information is verified against an internal or external system. Vault supports multiple auth methods including LDAP, OIDC, and more. Each auth method has a specific use case. In this section, you will configure your first auth method to allow human users to login to Vault.
Clients must authenticate against an auth method to interact with Vault. After successful authentication, a token is returned to the client. The token is associated with policies which control the resources and operations that the client can access.
The mapping between the token and policy is configured in the auth method. This process is described in detail in the policies concepts documentation.
Figure 1: Vault's Authentication Workflow
The token auth method is built-in and is at the core of client authentication. A client can authenticate with Vault through the token auth method. For example, a Vault admin logs in to Vault via the token auth method using the initial root token (or admin token if you are running HCP Vault) so that they can configure other auth methods.
Note
It is considered a best practice not to persist root tokens. Use the root token only for just enough initial setup. Once you enable an auth method with appropriate policies allowing Vault admins to log in and perform operational tasks, the admins should use the auth method to authenticate instead of using the root token (or admin token for HCP Vault).
Configure admin policy
Policies form the basis of the role-based access control (RBAC) system for Vault. Logical operations in Vault are path-based, and policies provide a declarative way to grant or forbid access to certain paths and operations in Vault. If you are not familiar with the concept of Vault policies, we strongly encourage you to review the policies concept documentation and the getting started guide to policy writing.
The first group of users to grant access to Vault are your super administrators. The super administrators have full access to Vault. They are responsible for cluster level configurations and any requests to the restricted API that requires the root namespace. By default Vault does not include any administrative policies. In a new Vault installation, there is only a “default” policy that does not grant access to any functional resources in Vault. You will need to create a super admin policy for this subset of users. Below is an example of a Vault super admin policy.
# The glob (*) character matches any number of characters and can only be used at the end of a path
path "*" {
  capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
Note
You do not need to create a super administrator policy for HCP Vault since the root namespace is not accessible.
Login to Vault using your root token and run the command below to create a super admin policy under the root namespace.
$ export VAULT_TOKEN=<your root token>
$ vault policy write super-admin super-admin-policy.hcl
Success! Uploaded policy: super-admin
The second group of users to grant access to Vault are your regular administrators. Regular administrators are responsible for day to day Vault operations. They only have access to the org level namespace and do not have access to the root namespace. The paths that the regular administrators can access are governed by the admin policy. Below is an example of a Vault admin policy.
# List existing policies
path "sys/policies/acl" {
  capabilities = ["list"]
}

# Create and manage ACL policies
path "sys/policies/acl/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

# Manage auth methods broadly across Vault
path "auth/*" {
  capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}

# Create, update, and delete auth methods
path "sys/auth/*" {
  capabilities = ["create", "update", "delete", "sudo"]
}

# List auth methods
path "sys/auth" {
  capabilities = ["read"]
}

# Managing identity
path "identity/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

# Enable and manage the key/value secrets engine at `secret/` path
path "secret/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

# Allow managing leases
path "sys/leases/*" {
  capabilities = ["read", "update", "list"]
}

# Manage namespaces
path "sys/namespaces/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

# Manage secrets engines
path "sys/mounts/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

# List existing secrets engines
path "sys/mounts" {
  capabilities = ["read"]
}

# Configure license
path "sys/license" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

# Configure Vault UI
path "sys/config/ui" {
  capabilities = ["read", "update", "delete", "list"]
}
This example policy is only a subset of Vault's API. If additional administrative capabilities are necessary, edit the admin policy when the need arises.
Login to Vault using your root token and run the commands below to create an admin policy under the org-level namespace.
$ export VAULT_NAMESPACE=<org level namespace>
$ vault policy write admin admin-policy.hcl
Success! Uploaded policy: admin
Tip
Setting the environment variable VAULT_NAMESPACE tells the CLI to execute all subsequent commands under the specified namespace.
Configure the first auth method
With the admin policies created, you are ready to configure your first auth method to allow your Vault super administrators to login.
There are many auth methods available in Vault; some are targeted towards human users while others are targeted towards machines. Which auth methods you enable depends on the type of clients (human or machine) and the existing processes and systems in place for identity management. For example, if an organization uses Microsoft Active Directory for user identity management, then it makes sense to enable the LDAP auth method for user authentication.
Depending on the use case, multiple auth methods can be enabled in Vault to provide access to the same data. For the initial configuration, you will enable one auth method for the super administrators in the root namespace and one auth method for all other users in the org level namespace.
For human user access, if your organization already has a user authentication system tied into the user lifecycle and Vault supports it (e.g. OIDC, LDAP), you should consider using this as an auth method. In this case, you only have one place to manage your users, and users are less likely to get stranded in Vault either in a group they have moved from or as a legacy user that no longer works in the organization.
Certain auth methods are more popular due to the widespread use of their associated technologies in modern organizations. This section focuses on the following common auth methods for human user access and how to use them.
- OIDC
- LDAP
OIDC auth method
The OIDC auth method allows authentication via a configured identity provider such as Okta or Azure AD, providing a single sign-on experience to Vault services. Authentication requests can be initiated using the Vault UI, CLI, or the API. The OIDC auth method allows a user’s browser to be redirected to a configured identity provider, complete login, and then be routed back to Vault's UI with a newly-created Vault token. This workflow is depicted in Figure 2.
Figure 2: OIDC Auth Workflow
- User requests authentication using the OIDC auth method via Vault's UI or CLI.
- Vault launches a browser to the configured discovery URL for the OIDC provider.
- User submits credentials to the OIDC provider.
- OIDC provider validates credentials and returns an OAuth token to Vault. The OAuth token contains information about the user such as security groups that the user is a member of, which could map to an identity group within Vault.
- Vault maps the security group claims in the OAuth token to pre-configured identity groups within Vault, which are attached to Vault policies.
- Vault attaches policies associated with the identity groups to a Vault token.
- A Vault token is returned.
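Once the auth method is configured, the workflow above can be exercised from the CLI. A sketch, using the role name default that this section's examples configure:

```shell
# Opens a browser to the identity provider; after a successful login
# the CLI callback (port 8250 by default) receives the new Vault token.
vault login -method=oidc role=default
```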
Configuring OIDC auth method
The steps outlined below are the general workflow for configuring an OIDC auth method. You should perform these steps first in the root namespace for your super administrators, then configure another OIDC auth method or a different human auth method in the org level namespace for regular administrators and all other users. The code snippets in these steps will highlight distinctions between the configuration in the root namespace vs the org level namespace.
Step 1: Enable the auth method
Before you can use an auth method in Vault, you must enable it. You can enable and configure the auth method using the UI, CLI, or the API. We will focus on CLI commands in this document.
Configuration for root namespace
Using the Vault CLI, log in to Vault with your root token and enable the OIDC auth method under the root namespace using the code snippet below.
$ vault auth enable oidc
Configuration for Org namespace
Using the Vault CLI, log in to Vault with your root token and enable the OIDC auth method under the org level namespace using the code snippet below.
$ export VAULT_NAMESPACE=<org level namespace>
$ vault auth enable oidc
Tip
This could be reconfigured for different use cases. For example, OIDC could be enabled multiple times on different paths, for different OIDC providers or different environments. Enabling this auth method at a different path can be achieved using the `-path` flag.
Step 2: Configure the auth method
Once the OIDC auth method has been enabled, you are ready to configure the auth method itself. Vault needs to know how to connect to your identity provider and what credentials it should use. These settings include the oidc_discovery_url, oidc_client_id, and oidc_client_secret. You can find more details about these settings in the JWT/OIDC auth method API. Your Vault nodes will need the ability to connect to the specified oidc_discovery_url, typically over TCP port 443 (HTTPS), to retrieve OIDC metadata. You must also provide a default Vault role name as part of the configuration. The role does not have to exist at this stage as it will be created in the next step.
Configure the OIDC auth method using the CLI command below. Set the oidc_discovery_ca_pem parameter to point to the CA certificate used to validate connections to the OIDC discovery URL. If not set, the system certificates will be used.
$ vault write auth/oidc/config \
oidc_discovery_url="https://sso.example.com" \
oidc_client_id="vault" \
oidc_client_secret="passw0rd" \
oidc_discovery_ca_pem=@ca.pem \
default_role="default"
Tip
The `@` prefix can be used to read data from a file on disk. See the CLI documentation for more information.
Step 3: Create the default role
With the OIDC auth method configured, you can now create the default role named in the previous step.
Multiple roles can be created under a single auth method. Each role can be associated with different Vault policies, granting multiple levels of access. The default role created in this step is only used when the user does not specify a role during authentication.
The role contains information that Vault uses to uniquely identify users and the groups to which they belong, using claims within the OAuth token sent by the identity provider. The role also specifies which Vault policies should be attached to the tokens issued by this role when a user's OAuth token matches the claims defined within the role. The default role is commonly associated with Vault's default policy, a built-in policy that cannot be removed and provides a limited set of capabilities. Below is an example CLI command to create the default role.
$ vault write auth/oidc/role/default -<<EOF
{
"user_claim": "email",
"groups_claim": "groups",
"allowed_redirect_uris": [ "https://vault.example.com:8200/ui/vault/auth/oidc/oidc/callback", "https://vault.example.com:8250/oidc/callback" ],
"token_policies": "default",
"token_ttl": "1h",
"token_max_ttl": "1h"
}
EOF
The user_claim parameter is the claim in the OAuth token used to uniquely identify the user. This value is used as the name for the identity entity alias created after a successful login. The identity entity alias is a representation of a client for a specific auth method within Vault's identity system. Refer to the Identity secrets engine documentation for more information.
The groups_claim parameter is an optional field containing the claim used to uniquely identify the set of groups to which the user belongs. Vault attempts to match the value from this claim to an external group created within Vault. If a match is found, Vault adds the entity to the external group as a member.
The allowed_redirect_uris parameter is a list of allowed values to which users are redirected after authenticating with the OIDC provider. The URI with port 8200 is for login via the UI, whereas the URI with port 8250 is for login via the CLI. These URIs must match the redirect URIs registered with your OIDC provider.
The token_policies parameter is the list of Vault policies attached to the token upon a successful login.
The token_ttl parameter is the duration of time before the generated token expires if not renewed. The format of this parameter follows the duration string format. We recommend that you always provide a token TTL value to override the default token TTL of 32 days (768 hours). See our Vault Tokens documentation for more information.
The token_max_ttl parameter is the maximum lifetime for the generated token. The token cannot be renewed past the max TTL.
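To make the claim mappings concrete, here is what a decoded ID token payload from the identity provider might contain. This is a hypothetical example; actual claim names and values depend on your provider's configuration. The user_claim and groups_claim parameters above point at the email and groups fields, respectively:

```json
{
  "iss": "https://idp.example.com",
  "aud": "vault-client-id",
  "email": "nick@domain.com",
  "groups": ["vault-super-admins"],
  "exp": 1735689600
}
```

For a token like this, Vault would name the entity alias nick@domain.com and attempt to match vault-super-admins against an external group.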
Step 4: Create an external group and policy mapping
The last configuration for OIDC auth is to map groups in your OIDC provider to Vault identity groups. This allows you to centrally manage group membership within your OIDC provider. Vault uses this information to assign the correct policies for each group within your OIDC provider.
The first step in implementing this mapping is to create a Vault external group, which is a Vault representation of a group outside of its identity store. The external group is associated with a set of Vault policies that define the access allowed. The next step is to create a Vault group alias, which ties the Vault external group to a group in your OIDC provider. During authentication, Vault will attempt to match an OIDC provider group to an external group using the group claim specified in step 3. If a match is found, an entity is added as a member of the external group and a token is returned with the associated policy. If a user is removed from the group in your OIDC provider, that change gets reflected in Vault only upon a subsequent login or token renewal operation.
Configuration for root namespace
The commands below create an external group called “vault-super-admins” for the Vault super administrators. This external group is tied to an OIDC group called “vault-super-admins” by the group alias created in the last command. You can find more details on each parameter in our Identity secrets engine API.
$ GROUP_ID=$(vault write -format=json identity/group \
name="vault-super-admins" \
type="external" \
policies="super-admin" | jq -r ".data.id")
$ MOUNT_ACCESSOR=$(vault read -field=accessor sys/mounts/auth/oidc)
$ vault write identity/group-alias \
name="vault-super-admins" \
mount_accessor=$MOUNT_ACCESSOR \
canonical_id=$GROUP_ID
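The first command above captures the new group's ID by parsing Vault's JSON output with jq. As a standalone illustration of that extraction, here is the same jq filter run against a simulated response (the id value is a made-up placeholder, not a real group ID):

```shell
# Simulate the JSON that `vault write -format=json identity/group ...`
# returns, then extract the group ID the same way the command above does.
sample='{"data":{"id":"8a4c2f1e-0000-4b6e-9d2a-example","name":"vault-super-admins","type":"external"}}'
GROUP_ID=$(echo "$sample" | jq -r ".data.id")
echo "$GROUP_ID"   # prints: 8a4c2f1e-0000-4b6e-9d2a-example
```

The group ID is then passed as canonical_id when creating the group alias, which is what ties the alias back to the external group.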
Configuration for org namespace
Before applying the configuration below, ensure that your Vault CLI is pointed at the org namespace. Create an external group called “vault-admins” for your regular Vault administrators.
$ GROUP_ID=$(vault write -format=json identity/group \
name="vault-admins" \
type="external" \
policies="admin" | jq -r ".data.id")
$ MOUNT_ACCESSOR=$(vault read -field=accessor sys/mounts/auth/oidc)
$ vault write identity/group-alias \
name="vault-admins" \
mount_accessor=$MOUNT_ACCESSOR \
canonical_id=$GROUP_ID
Step 5: Validate OIDC auth configuration
You can validate the OIDC auth configuration with either the Vault UI or CLI. The following sections show how to log in using the CLI.
Configuration for root namespace
Log in to the root namespace as a super administrator.
$ vault login -method=oidc
Configuration for org namespace
Log in to the org namespace as a regular administrator.
Note
In order to access the org namespace, super administrators must first log in via the root namespace. Once authenticated, the token granted can be used to operate on the org namespace.
$ export VAULT_NAMESPACE=<org level namespace>
$ vault login -method=oidc
This command launches a browser window to your OIDC provider login page. The default role is used since no role is specified. Enter your credentials and you should be redirected back to Vault indicating that you have successfully signed in via OIDC. You can validate the policies attached to the returned token with the token information displayed on your CLI.
Waiting for OIDC authentication to complete...
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key Value
--- -----
token hvs.root.token
token_accessor Igcq6WRvsPk8ViO9rUpEGlrk
token_duration 1h
token_renewable true
token_policies ["default"]
identity_policies ["admin"]
policies ["default" "admin"]
token_meta_email nick@domain.com
token_meta_role default
token_meta_username nick
Note
Users who are mapped in the org namespace can still log in to the root namespace. However, their access is restricted by the default policy since there is no external group and policy mapping for these users in the root namespace.
Useful resources
- OIDC configuration troubleshooting
- High-level configuration steps for various OIDC providers
- Azure AD with OIDC auth method tutorial
- OIDC authentication with Okta tutorial
LDAP auth method
The LDAP auth method allows authentication using an existing LDAP server and user/password credentials. This enables users to log in to Vault with their existing LDAP credentials without creating new user accounts in Vault. LDAP groups can be mapped directly to Vault policies, which control the paths in Vault that members of the LDAP group can access.
The LDAP auth method connects directly to your organization's LDAP servers. Vault must be able to communicate with the LDAP server over TCP port 389, or over port 636 for LDAPS. When an authentication request is made through the auth method, Vault verifies the provided credentials against the backend LDAP server using a service account. After successful validation of the credentials, Vault issues a token, associates it with the appropriate Vault policy, and returns the token to the user.
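Before wiring Vault to the directory, it can help to confirm connectivity and the bind credentials independently of Vault. The sketch below uses the standard OpenLDAP client tools and assumes ldapsearch is installed; the hostname, DNs, and filter are illustrative placeholders that must match your environment:

```shell
# Verify the LDAP server is reachable, STARTTLS works, and the bind
# account can search the user subtree (all values are placeholders)
ldapsearch -H ldap://ldap.example.com:389 -ZZ \
  -D "cn=vault,ou=users,dc=example,dc=com" -W \
  -b "ou=Users,dc=example,dc=com" \
  "(sAMAccountName=jdoe)" dn
```

A successful search that returns the user's DN indicates that the url, binddn, userdn, and userattr values you configure later are mutually consistent.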
Configuring LDAP auth method
The steps outlined below are the general workflow for configuring an LDAP auth method. You should perform these steps first in the root namespace for your super administrators, then configure another LDAP auth method or a different human auth method in the org level namespace for regular administrators and all other users. The code snippets in these steps will highlight distinctions between the configuration in the root namespace vs the org level namespace.
Step 1: Enable the auth method
The first step is to enable the auth method. You can enable and configure the auth method using the UI, CLI, or API. This document focuses on CLI commands.
Configuration for root namespace
Using the Vault CLI, log in to Vault with your root token and enable the LDAP auth method under the root namespace using the code snippet below.
$ vault auth enable ldap
Configuration for org namespace
Using the Vault CLI, log in to Vault with your root token and enable the LDAP auth method under the org level namespace using the code snippet below.
Tip
Setting the environment variable VAULT_NAMESPACE tells the CLI to execute all subsequent commands under the specified namespace.
$ export VAULT_NAMESPACE=<org level namespace>
$ vault auth enable ldap
Step 2: Configure the auth method
The next step is to configure the auth method so that Vault can communicate with the backend LDAP server(s).
The command below shows an example of a basic LDAP configuration:
- LDAP server running on ldap.example.com, port 389.
- Server supports STARTTLS command to initiate encryption on the standard port.
- CA Certificate stored in file named ldap_ca_cert.pem
- Server does not allow anonymous binds for performing user search (discoverdn=false).
- Bind account used for searching is cn=vault,ou=users,dc=example,dc=com with password My$ecr3tP4ss.
- User objects are under the ou=Users,dc=example,dc=com organizational unit.
- Username passed to Vault when authenticating maps to the sAMAccountName attribute.
- Group membership will be resolved via the cn attribute of group objects. That search will begin under ou=Groups,dc=example,dc=com.
- Generated Vault tokens expire 1 hour after creation.
- Generated Vault tokens can be renewed for up to 8 hours after creation.
$ vault write auth/ldap/config \
url="ldap://ldap.example.com" \
userattr="sAMAccountName" \
userdn="ou=Users,dc=example,dc=com" \
groupdn="ou=Groups,dc=example,dc=com" \
groupattr="cn" \
binddn="cn=vault,ou=users,dc=example,dc=com" \
bindpass='My$ecr3tP4ss' \
certificate=@ldap_ca_cert.pem \
insecure_tls=false \
starttls=true \
discoverdn=false \
token_ttl=1h \
token_max_ttl=8h
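One subtlety worth noting for bind passwords that contain a dollar sign: in a POSIX shell, double quotes allow variable expansion, so a password like My$ecr3tP4ss passed inside double quotes would be silently mangled (the shell expands $ecr3tP4ss, typically to an empty string). Single quotes preserve the literal value. A quick illustration, using the example password from above rather than a real secret:

```shell
# Double quotes: the shell expands $ecr3tP4ss (unset, so empty),
# leaving just "My". Single quotes preserve the literal password.
unset ecr3tP4ss                  # ensure the variable is not set
double_quoted="My$ecr3tP4ss"
single_quoted='My$ecr3tP4ss'
echo "$double_quoted"            # prints: My
echo "$single_quoted"            # prints: My$ecr3tP4ss
```

If the bind later fails with credential errors, quoting is one of the first things to rule out.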
For more example configurations, see the LDAP auth method documentation. For details on specific parameters, see the LDAP auth API.
Step 3: Assign Vault policies to LDAP groups
Next, you will map existing LDAP groups to the appropriate Vault policies. This allows you to manage group membership and Vault access within LDAP, removing the need to create and manage groups within Vault.
Configuration for root namespace
The command below shows an example where the vault-super-admins
group in LDAP is mapped to the super-admin policy created earlier in this document.
$ vault write auth/ldap/groups/vault-super-admins policies=super-admin
Configuration for org namespace
The command below shows an example where the vault-admins
group in LDAP is mapped to the admin policy created earlier in this document.
$ vault write auth/ldap/groups/vault-admins policies=admin
Step 4: Validate LDAP auth configuration
You can validate the LDAP auth configuration with the Vault UI, CLI, or API. Log in using the CLI with the following command.
Configuration for root namespace
Log in to the root namespace as a super administrator.
$ vault login -method=ldap username=my-super-admin password=passw0rd
Note
In order to access the org namespace, super administrators must first log in via the root namespace. Once authenticated, the token granted can be used to operate on the org namespace.
Configuration for org namespace
Log in to the org namespace as a regular administrator and validate that the admin policy is attached to the returned token.
$ vault login -method=ldap username=nwong password=passw0rd
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key Value
--- -----
token hvs.root.token
token_accessor EbPflE8su5iRb2qD8KydXsUf.7fKdt
token_duration 1h
token_renewable true
token_policies ["default" "admin"]
identity_policies []
policies ["default" "admin"]
token_meta_username nwong
Note
Users who are mapped in the org namespace can still log in to the root namespace. However, their access is restricted by the default policy since there is no external group and policy mapping for these users in the root namespace.
Reference material:
Revoke the root token
Note
This section is not necessary for HCP Vault.
For Vault Enterprise, the root token is attached to the root policy, which is capable of performing every operation for all paths in Vault. After you have validated that you can log in using your first auth method, you should revoke the root token to eliminate the risk of exposure.
Revoke the root token using the following CLI command.
$ vault login <your root token>
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key Value
--- -----
token hvs.root.token
token_accessor JlM24NC1dk6MZLtup1XrKHYM
token_duration ∞
token_renewable false
token_policies ["root"]
identity_policies []
policies ["root"]
$ vault token revoke <your root token>
Success! Revoked token (if it existed)
Tip
In case of an emergency when a root token is absolutely necessary (for example, loss of an auth method preventing admin access and requiring break-glass access to Vault), you should generate a new root token using the operator generate-root command. See this tutorial that demonstrates the steps to regenerate a root token.
Summary
In this section, you reviewed the best practices for setting up audit devices. Recall that it is important to configure multiple audit devices to avoid a single blocked audit device. Next, you set up the initial namespace structure to secure the root namespace so that only super administrators can access Vault's system APIs. Finally, you configured human auth methods and the appropriate policies for your Vault administrators. The human auth method that you configure should be tied to your organization's identity management system. This gives you a single place to manage your users and reduces the risk of stale access in Vault, such as membership in a group a user has left or an account for someone who no longer works at the organization.