Performance replication with paths filter
Enterprise Only
Performance Replication requires a Vault Enterprise Premium license. To learn how this works with HCP Vault, refer to the HCP Vault Performance Replication tutorial.
Paths filter is a new way of controlling which secrets are moved across clusters and physical regions as a result of replication. With replication filters, users can select which secrets engines will be replicated as part of a Performance Replication relationship.
By default, all non-local secrets engines and their associated data are replicated to the secondary clusters. The paths filter feature enables users to allow or deny which secrets engines are replicated, thereby giving users further control over the movement of secrets across their infrastructure.

Challenge
General Data Protection Regulation (GDPR) is designed to strengthen data protection and privacy for all individuals within the European Union. It requires that personally identifiable data not be physically transferred to locations outside the European Union unless the region or country has an equal rigor of data protection regulation as the EU.
Failure to abide by GDPR will result in fines as high as 20 million EUR or 4% of global annual revenue (whichever is greater).
Solution
Leverage Vault's paths filter feature to comply with data movement and sovereignty regulations while ensuring performant access across geographically distributed regions. You can set filters based on the mount path of the secrets engines as well as on namespaces.
Prerequisites
This intermediate Vault operations tutorial assumes that you have some working knowledge of Vault.
You need two Vault Enterprise clusters: one representing the EU cluster and another representing the US cluster, both backed by Consul for storage.
Note
Refer to the Vault High Availability with Consul tutorial for configuring your Vault server.
Scenario Introduction
An organization has a Vault cluster in the EU and wishes to expand into the United States by setting up a secondary cluster and enabling Performance Replication. However, some data must remain in the EU and should not be replicated to the US cluster.

Leverage the paths filter feature to deny the secrets that are subject to GDPR from being replicated across the regions.
- Segment GDPR and non-GDPR secrets
- Enable Performance Replication with paths filter
- Secondary cluster re-authentication
- Verify the replication paths filter
- Enable a local secrets engine
- Enable a local auth method
Note
Ensure that GDPR data is segmented by secret mount and deny the movement of those secret mounts to non-GDPR territories.
Step 1: Segment GDPR and non-GDPR secrets
In the EU cluster (primary cluster), enable key/value secrets engines:
- At `EU_GDPR_data` for GDPR data
- At `US_NON_GDPR_data` for non-GDPR data localized for the US
Also, create two namespaces: `office_FR` and `office_US`.

- Enable the key/value v2 secrets engine at the `EU_GDPR_data` path.
- Enable the key/value v2 secrets engine at the `US_NON_GDPR_data` path.
- Create a namespace named `office_FR`.
- Create a namespace named `office_US`.
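The tutorial's original command snippets are not shown above. A minimal sketch of these four steps with the Vault CLI, assuming your `VAULT_ADDR` points at the EU (primary) cluster and you are logged in with a token permitted to manage secrets engines and namespaces, might look like this:

```
# Enable key/value v2 secrets engines for GDPR and non-GDPR data
$ vault secrets enable -path=EU_GDPR_data kv-v2
$ vault secrets enable -path=US_NON_GDPR_data kv-v2

# Create the namespaces
$ vault namespace create office_FR
$ vault namespace create office_US
```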
Step 2: Enable Performance Replication with paths filters
Enable Performance Replication on the primary cluster.

Note

If the primary's cluster address is not directly accessible and must be accessed via an alternate path/address (e.g. through a TCP-based load balancer), use the `primary_cluster_addr` parameter to specify the address to be used by the secondaries. Otherwise, the secondaries use the configured cluster address to connect to the primary. See the Vault High Availability with Consul tutorial for an example Vault server configuration.

Next, generate a secondary token.
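A minimal sketch of these two operations, run against the EU (primary) cluster and using `secondary` as an illustrative id for the US cluster:

```
# On the EU (primary) cluster: activate Performance Replication as the primary
$ vault write -f sys/replication/performance/primary/enable

# Generate an activation token for a secondary identified as "secondary"
$ vault write sys/replication/performance/primary/secondary-token id="secondary"
```

The response to the second command includes a `wrapping_token`, which you will pass to the secondary cluster in a later step.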
Create a paths filter to deny `EU_GDPR_data` (kv-v2) and `office_FR` (namespace) from being replicated. Then enable Performance Replication on the secondary cluster by passing the `wrapping_token` obtained from the primary cluster.

Note

This will immediately clear all data in the secondary cluster.
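A sketch of these two steps, reusing the illustrative secondary id `secondary` and assuming `<wrapping_token>` is the value returned by the secondary-token command above:

```
# On the EU (primary) cluster: deny the GDPR mount and namespace for this secondary
$ vault write sys/replication/performance/primary/paths-filter/secondary \
    mode="deny" paths="EU_GDPR_data/,office_FR/"

# On the US (secondary) cluster: activate replication as a performance secondary
$ vault write sys/replication/performance/secondary/enable token="<wrapping_token>"
```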
Step 3: Secondary cluster re-authentication
Note
From this point and on, the secondary cluster requires the primary cluster's unseal key to unseal. If the secondary is in an HA cluster, each standby node needs the primary cluster's unseal keys to unseal. The secondary cluster mirrors the configuration of its primary cluster's backends such as auth methods, secrets engines, audit devices, etc. It uses the primary as the source of truth and passes token requests to the primary.
The initial root token on the secondary no longer works; therefore, perform one of the following:
- Option 1: Use the auth methods configured on the primary cluster to log into the secondary
- Option 2: Generate a new root token using the primary's unseal key
- Option 3: Generate a batch token to use across replication clusters
Option 1
On the primary cluster:

- Create a `superuser` policy.
- Enable the `userpass` auth method.
- Create a new user, `tester`, in userpass where the password is `changeme` and the `superuser` policy is attached.

Note

If you are not familiar with policies, refer to the policies tutorial.
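The exact policy used by the tutorial is not reproduced here. As a sketch, assuming a deliberately broad `superuser` policy (the policy body below is illustrative), the primary-side commands could be:

```
# On the EU (primary) cluster: create a broad superuser policy (illustrative body)
$ vault policy write superuser - << EOF
path "*" {
  capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
EOF

# Enable userpass and create the tester user with the superuser policy
$ vault auth enable userpass
$ vault write auth/userpass/users/tester password="changeme" policies="superuser"
```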
Log into the secondary cluster using the enabled auth method.
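Assuming `VAULT_ADDR` now points at the US (secondary) cluster, the login might look like:

```
# On the US (secondary) cluster: the userpass method is replicated from the primary
$ vault login -method=userpass username="tester" password="changeme"
```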
Option 2
On the secondary cluster, generate a new root token using the primary cluster's unseal key.
On the secondary cluster, initialize the root generation operation.
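A minimal sketch of this step against the US (secondary) cluster:

```
# On the US (secondary) cluster: start a new root token generation attempt
$ vault operator generate-root -init
```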
In the output, a nonce and a one-time password (OTP) are generated. The nonce value should be distributed to all unseal key holders (recovery key holders if auto-unseal is used), and you will need the OTP to decode the generated root token.
Execute the `generate-root` command until the threshold is reached. When prompted, enter the unseal key.
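As a sketch, each key holder runs the command against the secondary cluster, where `<nonce>` is the nonce from the initialization output:

```
# Provide one unseal key per invocation (the command prompts for the key);
# repeat until the threshold is reached
$ vault operator generate-root -nonce="<nonce>"
```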
When the threshold number of unseal keys (or recovery keys) has been supplied, the output includes the encoded root token.
Finally, decode the encoded token using the OTP.
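For example, assuming `<encoded_token>` is the encoded root token and `<otp>` is the OTP from the initialization step:

```
$ vault operator generate-root -decode="<encoded_token>" -otp="<otp>"
```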
Log into the secondary cluster using the generated root token.
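For example, assuming `<root_token>` is the decoded token from the previous step:

```
# On the US (secondary) cluster
$ vault login <root_token>
```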
Option 3
Unlike service tokens, batch tokens can be used across the Performance Replication clusters. So, you can generate a batch token on the primary cluster and use it to log into the secondary cluster.
Similar to Option 1, create a policy on the primary cluster to attach to the token.
Generate an orphan batch token which is valid for 24 hours.
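A sketch of the token creation on the primary, reusing the `superuser` policy from Option 1:

```
# On the EU (primary) cluster: create an orphan batch token valid for 24 hours
$ vault token create -type=batch -orphan -policy="superuser" -ttl=24h
```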
Note

The batch token must be an orphan token (`-orphan`) since the secondary cluster will not be able to ensure the validity of its parent token.

Log into the secondary cluster using the generated batch token.
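For example, assuming `<batch_token>` is the token created above:

```
# On the US (secondary) cluster
$ vault login <batch_token>
```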
Step 4: Verify the paths filter
Once the replication completes, verify that the secrets stored in `EU_GDPR_data` never get replicated to the US cluster.
On the EU (primary) cluster, write some secrets for testing:

- Write a test secret at `EU_GDPR_data/secret`.
- Write a test secret at `US_NON_GDPR_data/secret`.
- List the existing namespaces.
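A sketch of these steps, with illustrative key/value test data:

```
# On the EU (primary) cluster
$ vault kv put EU_GDPR_data/secret pswd="my-long-password"     # illustrative data
$ vault kv put US_NON_GDPR_data/secret apikey="my-api-key"     # illustrative data
$ vault namespace list
```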
Now, from the US (secondary) cluster, read the secrets.
- Read the secrets at `EU_GDPR_data/secret`.
- Read the secrets at `US_NON_GDPR_data/secret`.
- List the existing namespaces.

Notice that `office_US` is the only namespace listed.
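A sketch of the verification, assuming `VAULT_ADDR` points at the US (secondary) cluster:

```
# The EU_GDPR_data mount is not replicated, so this read should fail
$ vault kv get EU_GDPR_data/secret

# The non-GDPR data is replicated and readable
$ vault kv get US_NON_GDPR_data/secret

# Only the office_US namespace should be listed
$ vault namespace list
```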
Note
Refer to the Monitoring Vault Replication tutorial for replication health check.
Step 5: Enable local secrets engine
When replication is enabled, you can mark a secrets engine or auth method as local only. Local secrets engines are not replicated or removed by replication.
Enable the local secrets engine by logging into the secondary cluster and defining a key/value secrets engine at the path `US_ONLY_data` to store secrets valid only for the US region. Pass the `-local` flag.
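A sketch, assuming key/value version 2 (the original may use a different KV version):

```
# On the US (secondary) cluster: this mount is local and will not be replicated
$ vault secrets enable -local -path=US_ONLY_data kv-v2
```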
Step 6: Enable local auth method
Note
In Vault 1.9, the computation of client metrics has changed so that tokens generated by local auth methods count towards clients (unique entities) instead of non-entity tokens. This prevents improper billing due to the non-entity tokens issued by local auth methods. Prior to Vault 1.9, local auth methods contributed towards non-entity token counts. To learn more about client count, refer to the usage metrics tutorial. If you are unfamiliar with the entity concept, refer to the Identity: Entities and Groups tutorial.
Enable the local auth method by logging into the secondary cluster and defining a username and password auth method at the path `US_ONLY_userpass` to handle logins valid only for the US region. Pass the `-local` flag.
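A sketch of enabling the local auth method:

```
# On the US (secondary) cluster: this auth mount is local and will not be replicated
$ vault auth enable -local -path=US_ONLY_userpass userpass
```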
Security Note

If your goal in marking an auth method as local is to comply with GDPR guidelines, then you must take care to not set the data pertaining to local auth mounts or local auth mount aliases in the metadata of the associated entity. As of Vault 1.9, metadata related to local auth mount aliases can be stored as `custom_metadata` on the alias itself, which will also be retained locally to the cluster. The local auth methods documentation has more details.
Help and Reference
- Preparing for GDPR Compliance with HashiCorp Vault webinar
- Preparing for GDPR Compliance with HashiCorp Vault blog post
- Create Paths Filter (API)
- Performance Replication and Disaster Recovery (DR) Replication
- Monitoring Vault Replication tutorial