Migration strategies and considerations
If you have self-managed Vault clusters and want to move your data to Vault running in the HashiCorp Cloud Platform (HCP), there are migration strategies and considerations to keep in mind. This document covers example migration scenarios and considerations that can help you prepare to transition from your self-managed clusters to a hosted platform.
There is no migration tool available to move your self-managed clusters to HCP Vault Dedicated.
Migration planning
Migrating data from an existing Vault cluster to another requires a well-thought-out plan.
However, the advantages of moving from a self-hosted deployment to HCP Vault Dedicated are significant. Benefits include:
- Major and minor version upgrades with little to no downtime.
- Simplified disaster recovery and performance replication deployments.
- Cluster scaling up and down to meet workload demands.
- Reduced operational overhead for operating system maintenance.
The planning process also provides you with opportunities to understand and address any points of friction or contention that developers may experience with Vault.
To ensure a smooth migration, follow the recommended phases.
Discovery
During the discovery phase, identify the key stakeholders to understand how they use Vault. Workshops, interviews, and shadowing can uncover challenges to address during migration. Work with security teams to perform an initial threat modeling exercise.
Platform design
In the platform design phase, use the information collected during the discovery phase to define patterns and shared requirements.
A design grounded in the information you collected helps ensure your Vault implementation is an optimal fit for your organization. Continue gathering feedback throughout the migration process and iterate on the design as needed.
Pipeline design
Once you have completed the information-gathering tasks described in the discovery phase and created a platform design, design a pipeline for storing and deploying your Vault configuration. Store your Vault configuration in a version control system, shift security strategies left, and follow good practices such as:
- Branching
- Environment promotion
- Quality gates
- Testing
- Pipeline ownership
- Code integrity gates
- Secret scanning
Implementation
The implementation phase involves:
- Creating the infrastructure as code and associated modules to test, deploy, and update Vault.
- Writing policies to manage role-based access control.
- Implementing quality tests to identify gaps in the code and addressing any bugs identified.
Onboarding
Before onboarding production applications, start a pilot program with application teams.
A pilot program helps ensure a successful adoption strategy and builds trust with other teams. A pilot program tests the implementation locally on a development version. Feedback from the pilot program allows you to tweak the design and implementation based on any issues identified. Once stable, onboard applications gradually, one at a time, with support from both application and Vault teams.
Threat modeling
While application teams are conducting the pilot program, work with your security team to perform threat modeling exercises.
Controls include:
- Detective
- Preventive
- Corrective
This exercise ensures controls and processes comply with the security requirements before the final implementation.
Target operating model
As teams scale, existing designs may need re-evaluation. Use the same phased approach during reviews to adapt workflows and keep Vault aligned with business growth.
Migration prerequisites
Before you begin migrating an existing self-managed Vault cluster to HCP Vault Dedicated, consider the following:
Vault Enterprise version: The HCP Vault Dedicated version must be greater than or equal to the self-hosted Vault Enterprise version.
Cloud provider options: You deploy HCP Vault Dedicated to a HashiCorp-managed AWS or Azure account. Select a cloud provider that meets your workload requirements, or one that enables connectivity with your workloads.
Connectivity options: Depending on the cloud provider selected, there are several options to connect your workloads to HCP Vault Dedicated (see the connectivity check after this list). These include:
- Public cluster: Create a cluster with a publicly accessible endpoint. This option mirrors how most cloud provider secrets management services work. This is useful if you do not manage an AWS or Azure account that supports peering or transit gateway connectivity (AWS only).
- Public cluster with IP allow list: Create a public cluster and configure the IP allow list to restrict access to specific IP addresses or CIDR ranges. This option is useful if you have static public IP addresses, or you want to allow access only from a specific public CIDR.
- Private cluster with HVN connectivity: Create a private cluster and connect it to your HashiCorp Virtual Network (HVN). You can peer the HVN to your AWS or Azure virtual network, or connect to an AWS transit gateway. This option is useful if you want to keep all traffic private, and your clients are in a cloud provider's virtual network.
- Private cluster with VPN through an intermediary cloud provider: Create a private cluster, peer it to your HVN, and create a VPN connection to the HVN. This option is useful if you want to keep all traffic private, and your clients are on-premises, or in a location where you cannot peer the HVN to your cloud provider's virtual network.
Feature parity: Review any constraints and limitations before migrating to HCP Vault Dedicated. Some features require a specific tier of HCP Vault Dedicated.
Cluster tiers and sizing: Review the resource utilization and number of clients on your self-managed cluster, then select a tier that enables the features you require and a size that meets workload demands.
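Whichever connectivity option you choose, confirm that your workloads can reach the cluster endpoint before you begin migrating data. A minimal check, assuming the placeholder address below is replaced with your cluster's URL:
# sys/health is an unauthenticated endpoint; a JSON response confirms network reachability and TLS.
export HCP_VAULT_ADDR="https://vault.<region>.hcp.hashicorp.cloud:8200"
curl --silent "$HCP_VAULT_ADDR/v1/sys/health" | jq .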
Vault resources to migrate
The following is a list of resources to consider migrating. The list is not comprehensive; the resources you migrate depend on your specific Vault implementation.
- Namespaces
- Auth methods
- Roles/permissions
- Secrets engines
- Static secrets
- Policies
- Cryptographic keys
- Terraform provider migration
Namespace migration
You may have a namespace layout in your self-managed cluster that you want to replicate in HCP Vault Dedicated. In HCP Vault Dedicated, all customer data, including the entire child namespace structure, lives under the admin namespace.
The following script recreates the source cluster's namespace structure, including all nested namespaces, under the admin namespace in the target HCP Vault Dedicated cluster.
Example namespace migration bash script:
#!/bin/bash
# Required variables to be set in your terminal:
# export VAULT_TOKEN=<source_vault_token>
# export HCP_VAULT_TOKEN=<destination_token>
# export HCP_VAULT_ADDR=https://vault.<region>.hcp.hashicorp.cloud:8200
function namespace_loop() {
  local CURRENT="${1:-}" # Empty = root
  local TOP_LEVEL
  TOP_LEVEL=$(vault namespace list ${CURRENT:+-namespace="$CURRENT"} -format=json 2>/dev/null | jq -r '.[]' | sed 's:/$::')
  for namespace in $TOP_LEVEL; do
    local FULL_SRC_PATH="${CURRENT:+$CURRENT/}$namespace"
    local DST_PARENT_PATH="admin${CURRENT:+/$CURRENT}"
    # Check if the namespace already exists in HCP Vault Dedicated
    if VAULT_TOKEN=$HCP_VAULT_TOKEN vault namespace lookup \
      -address="$HCP_VAULT_ADDR" \
      -namespace="$DST_PARENT_PATH" "$namespace" &>/dev/null; then
      echo "Namespace already exists: $DST_PARENT_PATH/$namespace"
    else
      echo "Creating: $DST_PARENT_PATH/$namespace"
      VAULT_TOKEN=$HCP_VAULT_TOKEN vault namespace create \
        -address="$HCP_VAULT_ADDR" \
        -namespace="$DST_PARENT_PATH" "$namespace"
    fi
    # Recurse into the child namespace
    namespace_loop "$FULL_SRC_PATH"
  done
}
namespace_loop "$1"
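A sketch of how you might run the script, assuming you saved it as migrate-namespaces.sh (a hypothetical file name) and exported the variables listed at the top:
# Point the Vault CLI at the source cluster, then start the walk from the root namespace.
export VAULT_ADDR="https://<source-vault-address>:8200"
export VAULT_TOKEN="<source_vault_token>"
export HCP_VAULT_TOKEN="<destination_token>"
export HCP_VAULT_ADDR="https://vault.<region>.hcp.hashicorp.cloud:8200"
chmod +x migrate-namespaces.sh
./migrate-namespaces.sh            # start at the root namespace
./migrate-namespaces.sh team-a     # or start at a specific child namespace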
Auth methods
Specific auth method configuration depends on the method used. For example, the userpass auth method requires less configuration than the OIDC auth method. You can use the same tools to manage the HCP Vault Dedicated configuration that you use to manage a self-hosted cluster.
Review the HCP Vault Dedicated constraints and limitations for the most up-to-date information.
To inventory the auth methods enabled on your self-managed cluster, use the available read API endpoints (AWS, Kubernetes, and Okta, for example), which return the configuration for each auth method.
Vault does not return sensitive values such as passwords, API keys, and certificates. Create auth methods on HCP Vault Dedicated with the configuration retrieved from the read API, and supply sensitive values from the original source.
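For example, a minimal sketch that inventories the source cluster and reads the Kubernetes auth method configuration, assuming the method is mounted at the default kubernetes/ path:
# List all enabled auth methods on the source cluster.
vault auth list -format=json | jq -r 'keys[]'
# Read the Kubernetes auth method configuration; sensitive values are not returned.
vault read auth/kubernetes/config
# Recreate the method on HCP Vault Dedicated under the admin namespace,
# then write the configuration with sensitive values from the original source.
VAULT_TOKEN="$HCP_VAULT_TOKEN" vault auth enable \
  -address="$HCP_VAULT_ADDR" -namespace="admin" kubernetes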
Refer to the HCP Vault Dedicated constraints and limitations for the list of auth methods validated on HCP Vault Dedicated.
Secrets engines
There are some considerations to keep in mind when implementing secrets engines
in HCP Vault Dedicated. Like auth methods, you must enable and configure secrets
engine within HCP Vault Dedicated. Secret engines start in the /admin
namespace. Verify HCP Vault Dedicated supports the secrets
engine.
You can list the secrets engines enabled on your self-managed cluster and read each engine's configuration (AWS, KV version 2, and databases, for example). Secrets engine configuration does not return sensitive values or secrets.
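A minimal sketch, assuming an AWS secrets engine mounted at the default aws/ path:
# List all enabled secrets engines on the source cluster.
vault secrets list -format=json | jq -r 'keys[]'
# Read the AWS secrets engine configuration; the secret key is not returned.
vault read aws/config/root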
Static secrets
You can store static secrets in the Vault key/value (KV) secrets engine. Like self-hosted Vault, HCP Vault Dedicated supports both version 1 and version 2 of the KV secrets engine.
The following script migrates KV secrets from the self-managed Vault cluster to HCP Vault Dedicated. For nested namespaces, it loops through all namespaces and calls the migrate_kv_secrets_in_namespace function under the appropriate namespace. The script supports both KV version 1 and version 2.
Example static secret migration bash script:
#!/bin/bash
# Required variables to be set in your terminal:
# export VAULT_TOKEN=<source>
# export HCP_VAULT_TOKEN=<destination>
# export HCP_VAULT_ADDR=https://vault.<region>.hcp.hashicorp.cloud:8200

# Migrates KV secrets from one namespace to HCP Vault Dedicated
function migrate_kv_secrets_in_namespace() {
  SRC_NAMESPACE="$1"
  DST_NAMESPACE="admin${SRC_NAMESPACE:+/$SRC_NAMESPACE}"
  echo "Migrating namespace: $SRC_NAMESPACE → $DST_NAMESPACE"

  # Check if the destination namespace exists
  if [[ -n "$SRC_NAMESPACE" ]]; then
    DST_PARENT_PATH=$(dirname "$DST_NAMESPACE")
    DST_LEAF=$(basename "$DST_NAMESPACE")
    if ! VAULT_TOKEN="$HCP_VAULT_TOKEN" vault namespace lookup \
      -address="$HCP_VAULT_ADDR" \
      -namespace="$DST_PARENT_PATH" "$DST_LEAF" &>/dev/null; then
      echo "ERROR: Destination namespace $DST_NAMESPACE does not exist in HCP Vault Dedicated."
      return
    fi
  fi

  # Get KV mounts in this namespace
  kv_secrets=$(vault secrets list -namespace="$SRC_NAMESPACE" -format=json |
    jq -r 'to_entries[] | select(.value.type=="kv") | .key' | sed 's:/$::')

  for kv_mount in $kv_secrets; do
    echo "Mount: $kv_mount"

    # Detect KV version (default to 1)
    kv_version=$(vault secrets list -namespace="$SRC_NAMESPACE" -format=json |
      jq -r --arg path "$kv_mount/" '.[$path].options.version // "1"')

    # Check if the mount is already enabled on the destination
    mount_exists=$(VAULT_TOKEN="$HCP_VAULT_TOKEN" vault secrets list -address="$HCP_VAULT_ADDR" -namespace="$DST_NAMESPACE" -format=json |
      jq -r --arg path "$kv_mount/" 'has($path)')

    if [[ "$mount_exists" == "true" ]]; then
      echo "Mount $kv_mount already enabled in destination. Skipping."
    else
      VAULT_TOKEN="$HCP_VAULT_TOKEN" vault secrets enable -address="$HCP_VAULT_ADDR" \
        -namespace="$DST_NAMESPACE" -version="$kv_version" -path="$kv_mount" kv
    fi

    # List secrets in the mount. Nested paths (keys ending in "/") are not recursed.
    keys=$(vault kv list -namespace="$SRC_NAMESPACE" -mount="$kv_mount" -format=json 2>/dev/null | jq -r '.[]' || true)

    # Set the jq path for parsing KV data (KV v2 nests data one level deeper)
    key_path=$([[ "$kv_version" == "2" ]] && echo ".data.data" || echo ".data")

    for key in $keys; do
      echo "Secret: $key"

      # Read the secret
      secret_json=$(vault kv get -namespace="$SRC_NAMESPACE" -mount="$kv_mount" -format=json "$key" 2>/dev/null)

      # Extract key-value pairs. Assumes values without whitespace.
      kv_pairs=""
      keys_inner=$(echo "$secret_json" | jq -r "$key_path | keys[]" || true)
      for inner_key in $keys_inner; do
        value=$(echo "$secret_json" | jq -r --arg k "$inner_key" "$key_path[\$k]")
        kv_pairs="$kv_pairs $inner_key=$value"
      done

      # Write to the destination
      [[ -n "$kv_pairs" ]] && VAULT_TOKEN="$HCP_VAULT_TOKEN" vault kv put \
        -address="$HCP_VAULT_ADDR" \
        -namespace="$DST_NAMESPACE" \
        -mount="$kv_mount" \
        "$key" $kv_pairs && echo "Migrated $key"
    done
  done
}

# Recursively migrate all namespaces and their secrets
function recurse_and_migrate_namespaces() {
  local PARENT="$1"
  local CURRENT="${PARENT:-}"
  migrate_kv_secrets_in_namespace "$CURRENT"

  # List child namespaces
  namespaces=$(vault namespace list ${CURRENT:+-namespace="$CURRENT"} -format=json 2>/dev/null | jq -r '.[]' | sed 's:/$::')
  for ns in $namespaces; do
    recurse_and_migrate_namespaces "${CURRENT:+$CURRENT/}$ns"
  done
}

# Start
recurse_and_migrate_namespaces
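After the script completes, you can spot-check a migrated secret against the destination cluster. A minimal sketch, assuming a KV mount named secret and a secret at app/config (both hypothetical):
# Read the secret from the source cluster.
vault kv get -mount="secret" app/config
# Read the same secret from HCP Vault Dedicated and compare the output.
VAULT_TOKEN="$HCP_VAULT_TOKEN" vault kv get \
  -address="$HCP_VAULT_ADDR" -namespace="admin" -mount="secret" app/config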
Roles and permissions
The HashiCorp Cloud Platform maintains a separate and distinct identity and access management solution to support managing resources in your HCP account. HCP IAM users and roles do not provide access to the resources managed in Vault.
Policies
You can migrate policies used for self-hosted Vault to HCP Vault Dedicated. How you configured your self-hosted cluster dictates whether you have to make any changes to the existing policy.
If you create policies in the root namespace, you need to update the paths in each policy to reflect the new namespace structure, based on the admin namespace.
If you create policies directly in a child namespace that permit access to resources in the same namespace, the policy should work without updates.
Refer to the Manage tenants with Vault namespaces tutorial to learn more about managing policies in namespaces.
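For example, a sketch of recreating a root-namespace policy in the admin namespace, assuming a hypothetical policy that grants read access to a KV mount in a team-a child namespace:
# Policy paths are relative to the namespace where you create the policy.
# Created in the admin namespace, the team-a/ prefix resolves to admin/team-a,
# which is where the migrated data now lives.
VAULT_TOKEN="$HCP_VAULT_TOKEN" vault policy write \
  -address="$HCP_VAULT_ADDR" -namespace="admin" team-a-read - <<EOF
path "team-a/secret/data/app/*" {
  capabilities = ["read", "list"]
}
EOF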
The following script loops through your self-hosted Vault policies and creates them in HCP Vault Dedicated. For nested namespaces, the script loops through namespaces and calls the migrate_policies_in_namespace function under the appropriate namespace.
Example bash script to be run on self-managed cluster:
#!/bin/bash
# Required variables to be set in your terminal:
# export VAULT_TOKEN=<source_vault_token>
# export HCP_VAULT_TOKEN=<destination_token>
# export HCP_VAULT_ADDR=https://vault.<region>.hcp.hashicorp.cloud:8200

# Migrate policies from one namespace to the mapped HCP namespace
function migrate_policies_in_namespace() {
  local SRC_NAMESPACE="$1"
  local DST_NAMESPACE="admin${SRC_NAMESPACE:+/$SRC_NAMESPACE}"
  echo "Migrating policies from: '${SRC_NAMESPACE:-root}' → '$DST_NAMESPACE'"

  # If not root, check that the destination namespace exists in HCP
  if [[ -n "$SRC_NAMESPACE" ]]; then
    ns_check=$(VAULT_TOKEN="$HCP_VAULT_TOKEN" vault namespace list \
      -address="$HCP_VAULT_ADDR" \
      -namespace="$(dirname "$DST_NAMESPACE")" \
      -format=json 2>/dev/null)
    if ! echo "$ns_check" | jq -e --arg ns "$(basename "$DST_NAMESPACE")/" '.[] | select(. == $ns)' >/dev/null; then
      echo "Destination namespace '$DST_NAMESPACE' does not exist in HCP Vault Dedicated."
      echo "Exiting policy migration."
      exit 1
    fi
  fi

  local policy_names
  policy_names=$(vault policy list ${SRC_NAMESPACE:+-namespace="$SRC_NAMESPACE"} 2>/dev/null)

  for name in $policy_names; do
    # Skip the root policy (not allowed in HCP Vault Dedicated)
    if [ "$name" == "root" ]; then
      echo "Skipping 'root' policy (cannot be migrated)"
      continue
    fi

    echo "Migrating policy: $name"
    local policy_content
    policy_content=$(vault policy read ${SRC_NAMESPACE:+-namespace="$SRC_NAMESPACE"} "$name")

    VAULT_TOKEN="$HCP_VAULT_TOKEN" vault policy write \
      -address="$HCP_VAULT_ADDR" \
      -namespace="$DST_NAMESPACE" \
      "$name" - <<< "$policy_content"
    echo "Policy '$name' migrated to $DST_NAMESPACE"
  done
}

# Recursively walk namespaces and migrate policies
function recurse_and_migrate_policies() {
  local PARENT="$1"
  local CURRENT="${PARENT:-}"
  migrate_policies_in_namespace "$CURRENT"

  local namespaces
  namespaces=$(vault namespace list ${CURRENT:+-namespace="$CURRENT"} -format=json 2>/dev/null | jq -r '.[]' | sed 's:/$::')
  for ns in $namespaces; do
    recurse_and_migrate_policies "${CURRENT:+$CURRENT/}$ns"
  done
}

# Start recursion from the root namespace
recurse_and_migrate_policies
Cryptographic transit keys
You can migrate most keys using the export functionality. Once you make keys exportable, you cannot reverse the action in the source cluster.
Security consideration
If your security policies prohibit exportable keys, consider marking the key exportable for the migration. When you import the key to HCP Vault Dedicated, set allow_rotation=true but do not set exportable=true.
Rotate the key once you have onboarded and tested all applications that depend on the key.
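A minimal sketch of the rotation, assuming a migrated key named app-key (hypothetical) under a transit engine at the default path:
# Rotate the key so new encryption operations use key material that was never exported.
VAULT_TOKEN="$HCP_VAULT_TOKEN" vault write -f \
  -address="$HCP_VAULT_ADDR" -namespace="admin" transit/keys/app-key/rotate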
The following script runs for each transit key. The script makes the key exportable, backs up the key, enables a new transit secrets engine, and restores the key in HCP Vault Dedicated.
Example bash script to get all keys, update configuration and save backups:
#!/bin/bash
# Pre-requisites
# export VAULT_TOKEN=<source_vault_token>
# export HCP_VAULT_TOKEN=<destination_token>
# export HCP_VAULT_ADDR=https://vault.<region>.hcp.hashicorp.cloud:8200

# Migrate all transit keys from a given source namespace to the corresponding admin/<namespace> in HCP
function migrate_transit_keys_in_namespace() {
  local SRC_NAMESPACE="$1"
  local DST_NAMESPACE="admin${SRC_NAMESPACE:+/$SRC_NAMESPACE}"
  echo "Migrating transit keys from: '${SRC_NAMESPACE:-root}' → '$DST_NAMESPACE'"

  # List transit keys in this namespace
  local keys
  keys=$(vault list -format=json ${SRC_NAMESPACE:+-namespace="$SRC_NAMESPACE"} transit/keys 2>/dev/null | jq -r '.[]')

  for key in $keys; do
    echo "Processing key: $key"

    # Make the source key exportable and allow plaintext backup
    vault write ${SRC_NAMESPACE:+-namespace="$SRC_NAMESPACE"} transit/keys/"$key"/config allow_plaintext_backup=true exportable=true

    # Read the backup from the source
    local backup
    backup=$(vault read -format=json ${SRC_NAMESPACE:+-namespace="$SRC_NAMESPACE"} transit/backup/"$key" 2>/dev/null | jq -r '.data.backup')
    if [ -z "$backup" ]; then
      echo "Skipping: Key '$key' not exportable or backup failed"
      continue
    fi

    # Enable the transit engine on the destination if not already enabled (suppress error)
    VAULT_TOKEN="$HCP_VAULT_TOKEN" vault secrets enable -address="$HCP_VAULT_ADDR" -namespace="$DST_NAMESPACE" transit 2>/dev/null || true

    # Restore the key to the destination Vault
    VAULT_TOKEN="$HCP_VAULT_TOKEN" vault write -address="$HCP_VAULT_ADDR" -namespace="$DST_NAMESPACE" transit/restore backup="$backup"
    echo "Migrated: $key → $DST_NAMESPACE"
  done
}

# Recursively walk through namespaces and migrate transit keys
function recurse_and_migrate_transit_keys() {
  local PARENT="$1"
  local CURRENT="${PARENT:-}"
  migrate_transit_keys_in_namespace "$CURRENT"

  local namespaces
  namespaces=$(vault namespace list ${CURRENT:+-namespace="$CURRENT"} -format=json 2>/dev/null | jq -r '.[]' | sed 's:/$::')
  for ns in $namespaces; do
    recurse_and_migrate_transit_keys "${CURRENT:+$CURRENT/}$ns"
  done
}

# Start the recursion from the root namespace
recurse_and_migrate_transit_keys
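To verify a migrated key, encrypt a test value on the source cluster and decrypt it on HCP Vault Dedicated. A minimal sketch, assuming a key named app-key (hypothetical) in the root namespace:
# Encrypt a test value on the source cluster.
CIPHERTEXT=$(vault write -field=ciphertext transit/encrypt/app-key \
  plaintext="$(base64 <<< "migration test")")
# Decrypt on the destination; matching plaintext confirms the key material migrated.
VAULT_TOKEN="$HCP_VAULT_TOKEN" vault write -field=plaintext \
  -address="$HCP_VAULT_ADDR" -namespace="admin" transit/decrypt/app-key \
  ciphertext="$CIPHERTEXT" | base64 --decode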
If you do not want to make your keys exportable, create new keys in HCP Vault Dedicated.
Terraform provider migration
If you use the Vault Terraform provider to manage and deploy a self-managed Vault cluster, you can update your existing configuration to work with HCP Vault Dedicated. The amount of change required depends on your Vault configuration. Switch to the HCP Vault Dedicated endpoint using the HCP Vault Dedicated cluster URL, token, and /admin namespace.
You must run Terraform from a location that is able to connect to your HCP Vault Dedicated cluster. If your HCP Vault Dedicated cluster is public, use the public URL. If your cluster is private, run Terraform from a connected environment with an active peering, transit gateway, or VPN connection.
You can then follow typical Terraform workflows by running terraform init, terraform plan, and terraform apply.
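A minimal sketch of pointing the Vault Terraform provider at the cluster using the environment variables the provider reads (the address below is a placeholder):
# The Vault provider honors these environment variables, so you can switch
# clusters without editing the provider block.
export VAULT_ADDR="https://vault.<region>.hcp.hashicorp.cloud:8200"
export VAULT_TOKEN="<hcp_vault_dedicated_admin_token>"
export VAULT_NAMESPACE="admin"
terraform init
terraform plan
terraform apply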
Review the Migrate to HCP Vault Dedicated with codified configuration tutorial to learn more.
Post migration
After you complete the migration, verify Vault resources such as secrets engines and auth methods work as expected. Consider implementing a continuous improvement program to monitor for reported issues and continue gathering feedback from key stakeholders. Update your Vault configuration as needed to address new issues as they arise.
Once you have migrated all workloads to HCP Vault Dedicated, ensure you update all relevant documentation, configuration management databases (CMDB), and operational runbooks to reflect the changes in your Vault architecture.
Update any monitoring and alerting configurations to ensure alignment with the new HCP Vault Dedicated environment.
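A short sketch of post-migration spot checks against the new cluster:
# Confirm the expected secrets engines, auth methods, and namespaces exist under admin.
VAULT_TOKEN="$HCP_VAULT_TOKEN" vault secrets list -address="$HCP_VAULT_ADDR" -namespace="admin"
VAULT_TOKEN="$HCP_VAULT_TOKEN" vault auth list -address="$HCP_VAULT_ADDR" -namespace="admin"
VAULT_TOKEN="$HCP_VAULT_TOKEN" vault namespace list -address="$HCP_VAULT_ADDR" -namespace="admin"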
Summary
There are several things to consider when migrating from self-managed Vault clusters to a managed platform like HCP Vault Dedicated. In the absence of a migration tool, most strategies require you to perform the migration steps manually, replicating or transitioning each resource with varying overhead.
For more information about HCP Vault Dedicated, check out our Documentation and Learn Tutorials.