Integrate HCP Vault Secrets with CI/CD build and deploy pipelines
| HashiCorp Products | HCP Vault Secrets, Packer, HCP Packer, Terraform |
| --- | --- |
| Partner Products | N/A |
| Maturity | Standardize |
| Use case coverage | Manage pipeline secrets with HCP Vault Secrets |
| Environment | All |
| Tags | HCP Vault Secrets, Packer, HCP Packer, Terraform, CICD |
| Publish/Updated date | December 2024, Version 0.0 |
| Authors | Mark Lewis |
Purpose of this guide
This Integration Pattern provides guidance on how to use HCP Vault Secrets (HVS) in your CI/CD machine image build and deploy pipeline workflow, and on integrating HVS with Packer, HCP Packer, and Terraform to do so. For reference, also read the HCP Packer Integration Pattern.
Target audience
This document is for DevOps practitioners who wish to use a cloud-agnostic, SaaS-based approach to managing CI/CD automation pipeline secrets.
Benefits
- No CSP vendor lock-in: Use this solution with AWS, GCP, Azure, and others.
- Common access pattern, removing the need to learn CSP-specific technologies for pipeline secrets management.
- No need to manage in-house/on-premise secrets management tools for pipeline secrets.
- Use of HCP Vault Secrets includes enterprise-grade, per-tenant AES256-GCM96 data keys and secrets authenticated with HMAC-SHA256, providing encryption in-flight and at rest.
Prerequisites and limitations
Prerequisites
- HCP account
- Consider these tiers. Standard or Plus tier is recommended.
- If you intend to follow this guide in a development environment and work through the TLS material management example, you need a suitable CA and at least one TLS certificate and private key generated from it.
- Command line utilities base64 and gzip, and the HashiCorp Cloud Platform CLI tool, hcp.
- Read this document on the use of the hcp binary.
- This pattern adheres to the design strategy in this document on how to divide your secrets management into HCP projects and applications.
- Understanding of Linux/UNIX command line utilities and bash is recommended.
Limitations
- Consider the HVS limitations. The example below highlights the use of compression tooling to mitigate the maximum secret value size limitation.
Integration architecture
Figure 1: Use of HCP Vault Secrets together with Packer, HCP Packer, and Terraform in a pipeline orchestrating the creation of golden and application-specific image(s), followed by the secure deployment of application clusters. See below for full details.
As detailed in the architecture diagram above, the workflow is as follows.
- Pipeline operators who own both the golden image and application image build pipelines write pipeline secrets, such as application TLS material, into HVS (TLS material is used in the example below). If you only own the application image build, we recommend building the hcp binary into the phoenix image instead of the base image, thereby using only the right-hand side of the diagram above.
- The standard golden image pipeline is triggered via your strategic orchestrator, where Packer does the following.
  - Takes a standard distribution Linux image such as a marketplace image.
  - Hardens it per organizational/regulatory policy.
  - Bakes in the hcp binary.
  - Outputs a golden image and writes metadata to HCP Packer.
  - Calls Terraform to run a unit test deployment of the new image. Terraform checks HCP Packer metadata.
- App teams trigger their application machine image build pipeline. This uses the strategic orchestrator where, again, Packer does the following.
  - Builds the application phoenix machine image using the latest golden image.
  - Writes metadata into an HCP Packer channel specific to the image.
  - Triggers Terraform to unit test the build. Terraform checks HCP Packer for phoenix image metadata.
- App teams then deploy an application cluster using the new phoenix machine image. The deployment uses Terraform which queries the appropriate HCP Packer channel to get the most recent approved image.
- The VMs boot and use the hcp binary to access the HVS-located secrets loaded in the first point above as part of standing the application up across the cluster.
Note
Phoenix is a convenience term used in this document for the recommended, immutable, iteratively rebuilt and patched second-layer application machine images, built from the latest organizational golden image with Packer. Like the mythical firebird, each iteration of the image rises from the ashes of its former self. The term originates from an article by the prominent developer Martin Fowler.
Best practices
Tool use
- We recommend the use of Packer to build both golden and phoenix machine images because it is the de facto image management product.
- We also recommend HCP Packer to store and manage machine image metadata because it provides the following benefits.
- Management of image metadata in one place.
- Allows DevOps teams to revoke vulnerable images easily, as well as scale out their image bakery.
- We recommend using Terraform, the industry-standard, CSP-agnostic deployment technology, for the following.
- Testing image builds.
- Deploying all cloud artifacts including the machine images which use HVS for pipeline secrets, such as TLS certificates and private keys.
Immutability and automation
- Bake the hcp binary into your golden image. This means the binary will be present in every downstream machine image and deployed VM, allowing access to HVS secrets in a suitably scoped manner.
- Fully automate your golden image build pipeline, all of your phoenix machine image builds, and your application cluster VM deployment and configuration logic. Anything short of end-to-end automation inevitably means human intervention will be required, a practice which does not scale.
- All of the code snippets below are stripped down for ease of reading. When implementing your pipeline secrets in code using the hcp CLI and HCP Vault Secrets, we strongly recommend testing the return code of every operation performed, particularly those that write files, such as hcp vault-secrets secrets open ... with the -o flag. See the example in Step 1 below.
- The hcp profile set command will not error if an invalid application name is specified. This is because, to the hcp binary, this command only amends properties available to the tool. We recommend you automate validation of the application name passed as part of your scripting; a sketch of such a check follows the example output below. Consider the following example. We recommend passing the --app flag to the hcp CLI as shown, but both approaches (setting the application in the profile or passing --app per command) are valid.
$ hcp vault-secrets secrets list --app phoenix_consul # phoenix-consul should be used
ERROR: failed to list secrets: [GET
/secrets/2023-11-28/organizations/{organization_id}/projects/{project_id}/apps/{app_name}/secrets][403] ListAppSecrets
default &{Code:7 Details:[] Message:}
$ hcp vault-secrets secrets list --app phoenix-consul # now works
Secret Name Type Created At Latest Version
ap_southeast_1_node1_consu_example_com_cert_0 kv 2024-10-03T22:00:33.519Z 1
...
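One way to automate that check is to attempt a secrets listing against the application name up front and fail the pipeline immediately if the lookup errors. A minimal sketch, assuming the application name is held in a nominal app shell variable, follows.
# Fail fast if the HVS application name is invalid or inaccessible,
# rather than discovering the problem later in the pipeline.
if ! hcp vault-secrets secrets list --app "${app}" >/dev/null 2>&1
then
  echo "HVS application ${app} is invalid or inaccessible" >&2
  exit 1
fi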
Secrets nomenclature
The example in this document uses secrets suffixed with an integer 0 for node 0 of a cluster. This may be unnecessary in your application's case. The example is based on a use case where the nodes in an application cluster need to be numbered because one node must perform a role during deployment that the others must not, such as bootstrapping the HashiCorp Consul Enterprise ACL system for a whole cluster.
If your use case uses an auto scaling group of some kind where all nodes in the cluster are completely equal, then such a suffix can be omitted. We recommend use of a secrets nomenclature which works for your specific use case.
For a CI/CD pipeline which deploys a cluster of application VMs, we recommend including the following fields. Bear in mind that HVS currently limits secret names to 64 characters matching [0-9a-zA-Z_]. Secret name fields should be computed both by the automation which loads the secrets into HVS and by the code baked into the nodes, or the configuration management tooling, which extracts the secrets for use on the node in question. The example follows this nomenclature.
- Region
- Environment
- Application name
- Cloud (if using a multi cloud strategy where nodes may be on more than one CSP)
- FQDN elements
- Node number if applicable
HVS has limitations on the characters usable in secret names. For the example below, if the TLS certificate DNS SAN means the FQDN is instantiated as fqdn=server.eu-north-1.dev-appA-teamX.aws.example-company.com, we need to manipulate the fields to avoid illegal characters in the secret names. The following commands, or their equivalents, should be used to avoid runtime errors.
region=$(echo ${fqdn} | cut -d. -f2 | tr - _)  # eu_north_1 format required
domain=$(echo ${fqdn} | cut -d. -f5 | tr - _)  # example_company
# The following can then be used to build the HVS secret name.
secret_name="${region}_${env}_${app_name}_${team_name}_${csp}_${domain}_cert_${node_number}"
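As a guard against runtime errors, the loading script can also validate each computed name before calling the CLI. The following is a minimal sketch based on the 64-character, [0-9a-zA-Z_] constraint described above; secret_name is the variable computed in the example.
# Abort early if the computed secret name would be rejected by HVS.
if [[ ! ${secret_name} =~ ^[0-9a-zA-Z_]{1,64}$ ]]
then
  echo "invalid HVS secret name: ${secret_name}" >&2
  exit 1
fi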
Steps
This section provides the steps needed to automate the management and consumption of pipeline secrets using HVS.
The steps below use the example of storing a ca.pem TLS CA bundle file, plus the TLS certificate and private key files needed to secure your application as the machines hosting it are deployed to the cloud, irrespective of which cloud that is.
Note
The handling of TLS material in this example matches a common scenario where a customer has a CA managed by a security team which issues static TLS certificates for use in internal applications, certificates which then need to be rolled manually. HCP Vault Secrets stores arbitrary secrets; TLS material is used here only for illustration. If possible, we recommend automating PKI workloads using Vault Enterprise or HCP Vault.
As we use TLS material management as the example for illustrating the pattern, we assume that you have a CA already available to your organization, that you know how to generate TLS material using this CA, and that you have already generated the files, which are now ready to be loaded securely into HVS and removed from local storage.
For the purposes of this document, we assume that HVS will be used across a number of different application teams within a single project. Consideration for multiple projects is touched on, but becomes a matter of script-based iteration as you automate the workflow. To this end, we refer to the following; a sketch for creating the corresponding HVS applications follows this list.
- phoenix_meta: which refers to HVS secrets which are applicable to all phoenix machine image builds. We store the generally applicable CA bundle PEM file which is baked into all machine images here.
- phoenix_appN: this refers to the phoenix machine image builds of application team N. Replace these references with your relevant application teams reference(s) which require baking into their respective machine images.
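If the phoenix-meta and phoenix-appN HVS applications do not yet exist in the target project, they can be created up front by the automation. The following is a minimal sketch; the descriptions are illustrative, and you should confirm the exact subcommand syntax with hcp vault-secrets apps create --help for your CLI version.
# Create the HVS applications used in the steps below (names are illustrative).
hcp vault-secrets apps create phoenix-meta --description "Secrets common to all phoenix image builds"
hcp vault-secrets apps create phoenix-appN --description "Pipeline secrets for application team N"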
Step 1: Load static secrets into HVS
Loading secrets into HVS requires several scripted steps. We recommend first running them manually on the command line to get a feel for the workflow. Secrets default to the static type on HVS, and static secrets are used for the application TLS material in this illustration. Consider rotating and dynamic credentials if they are relevant to your application deployment pipeline.
- Instantiate the following environment variables: HCP_PROJECT_ID, HCP_ORGANIZATION_ID, HCP_CLIENT_ID, and HCP_CLIENT_SECRET, scoped to be able to write secrets to the HVS project/app as appropriate.
- If using multiple HCP projects, scoped access is required so that the CA bundle can be written to HVS in a way that application teams in different HCP projects can read it.
- Most customers start with a single HCP project, so scoping is done within that project. However, if you have multiple geographically-located offices with separate projects, then the CA bundle will need to be written to each project, and sufficient access will be required. For this use case, the script should be able to iterate over a number of projects when writing secrets (see the sketch after this step's commands).
- For the CA bundle, run the following commands in order.
# Set organization ID
hcp profile set organization_id "${HCP_ORGANIZATION_ID}"
# Set the project, heeding recommendation [1] above
hcp profile set project_id "${HCP_PROJECT_ID}"
# To login using the above credentials
hcp auth login --client-id="${HCP_CLIENT_ID}" --client-secret="${HCP_CLIENT_SECRET}"
# For the phoenix-meta HVS application used to store the CA bundle (or any other generally applicable, relevant content)
hcp vault-secrets secrets delete ca_pem --app phoenix-meta >/dev/null 2>&1 # for idempotence
hcp vault-secrets secrets create ca_pem --app phoenix-meta --data-file=/path/to/fullchain.crt --quiet >/dev/null # idempotently write the ca_pem secret
# If, and only if, the secret is bigger than 5120kb and thus too big for an HVS secret storage item, and
# assuming the gzipped result is still within this size:
gzip -c /path/to/fullchain.crt | base64 -w0 > /path/to/fullchain.crt.b64 && hcp vault-secrets secrets create ca_pem --app phoenix-meta --data-file=/path/to/fullchain.crt.b64
- For the certs/private keys, iterate over each application's pipeline secrets using scripted flow control, such as a bash for loop, idempotently deleting and reloading them as needed.
# Delete
num_secrets=$(hcp vault-secrets secrets list --app ${image} --format pretty | grep ^Secret | wc -l | awk '{print $1}')
if [[ ${num_secrets} -gt 0 ]]
then
for secret in $(hcp vault-secrets secrets list --app ${image} --format json | jq -r '.[].name' | egrep 'key_|cert_' | tr '\012' ' ')
do
hcp vault-secrets secrets delete --app ${image} ${secret} >/dev/null 2>&1
rCode=${?}
if [[ ${rCode} -gt 0 ]]
then
echo "error" # as needed
exit ${rCode}
fi
done
else
echo "nothing to do for ${image}"
fi
# Load - instantiate array of local TLS files in scope; use similar bash for loop as above.
# Run this twice per node as needed, once each for the cert and key files found for the FQDN of the app server nodes.
echo -n ${secret_string} | hcp vault-secrets secrets create --app ${image} ${secret_name} --data-file=- --quiet >/dev/null # see below for HVS secret name caveats here
# Logout
hcp auth logout
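Where the CA bundle must be readable from more than one HCP project (point 2 above), the loading script can repeat the profile/login/create sequence per project before logging out. A minimal sketch, assuming a hypothetical PROJECT_IDS list and credentials scoped to write to every project in it, follows.
# Iterate the HCP projects which each need a copy of the CA bundle.
for project_id in ${PROJECT_IDS}
do
  hcp profile set project_id "${project_id}" || exit 1
  hcp vault-secrets secrets delete ca_pem --app phoenix-meta >/dev/null 2>&1 # for idempotence
  hcp vault-secrets secrets create ca_pem --app phoenix-meta --data-file=/path/to/fullchain.crt --quiet >/dev/null
  rCode=${?}
  if [[ ${rCode} -gt 0 ]]
  then
    echo "failed to write ca_pem to project ${project_id}" >&2
    exit ${rCode}
  fi
done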
Note
If you have a secret which is bigger than the 5120kb limit HVS applies, it might be possible to fit the secret in by compressing it. In this case, we recommend ensuring that the requisite decompression logic exists in the machine image or configuration management code so that the nascent VMs which need the secret have that capability; a sketch of the consumer side follows this note.
For the example in this document, if you have a CA bundle with an intermediate certificate, the bundle may be larger than 5120kb. In this case, use gzip as commented in the example code above.
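For completeness, the consumer side of this compression workflow might look like the following minimal sketch, assuming the secret was gzipped and base64-encoded as in the Step 1 example; the paths are illustrative only.
# Retrieve the compressed, base64-encoded CA bundle and restore the original PEM file.
hcp vault-secrets secrets open --app phoenix-meta ca_pem -o /tmp/ca.pem.b64 || exit 1
base64 -d /tmp/ca.pem.b64 | gunzip > /opt/example/ca.pem || exit 1
rm -f /tmp/ca.pem.b64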
Step 2: Build the golden machine image
As identified above, this pattern relies on nascent cloud VMs booting with the hcp CLI tool available. As such, we recommend baking it into your golden image so that all phoenix images build with the binary installed and ready for use.
We recognise that each customer will build their golden image uniquely, based on organizational policy and requirements. The recommended approach is to include the commands from the official hcp installation document in your automated build pipeline. The document covers all supported operating systems.
Specifically, we highlight the use of package management utilities to install hcp, as sketched below.
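For example, on a Debian/Ubuntu-based golden image build, the Packer shell provisioner could run something like the following. This is a minimal sketch assuming the standard HashiCorp apt repository; confirm the exact commands against the official installation document for your distribution and architecture.
# Add the HashiCorp apt repository signing key and source, then install the hcp CLI.
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt-get update && sudo apt-get install -y hcp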
Step 3: Build the phoenix machine image to use the HCP command line tool
Further to Step 2, we anticipate that application teams will have access to Packer (or equivalent) in order to build their phoenix images from the organizational golden image.
If the reader is from an application team and is unable to influence the installation of the hcp binary into the golden image, but can influence the phoenix image, we recommend the following.
- Follow the installation instructions in Step 2 but apply the installation to the phoenix build.
- Contact the owners of the golden image and formally request that the binary be included as this allows multiple teams to use the pattern and benefit from it.
Once built, the phoenix image will have the hcp binary installed.
We recommend that customers bake code into their phoenix image which makes use of the hcp CLI tool on boot to access the relevant secrets in HCP Vault Secrets. Using configuration management tools (such as Ansible, Chef or Puppet, together with cloud-init) to do this, particularly at scale, is a good idea.
While any boot-time operations that can be baked into the image should be (to reduce application boot time), the following commands should be included in your build so that they run post-boot. We do not recommend running them from Packer during the build process, as this would bake secrets from HVS into the machine image, which represents a security risk.
Whether you use bash scripting to have a Terraform run call a boot-time script that configures instances as they come up, or an enterprise configuration management tool, configure the setup to use the hcp CLI tool as follows in order to access pipeline secrets in HVS. The example code accesses the TLS material for the application cluster.
Arrange for the required environment variables to be instantiated in each application node as below. We recommend instantiating these using an HCP Terraform or Terraform Enterprise workspace variable set, but however this is done, it is a requirement that the infrastructure deployment injects the values at deploy time. The values should be scoped so that the application nodes only have access to specific secrets in HVS, so the client ID and secret values will be different from the values used above. We use nominal bash environment variable instantiation in the code block below to illustrate that the booting VM runs code which accesses these values.
export HCP_PROJECT_ID=${HCP_PROJECT_ID}
export HCP_ORGANIZATION_ID=${HCP_ORGANIZATION_ID}
export HCP_CLIENT_ID=${HCP_CLIENT_ID}
export HCP_CLIENT_SECRET=${HCP_CLIENT_SECRET}
Set up the hcp CLI using the following commands in your automation script.
hcp profile set organization_id "${HCP_ORGANIZATION_ID}"
hcp profile set project_id "${HCP_PROJECT_ID}"
hcp auth login --client-id="${HCP_CLIENT_ID}" --client-secret="${HCP_CLIENT_SECRET}"
Access the CA bundle file ca_pem from the common phoenix-meta HVS application using these commands.
hcp vault-secrets secrets open --app phoenix-meta ca_pem -o /opt/example/ca.pem # check for errors
# base64 -d and gunzip if gzipped above
Access the TLS certificate(s) and private key(s), or other pipeline secrets, using the commands below to write the secrets into the correct place in the filesystem.
image=phoenix-appN
hcp vault-secrets secrets open --app ${image} eu_north_1_node1_appN_example_com_cert_0 -o /opt/example/cert.pem
hcp vault-secrets secrets open --app ${image} eu_north_1_node0_appN_example_com_key_0 -o /opt/example/key.pem
Your pipeline automation must then confirm that the files as written are not empty and that their content conforms to what is expected, in order to avoid failures at application start-up.
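A minimal sketch of such a check, assuming the illustrative paths above and that the openssl utility is present on the image, might look like the following.
# Fail fast if any retrieved secret file is missing or empty.
for f in /opt/example/ca.pem /opt/example/cert.pem /opt/example/key.pem
do
  if [[ ! -s ${f} ]]
  then
    echo "secret file ${f} is missing or empty" >&2
    exit 1
  fi
done
# Confirm the certificate parses and, for RSA key material, that the key pair matches.
openssl x509 -noout -in /opt/example/cert.pem || exit 1
cert_mod=$(openssl x509 -noout -modulus -in /opt/example/cert.pem)
key_mod=$(openssl rsa -noout -modulus -in /opt/example/key.pem)
if [[ "${cert_mod}" != "${key_mod}" ]]
then
  echo "certificate and private key do not match" >&2
  exit 1
fi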
Important aspects
Consider the following in your planning.
- Use chmod and chown as needed to ensure that files written to disk using this method are permissioned securely.
- Use automated means to ensure that the directory structure referenced in the path passed to the hcp vault-secrets secrets open commands exists prior to calling them.
- Ensure that the directory structure is suitably permissioned so that your written secrets stay secure on the application servers.
- Process the return codes of all chmod, chown, mkdir and hcp commands to ensure that error handling is performed correctly. Do this in all environments equally to adhere to 12factor. A sketch combining these points follows this list.
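The following is a minimal sketch combining these points, assuming the illustrative /opt/example path used earlier and a nominal appsvc service account; adjust paths, ownership and modes to your own standards.
# Prepare a securely permissioned directory before any secrets are written.
mkdir -p /opt/example || exit 1
chown appsvc:appsvc /opt/example || exit 1
chmod 0750 /opt/example || exit 1
# Retrieve a secret, then lock down the resulting file, checking every return code.
hcp vault-secrets secrets open --app phoenix-appN eu_north_1_node1_appN_example_com_cert_0 -o /opt/example/cert.pem
rCode=${?}
if [[ ${rCode} -gt 0 ]]
then
  echo "failed to read secret from HVS" >&2
  exit ${rCode}
fi
chown appsvc:appsvc /opt/example/cert.pem || exit 1
chmod 0640 /opt/example/cert.pem || exit 1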