Integrate Vault PKI with Vault Secrets Operator (VSO) on OpenShift
Authors: Andrew Thielen, Jan Repnak and Chris Zembower
This guide explains how to deploy the Vault Secrets Operator (VSO) to automate certificate management for workloads running on OpenShift, providing a Kubernetes-native approach to PKI certificate lifecycle management.
Background and best practices
Applications running in Kubernetes often require TLS certificates to secure communications. These certificates expose services through Ingress or OpenShift Routes, enable internal service-to-service encryption (mTLS), or establish trust with external systems that require client authentication.
While you can automate certificate renewal by integrating your applications directly with the Vault API or creating custom workflows for authentication, retrieval, and storage, these approaches are often complex and difficult to scale across large, containerized environments.
Vault Secrets Operator (VSO) offers a Kubernetes-native way to manage the certificate lifecycle. VSO handles Vault authentication, token management, and certificate operations behind standard Kubernetes resources. For public key infrastructure (PKI) use cases, VSO issues, renews, and formats certificates as Kubernetes TLS Secret objects. You can mount these secrets into pods, or reference them from ingress resources.
Red Hat certifies VSO for OpenShift, and it provides a secure, production-ready solution for teams that want to standardize TLS certificate management in OpenShift-based environments.
Compared to alternatives like cert-manager, the Vault CSI Provider, Vault Agent Injector, or the External Secrets Operator, Vault Secrets Operator (VSO) delivers a Kubernetes-native experience with distinct advantages for TLS certificate management in OpenShift:
- Declarative and GitOps-aligned: VSO uses custom resource definitions (CRDs) to define secret behavior declaratively, making it well-suited for GitOps workflows and repeatable infrastructure-as-code pipelines.
- Fine-grained security and tenancy: VSO supports namespace isolation and least-privilege access, helping teams align with security best practices for multi-tenant environments.
- OpenShift-ready: VSO complies with OpenShift Security Context Constraints (SCCs) out of the box, avoiding privileged containers or elevated permissions that can block adoption in restricted environments.
- Broad utility: While optimized for TLS secret delivery, VSO also supports other Vault secret types, making it a flexible and strong foundation for managing Vault secrets across applications in Kubernetes and OpenShift.
- First-class Vault integration: HashiCorp builds and maintains VSO, providing enhanced support for Vault-native features like time-to-live (TTL) based dynamic secret rotation, flexible authentication methods, and automated Vault token management.
- Enterprise support: VSO is fully supported by HashiCorp when used with Vault Enterprise or HCP Vault Dedicated, ensuring a production-grade, vendor-backed solution for mission-critical workloads.
Validated architecture
The PKI architecture defined in the HashiCorp Operating Guide for Vault outlines automated certificate management using the Vault Agent for VM-based workflows. This VSO pattern builds on the same concept with a Kubernetes-native approach that integrates directly with Vault’s PKI secrets engine to sync certificates into standard Kubernetes TLS Secret resources. VSO manages Vault authentication and token lifecycle, automatically renews certificates before they expire, and updates the TLS secret so your workloads stay secure without interruption.
This workflow allows applications to consume TLS certificates using native Kubernetes constructs and without being Vault-aware, reducing the need for custom automation or direct API interaction.
The end-to-end certificate workflow with VSO includes the following steps:
- Initial Vault PKI configuration: A Vault administrator configures the required authentication method, policy, and PKI role to allow certificate issuance through Vault.
- Install Vault Secrets Operator: Deploy VSO to your OpenShift cluster using OperatorHub or a Helm chart.
- Define the Vault connection: Create a VaultConnection custom resource to establish secure connectivity between VSO and your Vault server.
- Configure a Kubernetes service account: Create the Kubernetes service account that VSO uses to authenticate to the Vault server.
- Request a certificate: Create a VaultPKISecret custom resource that specifies the desired certificate attributes, such as common name, time-to-live (TTL), and PKI role. VSO retrieves the certificate from Vault and creates a standard Kubernetes TLS secret that includes the certificate and private key.
- Application consumption: Applications consume the TLS secret using native Kubernetes mechanisms such as volume mounts or environment variables.
- Automatic renewal: VSO monitors certificate expiration and renews certificates automatically. VSO syncs the updated Kubernetes secret in place, and can optionally trigger a rolling restart to ensure applications pick up the new certificate.
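The optional rolling restart in the last step is configured on the secret resource itself. As a sketch, VSO custom resources such as VaultPKISecret accept a rolloutRestartTargets list; the Deployment name below is a placeholder for your own workload:

```yaml
# Hypothetical fragment of a VaultPKISecret spec. When VSO rotates the
# certificate, it triggers a rollout restart of the listed workloads so
# they pick up the new TLS secret. "my-app" is a placeholder name.
spec:
  rolloutRestartTargets:
    - kind: Deployment
      name: my-app
```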
People and process considerations
Before implementing the PKI workflow with VSO, it is important to understand the core personas defined in HashiCorp Validated Designs (HVD). These personas map to both the producers and consumers involved in PKI certificate management, reflecting a shared responsibility model.
PKI producers
PKI producers establish and manage the central PKI environment, ensuring operational reliability and policy compliance.
Vault Platform team
- Deploy and maintain Vault Enterprise or HCP Vault Dedicated clusters.
- Configure global Vault policies and access controls.
- Establish and maintain PKI root and intermediate CAs, with appropriate rotation and key management.
OpenShift cluster administrators
- Provision and manage the OpenShift cluster infrastructure.
- Install and maintain system-wide operators, including VSO.
- Implement cluster-level RBAC and security controls.
PKI consumers
PKI consumers are responsible for interacting with the PKI infrastructure to request, retrieve, and manage certificates for their applications and services.
Application team
- Request and manage certificates for their applications and services.
- Handle configuration and access controls at the OpenShift project level to ensure secure operation.
Implementation guide
The guide assumes that you have set up a Vault Enterprise cluster for PKI issuance by following the best practices defined in the HashiCorp Solution Design Guide for Vault. It also presumes that you have implemented the guidance in HashiCorp Operating Guide for Vault, including engine configurations, tenant isolation, CA hierarchy, authentication methods, ACL policies, and application-specific PKI role configurations. Please contact your account team if you require access to HashiCorp Validated Designs.
Install VSO on OpenShift
Responsibility: OpenShift Cluster Administrators
You can install VSO on OpenShift using either the Red Hat OperatorHub or a Helm chart. The official installation guide supports and documents both options.
Red Hat certifies VSO and makes it available through OperatorHub, which uses Universal Base Image (UBI) container images from Red Hat's certified registry by default. We recommend installing through OperatorHub for OpenShift environments, as it enables seamless integration with other OpenShift-native features and operators while ensuring compliance with platform standards.
Installing VSO with Helm remains a valid option for users who require more flexibility, such as customizing configuration at install time or deploying to architectures like ARM64 that the UBI-based VSO images may not yet support.
While this guide supports both installation methods, it uses the OperatorHub approach because it provides better integration with OpenShift, along with advantages in automation, security, and deployment consistency.
Discover VSO in OperatorHub: Use the describe command to inspect the operator's metadata, available channels, and versions. This helps you verify details like supported update channels (for example, stable).
oc describe packagemanifests \
-n openshift-marketplace \
vault-secrets-operator
Deploy VSO using a subscription: To initiate installation, create a Subscription resource. Specify the desired operator version in the startingCSV field. This instructs the Operator Lifecycle Manager (OLM) to deploy VSO in the openshift-operators namespace using the stable channel.
oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: vault-secrets-operator
namespace: openshift-operators
spec:
channel: stable
name: vault-secrets-operator
source: certified-operators
sourceNamespace: openshift-marketplace
startingCSV: vault-secrets-operator.<current-version>
installPlanApproval: Manual
EOF
Approve the install plan: Because the subscription sets installPlanApproval to Manual, you must approve the installation plan before OLM deploys VSO. List the pending installation plans and describe the plan to confirm it belongs to VSO. Replace <suffix> with the actual name of the install plan.
oc get installplans -n openshift-operators
oc describe installplan install-<suffix> -n openshift-operators
oc patch installplan install-<suffix> -n openshift-operators \
--type merge --patch '{"spec":{"approved":true}}'
Monitor the installation status: After creating the subscription, monitor the ClusterServiceVersion (CSV) resource to track installation progress. A Succeeded status indicates that the operator installation completed. Also ensure that the VSO pods are up and running.
oc get csv -n openshift-operators
oc get pods -n openshift-operators
Authentication
When integrating Vault with OpenShift, VSO must authenticate securely to the Vault cluster. We recommend using either the Kubernetes or the JWT auth method for this use case.
Kubernetes authentication
The Kubernetes auth method offers the tightest integration with Kubernetes for Vault clients. It allows multiple workloads from a single Kubernetes or OpenShift cluster to authenticate uniquely using their service account tokens. During the login process, Vault validates service account tokens by calling the OpenShift cluster's TokenReview API. You can map the service account name and namespace encoded in these tokens to specific Vault policies for fine-grained access control.
Since the Kubernetes auth method uses the TokenReview API, you must consider a few things. First, Vault requires network access to the target OpenShift cluster to invoke the API. You must also ensure the Vault cluster has a valid service account token that grants it access to perform the TokenReview. Several options exist for configuring this required access.
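For reference, a minimal Kubernetes auth method configuration looks like the following sketch. The API host, token reviewer JWT path, and CA file are placeholders, not values from this guide; consult the Kubernetes auth method documentation for the token reviewer options that fit your environment.

```shell
# Sketch only: enable the Kubernetes auth method and point it at the
# OpenShift API server. Angle-bracket values and file paths are placeholders.
vault auth enable kubernetes

vault write auth/kubernetes/config \
  kubernetes_host="https://<openshift-api-host>:6443" \
  token_reviewer_jwt="$(cat /path/to/token-reviewer-jwt)" \
  kubernetes_ca_cert=@/tmp/openshift-ca.pem
```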
JWT authentication
You can also use the JWT auth method to authenticate OpenShift workloads using service account tokens. You can configure this method with either the OpenShift cluster's OIDC discovery URL (if reachable) or its public key to facilitate validation. On login, Vault verifies the service account token like any other JWT in a standard OIDC authentication flow. Since the TokenReview API is not utilized, the system requires no API permissions, and network access is only necessary if you use the discovery URL. However, this also means that Vault cannot verify if you have invalidated a service account token, so use short time-to-live (TTL) values.
An advantage of the JWT auth method is flexibility. You can add multiple public keys to a single mount to enable authentication for multiple OpenShift clusters, thus allowing the sharing of identities, roles, and policies. In addition, you can map workloads to Vault roles and policies using any claim from the token, not just service account name, namespace, or unique identifier (UID).
However, JWT auth comes with operational overhead. If you use static public keys, you must manually rotate them, which may not be suitable for dynamic or large-scale environments. You can also use JSON Web Key Set (JWKS) or OIDC discovery to automate key management, as shown in this pattern, but this introduces a network dependency with the OpenShift key issuer.
Note: When you configure multiple OpenShift cluster public keys in a single JWT auth mount, Vault treats service accounts with identical names across different clusters as the same client identity. The user_claim parameter in the JWT role determines how Vault identifies and counts clients. This consolidation can be beneficial for policy management but may create security concerns if different clusters require distinct access controls. To maintain separate identities per cluster, create a dedicated JWT auth mount for each cluster.
Choosing an auth method
For large-scale or centralized Vault deployments, HashiCorp recommends the JWT auth method because it avoids TokenReview API dependencies, supports multi-cluster key management, and scales without additional API permissions. The Kubernetes auth method remains a valid choice when Vault runs inside the same cluster as VSO and can rely on local TokenReview access.
Configure JWT authentication
Responsibility: Vault Platform Team and OpenShift Cluster Administrators (Shared)
OpenShift configuration
Allow unauthenticated discovery of the token issuer: Grant Vault permission to access the Kubernetes service account issuer configuration without requiring authentication to OpenShift. This is necessary for validating service account tokens using OpenShift's JWKS endpoint.
oc create clusterrolebinding \
service-account-issuer-discovery-unauthenticated \
--clusterrole=system:service-account-issuer-discovery \
--group=system:unauthenticated
This binding allows unauthenticated access to the OIDC discovery endpoints (/.well-known/openid-configuration and /openid/v1/jwks), which poses minimal security risk, as these endpoints only expose public cryptographic keys used for token validation. The configuration eliminates the need for Vault to authenticate against the Kubernetes API server to retrieve the public keys, simplifying the setup while maintaining security through cryptographic verification.
Vault configuration
Configure JWT auth: To enable Vault to validate Kubernetes service account tokens, configure the JWT auth method with the JWKS endpoint and the Kubernetes API server’s CA certificate. Note that the CA certificate may not be necessary if you already have a trusted CA in place, as is often the case with managed OpenShift platforms.
Note: Vault provides three main configuration options for JWT validation:
- oidc_discovery_url uses OIDC discovery to automatically fetch keys from the provider's well-known configuration endpoint.
- jwks_url points directly to a JWKS endpoint for fetching public keys.
- jwt_validation_pubkeys stores static public keys directly in the Vault configuration.
For this pattern we choose the JWKS endpoint, but you can find detailed configuration options and examples in the JWT/OIDC auth method API documentation.
First, retrieve the JWKS URL from the Kubernetes API.
export JWKS_URL=$(oc get --raw /.well-known/openid-configuration | jq -r .jwks_uri)
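If you are curious what that command parses, the discovery document has the following shape. This sketch uses a placeholder document written locally rather than a live cluster, so the URLs are illustrative only:

```shell
# Illustrative only: apply the same jq filter to a sample OIDC discovery
# document. Both URLs below are placeholders, not real endpoints.
cat > /tmp/openid-configuration.json <<'EOF'
{
  "issuer": "https://kubernetes.default.svc",
  "jwks_uri": "https://api.cluster.example.com:6443/openid/v1/jwks"
}
EOF
jq -r .jwks_uri /tmp/openid-configuration.json
```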
(Optional) Next, if required, download the Kubernetes API server’s CA certificate, so Vault can trust the JWKS endpoint.
openssl s_client -showcerts -connect <openshift-api-host>:6443 </dev/null 2>/dev/null | \
awk '/-----BEGIN CERTIFICATE-----/{p++} p==2' | \
openssl x509 -outform PEM > /tmp/openshift-ca.pem
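The awk filter above keeps only the second certificate in the chain that s_client prints. If you want to sanity-check that behavior without a live endpoint, the following self-contained sketch generates two throwaway certificates and runs the same filter; all names are placeholders:

```shell
# Generate two self-signed certificates to stand in for a served chain,
# concatenate them, and confirm the filter extracts only the second one.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/leaf.key \
  -subj "/CN=leaf.example.com" -days 1 -out /tmp/leaf.pem 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -subj "/CN=demo-ca.example.com" -days 1 -out /tmp/demo-ca.pem 2>/dev/null

# Same filter as above: start printing at the second BEGIN CERTIFICATE line.
cat /tmp/leaf.pem /tmp/demo-ca.pem | \
  awk '/-----BEGIN CERTIFICATE-----/{p++} p==2' | \
  openssl x509 -noout -subject
```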
To complete the configuration, enable the JWT auth method in Vault with the JWKS URL and the trusted CA certificate. Enable the JWT auth method in the target Vault namespace, for example, admin/tenant-1, as recommended in the HashiCorp Validated Designs. This allows Vault to validate Kubernetes service account tokens for authentication. After enabling the method, retrieve and store the auth mount's accessor ID. You'll use this value later when creating an entity alias to link the service account to an existing Vault identity.
vault auth enable -namespace="admin/tenant-1" jwt
vault auth list -namespace="admin/tenant-1" -format=json \
| jq -r '.["jwt/"].accessor' > /tmp/accessor_jwt.txt
vault write -namespace="admin/tenant-1" auth/jwt/config \
jwks_url="$JWKS_URL" \
jwks_ca_pem=@/tmp/openshift-ca.pem
This configuration allows Vault to validate JWT tokens from Kubernetes service accounts by verifying them against the cluster's public keys.
Note: Monitor the CA certificate used for the JWKS endpoint for changes, and update the Vault JWT auth configuration if you rotate or renew the CA certificate to maintain proper validation of service account tokens.
Create a Vault entity: This guide assumes that a Vault administrator pre-provisions entities for machine authentication, as outlined in the "Access Controls" section of the PKI validated architecture. For reference, the example below shows how to manually create an entity with custom metadata. You can use this metadata to define organizational ACL policies.
vault write -format=json -namespace="admin/tenant-1" identity/entity \
name="my-app" \
metadata=AppName="my-app" \
metadata=BusinessUnitName="tenant-1" \
metadata=TeamName="team-a" \
metadata=EmailDistributionList="team-a@example.com" \
metadata=TLSDomain="app.tenant-1.example.com" \
| jq -r ".data.id" > /tmp/entity_id.txt
Create a Vault entity alias: Create an entity alias to map the Kubernetes service account identity to a Vault entity. This enables Vault to associate the JWT with an internal identity, make use of entity metadata and apply these to ACL policy templates.
vault write -namespace="admin/tenant-1" identity/entity-alias \
name="system:serviceaccount:app-1:my-app" \
canonical_id=$(cat /tmp/entity_id.txt) \
mount_accessor=$(cat /tmp/accessor_jwt.txt)
Create a JWT role and PKI policy for Kubernetes authentication: The following example uses an ACL policy template that references the Vault identity metadata created in a previous step. This ensures that only applications associated with a specific team, identified by the TeamName attribute, can issue certificates using the corresponding PKI role.
tee /tmp/pki-policy.hcl <<EOF
path "pki/issue/{{identity.entity.metadata.TeamName}}" {
capabilities = ["update"]
}
EOF
vault policy write -namespace="admin/tenant-1" pki /tmp/pki-policy.hcl
This policy grants update access to a dynamic PKI role path that matches the authenticated entity’s metadata. For example, if an application has an entity metadata tag TeamName="team-a", this policy grants access to the PKI endpoint pki/issue/team-a.
For more information on policy syntax and templating, refer to the Vault ACL policy documentation.
To let a Kubernetes workload use this policy, create a Vault role for JWT-based authentication. This maps a specific service account to the pki policy, using key parameters:
- bound_audiences ensures Vault accepts only tokens issued for the specified audience, typically matching the Kubernetes API endpoint.
- user_claim defines which JWT claim to use as the identity (for example, sub for subject); this affects client counting in Vault.
- bound_subject binds the role to a specific service account, increasing security by requiring the JWT’s sub claim to match.
- policies is a list of Vault policies to apply after successful authentication.
- ttl specifies the Vault token’s time-to-live.
vault write -namespace="admin/tenant-1" auth/jwt/role/my-app \
role_type="jwt" \
bound_audiences="https://kubernetes.default.svc" \
user_claim="sub" \
bound_subject="system:serviceaccount:app-1:my-app" \
policies="pki" \
ttl="1h"
The Vault JWT role restricts access to a specific Kubernetes service account and namespace, and after authentication, applies the pki policy.
See the Vault JWT documentation, and specific Kubernetes integration guidance for more details.
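To see the claims that bound_subject and user_claim match against, you can decode a token's payload segment. The sketch below builds a sample, unsigned token with illustrative values so it runs anywhere; with a real cluster you would decode a projected service account token instead:

```shell
# Build a sample payload shaped like a Kubernetes service account token.
# All values are illustrative, not taken from a real cluster.
PAYLOAD='{"iss":"https://kubernetes.default.svc","sub":"system:serviceaccount:app-1:my-app","aud":["https://kubernetes.default.svc"]}'
SEG=$(printf '%s' "$PAYLOAD" | base64 | tr -d '=\n' | tr '+/' '-_')
JWT="eyJhbGciOiJub25lIn0.${SEG}."

# Decode the payload (second dot-separated segment), re-padding the
# base64url encoding so base64 -d accepts it.
P=$(printf '%s' "$JWT" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#P} % 4 )) -ne 0 ]; do P="${P}="; done
printf '%s' "$P" | base64 -d | jq -r .sub
```

The printed sub claim is what bound_subject compares against, and what Vault uses as the client identity when user_claim is set to sub.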
Define the default Vault connection for VSO
Responsibility: OpenShift Cluster Administrators
To define how VSO can reach the Vault servers, configure a default VaultConnection resource with TLS settings and the trusted CA certificate. Note that the CA certificate may not be necessary if you already have a trusted CA in place or you use HCP Vault Dedicated.
(Optional) Retrieve the Vault server's CA certificate: Download the CA certificate used by the Vault servers. This allows Kubernetes workloads and VSO to establish a trusted TLS connection to Vault.
openssl s_client -showcerts -connect <vault-api-host>:8200 </dev/null 2>/dev/null | \
awk '/-----BEGIN CERTIFICATE-----/{p++} p==2' | \
openssl x509 -outform PEM > /tmp/vault-ca.pem
Create a Kubernetes secret for the Vault CA: Store the Vault CA certificate in a Kubernetes Secret within the relevant namespace. VSO references this secret for TLS verification.
oc create secret generic vault-cacert \
--namespace=openshift-operators \
--from-literal=ca.crt="$(cat /tmp/vault-ca.pem)"
Configure the VaultConnection resource: Define a VaultConnection custom resource to specify the Vault address, TLS server name, and the Kubernetes secret containing the CA certificate. This resource tells VSO how to connect securely to the Vault servers.
Note: The resource name default makes this the cluster-wide default connection configuration, eliminating the need to specify connection details in each VSO resource unless you need to override specific settings for particular workloads.
oc apply -f - <<EOF
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultConnection
metadata:
name: default
namespace: openshift-operators
spec:
address: "https://<vault-address>:8200"
tlsServerName: "<vault-address>"
caCertSecretRef: "vault-cacert"
skipTLSVerify: false
EOF
Request a certificate
Responsibility: Application Team
Prepare the service account: Create an example project for your application and define the Kubernetes service account that VSO uses to authenticate to Vault.
oc new-project app-1 \
--display-name="Application 1" \
--description="My example application"
cat <<EOF | tee /tmp/service-account-my-app.yml
apiVersion: v1
kind: ServiceAccount
metadata:
name: my-app
namespace: app-1
EOF
oc apply -f /tmp/service-account-my-app.yml
Define Vault authentication: Create a VaultAuth custom resource to configure VSO to authenticate to Vault with the JWT method, using a Kubernetes service account token. This resource references the service account and the Vault JWT auth role created earlier.
cat <<EOF | tee /tmp/vault-auth.yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: vault-auth
namespace: app-1
spec:
namespace: admin/tenant-1
method: jwt
mount: jwt
jwt:
role: my-app
serviceAccount: my-app
EOF
oc apply -f /tmp/vault-auth.yaml
Request a TLS certificate: Use a VaultPKISecret custom resource to request a certificate from Vault’s PKI secrets engine and store it as a Kubernetes TLS secret. This enables your application to consume certificates securely without manual intervention.
The following configuration requests a certificate with:
- Common name foo.tenant-1.example.com
- Vault PKI role team-a at mount path pki
- Output in Privacy-Enhanced Mail (PEM) format
- A resulting Kubernetes TLS secret named secretpki in the same namespace
- Automated renewal using TTL and expiry offset
- Optional cleanup on deletion
cat <<EOF | tee /tmp/pki-secret.yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultPKISecret
metadata:
name: vault-pki-app
namespace: app-1
spec:
namespace: admin/tenant-1
mount: pki
role: team-a
destination:
name: secretpki
type: kubernetes.io/tls
create: true
commonName: foo.tenant-1.example.com
format: pem
revoke: false
clear: true
expiryOffset: 10s
ttl: 14d
vaultAuthRef: vault-auth
EOF
oc apply -f /tmp/pki-secret.yaml
Inspect the status of the VaultPKISecret resource to verify that Vault issued and stored the certificate, and confirm that VSO created the Kubernetes TLS secret secretpki.
oc describe vaultpkisecret.secrets.hashicorp.com/vault-pki-app -n app-1
oc get secrets secretpki -n app-1 -o yaml
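A common follow-up check is that the certificate and private key in the secret actually pair. The sketch below generates a sample key and certificate locally so it can run anywhere; against the real secret you would first extract tls.crt and tls.key from secretpki (for example with oc get secret and base64 -d) and run the same comparison on those files:

```shell
# Sketch: verify a certificate and key match by comparing their public keys.
# The files are generated locally as stand-ins for the secret's contents.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/tls.key \
  -subj "/CN=foo.tenant-1.example.com" -days 1 -out /tmp/tls.crt 2>/dev/null

CRT_PUB=$(openssl x509 -in /tmp/tls.crt -noout -pubkey)
KEY_PUB=$(openssl pkey -in /tmp/tls.key -pubout 2>/dev/null)
[ "$CRT_PUB" = "$KEY_PUB" ] && echo "certificate and key match"
```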
Expose an application with TLS-secured Ingress
Responsibility: Application Team
After VSO issues a TLS certificate, create a Kubernetes Ingress that terminates HTTPS and references the VSO-managed kubernetes.io/tls secret. We use the Ingress here as an example, but any application that can consume Kubernetes TLS secrets works.
Note: On OpenShift, the ingress controller automatically converts the Ingress object to an OpenShift Route, so you still receive a native Route without embedding the certificate or key in its manifest. For this pattern, we choose an Ingress resource because Routes cannot natively read Kubernetes TLS secrets today and instead require inline PEM data in the route manifest.
Define the Ingress resource: Create a Kubernetes Ingress manifest that routes external traffic to your application. The following example:
- Routes traffic for foo.tenant-1.example.com
- References the secretpki Kubernetes TLS secret for HTTPS termination
- Routes requests to the my-app service on port 5678
- Includes OpenShift-specific annotations for edge TLS termination and redirecting HTTP to HTTPS
cat <<EOF | tee /tmp/my-app-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-app-ingress
namespace: app-1
annotations:
route.openshift.io/termination: edge
route.openshift.io/insecureEdgeTerminationPolicy: Redirect
spec:
tls:
- hosts:
- foo.tenant-1.example.com
secretName: secretpki
rules:
- host: foo.tenant-1.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-app
port:
number: 5678
EOF
oc apply -f /tmp/my-app-ingress.yaml
This creates the Ingress resource and configures it to use the TLS certificate stored in the secretpki secret.
Verify the Ingress configuration: Check the status and details of the Ingress to confirm it references the correct TLS secret and backend service.
oc describe ingress my-app-ingress -n app-1
Validate the OpenShift Route: On OpenShift, the system translates Ingress resources into Route resources. Verify that OpenShift created the route and that it functions as expected. Replace <suffix> in the describe command with the actual route name returned by oc get route.
oc get route -n app-1
oc describe route my-app-ingress-<suffix> -n app-1
Verify the certificate: Use openssl to connect to the route and display the served certificate, confirming it matches the expected issuer and subject.
openssl s_client -connect foo.tenant-1.example.com:443 -servername foo.tenant-1.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
Operational guidance
VSO runs as a controller in the cluster and you must manage it like other OpenShift-native operators. HashiCorp recommends the following when preparing for operational readiness:
- Telemetry: VSO exposes metrics at the /metrics endpoint in Prometheus format. Collect the default controller-runtime metrics and any additional VSO metrics relevant to your environment. Refer to the telemetry documentation for details on available metrics and configuration.
- Operator updates: Use Operator Lifecycle Manager (OLM) channels to manage VSO version upgrades. Apply updates in a testing environment first, validate functionality, and then promote them to production.
- Multiple OpenShift clusters: Deploy VSO in each OpenShift cluster and connect to a central Vault Enterprise or HCP Vault Dedicated instance. Use the JWT authentication method to establish decoupled, scalable trust between OpenShift clusters and Vault.