
Vault Installation to Amazon Elastic Kubernetes Service via Helm


Amazon Elastic Kubernetes Service (EKS) can run and scale Vault in the Amazon Web Services (AWS) cloud or on-premises. Creating a Kubernetes cluster and launching Vault via the Helm chart can all be accomplished from the command line.

In this tutorial, you create a cluster in AWS, deploy a MySQL server, install Vault in high-availability (HA) mode via the Helm chart and then configure the authentication between Vault and the cluster. Then you deploy a web application with deployment annotations so the application's secrets are installed via the Vault Agent injector service.

Prerequisites

This tutorial requires an AWS account, the AWS command-line interface (CLI), the Amazon EKS CLI, the Kubernetes CLI, and the Helm CLI.

First, create an AWS account.

Next, install AWS CLI, Amazon EKS CLI, kubectl CLI and helm CLI.

On macOS, install the tools with Homebrew.

$ brew install awscli
$ brew install eksctl
$ brew install kubernetes-cli
$ brew install helm

On Windows, install the tools with Chocolatey.

$ choco install awscli
$ choco install eksctl
$ choco install kubernetes-cli
$ choco install kubernetes-helm

Next, configure the aws CLI with credentials.

$ aws configure

This command prompts you to enter an AWS access key ID, AWS secret access key, and default region name.
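
For reference, the interactive prompts look like the following. The values shown here are placeholders, not working credentials; use your own access key pair and preferred region.

$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-west-1
Default output format [None]: json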

Tip: The above example uses IAM user authentication. You can use any authentication method described in the AWS provider documentation.

Next, create an EC2 key pair so that you can SSH into the created nodes.

$ aws ec2 create-key-pair --key-name learn-vault
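
The create-key-pair command returns the key pair as JSON, including the private key in the KeyMaterial field. If you want to keep that private key for SSH access to the nodes later, one option is to extract it with jq when you create the pair and restrict the file permissions. This is a sketch to run in place of the command above (key pair names must be unique), not a required step.

$ aws ec2 create-key-pair --key-name learn-vault \
    | jq -r '.KeyMaterial' > learn-vault.pem
$ chmod 400 learn-vault.pem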

Start cluster

A Vault cluster launched in high-availability mode requires a Kubernetes cluster with three nodes.

Provision with Terraform: An alternative way to manage the lifecycle of the cluster is with Terraform. Learn more in the Provision an EKS Cluster (AWS) tutorial.

  1. Create a three node cluster named learn-vault.

    $ eksctl create cluster \
        --name learn-vault \
        --nodes 3 \
        --with-oidc \
        --ssh-access \
        --ssh-public-key learn-vault \
        --managed
    

    Example output:

    [ℹ]  eksctl version 0.97.0
    [ℹ]  using region us-west-1
    
    ...snip...
    
    [ℹ]  node "ip-192-168-26-181.us-west-1.compute.internal" is ready
    [ℹ]  node "ip-192-168-34-73.us-west-1.compute.internal" is ready
    [ℹ]  node "ip-192-168-35-238.us-west-1.compute.internal" is ready
    [ℹ]  kubectl command should work with "/Users/yoko/.kube/config", try 'kubectl get nodes'
    [✔]  EKS cluster "learn-vault" in "us-west-1" region is ready
    

    The cluster is created, deployed and then health-checked. When the cluster is ready the command modifies the kubectl configuration so that the commands you issue are performed against that cluster.

    Managing multiple clusters: kubectl enables you to manage multiple clusters through its context configuration. Display the available contexts with kubectl config get-contexts and switch to one by name with kubectl config use-context NAME.

  2. Display the nodes of the cluster.

    $ kubectl get nodes
    
    NAME                                           STATUS   ROLES    AGE   VERSION
    ip-192-168-26-181.us-west-1.compute.internal   Ready    <none>   29m   v1.22.6-eks-7d68063
    ip-192-168-34-73.us-west-1.compute.internal    Ready    <none>   29m   v1.22.6-eks-7d68063
    ip-192-168-35-238.us-west-1.compute.internal   Ready    <none>   29m   v1.22.6-eks-7d68063
    

    The cluster is ready.
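
The Managing multiple clusters tip above mentions kubectl contexts. Here is a short example of listing and switching contexts; the context name shown is hypothetical (eksctl derives it from your IAM identity, the cluster name, and the region), so copy the exact name from the get-contexts output.

$ kubectl config get-contexts
$ kubectl config use-context iam-user@learn-vault.us-west-1.eksctl.io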

Install the MySQL Helm chart

MySQL is a fast, reliable, scalable, and easy to use open-source relational database system. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software.

  1. Add the Bitnami Helm repository.

    $ helm repo add bitnami https://charts.bitnami.com/bitnami
    "bitnami" has been added to your repositories
    
  2. Install the latest version of the MySQL Helm chart.

    $ helm install mysql bitnami/mysql
    

    Output:

    NAME: mysql
    LAST DEPLOYED: Thu May 19 10:37:43 2022
    NAMESPACE: default
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    CHART NAME: mysql
    CHART VERSION: 9.0.2
    APP VERSION: 8.0.29
    
    ** Please be patient while the chart is being deployed **
    
    Tip:
    
      Watch the deployment status using the command: kubectl get pods -w --namespace default
    
    Services:
    
      echo Primary: mysql.default.svc.cluster.local:3306
    
    Execute the following to get the administrator credentials:
    
      echo Username: root
      MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
    
    To connect to your database:
    
      1. Run a pod that you can use as a client:
    
          kubectl run mysql-client --rm --tty -i --restart='Never' --image  docker.io/bitnami/mysql:8.0.29-debian-10-r21 --namespace default --env MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD --command -- bash
    
      2. To connect to primary service (read/write):
    
          mysql -h mysql.default.svc.cluster.local -uroot -p"$MYSQL_ROOT_PASSWORD"
    

    By default the MySQL Helm chart deploys a single pod and a service.

  3. Get all the pods within the default namespace.

    $ kubectl get pods
    NAME                                    READY   STATUS    RESTARTS   AGE
    mysql-0                                 1/1     Running   0          2m58s
    

    Wait until the mysql-0 pod is running and ready (1/1).

    The mysql-0 pod runs a MySQL server.

    Demonstration Only: MySQL should be run with additional pods to ensure reliability when used in production. Refer to the MySQL Helm chart to override default parameters.

  4. Get all the services within the default namespace.

    $  kubectl get services
    NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
    kubernetes                 ClusterIP   10.100.0.1       <none>        443/TCP             3h24m
    mysql                      ClusterIP   10.100.68.110    <none>        3306/TCP            15m
    mysql-headless             ClusterIP   None             <none>        3306/TCP            15m
    

    The mysql service directs requests to the mysql-0 pod. Pods within the cluster may address the MySQL server with the address mysql.default.svc.cluster.local.

    The MySQL root password is stored as a Kubernetes secret. This password is required by Vault to create credentials for the application pod deployed later.

  5. Create a variable named ROOT_PASSWORD that stores the mysql root user password.

    $ ROOT_PASSWORD=$(kubectl get secret --namespace default mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
    

    The MySQL server, addressed through the service, is ready.
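
The Demonstration Only note in step 3 points to the MySQL Helm chart parameters for production-style deployments. As one hedged example, the Bitnami chart documents an architecture parameter and a secondary.replicaCount parameter for running a primary with replicas; confirm the current parameter names with helm show values bitnami/mysql before relying on them.

$ helm install mysql bitnami/mysql \
    --set architecture=replication \
    --set secondary.replicaCount=2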

Install the Vault Helm chart

The recommended way to run Vault on Kubernetes is via the Helm chart.

  1. Add the HashiCorp Helm repository.

    $ helm repo add hashicorp https://helm.releases.hashicorp.com
    "hashicorp" has been added to your repositories
    
  2. Update all the repositories to ensure helm is aware of the latest versions.

    $ helm repo update
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "hashicorp" chart repository
    Update Complete. ⎈Happy Helming!⎈
    
  3. Search for all the Vault Helm chart versions.

    $ helm search repo vault --versions
    
    NAME            CHART VERSION   APP VERSION DESCRIPTION
    hashicorp/vault 0.20.0          1.10.3      Official HashiCorp Vault Chart
    hashicorp/vault 0.19.0          1.9.2       Official HashiCorp Vault Chart
    hashicorp/vault 0.18.0          1.9.0       Official HashiCorp Vault Chart
    ## ...
    

    The Vault Helm chart contains all the necessary components to run Vault in several different modes.

    Default behavior: By default, it launches Vault on a single pod in standalone mode with a file storage backend. Enabling high-availability with Integrated Storage requires that you override these defaults.

  4. Install the latest version of the Vault Helm chart in HA mode with integrated storage.

    $ helm install vault hashicorp/vault \
        --set='server.ha.enabled=true' \
        --set='server.ha.raft.enabled=true'
    

    The Vault pods and Vault Agent Injector pod are deployed in the default namespace.

  5. Get all the pods within the default namespace.

    $ kubectl get pods
    NAME                                    READY   STATUS    RESTARTS   AGE
    vault-0                                 0/1     Running   0          30s
    vault-1                                 0/1     Running   0          30s
    vault-2                                 0/1     Running   0          30s
    vault-agent-injector-56bf46695f-crqqn   1/1     Running   0          30s
    

    The vault-0, vault-1, and vault-2 pods deployed run a Vault server and report that they are Running but that they are not ready (0/1). This is because the status check defined in a readinessProbe returns a non-zero exit code.

    The vault-agent-injector pod deployed is a Kubernetes Mutation Webhook Controller. The controller intercepts pod events and applies mutations to the pod if specific annotations exist within the request.

  6. Retrieve the status of Vault on the vault-0 pod.

    $ kubectl exec vault-0 -- vault status
    

    Example output:
    The status command reports that Vault is not initialized and that it is sealed. Before Vault can authenticate with Kubernetes and manage secrets, it must be initialized and unsealed.

    Key                Value
    ---                -----
    Seal Type          shamir
    Initialized        false
    Sealed             true
    Total Shares       0
    Threshold          0
    Unseal Progress    0/0
    Unseal Nonce       n/a
    Version            1.10.3
    Storage Type       raft
    HA Enabled         true
    command terminated with exit code 2
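
The helm install command in step 4 passed the HA settings as --set flags. The same settings can be kept in a values file, which is easier to review and version control. A minimal sketch follows; the file name overrides.yaml is illustrative.

$ cat > overrides.yaml <<EOF
server:
  ha:
    enabled: true
    raft:
      enabled: true
EOF
$ helm install vault hashicorp/vault --values overrides.yaml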
    

Initialize and unseal one Vault pod

Vault starts uninitialized and in the sealed state. Prior to initialization the Integrated Storage backend is not prepared to receive data.

  1. Initialize Vault with one key share and one key threshold.

    $ kubectl exec vault-0 -- vault operator init \
        -key-shares=1 \
        -key-threshold=1 \
        -format=json > cluster-keys.json
    

    The operator init command generates a root key that it disassembles into key shares -key-shares=1 and then sets the number of key shares required to unseal Vault -key-threshold=1. These key shares are written to the output as unseal keys in JSON format -format=json. Here the output is redirected to a file named cluster-keys.json.

  2. Display the unseal key found in cluster-keys.json.

    $ cat cluster-keys.json | jq -r ".unseal_keys_b64[]"
    rrUtT32GztRy/pVWmcH0ZQLCCXon/TxCgi40FL1Zzus=
    

    Insecure operation: Do not run an unsealed Vault in production with a single key share and a single key threshold. This approach is only used here to simplify the unsealing process for this demonstration.

  3. Create a variable named VAULT_UNSEAL_KEY to capture the Vault unseal key.

    $ VAULT_UNSEAL_KEY=$(cat cluster-keys.json | jq -r ".unseal_keys_b64[]")
    

    After initialization, Vault is configured to know where and how to access the storage, but does not know how to decrypt any of it. Unsealing is the process of constructing the root key necessary to read the decryption key to decrypt the data, allowing access to the Vault.

  4. Unseal Vault running on the vault-0 pod.

    $ kubectl exec vault-0 -- vault operator unseal $VAULT_UNSEAL_KEY
    

    Example output: The operator unseal command reports that Vault is initialized and unsealed.

    Key                     Value
    ---                     -----
    Seal Type               shamir
    Initialized             true
    Sealed                  false
    Total Shares            1
    Threshold               1
    Version                 1.10.3
    Storage Type            raft
    Cluster Name            vault-cluster-16efc511
    Cluster ID              649c814a-a505-421d-e4bb-d9175c7e6b38
    HA Enabled              true
    HA Cluster              n/a
    HA Mode                 standby
    Active Node Address     <none>
    Raft Committed Index    31
    Raft Applied Index      31
    

    Insecure operation: Providing the unseal key with the command writes the key to your shell's history. This approach is only used here to simplify the unsealing process for this demonstration.

  5. Retrieve the status of Vault on the vault-0 pod.

    $ kubectl exec vault-0 -- vault status
    Key                     Value
    ---                     -----
    Seal Type               shamir
    Initialized             true
    Sealed                  false
    Total Shares            1
    Threshold               1
    Version                 1.10.3
    Storage Type            raft
    Cluster Name            vault-cluster-16efc511
    Cluster ID              649c814a-a505-421d-e4bb-d9175c7e6b38
    HA Enabled              true
    HA Cluster              https://vault-0.vault-internal:8201
    HA Mode                 active
    Active Since            2022-05-19T17:41:07.226862254Z
    Raft Committed Index    36
    Raft Applied Index      36
    

    The Vault server is initialized and unsealed.
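
If you want to check the seal state from a script rather than reading the table output, the status command can emit JSON, and jq (already used in this tutorial) can extract a single field. This small check prints true while the pod is sealed and false once it is unsealed.

$ kubectl exec vault-0 -- vault status -format=json | jq -r '.sealed'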

Join the other Vaults to the Vault cluster

The Vault server running on the vault-0 pod is a Vault HA cluster with a single node. Displaying the list of nodes requires that you log in with the root token.

  1. Display the root token found in cluster-keys.json.

    $ cat cluster-keys.json | jq -r ".root_token"
    hvs.3VYhJODbhlQPeW5zspVvBCzD
    
  2. Create a variable named CLUSTER_ROOT_TOKEN to capture the Vault root token.

    $ CLUSTER_ROOT_TOKEN=$(cat cluster-keys.json | jq -r ".root_token")
    
  3. Log in with the root token on the vault-0 pod.

    $ kubectl exec vault-0 -- vault login $CLUSTER_ROOT_TOKEN
    
    Success! You are now authenticated. The token information displayed below
    is already stored in the token helper. You do NOT need to run "vault login"
    again. Future Vault requests will automatically use this token.
    
    Key                  Value
    ---                  -----
    token                hvs.3VYhJODbhlQPeW5zspVvBCzD
    token_accessor       5sy3tZm3qCQ1ai7wTDOS97XG
    token_duration       ∞
    token_renewable      false
    token_policies       ["root"]
    identity_policies    []
    policies             ["root"]
    

    Insecure operation: The login command stores the root token in a file for the container user. Subsequent commands are executed with that token. This approach is only used here to simplify the cluster configuration demonstration.

  4. List all the nodes within the Vault cluster for the vault-0 pod.

    $ kubectl exec vault-0 -- vault operator raft list-peers
    Node                                    Address                        State     Voter
    ----                                    -------                        -----     -----
    09d9b35d-0336-7de7-cc94-90a1f3a0aff8    vault-0.vault-internal:8201    leader    true
    

    This displays the one node within the Vault cluster. This cluster is addressable through the Kubernetes service vault-0.vault-internal created by the Helm chart. The Vault servers on the other pods need to join this cluster and be unsealed.

  5. Join the Vault server on vault-1 to the Vault cluster.

    $ kubectl exec vault-1 -- vault operator raft join http://vault-0.vault-internal:8200
    Key       Value
    ---       -----
    Joined    true
    

    This Vault server joins the cluster sealed. Unsealing it requires the same unseal key, VAULT_UNSEAL_KEY, that was provided to the first Vault server.

  6. Unseal the Vault server on vault-1 with the unseal key.

    $ kubectl exec vault-1 -- vault operator unseal $VAULT_UNSEAL_KEY
    
    Key                     Value
    ---                     -----
    Seal Type               shamir
    Initialized             true
    Sealed                  false
    Total Shares            1
    Threshold               1
    Version                 1.10.3
    Storage Type            raft
    Cluster Name            vault-cluster-16efc511
    Cluster ID              649c814a-a505-421d-e4bb-d9175c7e6b38
    HA Enabled              true
    HA Cluster              https://vault-0.vault-internal:8201
    HA Mode                 standby
    Active Node Address     http://192.168.58.131:8200
    Raft Committed Index    76
    Raft Applied Index      76
    

    The Vault server on vault-1 is now a functional node within the Vault cluster.

  7. Join the Vault server on vault-2 to the Vault cluster.

    $ kubectl exec vault-2 -- vault operator raft join http://vault-0.vault-internal:8200
    Key       Value
    ---       -----
    Joined    true
    
  8. Unseal the Vault server on vault-2 with the unseal key.

    $ kubectl exec vault-2 -- vault operator unseal $VAULT_UNSEAL_KEY
    
    Key                     Value
    ---                     -----
    Seal Type               shamir
    Initialized             true
    Sealed                  false
    Total Shares            1
    Threshold               1
    Version                 1.10.3
    Storage Type            raft
    Cluster Name            vault-cluster-16efc511
    Cluster ID              649c814a-a505-421d-e4bb-d9175c7e6b38
    HA Enabled              true
    HA Cluster              https://vault-0.vault-internal:8201
    HA Mode                 standby
    Active Node Address     http://192.168.58.131:8200
    Raft Committed Index    76
    Raft Applied Index      76
    

    The Vault server on vault-2 is now a functional node within the Vault cluster.

  9. List all the nodes within the Vault cluster for the vault-0 pod.

    $ kubectl exec vault-0 -- vault operator raft list-peers
    Node                                    Address                        State       Voter
    ----                                    -------                        -----       -----
    09d9b35d-0336-7de7-cc94-90a1f3a0aff8    vault-0.vault-internal:8201    leader      true
    7078a8b7-7948-c224-a97f-af64771ad999    vault-1.vault-internal:8201    follower    true
    aaf46893-0a93-17ce-115e-f57033d7f41d    vault-2.vault-internal:8201    follower    true
    

    This displays all three nodes within the Vault cluster.

    Voter status: It may take additional time for each node's voter status to return true.

  10. Get all the pods within the default namespace.

    $ kubectl get pods
    NAME                                    READY   STATUS    RESTARTS   AGE
    vault-0                                 1/1     Running   0          5m49s
    vault-1                                 1/1     Running   0          5m48s
    vault-2                                 1/1     Running   0          5m47s
    vault-agent-injector-5945fb98b5-vzbqv   1/1     Running   0          5m50s
    

    The vault-0, vault-1, and vault-2 pods report that they are Running and ready (1/1).
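
Steps 5 through 8 ran the same join and unseal commands against vault-1 and vault-2. That work can also be expressed as a short shell loop built from the same commands, which scales if you add more standby pods:

$ for pod in vault-1 vault-2; do
    kubectl exec $pod -- vault operator raft join http://vault-0.vault-internal:8200
    kubectl exec $pod -- vault operator unseal $VAULT_UNSEAL_KEY
  done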

Create a Vault database role

The web application that you deploy in the Deploy web application section expects Vault to generate dynamic MySQL credentials at the path database/creds/readonly. To provide them, you enable the database secrets engine, configure it with the MySQL connection details and root password, and create a role that defines how the credentials are generated.

  1. Enable database secrets at the path database.

    $ kubectl exec vault-0 -- vault secrets enable database
    Success! Enabled the database secrets engine at: database/
    
  2. Configure the database secrets engine with the connection credentials for the MySQL database.

    $ kubectl exec vault-0 -- vault write database/config/mysql \
        plugin_name=mysql-database-plugin \
        connection_url="{{username}}:{{password}}@tcp(mysql.default.svc.cluster.local:3306)/" \
        allowed_roles="readonly" \
        username="root" \
        password="$ROOT_PASSWORD"
    

    Output:

    Success! Data written to: database/config/mysql
    
  3. Create a database secrets engine role named readonly.

    $ kubectl exec vault-0 -- vault write database/roles/readonly \
        db_name=mysql \
        creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';GRANT SELECT ON *.* TO '{{name}}'@'%';" \
        default_ttl="1h" \
        max_ttl="24h"
    

    The readonly role generates credentials that are able to perform queries for any table in the database.

    Output:

    Success! Data written to: database/roles/readonly
    
  4. Read credentials from the readonly database role.

    $ kubectl exec vault-0 -- vault read database/creds/readonly
    
    Key                Value
    ---                -----
    lease_id           database/creds/readonly/qtWlgBT1YTQEPKiXe7CrotsT
    lease_duration     1h
    lease_renewable    true
    password           WLESe5T-RLkTj-h-lDbT
    username           v-root-readonly-pk168KvLS8sc80Of
    

    Learn more: For more information refer to the Database Secrets Engine tutorial.

    Vault is able to generate credentials within the MySQL database.
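
To confirm that a generated credential works against MySQL, you can start a throwaway client pod and log in with a username and password read from database/creds/readonly. The image tag below is the one printed in the MySQL chart notes earlier; the username is the example value from the previous step, so substitute the values you actually received (the password is entered at the prompt).

$ kubectl run mysql-client --rm --tty -i --restart='Never' \
    --image docker.io/bitnami/mysql:8.0.29-debian-10-r21 \
    --namespace default \
    --command -- mysql -h mysql.default.svc.cluster.local \
    -u v-root-readonly-pk168KvLS8sc80Of -p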

Configure Vault Kubernetes authentication

The initial root token is a privileged user that can perform any operation at any path. The web application only requires the ability to read secrets defined at a single path. This application should authenticate and be granted a token with limited access.

Best practice: We recommend that root tokens are used only for initial setup of an authentication method and policies. Afterwards they should be revoked. This tutorial does not show you how to revoke the root token.

Vault provides a Kubernetes authentication method that enables clients to authenticate with a Kubernetes Service Account Token.

  1. Start an interactive shell session on the vault-0 pod.

    $ kubectl exec --stdin=true --tty=true vault-0 -- /bin/sh
    / $
    

    Your system prompt is replaced with a new prompt / $.

    Note: the prompt within this section is shown as $ but the commands are intended to be executed within this interactive shell on the vault-0 container.

  2. Enable the Kubernetes authentication method.

    $ vault auth enable kubernetes
    Success! Enabled kubernetes auth method at: kubernetes/
    

    Vault accepts a service token from any client within the Kubernetes cluster. During authentication, Vault verifies that the service account token is valid by querying a token review Kubernetes endpoint.

  3. Configure the Kubernetes authentication method to use the location of the Kubernetes API.

    For the best compatibility with recent Kubernetes versions, ensure you are using Vault v1.9.3 or greater.

    $ vault write auth/kubernetes/config \
        kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443"
    

    Output:

    Success! Data written to: auth/kubernetes/config
    

    The environment variable KUBERNETES_PORT_443_TCP_ADDR is defined and references the internal network address of the Kubernetes host.

    For a client of the Vault server to read the credentials created in the Create a Vault database role section, it must be granted the read capability for the path database/creds/readonly.

  4. Write out the policy named devwebapp that enables the read capability for secrets at the path database/creds/readonly.

    $ vault policy write devwebapp - <<EOF
    path "database/creds/readonly" {
      capabilities = ["read"]
    }
    EOF
    
  5. Create a Kubernetes authentication role named devweb-app.

    $ vault write auth/kubernetes/role/devweb-app \
          bound_service_account_names=internal-app \
          bound_service_account_namespaces=default \
          policies=devwebapp \
          ttl=24h
    

    Output:

    Success! Data written to: auth/kubernetes/role/devweb-app
    

    The role connects a Kubernetes service account, internal-app (created in the next step), and namespace, default, with the Vault policy, devwebapp. The tokens returned after authentication are valid for 24 hours.

  6. Exit the vault-0 pod.

    $ exit
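
Because the earlier vault login stored the root token in the container's token helper, you can still run authenticated Vault commands through kubectl exec after leaving the interactive shell. A quick sanity check that the role and policy were written as intended:

$ kubectl exec vault-0 -- vault read auth/kubernetes/role/devweb-app
$ kubectl exec vault-0 -- vault policy read devwebapp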
    

Deploy web application

The web application pod requires the internal-app Kubernetes service account, specified in the Vault Kubernetes authentication role created in the Configure Vault Kubernetes authentication section.

  1. Define a Kubernetes service account named internal-app.

    $ cat > internal-app.yaml <<EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: internal-app
    EOF
    
  2. Create the internal-app service account.

    $ kubectl apply --filename internal-app.yaml
    serviceaccount/internal-app created
    
  3. Define a pod named devwebapp with the web application.

    $ cat > devwebapp.yaml <<EOF
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: devwebapp
      labels:
        app: devwebapp
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-cache-enable: "true"
        vault.hashicorp.com/role: "devweb-app"
        vault.hashicorp.com/agent-inject-secret-database-connect.sh: "database/creds/readonly"
        vault.hashicorp.com/agent-inject-template-database-connect.sh: |
          {{- with secret "database/creds/readonly" -}}
          mysql -h mysql.default.svc.cluster.local --user={{ .Data.username }} --password={{ .Data.password }} my_database
    
          {{- end -}}
    spec:
      serviceAccountName: internal-app
      containers:
        - name: devwebapp
          image: jweissig/app:0.0.1
    EOF
    
  4. Create the devwebapp pod.

    $ kubectl apply --filename devwebapp.yaml
    pod/devwebapp created
    

    This definition creates a pod with the specified container running with the internal-app Kubernetes service account. The container within the pod is unaware of the Vault cluster. The Vault Injector service reads the annotations and determines that it should take action because vault.hashicorp.com/agent-inject is set. The credentials, read from Vault at database/creds/readonly using the devweb-app Vault role, are written to the file /vault/secrets/database-connect.sh and mounted into the pod.

    The credentials are requested first by the vault-agent-init container to ensure they are present when the application pod initializes. After the application pod initializes, the injector service runs a vault-agent sidecar container that assists the application in maintaining the credentials after initialization. The credentials requested by the vault-agent-init container are cached, vault.hashicorp.com/agent-cache-enable: "true", and used by the vault-agent container.

    Agent Cache: Prior to Vault 1.7 and Vault-K8s 0.9.0 the vault.hashicorp.com/agent-cache-enable parameter was not available. The credentials requested by the vault-agent-init container were requested again by the vault-agent container resulting in multiple credentials issued for the same pod.

    Learn more: For more information about annotations refer to the Injecting Secrets into Kubernetes Pods via Vault Agent Injector tutorial and the Annotations documentation.

  5. Get all the pods within the default namespace.

    $ kubectl get pods
    
    NAME                                    READY   STATUS    RESTARTS   AGE
    devwebapp                               2/2     Running   0          36s
    mysql-0                                 1/1     Running   0          7m32s
    vault-0                                 1/1     Running   0          5m40s
    vault-1                                 1/1     Running   0          5m40s
    vault-2                                 1/1     Running   0          5m40s
    vault-agent-injector-76fff8f7c6-lk6gz   1/1     Running   0          5m40s
    

    Wait until the devwebapp pod reports that it is running and ready (2/2).

  6. Display the secrets written to the file /vault/secrets/database-connect.sh on the devwebapp pod.

    $ kubectl exec --stdin=true \
        --tty=true devwebapp \
        --container devwebapp \
        -- cat /vault/secrets/database-connect.sh
    

    The result displays a mysql command with the credentials generated for this pod.

    mysql -h mysql.default.svc.cluster.local --user=v-kubernetes-readonly-zpqRzAee2b --password=Jb4epAXSirS2s-pnrI9- my_database
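
The init and sidecar containers described in step 4 can be confirmed directly from the pod spec with kubectl's jsonpath output. Based on the description above, the init containers should include vault-agent-init, and the pod containers should include devwebapp and vault-agent.

$ kubectl get pod devwebapp \
    -o jsonpath='{.spec.initContainers[*].name}{"\n"}{.spec.containers[*].name}{"\n"}'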
    

Clean up

Destroy the cluster.

$ eksctl delete cluster --name learn-vault

The cluster is destroyed.
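
If you created the learn-vault EC2 key pair solely for this tutorial, you can delete it as well.

$ aws ec2 delete-key-pair --key-name learn-vault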

Next steps

You launched Vault in high-availability mode with a Helm chart. Learn more about the Vault Helm chart by reading the documentation or exploring the project source code.

The pod you deployed used annotations to inject the secret into the file system. Explore how pods can retrieve secrets through the Vault Injector service via annotations, or secrets mounted on ephemeral volumes.
