Vault Integration and Retrieving Dynamic Secrets
Nomad integrates seamlessly with Vault, allowing your applications to quickly and safely retrieve dynamic credentials as Nomad deploys them.
In this guide, you deploy a web application that needs to authenticate against PostgreSQL to display data from a table to the user.
This tutorial demonstrates how to:
- Deploy a development Vault server
- Configure the Nomad cluster's nodes to integrate with a Vault server
- Use the appropriate templating syntax to retrieve credentials from Vault
- Store those credentials in the secrets task directory to be consumed by the Nomad task
Prerequisites
To perform the tasks described in this guide, you need to have a Nomad environment with Consul and Vault installed. You can use this Terraform environment to easily provision a sandbox environment. This tutorial expects a cluster with one server node and three client nodes.
Tip
This tutorial is for demo purposes and uses a single Nomad server with Vault installed alongside it. For a production cluster, three or five Nomad server nodes are recommended, along with a separate Vault cluster.
Deploy a development Vault server
This tutorial is designed for operators who are using a play instance of Vault; however, with some changes, you can perform these steps in a real cluster.
- If you are connecting to an existing Vault server and have a token that enables you to create roles, you can skip down to "Log in to Vault".
- If you are connecting to an existing Vault server and you are unable to create roles, work with your operations team to have the appropriate personnel run the steps from "Write a policy for Nomad server tokens" through "Generate the token for the Nomad server".
- A Vault operator needs to run the "Enable and configure the database secrets engine" steps.
If you are running a play instance, start the Vault service. You can use an interactive session running in a terminal, a background process running via `nohup`, or a systemd service unit. Once Vault is up and responds to `vault status` commands, continue on.
Initialize Vault server
Run the following command to initialize a Vault server and receive an unseal key and initial root token. Be sure to note the unseal key and initial root token, as you need both later.
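A minimal sketch of such an initialization (the `-key-shares` and `-key-threshold` values of 1 are an assumption to produce the single unseal key this demo uses):

```shell
# Demo only: a single key share with a threshold of 1.
vault operator init -key-shares=1 -key-threshold=1
```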
The preceding `vault operator init` command creates a single Vault unseal key for convenience. For a production environment, it is recommended that you create at least five unseal key shares and securely distribute them to independent operators. The `vault operator init` command defaults to five key shares and a key threshold of three. If you provisioned more than one server, the others become standby nodes but should still be unsealed.
Unseal Vault
Run the following command and then provide your unseal key to Vault.
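A sketch of the unseal step; the command prompts for the key interactively:

```shell
# Paste the unseal key from the init step when prompted.
vault operator unseal
```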
The output of unsealing Vault looks similar to the following:
Log in to Vault
Use the login command to authenticate yourself against Vault using the initial root token you received earlier. You need to authenticate to run the necessary commands to write policies, create roles, and configure a connection to your database.
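A sketch of the login, with `<root-token>` standing in for the initial root token you noted earlier:

```shell
vault login <root-token>
```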
If your login is successful, you will receive output similar to what is shown below:
Write a policy for Nomad server tokens
To use the Vault integration, you must provide a Vault token to your Nomad servers. Although you can provide your root token to easily get started, the recommended approach is to use a token derived from a token role. This first requires writing a policy that you attach to the token you provide to your Nomad servers. With this approach, you can limit the set of policies that tasks managed by Nomad can access.
For this exercise, use the following policy for the token you create for your Nomad server. Place this policy in a file named `nomad-server-policy.hcl`.
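A sketch of such a policy, granting the Nomad servers the token-related capabilities they need (the paths follow Vault's token auth backend; adjust them to your environment):

```hcl
# Allow creating tokens under the "nomad-cluster" token role.
path "auth/token/create/nomad-cluster" {
  capabilities = ["update"]
}

# Allow looking up the "nomad-cluster" token role.
path "auth/token/roles/nomad-cluster" {
  capabilities = ["read"]
}

# Allow looking up the token passed to Nomad to validate its capabilities.
path "auth/token/lookup-self" {
  capabilities = ["read"]
}

# Allow looking up incoming tokens to validate they have the permissions they claim.
path "auth/token/lookup" {
  capabilities = ["update"]
}

# Allow revoking tokens that should no longer exist.
path "auth/token/revoke-accessor" {
  capabilities = ["update"]
}

# Allow checking the capabilities of our own token.
path "sys/capabilities-self" {
  capabilities = ["update"]
}

# Allow our own token to be renewed.
path "auth/token/renew-self" {
  capabilities = ["update"]
}
```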
You can now write a policy called `nomad-server` by running the following command.
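Assuming `nomad-server-policy.hcl` is in your current directory:

```shell
vault policy write nomad-server nomad-server-policy.hcl
```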
You should receive the following output.
You generate the actual token in the next few steps.
Create a token role
At this point, you must create a Vault token role that Nomad can use. The token role allows you to limit what Vault policies are accessible by jobs submitted to Nomad. Use the following token role.
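A sketch of such a token role (the field values are assumptions for this demo — a three-day token period, orphan tokens, and only the `access-tables` policy allowed):

```json
{
  "allowed_policies": "access-tables",
  "token_explicit_max_ttl": 0,
  "name": "nomad-cluster",
  "orphan": true,
  "token_period": 259200,
  "renewable": true
}
```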
Please notice that the `access-tables` policy is listed under the `allowed_policies` key. You have not created this policy yet, but it will be used by the job to retrieve credentials to access the database. A job running in this Nomad cluster will only be allowed to use the `access-tables` policy.
If you would like to allow all policies to be used by any job in the Nomad cluster except for the ones you specifically prohibit, use the `disallowed_policies` key instead and list only the policies that should not be granted. If you take this approach, be sure to include `nomad-server` in the disallowed policies group. An example of this is shown below:
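A sketch of such a deny-list role definition (field values are illustrative):

```json
{
  "disallowed_policies": "nomad-server",
  "token_explicit_max_ttl": 0,
  "name": "nomad-cluster",
  "orphan": true,
  "token_period": 259200,
  "renewable": true
}
```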
Save the token role definition in a file named `nomad-cluster-role.json` and create the token role named `nomad-cluster`.
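One way to create the role from that file:

```shell
vault write /auth/token/roles/nomad-cluster @nomad-cluster-role.json
```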
You should receive the following output:
Generate the token for the Nomad server
Run the following command to create a token for your Nomad server:
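A sketch of the command; the 72-hour period is an assumption for this demo:

```shell
vault token create -policy nomad-server -period 72h -orphan
```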
The `-orphan` flag is included when generating the Nomad server token above to prevent revocation of the token when its parent expires. Vault typically creates tokens with a parent-child relationship. When an ancestor token is revoked, all of its descendant tokens and their associated leases are revoked as well.
If everything works, you should have output similar to the following:
Configure Nomad to enable Vault integration
At this point, you are ready to edit the `vault` stanza in the Nomad server's configuration file located at `/etc/nomad.d/nomad.hcl`. Provide the token you generated in the previous step in the `vault` stanza of your Nomad server configuration. The token can also be provided as an environment variable called `VAULT_TOKEN`. Be sure to specify the `nomad-cluster` role in the `create_from_role` option. If you are using Vault namespaces, modify both the client and server configuration to include the namespace; alternatively, it can be provided in the environment variable `VAULT_NAMESPACE`.
After following these steps and enabling Vault, the `vault` stanza in your Nomad server configuration will be similar to what is shown below.
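A sketch of such a stanza; the Consul-based Vault address and 1-hour task token TTL are assumptions for this sandbox:

```hcl
vault {
  enabled          = true
  address          = "http://active.vault.service.consul:8200"
  task_token_ttl   = "1h"
  create_from_role = "nomad-cluster"
  token            = "<your-nomad-server-token>"
}
```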
Restart the Nomad server.
Note
Nomad servers renew the token automatically.
Vault integration needs to be enabled on the client nodes as well. If you are using the Terraform environment, this has been configured for you already. The `vault` stanza in your Nomad clients' configuration (located at `/etc/nomad.d/nomad.hcl`) looks similar to the following:
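A sketch of the client-side stanza, with the address assumed to match the server configuration:

```hcl
vault {
  enabled = true
  address = "http://active.vault.service.consul:8200"
}
```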
Note that the Nomad clients do not need to be provided with a Vault token.
Deploy database
The next few steps involve configuring a connection between Vault and a database, so you need a database server to connect to. You can use Nomad to deploy one. Create a Nomad job called `db.nomad` with the following content:
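A sketch of such a job, assuming the Docker driver and a demo PostgreSQL image (`hashicorp/postgres-nomad-demo` here is illustrative; the port and network syntax may vary with your Nomad version):

```hcl
job "postgres-nomad-demo" {
  datacenters = ["dc1"]

  group "db" {
    task "server" {
      driver = "docker"

      config {
        image = "hashicorp/postgres-nomad-demo:latest"

        port_map {
          db = 5432
        }
      }

      resources {
        network {
          port "db" {
            static = 5432
          }
        }
      }

      # Register the database with Consul so Vault can reach it by name.
      service {
        name = "database"
        port = "db"

        check {
          type     = "tcp"
          interval = "2s"
          timeout  = "2s"
        }
      }
    }
  }
}
```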
Run the job as shown below.
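From the directory containing `db.nomad`:

```shell
nomad run db.nomad
```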
Verify the job is running with the following command.
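Assuming the job was named `postgres-nomad-demo` (adjust to your job's name):

```shell
nomad status postgres-nomad-demo
```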
The result of the status command will look similar to the output below.
Enable and configure the database secrets engine
Now you can move on to configuring the connection between Vault and the database.
Enable the database secrets engine
You are using the database secrets engine for Vault in this exercise so that you can generate dynamic credentials for the PostgreSQL database. Run the following command to enable it.
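The command to enable the engine at its default path:

```shell
vault secrets enable database
```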
If the previous command was successful, you will see the following output:
Configure the database secrets engine
Create a file named `connection.json` with the following content.
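A sketch of the connection configuration, assuming the PostgreSQL plugin, a Consul DNS name for the database service, and demo credentials (replace the username and password with your own):

```json
{
  "plugin_name": "postgresql-database-plugin",
  "allowed_roles": "accessdb",
  "connection_url": "postgresql://{{username}}:{{password}}@database.service.consul:5432/postgres?sslmode=disable",
  "username": "postgres",
  "password": "postgres123"
}
```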
The preceding information allows Vault to connect to the database and create users with specific privileges. You will specify the `accessdb` role soon. In a production setting, it is recommended to give Vault credentials with enough privileges to generate database credentials dynamically and manage their lifecycle.
Run the following command to configure the connection between the database secrets engine and the database.
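Assuming the configuration is registered under the name `postgresql`:

```shell
vault write database/config/postgresql @connection.json
```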
If the operation is successful, there is no output.
Create a Vault role to manage database privileges
Recall from the previous step that you specified `accessdb` in the `allowed_roles` key of the connection information. Set up that role now. Create a file called `accessdb.sql` with the following content:
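A sketch of such creation statements. Vault substitutes `{{name}}`, `{{password}}`, and `{{expiration}}` when generating each user; the exact grants are an assumption matching the broad read access this tutorial describes:

```sql
CREATE USER "{{name}}" WITH ENCRYPTED PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
GRANT USAGE ON SCHEMA public TO "{{name}}";
GRANT SELECT ON ALL TABLES IN SCHEMA public TO "{{name}}";
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO "{{name}}";
```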
The preceding SQL is used in the `creation_statements` parameter of the next command to specify the privileges that the generated dynamic credentials possess. In this case, the dynamic database user has broad privileges, including the ability to read from the tables the application needs to access.
Run the following command to create the role.
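A sketch of the role creation; the TTL values are assumptions for this demo:

```shell
vault write database/roles/accessdb \
  db_name=postgresql \
  creation_statements=@accessdb.sql \
  default_ttl=1h \
  max_ttl=24h
```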
You should receive the following output after running the previous command.
Generate PostgreSQL credentials
You should now be able to generate dynamic credentials to access your database. Run the following command to generate a set of credentials:
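Reading from the credentials endpoint triggers generation of a new user:

```shell
vault read database/creds/accessdb
```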
The previous command should return output similar to what is shown below:
Congratulations, you have configured Vault's connection to your database and can now generate credentials with the previously specified privileges. The next steps are to deploy the application and make sure that it is able to communicate with Vault and obtain the credentials as well.
Create access-tables policy for your job
Recall from the "Create a token role" step that you specified a policy named `access-tables` in the `allowed_policies` section of the token role. You will create this policy now and give it the capability to read from the `database/creds/accessdb` endpoint (the same endpoint you read from in the previous step to generate credentials for the database). You will then specify this policy in the Nomad job, which allows the job to retrieve credentials for itself to access the database.
On the Vault server (which could be co-located on the Nomad node), create a file named `access-tables-policy.hcl` with the following content:
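A minimal sketch of the policy — read capability on the credentials endpoint:

```hcl
path "database/creds/accessdb" {
  capabilities = ["read"]
}
```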
Create the `access-tables` policy with the following command:
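Assuming `access-tables-policy.hcl` is in your current directory:

```shell
vault policy write access-tables access-tables-policy.hcl
```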
You should see the following output:
Deploy your job with the appropriate policy and templating
Now you are ready to deploy the web application and give it the necessary policy and configuration to communicate with the database. Create a file called `web-app.nomad` and save the following content in it.
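A sketch of such a job, assuming the Docker driver and a demo web-app image (`hashicorp/nomad-vault-demo` here is illustrative). Note the `vault` stanza, the `template` reading from `database/creds/accessdb`, and the `secrets/` destination:

```hcl
job "nomad-vault-demo" {
  datacenters = ["dc1"]

  group "demo" {
    task "server" {
      # Request a Vault token carrying the access-tables policy.
      vault {
        policies = ["access-tables"]
      }

      driver = "docker"

      config {
        image = "hashicorp/nomad-vault-demo:latest"

        port_map {
          http = 8080
        }

        volumes = [
          "secrets/config.json:/etc/demo/config.json",
        ]
      }

      # Render the database configuration using dynamic credentials from Vault.
      template {
        data = <<EOF
{{ with secret "database/creds/accessdb" }}
  {
    "host": "database.service.consul",
    "port": 5432,
    "username": "{{ .Data.username }}",
    "password": {{ .Data.password | toJSON }},
    "db": "postgres"
  }
{{ end }}
EOF

        destination = "secrets/config.json"
      }

      service {
        name = "nomad-vault-demo"
        port = "http"
      }

      resources {
        network {
          port "http" {}
        }
      }
    }
  }
}
```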
There are a few key points to note here:
- The job specifies the `access-tables` policy in the `vault` stanza. The Nomad client receives a token with this policy attached. Recall from the previous step that this policy allows the application to read from the `database/creds/accessdb` endpoint in Vault and retrieve credentials.
- The job uses the `template` stanza's Vault integration to populate the JSON configuration file that the application needs. The underlying tool being used is Consul Template. You can use Consul Template's documentation to learn more about the syntax needed to interact with Vault. Although the job defines the template inline, you can use the `template` stanza in conjunction with the `artifact` stanza to download an input template from a remote source such as an S3 bucket.
- The job templates use the `toJSON` function to ensure the password is encoded as a JSON string. Any templated value that may contain special characters (like quotes or newlines) should be passed through the `toJSON` function.
- Finally, note that the destination of the template is the `secrets/` task directory. This ensures the data is not accessible with a command like `nomad alloc fs` or filesystem APIs.
Use the following command to run the job:
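From the directory containing `web-app.nomad`:

```shell
nomad run web-app.nomad
```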
Confirm the application is accessing the database
At this point, you can visit your application at the path `/names` to confirm the appropriate data is being accessed from the database and displayed to you. There are several ways to do this.
- Use the `dig` command to query the SRV record of your service and obtain the port it is using. Then `curl` your service at the appropriate port and the `/names` path.
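A sketch of the lookup, assuming the service was registered in Consul as `nomad-vault-demo` and Consul DNS is resolvable from your host:

```shell
# The +short SRV answer is "priority weight port target"; the port is column three.
dig +short SRV nomad-vault-demo.service.consul

# Substitute the port from the SRV record.
curl nomad-vault-demo.service.consul:<port>/names
```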
The output of the dig command indicates the port the service is on in column three.
This output indicates that the service is at port 30478. This port will vary for each run of the job.
If everything is working correctly, you will receive the following HTML output.
- You can also deploy Fabio and visit any Nomad client at its public IP address using a fixed port. The details of this method are beyond the scope of this guide, but you can refer to the Load Balancing with Fabio tutorial for more information on this topic. Alternatively, you could use the `nomad alloc status` command along with the AWS console to determine the public IP and port your service is running on (remember to open the port in your AWS security group if you choose this method).
Next steps
In this tutorial, you deployed PostgreSQL as a Nomad job. You then used Vault to create and secure dynamic login credentials for it.