Consul is a tool for discovering and configuring services in your infrastructure. Consul's key features include service discovery, health checking, a KV store, and robust support for multi-datacenter deployments. Nomad's integration with Consul enables automatic clustering, built-in service registration, and dynamic rendering of configuration files and environment variables. The sections below describe the integration in more detail.
In order to use Consul with Nomad, you will need to configure and install Consul on your nodes alongside Nomad, or schedule it as a system job. Nomad does not run Consul for you.
To enable Consul integration, please refer to the Nomad agent Consul configuration documentation.
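As a sketch, a minimal Nomad agent configuration enabling the integration might look like the following. The address and service names shown are common defaults, not requirements; adjust them for your environment.

```hcl
# Nomad agent configuration fragment (e.g. in /etc/nomad.d/consul.hcl).
# Assumes a local Consul agent listening on its default HTTP port.
consul {
  address = "127.0.0.1:8500"

  # Names under which the Nomad agents register themselves in Consul.
  server_service_name = "nomad"
  client_service_name = "nomad-client"
  auto_advertise      = true

  # Let Nomad servers and clients discover each other through Consul.
  server_auto_join = true
  client_auto_join = true
}
```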
Nomad servers and clients will be automatically informed of each other's existence when a running Consul cluster already exists and the Consul agent is installed and configured on each host. Please refer to the Automatic Clustering with Consul guide for more information.
Nomad schedules workloads of various types across a cluster of generic hosts. Because of this, placement is not known in advance and you will need to use service discovery to connect tasks to other services deployed across your cluster. Nomad integrates with Consul to provide service discovery and monitoring.
To configure a job to register with service discovery, refer to the service job specification documentation.
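For illustration, a job might register a service with a health check like this. The job, service, and image names are hypothetical:

```hcl
# Hypothetical job registering a "web" service with an HTTP health check.
job "web" {
  group "frontend" {
    network {
      port "http" {
        to = 8080
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "nginx:alpine"
        ports = ["http"]
      }

      service {
        name     = "web"
        port     = "http"
        provider = "consul"

        check {
          type     = "http"
          path     = "/health"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
```

Once the task is running, Consul lists the `web` service and its health status, and other workloads can discover it through Consul DNS or the HTTP API.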
Consul service mesh provides service-to-service connection authorization and encryption using mutual Transport Layer Security (TLS). Nomad can automatically provision the components necessary to securely connect your tasks to Consul's service mesh.
Refer to the Consul Service Mesh integration page for more information.
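A group joins the mesh by declaring a `connect` block on its service; Nomad then provisions the Envoy sidecar automatically. This is a sketch, and the service name, port, and image are hypothetical:

```hcl
# Sketch of a group joining Consul service mesh via a sidecar proxy.
group "api" {
  network {
    # Service mesh requires bridge networking (Linux only).
    mode = "bridge"
  }

  service {
    name = "api"
    port = "9001"

    connect {
      # An empty sidecar_service block asks Nomad to inject a default
      # Envoy sidecar for this service.
      sidecar_service {}
    }
  }

  task "api" {
    driver = "docker"

    config {
      image = "example/api:1.0" # hypothetical image
    }
  }
}
```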
Nomad's job specification includes a template block that uses a Consul ecosystem tool called Consul Template. This mechanism provides a convenient way to ship configuration files that are populated from environment variables, Consul data, Vault secrets, or general configuration within a Nomad task. For more information on Nomad's template block and how it leverages Consul Template, refer to the template job specification documentation.
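As an example, a template block can render Consul KV data and service addresses into a file inside the task's allocation directory. The KV key and service name below are hypothetical:

```hcl
# Hypothetical template rendering Consul data into a config file.
template {
  data = <<EOF
listen_port = {{ key "myapp/config/listen_port" }}
{{ range service "redis" }}redis_addr = {{ .Address }}:{{ .Port }}
{{ end }}
EOF

  destination = "local/app.conf"
  change_mode = "restart" # restart the task when the rendered data changes
}
```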
The Consul ACL system protects the cluster from unauthorized access. When enabled, both Consul and Nomad must be properly configured in order for their integrations to work.
Refer to the Consul ACL integration page for more information.
Nomad provides integration with Consul Namespaces for service registrations specified in service blocks and Consul KV reads in template blocks. By default, Nomad does not specify a Consul namespace on service registrations or KV store reads, which Consul then implicitly resolves to the default namespace. This default behavior can be modified by setting the namespace field in the Nomad agent Consul configuration.
For more control over Consul namespaces, Nomad Enterprise supports configuring the Consul namespace at the group or task level in the Nomad job spec, as well as the -consul-namespace command line argument for job run. The Consul namespace used for a set of group or task service registrations, as well as template KV store access, is determined from the following hierarchy, from highest to lowest precedence:
- group and task configuration: Consul namespace field defined in the job at the task or group level.
- job run command option: Consul namespace defined in the `-consul-namespace` command line option on job submission.
- job run command environment variable: Consul namespace defined in the `CONSUL_NAMESPACE` environment variable on job submission.
- agent configuration: Consul namespace defined in the `namespace` Nomad agent Consul configuration parameter.
- Consul default: If no Consul namespace options are configured, Consul automatically uses the default namespace.
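For example, the highest-precedence option pins the namespace directly in the job spec (Nomad Enterprise only; the group, namespace, and image names here are hypothetical):

```hcl
# Nomad Enterprise: group-level Consul namespace (highest precedence).
group "cache" {
  consul {
    namespace = "team-platform" # hypothetical namespace name
  }

  task "redis" {
    driver = "docker"

    config {
      image = "redis:7"
    }
  }
}
```

Alternatively, the same namespace could be supplied at submission time with `nomad job run -consul-namespace=team-platform job.nomad.hcl`, which takes effect only when no group- or task-level namespace is set.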
Nomad Enterprise supports access to multiple Consul clusters. They can be configured using multiple consul blocks with different name values. If a name is not provided, the cluster configuration is called default. Nomad automatic clustering uses the default cluster for service registration. Jobs that need access to Consul may specify which Consul cluster to use in their job specification.
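A sketch of a Nomad Enterprise agent configuration with two clusters follows; the cluster name and addresses are hypothetical:

```hcl
# Nomad Enterprise agent configuration with two Consul clusters.
consul {
  # No name given, so this block configures the "default" cluster,
  # which automatic clustering and unqualified jobs use.
  address = "127.0.0.1:8500"
}

consul {
  name    = "secondary"      # hypothetical cluster name
  address = "10.0.0.10:8500" # hypothetical address
}
```

A job would then select the non-default cluster by referencing its name (for example, via a `cluster` setting in the job's consul block) in the group or task configuration.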
Each Nomad client should have a local Consul agent running on the same host, reachable by Nomad. Nomad clients should never share a Consul agent or talk directly to the Consul servers. Nomad is not compatible with Consul Data Plane.
The service discovery feature in Nomad depends on operators making sure that the Nomad client can reach the Consul agent.
Tasks running under Nomad also need to reach the Consul agent if they want to use any of the Consul APIs. For example, a task running in a Docker container with bridge networking cannot talk to a Consul agent listening on the host's loopback interface, because the container has its own network namespace and does not see interfaces in the host's network namespace. There are a couple of ways to solve this: run the container in host networking mode, or make the Consul agent listen on an interface reachable from the container's network namespace.
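The first workaround can be sketched as a task that uses Docker host networking so the container shares the host's interfaces, including loopback. The image name is hypothetical:

```hcl
# Workaround sketch: host networking lets the container reach a Consul
# agent bound to the host's loopback interface.
task "app" {
  driver = "docker"

  config {
    image        = "example/app:latest" # hypothetical image
    network_mode = "host"
  }

  env {
    # The task can now reach the host-local Consul agent directly.
    CONSUL_HTTP_ADDR = "127.0.0.1:8500"
  }
}
```

Note that host networking trades away network isolation, so binding the Consul agent to a container-reachable interface may be preferable in multi-tenant clusters.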
The consul binary must be present in Nomad's $PATH to run the Envoy proxy sidecar on client nodes.
Consul service mesh using network namespaces is only supported on Linux.
Most supported versions of Nomad are compatible with most recent versions of Consul, with some exceptions.
- Nomad versions 1.6.0+, 1.5.6+, and 1.4.11+ are compatible with any currently supported version of Consul.
- Nomad versions 1.4.4 to 1.4.11 and 1.5.0 to 1.5.6 are compatible with any currently supported version of Consul except 1.13.8.
- Nomad versions 1.4.0 through 1.4.3 are compatible with Consul versions 1.13.0 through 1.13.7, and 1.13.9. Changes to Consul service mesh in version 1.14 are incompatible with Nomad 1.4.3 and earlier.
- Nomad is not compatible with Consul Data Plane.