Nomad
Configure service discovery
You have two options for service discovery:
- Consul service discovery, which requires access to a Consul cluster. Consul is the default option for service discovery.
- Nomad service discovery, which requires no additional infrastructure.
Refer to the Service discovery in Nomad documentation for a comparison of Nomad and Consul service discovery options.
Workflow
Follow these steps to configure service discovery in your Nomad job specification (jobspec):
- Declare the service discovery provider in the service block's provider parameter. Note that Consul is the default provider.
- Optionally configure health checks in the service block's check block.
- Configure access to other services.
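The three steps above map onto a jobspec like the following minimal sketch. The job, group, service, and port names here are illustrative, not part of the Countdash example:

```hcl
job "example" {
  group "api" {
    network {
      port "http" {}
    }

    service {
      name     = "example-api"
      port     = "http"
      provider = "consul" # step 1: declare the provider ("consul" is the default)

      # step 2: optional health check
      check {
        type     = "http"
        path     = "/health"
        interval = "10s"
        timeout  = "2s"
      }
    }

    # step 3: other workloads discover this service through the provider's
    # catalog, for example via Consul DNS (example-api.service.consul)
  }
}
```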
This guide uses sections from an example jobspec that deploys the Countdash
application. The application has a frontend countdash-web service that
communicates with a backend countdash-api service.
Declare the service provider
Consul is the default service provider, so you do not have to configure the
service block's provider parameter. This jobspec example
explicitly declares provider = "consul" to illustrate jobspec structure.
If your job uses the Docker task
driver, you need to configure the
host's Docker bridge IP as the DNS address in the jobspec's network block. This
configuration allows Nomad allocations to resolve Consul DNS service names.
Verify the Docker bridge IP address with your Consul system administrator, as it
may differ based on your environment.
Configuring DNS forwarding for Docker is part of Consul's installation process. For a detailed explanation, refer to Configuring systemd-resolved for Docker in the Consul documentation.
countdash.nomad.hcl
job "countdash" {
group "countdash-api" {
service {
provider = "consul"
}
network {
...
dns {
servers = ["172.17.0.1"]
}
}
}
}
Configure health checks
Optionally, define health checks with the check block to make
sure that the service catalog only returns healthy instances. Health check
configuration is the same whether you use Consul service discovery or Nomad
service discovery.
countdash.nomad.hcl
job "countdash" {
group "countdash-api" {
service {
name = "countdash-api"
...
check {
name = "Countdash API ready"
type = "http"
path = "/actuator/health"
interval = "5s"
timeout = "5s"
check_restart {
limit = 0
}
}
}
}
}
Configure access to other services
If you are using Consul for service discovery and have not enabled service mesh features, use Consul DNS to discover services and nodes in the Consul catalog. Refer to the Consul documentation's Standard lookups guide for more information. Do not use the client node addresses from Nomad for upstream service resolution.
To find a service in the Consul service catalog, use the following format:
<service_name>.service.<consul_datacenter>.<consul_domain>
Consul's default datacenter is dc1 and default domain is consul. Those
values are configurable, so verify datacenter and domain with your Consul admin.
The countdash-web service needs to communicate with its backend
countdash-api service. The COUNTING_SERVICE_URL variable is populated with
the service's Consul service catalog value.
countdash.nomad.hcl
job "countdash" {
group "countdash-web" {
task "countdash-web" {
...
env {
COUNTING_SERVICE_URL = "http://countdash-api.service.dc1.consul:${var.countdash-api-port}"
PORT="${var.countdash-web-port}"
}
}
}
You may also use template
blocks with the
Consul Template service function to locate services in the Consul service
discovery catalog. Refer to the Consul Template service function
documentation for syntax and
examples. You may use the template as a configuration file or have its content
loaded as environment variables to configure connection information in
applications. Nomad includes Consul Template so you do not have to install it
separately.
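As a sketch, a template block that renders the countdash-api address from the Consul catalog with the Consul Template service function might look like this. The destination path is illustrative:

```hcl
task "countdash-web" {
  # Render the backend address from the Consul catalog and load the
  # rendered key/value pairs into the task's environment.
  template {
    destination = "local/backend.env"
    env         = true

    data = <<-EOT
    {{- range service "countdash-api" }}
    COUNTING_SERVICE_URL=http://{{ .Address }}:{{ .Port }}
    {{- end }}
    EOT
  }
}
```

Note that range emits one line per healthy instance, so with multiple instances you would typically render a config file listing all endpoints instead of a single environment variable.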
Find services in the service catalog
Use the consul catalog services command to list services in the Consul catalog.
$ consul catalog services
consul
countdash-api
countdash-web
nomad
nomad-client
Refer to the Consul documentation for more information about using Consul's CLI.
Find your deployed service's public IP address
Use the Consul v1/catalog/service/:service_name API
endpoint to find the public IP
address of your running service. This example finds the countdash-web
service's public IP address.
Replace these placeholders:
- <consul-http-address>:<port>: Consul's public IP address and port, such as http://13.58.60.124
- <consul-management-token>: The value of your Consul token, which has appropriate privileges to query the Consul catalog. Refer to Consul's HTTP API Structure guide for token details.
curl --location 'http://<consul-http-address>:<port>/v1/catalog/service/countdash-web?passing' \
--header 'X-Consul-Token: <consul-management-token>' | \
jq -r '.[] | "\(.ServiceAddress):\(.ServicePort)"'
The output returns the service address and port, which in this example is an AWS EC2 public hostname.
ec2-3-145-209-63.us-east-2.compute.amazonaws.com:9002
Use service tags
You may specify the service block multiple times with the same name but
different ports. When you query the service name, the service discovery catalog
returns all instances of the service. To restrict the results, assign tags to services to group them.
This example exposes an application on two ports for different protocols.
job "..." {
# ...
group "..." {
network {
port "http" {}
port "grpc" {}
}
service {
name = "my-app"
port = "http"
tags = ["http"]
# ...
}
service {
name = "my-app"
port = "grpc"
tags = ["grpc"]
# ...
}
}
}
By assigning different tags, you may reach the port for each protocol with the
http.my-app and grpc.my-app service queries.
To query for a service with a specific tag, prepend the tag to the Consul DNS query format.
<tag>.<service_name>.service.<consul_datacenter>.<consul_domain>
For example, grpc.my-app.service.dc1.consul.
Refer to Standard lookups in the Consul documentation for more information about formatting Consul DNS addresses.
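The same tag filter also works in Consul Template's service function, which accepts an optional tag prefix in the "tag.name" form. A sketch, using the my-app service from the example above with an illustrative destination path:

```hcl
template {
  destination = "local/grpc-endpoints.txt"

  # "grpc.my-app" restricts the results to instances registered
  # with the "grpc" tag.
  data = <<-EOT
  {{- range service "grpc.my-app" }}
  {{ .Address }}:{{ .Port }}
  {{- end }}
  EOT
}
```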
Canary deployment tags
When using canary or
blue/green job
specification upgrades, you may specify a different set of tags for the canary
allocations with the canary_tags
parameter.
During a deployment, Nomad registers the new allocations with the tags set in
canary_tags while non-canaries use the values in tags. Having different sets
of tags lets you create separate load balancing routing rules to preview
canaries. Refer to the "Load balancer deployment considerations"
guide for more information.
Nomad registers services with either tags or canary_tags, never both, so you
must set a value in both fields if you want canary and non-canary allocations to share it.
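A sketch of a service block that uses both fields, assuming a group whose update block deploys canaries. The tag values and names are illustrative:

```hcl
group "api" {
  update {
    canary       = 1
    auto_promote = false
  }

  service {
    name = "my-app"
    port = "http"

    # Stable allocations register with these tags.
    tags = ["app", "stable"]

    # Canary allocations register with these tags instead; the shared
    # value "app" is repeated because the two lists are not merged.
    canary_tags = ["app", "canary"]
  }
}
```

A load balancer can then route a preview hostname to instances tagged canary while production traffic continues to match stable.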
Countdash example job specs
These example Countdash application jobspecs deploy a job called countdash to
a Nomad cluster running on AWS EC2 instances (Ubuntu 22.04, AMD64 architecture).
The job deploys two services, countdash-api and countdash-web, which are
available in the Consul service catalog or Nomad service catalog depending on
the configured service discovery provider.
You may review the frontend and backend service code in the HashiCorp
demo-consul-101
repository.
The web application runs on port 9002, so make sure to open that port in your AWS security group.
If you want to run the job on infrastructure other than AWS, you should update
each group's service.address parameter value to
attr.unique.network.ip-address or similar. Refer to the service block's
address parameter reference
for more information.
countdash.nomad.hcl
variable "countdash-api-port" {
description = "Countdash API Port"
default = 9001
}
variable "countdash-web-port" {
description = "Countdash web port"
default = 9002
}
job "countdash" {
group "countdash-api" {
count = 1
network {
port "countdash-api" {
static = var.countdash-api-port
}
dns {
servers = ["172.17.0.1"]
}
}
service {
name = "countdash-api"
provider = "consul"
port = "countdash-api"
address = attr.unique.platform.aws.local-ipv4
check {
name = "Countdash API ready"
type = "http"
path = "/actuator/health"
interval = "5s"
timeout = "5s"
check_restart {
limit = 0
}
}
}
task "countdash-api" {
driver = "docker"
meta {
service = "countdash-api"
}
config {
image = "hashicorpdev/counter-api:v3"
ports = ["countdash-api"]
mount {
type = "bind"
source = "local/application.properties"
target = "/application.properties"
}
}
template {
data = "server.port=${var.countdash-api-port}"
destination = "local/application.properties"
}
resources {
memory = 500
}
}
}
group "countdash-web" {
count = 1
network {
port "countdash-web" {
static = var.countdash-web-port
}
dns {
servers = ["172.17.0.1"]
}
}
service {
name = "countdash-web"
provider = "consul"
port = "countdash-web"
address = attr.unique.platform.aws.public-hostname
check {
name = "Countdash web ready"
type = "http"
path = "/"
interval = "5s"
timeout = "5s"
}
}
task "countdash-web" {
driver = "docker"
meta {
service = "countdash-web"
}
env {
COUNTING_SERVICE_URL = "http://countdash-api.service.dc1.consul:${var.countdash-api-port}"
PORT="${var.countdash-web-port}"
}
config {
image = "hashicorpdev/counter-dashboard:v3"
auth_soft_fail = true
ports = ["countdash-web"]
}
}
}
}