csi_plugin Stanza
| Placement | job -> group -> task -> csi_plugin | 
The "csi_plugin" stanza allows the task to specify it provides a Container Storage Interface plugin to the cluster. Nomad will automatically register the plugin so that it can be used by other jobs to claim volumes.
csi_plugin {
  id             = "csi-hostpath"
  type           = "monolith"
  mount_dir      = "/csi"
  health_timeout = "30s"
}
csi_plugin Parameters
- id (string: <required>) - This is the ID for the plugin. Some plugins will require both controller and node plugin types (see below); you need to use the same ID for both so that Nomad knows they belong to the same plugin. See the sketch after this list.
- type (string: <required>) - One of node, controller, or monolith. Each plugin supports one or more types. Each Nomad client node where you want to mount a volume will need a node plugin instance. Some plugins will also require one or more controller plugin instances to communicate with the storage provider's APIs. Some plugins can serve as both controller and node at the same time; these are called monolith plugins. Refer to your CSI plugin's documentation.
- mount_dir (string: <required>) - The directory path inside the container where the plugin will expect a Unix domain socket for bidirectional communication with Nomad.
- health_timeout (duration: <optional>) - The duration that the plugin supervisor will wait before restarting an unhealthy CSI plugin. Must be a duration value such as 30s or 2m. Defaults to 30s if not set.
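For example, a plugin that needs both controller and node instances can be split across two jobs whose tasks share the same plugin ID. The following is only a sketch: the plugin ID my-csi is hypothetical, and only the csi_plugin stanzas of each task are shown.

# In the controller job's task:
csi_plugin {
  id        = "my-csi"      # hypothetical ID, shared with the node plugin
  type      = "controller"
  mount_dir = "/csi"
}

# In the node job's task (typically a system job):
csi_plugin {
  id        = "my-csi"      # must match the controller's ID
  type      = "node"
  mount_dir = "/csi"
}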
Note: Plugins running as node or monolith require root privileges (or CAP_SYS_ADMIN on Linux) to mount volumes on the host. With the Docker task driver, you can use the privileged = true configuration, but no other default task drivers currently have this option.
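When using the Docker task driver, privileged containers also need to be allowed in the client's Docker plugin configuration before privileged = true takes effect in a task. A minimal sketch of that client configuration, assuming the stock Docker driver options:

# Nomad client configuration: allow the Docker driver to run
# privileged containers (required for node and monolith plugins).
plugin "docker" {
  config {
    allow_privileged = true
  }
}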
Recommendations for Deploying CSI Plugins
CSI plugins run as Nomad tasks, but after mounting the volume they are not in the data path for the volume. Tasks that mount volumes read and write directly to the volume via a bind mount, and there is no communication between the job and the CSI plugin. However, when an allocation that mounts a volume stops, Nomad will need to communicate with the plugin on that allocation's node to unmount the volume. This has implications for how to deploy CSI plugins:
- If you are stopping jobs on a node, you must stop tasks that claim volumes before stopping the node or monolith plugin for those volumes. You should run node or monolith plugins as system jobs and use the -ignore-system flag on nomad node drain to ensure that the plugins are running while the node is being drained (see the example drain command after this list).
- Only one plugin instance of a given plugin ID and type (controller or node) should be deployed on any given client node. Use a constraint as shown in the example below.
- Some plugins will create volumes only in the same location as the plugin. For example, the AWS EBS plugin will create and mount volumes only within the same Availability Zone. You should deploy these plugins with a unique-per-AZ plugin_id to allow Nomad to place allocations in the correct AZ.
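For example, a drain that leaves system jobs (and therefore node or monolith plugins) in place might look like the following sketch; the node ID is a placeholder:

# Drain the node but keep system jobs such as the CSI node plugin
# running, so volumes can still be unmounted as allocations stop.
nomad node drain -enable -ignore-system -yes <node-id>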
csi_plugin Examples
job "plugin-efs" {
  datacenters = ["dc1"]
  # you can run node plugins as service jobs as well, but running
  # as a system job ensures all nodes in the DC have a copy.
  type = "system"
  # only one plugin of a given type and ID should be deployed on
  # any given client node
  constraint {
    operator = "distinct_hosts"
    value    = true
  }
  group "nodes" {
    task "plugin" {
      driver = "docker"
      config {
        image = "amazon/aws-efs-csi-driver:v.1.3.2"
        args = [
          "--endpoint=unix://csi/csi.sock",
          "--logtostderr",
          "--v=5",
        ]
        # all CSI node plugins will need to run as privileged tasks
        # so they can mount volumes to the host. controller plugins
        # do not need to be privileged.
        privileged = true
      }
      csi_plugin {
        id             = "aws-efs0"
        type           = "node"
        mount_dir      = "/csi"  # this path /csi matches the --endpoint
                            # argument for the container
        health_timeout = "30s"
      }
    }
  }
}
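Once the plugin job is running, you can confirm that Nomad has registered the plugin and that it reports healthy. A minimal check, using the plugin ID from the example above:

# Show the registration and health status of the plugin
nomad plugin status aws-efs0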