Terraform Enterprise Admin CLI Commands
The Active/Active operational mode disables the Replicated Admin Console. Instead, it provides admin CLI commands to change the configuration, stop the application safely, and produce support bundles. You must use SSH to log in to a node in the Active/Active cluster to run these commands.
Admin CLI commands are also available on installations using the Standalone operational mode.
`tfe-admin` is an alias for `replicated admin`, and the two can be used interchangeably.
tfe-admin support-bundle

This command generates a support bundle for all nodes.
The support bundle will be created in
For External Services and Active/Active installations, the support bundles will be uploaded to the same object store bucket that is used to store Terraform state files. The support bundles for a specific run of the admin command will all be uploaded to a directory with the same JobID, which is a timestamp in RFC3339 format. If you are sending a support bundle to HashiCorp Support, package and send all associated bundles to ensure that we have all the necessary information.
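The JobID directory name is an RFC3339 timestamp. As a quick illustration of that shape (the actual value is generated by the admin command, not by you):

```shell
# Print the current UTC time in RFC3339 format, the same shape as a JobID
# directory name. Illustrative only; tfe-admin generates the real value.
job_id="$(date -u +"%Y-%m-%dT%H:%M:%SZ")"
echo "$job_id"
```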
Example upload structure:

```
│ └── replicated-support702524260.tar.gz
```
tfe-admin node-drain

This command quiesces the current node and removes it from service. It allows current work to complete and stops the node from picking up new jobs from the Redis queue, so that the application can be stopped safely. Currently, it only affects localhost (you cannot run it on one node to drain other nodes).
Note: There is no reverse drain command - a restart is needed to restore the node.
tfe-admin app-config -k <KEY> -v <VALUE>
This command lets you use the CLI to make real-time application configuration changes, such as adjusting `capacity_concurrency`. You must provide both an allowable `<KEY>` (setting name) and a `<VALUE>` (new setting value). Run `replicatedctl app-config export` for a complete list of the current configuration settings.
For the configuration changes to take effect, you must restart the Terraform Enterprise application on each node instance. To restart Terraform Enterprise:

- Run `replicatedctl app stop` to stop the application.
- Run `replicatedctl app status` to confirm the application is stopped.
- Run `replicatedctl app start` to start the application.
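Confirming that the application has stopped can be automated with a small polling helper. This is a hedged sketch, not part of the admin CLI: it assumes the status output contains a `"State"` key whose value becomes `"stopped"`, and the status command is passed in as a parameter so it is easy to substitute.

```shell
# Poll an app-status command until it reports the application stopped.
# Sketch only: assumes the status output contains `"State": "stopped"`.
wait_until_stopped() {
  status_cmd="$1"   # e.g. "replicatedctl app status"
  for _ in $(seq 1 30); do
    if $status_cmd | grep -q '"State": "stopped"'; then
      return 0
    fi
    sleep 2
  done
  return 1          # timed out waiting for the application to stop
}
```

On a node you might run `replicatedctl app stop`, then `wait_until_stopped "replicatedctl app status"`, and only then `replicatedctl app start`.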
Note: Ensure that any ad hoc changes made in this fashion are also captured in the standard node build configuration; the next time you build or rebuild a node, only the configuration stored for that purpose will be in effect, and ad hoc changes could be lost.
Hint: Adding a shell function or alias to your Linux start-up files can give you a shortcut to the admin `app-config` command that requires only the key and value, for example:

```shell
# shortcut: tfe-app-config <KEY> <VALUE>
tfe-app-config() {
  tfe-admin app-config -k "$1" -v "$2"
}
```
tfe-admin list-nodes

This command lists the IP addresses of all active nodes in the installation. Nodes send a heartbeat every 5 seconds to signal that they are active. If Terraform Enterprise does not receive a heartbeat from a node within 30 seconds, it considers the node inactive and removes it from the list.
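The liveness rule above is simple enough to state as code. A minimal sketch (the thresholds come from the text; the function itself is hypothetical):

```shell
# A node is active if its last heartbeat arrived within the 30-second window.
# Heartbeats are sent every 5 seconds, so this tolerates several missed beats.
is_active() {
  seconds_since_heartbeat="$1"
  [ "$seconds_since_heartbeat" -le 30 ]
}
```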
tfe-admin rotate-encryption-password CURRENT_PASSWORD NEW_PASSWORD
This command rotates the encryption password in use by Terraform Enterprise.
To prevent sensitive information from being stored in the shell history, temporarily write the current and new encryption passwords to files and read them upon execution, deleting the temporary files when finished:
```shell
tfe-admin rotate-encryption-password "$(cat current_password.txt)" "$(cat new_password.txt)"
rm current_password.txt new_password.txt
```
A successful encryption password rotation will show the following output:
```
Encryption password successfully rotated!
Updating the `enc_password` application configuration on 2 node(s) to reflect the new encryption password.
You must update any installation or automation processes to reflect the new encryption password!
```
An unsuccessful encryption password rotation will show an error:
```
Error rotating encryption password:
exit status 1
Encryption password not rotated!
Error reading previous Vault configuration: failed decrypting unseal key: could not decrypt ciphertext: chacha20poly1305: message authentication failed
```
This command lists the license information and workspace count for the TFE installation on which it is run.
There are additional commands available for checking status and troubleshooting directly on nodes. You can use them to confirm a successful installation or to check the status of a running node as part of troubleshooting. There are also command aliases that let you run abbreviated versions of commands, such as just `support-bundle`. Run the `alias` command with no parameters to see the list of available aliases.
tfe-admin health-check

This command tests and reports on the status of the major TFE services. Each service is listed as PASS or FAIL. If any service is marked FAIL, your TFE installation is not healthy and additional action is required.
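In automation you may want to fail fast on any FAIL result. A hedged sketch, assuming each service prints one line marked PASS or FAIL as described above:

```shell
# Scan health-check output and succeed only if no service reports FAIL.
# Sketch only: assumes one "<service>: PASS" / "<service>: FAIL" line each.
all_services_pass() {
  health_output="$1"   # e.g. the captured output of the health-check command
  ! printf '%s\n' "$health_output" | grep -q 'FAIL'
}
```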
replicatedctl system status
Displays status info on the Replicated sub-system. The key values to note are the status fields, which should all return "ready". This reports on the status of the system on the node instance where it is run.
replicatedctl app status
Displays status info on the TFE application. Key values to note are that `State` and `DesiredState` are both "started" and `IsTransitioning` is false. This reports on the status of the application on the node instance where it is run.
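The checks above can be scripted. This is a hedged sketch: it assumes the status output is JSON containing `DesiredState`, `State`, and `IsTransitioning` as literal key/value pairs, and takes the output as a parameter rather than invoking `replicatedctl` itself.

```shell
# Check the health-indicating fields in app-status output.
# Sketch only: assumes the JSON contains these exact key/value pairs.
app_healthy() {
  status_json="$1"   # e.g. output of `replicatedctl app status`
  printf '%s' "$status_json" | grep -q '"State": "started"' \
    && printf '%s' "$status_json" | grep -q '"DesiredState": "started"' \
    && printf '%s' "$status_json" | grep -q '"IsTransitioning": false'
}
```

Usage (hypothetical wiring): `app_healthy "$(replicatedctl app status)" && echo healthy`.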
The mechanism used to upgrade TFE node instances is to fully repave them (destroy and rebuild the instances entirely). This is another reason why using automation to build the instances is important. Currently, the safest way to perform an upgrade is to shut down all node instances, rebuild one node to validate a successful upgrade, and then scale out to additional nodes (currently a maximum of 5).
These are the steps required to repave the node instances:
- Run the `node-drain` command as described previously on each node to complete active work and stop new work from being processed.
- Update the instance build configuration, such as setting a new `ReleaseSequence` to upgrade versions and/or making other alterations such as patching the base image used for building the instances.
- Follow the instructions in Terraform Enterprise Active/Active to scale down to zero nodes and proceed through scaling up to one node, validating success, and then scaling additional nodes.
If planned and orchestrated efficiently, the total downtime for the repaving will be only the time it takes to build one node, as processing resumes as soon as the first node is functional.