Logging
Terraform Enterprise generates logs that you can use for auditing security-related events and logs that you can use to monitor the health of the application. Learn about the types of events and logs available in Terraform Enterprise and how to forward them to external monitoring systems.
Audit logs
Audit logs in Terraform Enterprise help you monitor security-related events, such as when someone accesses the admin console or calls system API endpoints. Audit log availability depends on system configuration and operational status: the system might not capture audit events during maintenance, failures, or misconfigurations.
Data considerations
Audit logs contain operational data, including IP addresses and user agent information.
Event types
Terraform Enterprise logs the following event types:
- Authentication success: Successful logins and logouts through the admin console
- Authentication failure: Failed login attempts, invalid tokens, and expired sessions
- CSRF violation: Cross-site request forgery attempts detected
Audit log format
Each audit log entry includes standardized fields for consistent analysis:
| Field | Description | Example |
|---|---|---|
| timestamp | Time the event was recorded in ISO 8601 format | 2025-10-15T11:12:50.282Z |
| level | Log severity for the event | INFO |
| component | Terraform Enterprise subsystem emitting the event | terraform-enterprise.audit |
| event_type | Type of audit event | auth.login.success |
| method | HTTP method used | POST |
| resource | API endpoint accessed | /api/v1/admin/login |
| source_ip | Client IP address | 10.0.1.100 |
| user_agent | Client application identifier (quoted when it contains spaces) | "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/141.0.0.0 Safari/537.36" |
| status_code | HTTP response status code | 200 |
| request_id | Request correlation identifier | mGYliJZQknQKqznpjqCAnPEJbVZEDMCP |
| actor_id | Token identifier | 7275113b-093b-46dd-9261-e2df7c07752a |
The following example shows a typical audit log entry for a successful login to the admin console:
```
2025-10-15T11:12:50.282Z [INFO] terraform-enterprise.audit: audit event: event_type=auth.login.success method=POST resource=/api/v1/admin/login source_ip=167.71.253.50 user_agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/141.0.0.0 Safari/537.36" status_code=200 request_id=mGYliJZQknQKqznpjqCAnPEJbVZEDMCP actor_id=7275113b-093b-46dd-9261-e2df7c07752a
```
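Because audit entries appear in the same log stream as other component output, you can isolate them by matching on the audit component name. The following is a minimal sketch, assuming a Docker deployment with a container named terraform-enterprise:

```shell
# Show only audit events from the container's log stream.
# The container name is hypothetical; adjust it for your deployment.
docker logs terraform-enterprise 2>&1 | grep 'terraform-enterprise.audit'
```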
Retention
Terraform Enterprise automatically rotates log files. No specific retention period is guaranteed for audit logs.
If you need to retain audit logs for extended periods to meet compliance or security requirements, configure log forwarding to send audit events to an external log management system. Without log forwarding, audit logs may be lost during automatic log rotation.
Refer to External log forwarding to configure external log storage.
Service logs
Terraform Enterprise writes service logs directly to standard output and standard error. This allows you to forward logs using native tooling for your deployment platform. Terraform Enterprise stores individual service logs in the /var/log/terraform-enterprise directory inside the container:

```
/var/log/terraform-enterprise
├── atlas.log
├── nginx.log
├── sidekiq.log
└── vault.log
```
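To inspect a single service directly, you can read its log file inside the running container. A minimal sketch, again assuming a Docker deployment with a container named terraform-enterprise:

```shell
# Follow the atlas service log inside the container.
# The container name is hypothetical; adjust it for your deployment.
docker exec terraform-enterprise tail -f /var/log/terraform-enterprise/atlas.log
```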
Service log format
Each service log is a plain text file containing the logs for that service. Logs are collated and logged to the container's standard output in JSON format. Each log entry contains two fields:
- component: The name of the individual service that emitted the log entry.
- log: The contents of the log message.
An example set of log entries emitted by a Terraform Enterprise container would appear as follows:

```
{"log":"2023-09-18 02:39:05 [INFO] msg=Worker start worker=AuthenticationTokenDeletionWorker","component":"sidekiq"}
{"log":"2023-09-18T02:39:05.098Z pid=156 tid=2pos class=FailedJobWorker jid=1010d28ac591979d9decb61f INFO: start","component":"sidekiq"}
{"log":"2023-09-18 02:39:05 [INFO] msg=Worker start worker=FailedJobWorker","component":"sidekiq"}
{"log":"2023-09-18 02:39:05 [INFO] msg=Worker finish worker=AuthenticationTokenDeletionWorker","component":"sidekiq"}
{"log":"2023-09-18T02:39:05.114Z pid=156 tid=2pyc class=AuthenticationTokenDeletionWorker jid=515e8a727a3e4948e9dbb04a elapsed=0.034 INFO: done","component":"sidekiq"}
{"log":"2023-09-18 02:39:05 [INFO] agent_jobs_processed=[] agent_jobs_errored=[] msg=Worker finish worker=FailedJobWorker","component":"sidekiq"}
{"log":"2023-09-18T02:39:05.118Z pid=156 tid=2pos class=FailedJobWorker jid=1010d28ac591979d9decb61f queue=default elapsed=0.02 INFO: done","component":"sidekiq"}
{"log":"2023-09-18 02:39:13 [INFO] [3efaaec9-48d4-4517-9fde-127f80faacb4] [dd.service=atlas dd.trace_id=1904097642804464614 dd.span_id=0 ddsource=ruby] {\"method\":\"GET\",\"path\":\"/\",\"format\":\"html\",\"status\":301,\"allocations\":493,\"duration\":0.72,\"view\":0.0,\"db\":0.0,\"location\":\"https://tfe.example.com/session\",\"dd\":{\"trace_id\":\"1904097642804464614\",\"span_id\":\"0\",\"env\":\"\",\"service\":\"atlas\",\"version\":\"\"},\"ddsource\":[\"ruby\"],\"uuid\":\"3efaaec9-48d4-4517-9fde-127f80faacb4\",\"remote_ip\":\"1.2.3.4\",\"request_id\":\"3efaaec9-48d4-4517-9fde-127f80faacb4\",\"user_agent\":\"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36\",\"user\":null,\"auth_source\":null}","component":"atlas"}
{"log":"2023-09-18 02:39:13 [INFO] [3cb89cfa-7d7f-4aeb-9e60-2256b016a839] [dd.service=atlas dd.trace_id=4370203755142829190 dd.span_id=0 ddsource=ruby] {\"method\":\"GET\",\"path\":\"/session\",\"format\":\"html\",\"status\":200,\"allocations\":3895,\"duration\":7.3,\"view\":5.77,\"db\":0.59,\"dd\":{\"trace_id\":\"4370203755142829190\",\"span_id\":\"0\",\"env\":\"\",\"service\":\"atlas\",\"version\":\"\"},\"ddsource\":[\"ruby\"],\"uuid\":\"3cb89cfa-7d7f-4aeb-9e60-2256b016a839\",\"remote_ip\":\"1.2.3.4\",\"request_id\":\"3cb89cfa-7d7f-4aeb-9e60-2256b016a839\",\"user_agent\":\"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36\",\"user\":null,\"auth_source\":null}","component":"atlas"}
{"log":"1.2.3.4 - - [18/Sep/2023:02:39:13 +0000] \"GET / HTTP/1.1\" 301 117 \"-\" \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36\"","component":"nginx"}
{"log":"1.2.3.4 - - [18/Sep/2023:02:39:13 +0000] \"GET /session HTTP/1.1\" 200 1735 \"-\" \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36\"","component":"nginx"}
{"log":"Storing the encrypted Vault token in Redis","component":"vault"}
```
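Because each entry is a JSON object, you can filter the stream by the component field. A minimal sketch using jq, assuming a container named terraform-enterprise:

```shell
# Print only the raw log lines emitted by the atlas service.
# The container name is hypothetical; adjust it for your deployment.
docker logs terraform-enterprise 2>&1 | jq -r 'select(.component == "atlas") | .log'
```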
The formats of individual service logs are internal implementation details and are subject to change in any release.
External log forwarding
We strongly recommend using an external log forwarding solution that aligns with your existing observability tooling. Depending on your deployment platform, native or third-party solutions, such as host-level monitoring agents, may be appropriate for log aggregation and forwarding. HashiCorp does not provide support for third-party log forwarding solutions.
Docker
Docker supports many logging drivers. Refer to the Docker logging driver list for available options.
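For example, you can select a logging driver when starting the container. The following is a minimal sketch using Docker's built-in syslog driver; the endpoint address, container name, and image reference are assumptions:

```shell
# Forward the container's stdout/stderr to a syslog endpoint
# using Docker's log driver. All names below are hypothetical.
docker run -d \
  --log-driver=syslog \
  --log-opt syslog-address=tcp://syslog.example.com:514 \
  --name terraform-enterprise \
  <terraform-enterprise-image>
```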
Kubernetes
Kubernetes supports several architectures for log-forwarding. Refer to the Kubernetes logging architectures documentation for available options.
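Cluster-level agents typically handle forwarding, but you can also read pod output directly with kubectl. A minimal sketch; the namespace and label selector are assumptions:

```shell
# Stream logs from Terraform Enterprise pods.
# The namespace and label are hypothetical; adjust for your deployment.
kubectl logs -n terraform-enterprise -l app=terraform-enterprise -f
```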
Native log forwarding
You can inject Fluent Bit [OUTPUT] configuration directives so that Terraform Enterprise can use Fluent Bit plugins to forward log data directly to a number of external destinations.
The Fluent Bit configuration must be provided to the Terraform Enterprise container in a file mounted into the container. That is, the configuration value must point to a filesystem path on the Docker container where the Fluent Bit configuration is located; it must not contain the configuration itself. It is the responsibility of the Terraform Enterprise operator to mount the configuration snippet into the Docker container.
| Key | Description | Specific Format Required |
|---|---|---|
| TFE_LOG_FORWARDING_CONFIG_PATH | Filesystem path on the Terraform Enterprise container containing Fluent Bit [OUTPUT] configuration | Yes, string. |
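The following is a minimal sketch of wiring this together with Docker; the host path, container path, container name, and image reference are assumptions:

```shell
# Mount a Fluent Bit [OUTPUT] snippet into the container and point
# TFE_LOG_FORWARDING_CONFIG_PATH at it. All paths and names are hypothetical.
docker run -d \
  -v /etc/terraform-enterprise/fluent-bit.conf:/etc/fluent-bit/fluent-bit.conf \
  -e TFE_LOG_FORWARDING_CONFIG_PATH=/etc/fluent-bit/fluent-bit.conf \
  --name terraform-enterprise \
  <terraform-enterprise-image>
```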
Limitations
The Fluent Bit solution provided in legacy Replicated Terraform Enterprise deployments emitted log entries that contained additional metadata keys, such as hostname and IP address, which let operators identify the source of each log entry. Unlike Replicated deployments, logs emitted by the Fluent Bit plugins available in Terraform Enterprise Flexible Deployments do not contain additional metadata attached to each log entry. This is due to the isolated nature of the Fluent Bit process within the Terraform Enterprise Docker container; by definition, processes within the Docker container are not exposed to host-level details.
Because of this, we strongly recommend using an external log forwarding solution that aligns with your existing observability tooling. Refer to External log forwarding for further discussion.
Additionally, built-in log forwarding is only available for Docker-deployed Terraform Enterprise installations. Terraform Enterprise deployed on Kubernetes does not support the built-in Fluent Bit.
Supported log destinations
You can only forward logs to one of the following external destinations. Refer to the example configuration for each destination for additional guidance.
Amazon CloudWatch
Sending to Amazon CloudWatch is only supported when Terraform Enterprise is located within AWS due to how Fluent Bit reads AWS credentials.
This example configuration forwards all logs to Amazon CloudWatch. Refer to the cloudwatch_logs Fluent Bit output plugin documentation for more information.

```
[OUTPUT]
    Name cloudwatch_logs
    Match *
    region us-east-1
    log_group_name example-log-group
    log_stream_name example-log-stream
    auto_create_group On
```
Note: In Terraform Enterprise installations using AWS external services, Fluent Bit will have access to the same AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables that are used for object storage.
Amazon S3
Sending to Amazon S3 is only supported when Terraform Enterprise is located within AWS due to how Fluent Bit reads AWS credentials.
This example configuration forwards all logs to Amazon S3. Refer to the s3 Fluent Bit output plugin documentation for more information.

```
[OUTPUT]
    Name s3
    Match *
    bucket example-bucket
    region us-east-1
    total_file_size 250M
    s3_key_format /$TAG/%Y/%m/%d/%H/%M/%S/$UUID.gz
    s3_key_format_tag_delimiters .-
```
Note: In Terraform Enterprise installations using AWS external services, Fluent Bit will have access to the same AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables that are used for object storage.
Azure Blob Storage
This example configuration forwards all logs to Azure Blob Storage. Refer to the azure_blob Fluent Bit output plugin documentation for more information.

```
[OUTPUT]
    name azure_blob
    match *
    account_name example-account-name
    shared_key example-access-key
    path logs
    container_name example-container-name
    auto_create_container on
    tls on
```
Azure Log Analytics
This example configuration forwards all logs to Azure Log Analytics. Refer to the azure Fluent Bit output plugin documentation for more information.

```
[OUTPUT]
    name azure
    match *
    Customer_ID example-log-analytics-workspace-id
    Shared_Key example-access-key
```
Datadog
This example configuration forwards all logs to Datadog. Refer to the datadog Fluent Bit output plugin documentation for more information.

```
[OUTPUT]
    Name datadog
    Match *
    Host http-intake.logs.datadoghq.com
    TLS on
    compress gzip
    apikey example-api-key
    dd_service terraform_enterprise
    dd_source docker
    dd_tags environment:development,owner:engineering
```
Forward
This example configuration forwards all logs to a listening Fluent Bit or Fluentd instance. Refer to the forward Fluent Bit output plugin documentation for more information.

```
[OUTPUT]
    Name forward
    Match *
    Host fluent.example.com
    Port 24224
```
Google Cloud Platform Cloud Logging
Sending to Google Cloud Platform Cloud Logging is only supported when Terraform Enterprise is located within GCP due to how Fluent Bit reads GCP credentials.
This example configuration forwards all logs to Google Cloud Platform Cloud Logging (formerly known as Stackdriver). Refer to the stackdriver Fluent Bit output plugin documentation for more information.

```
[OUTPUT]
    Name stackdriver
    Match *
    location us-east1
    namespace terraform_enterprise
    node_id example-hostname
    resource generic_node
```
Note: In Terraform Enterprise installations using GCP external services, Fluent Bit will have access to the GOOGLE_SERVICE_CREDENTIALS environment variable that points to a file containing the same GCP Service Account JSON credentials that are used for object storage.
Splunk Enterprise HTTP Event Collector (HEC)
This example configuration forwards all logs to Splunk Enterprise via the HTTP Event Collector (HEC) interface. Refer to the splunk Fluent Bit output plugin documentation for more information.

```
[OUTPUT]
    Name splunk
    Match *
    Host example-splunk-hec-endpoint
    Port 8088
    Splunk_Token example-splunk-token
```
Syslog
This example configuration forwards all logs to a Syslog-compatible endpoint. Refer to the syslog Fluent Bit output plugin documentation for more information.

```
[OUTPUT]
    Name syslog
    Match *
    host example-syslog-host
    port 514
    mode tcp
    syslog_message_key log
    syslog_severity_key PRIORITY
    syslog_hostname_key _HOSTNAME
    syslog_appname_key SYSLOG_IDENTIFIER
    syslog_procid_key _PID
```