Event Filtering and Sink Configuration
Boundary 0.8 increases observability with the general availability of event logging for operators, allowing for more fine-grained visibility when managing Boundary clusters. System information can now be logged in a well-defined, structured format that gives operators better insight into emitted events.
When enabled, event logs are the only type of logging Boundary performs; standard system information and debug logs will no longer appear in stdout. Event logs are filterable by event type and other defined expressions, although HCLog output is still currently available.
This tutorial demonstrates the basics of how to define and configure a logging event sink, and then visualize events using Elasticsearch and Kibana.
Prerequisites
This tutorial assumes that you understand how to Start a Development Environment.
- Docker is installed
- Docker Compose is installed

Tip

Docker Desktop 20.10 and above includes the Docker Compose binary and does not require a separate installation.

- A Boundary binary greater than 0.8.1 in your PATH
- Terraform 0.13.0 or greater in your PATH
Logging in Dev mode
Boundary can be started in dev mode using the -event-allow-filter option to specify what kinds of events should be logged. It is important to know what kinds of events are emitted by Boundary in order to know what should be logged. For example, events whose "/data/request_info/path" field contains ":authenticate" reference authentication events. Events whose "/data/op" field contains ".createClientConn" reference client connection events.
boundary dev
can be started with these pre-configured event sinks like this:
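A minimal sketch of such a command, using the two filters described above (the exact filters for your environment may differ):

```shell
# Emit only authentication and client-connection events to stdout.
# Each filter is passed with -event-allow-filter; single quotes preserve
# the inner double quotes required by the filter syntax.
boundary dev \
  -event-allow-filter '"/data/request_info/path" contains ":authenticate"' \
  -event-allow-filter '"/data/op" contains ".createClientConn"'
```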
These events are then emitted to stdout. By default, boundary dev logs all events to stdout, unless otherwise specified using -event-allow-filter.
Get Setup
The lab environment for this tutorial uses Docker Compose to deploy these containers:
- Boundary controller server
- Boundary worker server
- Boundary Postgres database
- Elasticsearch
- Kibana
- Filebeat
- A Postgres target
This tutorial includes an "ELK" stack (really an EFK stack) with:
- Elasticsearch for persisting and searching event logs
- Filebeat to collect and send event logs to Elasticsearch
- Kibana to visualize events
To learn more about the various Boundary components, refer back to the Start a Development Environment tutorial.
Deploy the lab environment
The lab environment can be downloaded or cloned from the following GitHub repository:
In your terminal, clone the repository to get the example files locally:
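For example (the organization path in this URL is an assumption; use the repository link above if it differs):

```shell
git clone https://github.com/hashicorp/learn-boundary-event-logging.git
```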
Move into the learn-boundary-event-logging folder. Ensure that you are in the correct directory by listing its contents.
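For example:

```shell
cd learn-boundary-event-logging
ls
```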
The repository contains the following files:
- auditlogs/: A shared directory for log files.
- deploy: A script used to deploy and tear down the Docker Compose configuration.
- filebeat.docker.yml: The Filebeat config for sending event logs to Elasticsearch.
- compose/docker-compose.yml: The Docker Compose configuration file describing how to provision and network the Boundary cluster.
- compose/controller.hcl: The controller configuration file.
- compose/worker.hcl: The worker configuration file.
- postgres/postgresql.conf: The Boundary database config file.
- terraform/main.tf: The Terraform provisioning instructions using the Boundary provider.
This tutorial makes it easy to launch the test environment with the deploy script. Any resource deprecation warnings in the output can safely be ignored.
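A sketch of launching the environment (the exact subcommand is defined in the deploy script itself; "all" is an assumption here):

```shell
./deploy all
```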
The user details are printed in the shell output, and can also be viewed by inspecting the terraform/terraform.tfstate file. You will need the user1 auth_method_id to authenticate via the CLI and establish sessions later on. Export this value as an environment variable:
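A sketch of the export; the variable name AUTH_METHOD_ID and the ID shown are placeholders, so substitute the auth method ID printed by your deploy output:

```shell
export AUTH_METHOD_ID=ampw_1234567890
```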
You can tear down the environment at any time by executing ./deploy cleanup.

To verify that the environment deployed correctly, print the running docker containers in the boundary deployment. First, export the Docker Compose project name, boundary, as an environment variable. Then print the containers created using Compose.
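A sketch of these verification steps, assuming Docker Compose v2 (docker compose) and the compose file path used by this repository:

```shell
export COMPOSE_PROJECT_NAME=boundary
docker compose -f compose/docker-compose.yml ps
```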
This tutorial will examine the configuration of event sink logs on the controller and worker containers.
Event sinks
An event sink is a location where events can be written to. Sinks can be configured to allow or deny event types using filter syntax.
Common sink formats include cloudevents and hclog, which can be encoded as text or JSON.
To better understand events, examine the stderr output of the running controller container by checking its logs.
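For example (the controller container name is an assumption; confirm it with docker ps):

```shell
docker logs boundary-controller-1
```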
These events were emitted as part of the provisioning process when the deploy
script was executed.
An event sink is set up in the configuration file for a controller or worker
server. Below are the contents of the events
stanza in the
compose/controller.hcl
configuration file:
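A sketch of that stanza, reconstructed from the descriptions that follow (sink names, descriptions, and exact attribute values are assumptions; compose/controller.hcl in the repository is authoritative):

```hcl
events {
  audit_enabled       = true
  observation_enabled = true
  sysevents_enabled   = true

  # Send all event types to stderr
  sink "stderr" {
    name        = "all-events"
    description = "All events sent to stderr"
    event_types = ["*"]
    format      = "cloudevents-json"
  }

  # Send audit events to a file inside the container
  sink {
    name        = "controller-audit-sink"
    description = "Audit events sent to a file"
    event_types = ["audit"]
    format      = "cloudevents-json"
    file {
      path      = "/logs"
      file_name = "controller.log"
    }
  }
}
```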
The types of events that should be emitted by Boundary are declared at the top
of the events
stanza. In this example audit, observation and sysevents are all
enabled.
- audit_enabled: Audit events can specify what data is included and different options for redacting and encrypting that data.
- observation_enabled: Specifies if observation events should be emitted.
- sysevents_enabled: Specifies if system events should be emitted.
The sink
stanza is used to declare a location for emitted events to be sent.
Two types of sinks are available:

- stderr: The stderr sink configures Boundary to send events to stderr.
- file: The file sink configures Boundary to send events to a file.
The sink stanza can be repeated to make Boundary send events to multiple sinks, but each file sink must have a unique path + file_name.
Default events
When no event stanza is specified, the following default is used:
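A sketch of the implied default, based on the behavior described in this tutorial (exact defaults may vary by Boundary version):

```hcl
events {
  # With no events stanza, all emitted events go to stderr
  # in cloudevents-json format.
  sink "stderr" {
    name        = "default"
    event_types = ["*"]
    format      = "cloudevents-json"
  }
}
```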
While this configuration is the default, if other sinks are configured it must be declared explicitly to send events to stderr. If logs should be printed to stderr on the controller or workers, the following configuration must be present:
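For example (a sketch; the name and description are assumptions):

```hcl
sink "stderr" {
  name        = "all-events"
  description = "All events sent to stderr"
  event_types = ["*"]
  format      = "cloudevents-json"
}
```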
File sinks
The second sink in the events
stanza declares a file sink:
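A sketch of that sink, matching the attributes discussed below:

```hcl
sink {
  name        = "controller-audit-sink"
  description = "Audit events sent to a file"
  event_types = ["audit"]
  format      = "cloudevents-json"
  file {
    path      = "/logs"
    file_name = "controller.log"
  }
}
```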
Each sink declares the type of events that should be written to it. Here all
"audit"
events will be written in the cloudevents-json
format.
In the file block, the path and file_name attributes declare where this file should be stored on the local filesystem. Note that this file will be written to /logs/controller.log within the controller container. The lab environment for this tutorial, defined in the docker-compose.yml file, sets up the learn-boundary-event-logging/auditlogs/ path on the host as a shared directory for the controller and worker docker containers, available inside the containers at /logs.
This can be verified by printing the contents of the log file on the
controller.
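For example (container name assumed; confirm it with docker ps):

```shell
docker exec boundary-controller-1 cat /logs/controller.log
```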
The contents of this file should also be available within the
learn-boundary-event-logging/auditlogs/controller.log
file on your local
machine. Use the auditlogs/
directory to view the log files for the rest of
this tutorial.
Event sink filtering
Event sinks can be configured to filter events, so that a subset of events can be sent to a sink. This is useful for tracking when Boundary produces certain events relevant to operators or sysadmins, such as authentication or session management events.
For example, below is an authentication event:
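An abbreviated sketch of such an event (most fields are omitted; only the data.request_info.path field relevant to filtering is shown):

```json
{
  "data": {
    "request_info": {
      "path": "/v1/auth-methods/ampw_IihbGh5KA1:authenticate"
    }
  }
}
```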
For authentication events, the "path" for the API request contains :authenticate. In this example the full "path" is "/v1/auth-methods/ampw_IihbGh5KA1:authenticate".
To create a sink for these events, the following filter captures events with a "path" containing :authenticate:
"/data/request_info/path" contains ":authenticate"
Define an authentication sink
Next, define a new file sink that only captures authentication events.
Open the compose/controller.hcl
config file.
Uncomment lines 84 - 96, which define the following sink:
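The uncommented sink should look roughly like the following sketch (the name, description, and event types are reconstructed from the rest of this tutorial; the file in the repository is authoritative):

```hcl
sink {
  name        = "auth-sink"
  description = "Authentication events sent to a file"
  event_types = ["audit"]
  format      = "cloudevents-json"
  allow_filters = [
    "\"/data/request_info/path\" contains \":authenticate\"",
  ]
  file {
    path      = "/logs"
    file_name = "auth.log"
  }
}
```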
Save this file.
Notice the allow_filters
syntax for authentication events:
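From the sink above:

```hcl
allow_filters = [
  "\"/data/request_info/path\" contains \":authenticate\"",
]
```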
The allow_filters sink attribute accepts a list of filters combined as a logical "or", meaning that only events matching at least one of the defined filters will be captured. Sinks also support the deny_filters attribute, which instead defines what events should not be captured. Event sink filtering uses the standard filter syntax used elsewhere in Boundary.
Warning
HCL configuration files require the use of double-quotes when defining parameters. This means the filter must be surrounded with double-quotes, and escape syntax (\) must be used whenever a literal " is written.
The earlier filter:
"/data/request_info/path" contains ":authenticate"
must then be written as:
"\"/data/request_info/path\" contains \":authenticate\""
Use escape syntax when defining any filters within a Boundary HCL config file.
Restart the controller to apply the new config:
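For example (container name assumed; confirm it with docker ps):

```shell
docker restart boundary-controller-1
```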
Next, authenticate as user1 using the password password.
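A sketch of the authentication command, assuming the controller API is reachable at the Boundary CLI's default address and that AUTH_METHOD_ID was exported earlier:

```shell
boundary authenticate password \
  -auth-method-id $AUTH_METHOD_ID \
  -login-name user1
```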
Check the shared directory and locate the new
learn-boundary-event-logging/auditlogs/auth.log
file. It should contain a
single event from the recent authentication as user1. Future authentication
events will be logged here, too.
Define an authorize session sink
Another useful event sink might be dedicated to requests to authorize sessions
to targets. These events are already captured as audit
events in the
controller's log file.
An example of a session authorization request is printed below.
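An abbreviated sketch (only the path field relevant to filtering is shown):

```json
{
  "data": {
    "request_info": {
      "path": "/v1/targets/postgres:authorize-session"
    }
  }
}
```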
For session authorization events, the "path" for the API request contains :authorize-session. In this example the full "path" is "/v1/targets/postgres:authorize-session".
To create a sink for these events, the following filter captures events with a "path" containing :authorize-session:
"/data/request_info/path" contains ":authorize-session"
In addition to authorizations, Boundary sessions also produce the following events:
- AuthorizeConnection
- ActivateSession
- ConnectConnection
- LookupSession
- CancelSession
- CloseConnection
For these session events, the "method" for the API request contains SessionService. An example of an AuthorizeConnection request is printed below:
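An abbreviated sketch (only the method field relevant to filtering is shown):

```json
{
  "data": {
    "request_info": {
      "method": "/controller.servers.services.v1.SessionService/AuthorizeConnection"
    }
  }
}
```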
In this example the full "method" is "/controller.servers.services.v1.SessionService/AuthorizeConnection".
The following filter captures events related to session management by filtering for a "method" containing SessionService:
"/data/request_info/method" contains "SessionService"
Now that you understand the filter syntax needed to capture session events, define a new file sink that captures session events, including authorizations and session services.
Open the compose/controller.hcl
config file.
Copy the following sink and paste it beneath the auth-sink in the compose/controller.hcl file:
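A sketch of that sink (note the escaped quotes; the repository file is authoritative):

```hcl
sink {
  name        = "session-sink"
  description = "Session events sent to a file"
  event_types = ["audit"]
  format      = "cloudevents-json"
  allow_filters = [
    "\"/data/request_info/path\" contains \":authorize-session\"",
    "\"/data/request_info/method\" contains \"SessionService\"",
  ]
  file {
    path      = "/logs"
    file_name = "sessions.log"
  }
}
```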
Ensure that the session sink is pasted within the events{} stanza. There should be a closing } following the sink you copied above.
Note that escape syntax is used again when defining the filter.
The full contents of the events stanza within the compose/controller.hcl file are printed below for reference.
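A sketch of the full stanza, combining the sinks discussed so far (compose/controller.hcl is authoritative):

```hcl
events {
  audit_enabled       = true
  observation_enabled = true
  sysevents_enabled   = true

  sink "stderr" {
    name        = "all-events"
    description = "All events sent to stderr"
    event_types = ["*"]
    format      = "cloudevents-json"
  }

  sink {
    name        = "controller-audit-sink"
    description = "Audit events sent to a file"
    event_types = ["audit"]
    format      = "cloudevents-json"
    file {
      path      = "/logs"
      file_name = "controller.log"
    }
  }

  sink {
    name        = "auth-sink"
    description = "Authentication events sent to a file"
    event_types = ["audit"]
    format      = "cloudevents-json"
    allow_filters = [
      "\"/data/request_info/path\" contains \":authenticate\"",
    ]
    file {
      path      = "/logs"
      file_name = "auth.log"
    }
  }

  sink {
    name        = "session-sink"
    description = "Session events sent to a file"
    event_types = ["audit"]
    format      = "cloudevents-json"
    allow_filters = [
      "\"/data/request_info/path\" contains \":authorize-session\"",
      "\"/data/request_info/method\" contains \"SessionService\"",
    ]
    file {
      path      = "/logs"
      file_name = "sessions.log"
    }
  }
}
```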
Save this file and restart the controller container to apply the new configuration.
To test this event sink, a postgres target is included in the Docker Compose deployment.
Wait a moment for the controller to restart, and then establish a session to the postgres target as the postgres user using boundary connect postgres.
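A sketch of the connection command; the target ID shown is a placeholder, so use the postgres target ID from the Terraform output:

```shell
boundary connect postgres -target-id ttcp_1234567890 -username postgres
```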
Enter the password postgres
when prompted.
Enter exit
to close the connection.
Check the shared auditlogs/
directory and locate the new
learn-boundary-event-logging/auditlogs/sessions.log
file.
It should contain an authorize-session
event and several SessionService
events, including LookupSession
, ActivateSession
, AuthorizeConnection
,
CloseConnection, and CancelSession. The sink filter defined earlier
captures these events, which all contain SessionService
in the
data.request_info.method
JSON data. If the operator only wanted to capture a
subset of these events, a more granular filter could be created to allow only
those events, such as AuthorizeConnection
.
Define a worker sink
Next you will set up an event sink for the worker.
Begin by checking the logs on the boundary-worker-1
container.
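For example:

```shell
docker logs boundary-worker-1
```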
You will notice a single .createClientConn
event. This is produced by the
default events behavior, which sends everything to stderr
when no event
configuration is defined.
Open the compose/worker.hcl
configuration file. Add the following events
stanza to the end of the file:
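A sketch of the stanza described below (sink names are assumptions; compose/worker.hcl is authoritative):

```hcl
events {
  audit_enabled       = true
  observation_enabled = true
  sysevents_enabled   = true

  sink "stderr" {
    name        = "all-events"
    description = "All events sent to stderr"
    event_types = ["*"]
    format      = "cloudevents-json"
  }

  sink {
    name        = "worker-audit-sink"
    description = "All events sent to a file"
    event_types = ["*"]
    format      = "cloudevents-json"
    file {
      path      = "/logs"
      file_name = "worker.log"
    }
  }
}
```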
Save this file.
This configuration is nearly identical to the controller event sink. It sends
all events to stderr
in the cloudevents-json
format, and defines a file sink
for all event types ("*"
), instead of just audit events. It will save the file
as auditlogs/worker.log
in the shared docker-compose directory, like the
controller does.
Restart the worker container.
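For example:

```shell
docker restart boundary-worker-1
```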
The worker primarily logs events from connection errors, such as when the worker attempts to dial the controller on port 9201. This may happen when either the controller or the worker is restarted.
Examine the auditlogs/worker.log
file. It should contain a single
createClientConn
event from when the worker established a connection with the
controller upon restart.
Restart the controller container.
The worker.log
file should now contain several rpc error
messages, stating
Error while dialing unable to dial to controller
. The worker will stop
producing these events once the controller is available again.
Event Visualization
Event visualization enables operators to source data in various formats and then search, analyze and visualize that data in real time. Elastic provides a common architecture for event visualization built on the "elastic stack", or an "ELK stack". Typically this setup is comprised of Elasticsearch, Kibana, Beats, and Logstash.
This tutorial utilizes an "EFK" stack, where Logstash is replaced with Filebeat for collecting and sending event logs to Elasticsearch. Kibana is a frontend application that provides search and visualization for the events indexed by Elasticsearch.
Configure Kibana
Kibana is pre-configured for this tutorial. You do not need to set up Elasticsearch or Kibana.
If you want to learn more about how Elasticsearch and Kibana were deployed, examine the following files:
- auditlogs/: A shared directory for log files.
- deploy: A script used to deploy and tear down the EFK stack.
- filebeat.docker.yml: The Filebeat config for sending event logs to Elasticsearch.
- compose/docker-compose.yml: The Docker Compose configuration file describing how to provision and network the EFK stack containers.
- compose/.env: A set of tunable environment variables for deploying Elasticsearch and Kibana.
The compose/docker-compose.yml
file describes the configurations for the
setup-elastic
, elasticsearch
, kibana
and filebeat
containers. Filebeat
has a dedicated config file located at
learn-boundary-event-logging/filebeat.docker.yml
. A set of environment
variables that control the deployment are located in compose/.env
. In order to
provide access for Elasticsearch to the logs created by Docker, the deploy
script changes the permissions on the auditlogs/
directory to allow read and
write access to everyone.
This basic Elasticsearch configuration utilizes Filebeat to send all .log
files from the auditlogs/
directory to https://elasticsearch:9200
, where
Elasticsearch is listening for data sources. Kibana acts as a frontend for
Elasticsearch, and is accessible on http://localhost:5601
.
This configuration is already correct. Open your web browser and navigate to http://localhost:5601/app/management/kibana/dataViews to view the Kibana dashboard.
Log in using the following credentials (these are defined in compose/.env):

- Email or username: elastic
- Password: elastic
Create a data view
Upon logging in you should be presented with a page stating "You have data in Elasticsearch", prompting you to create a new data view. If you do not see this page, visit http://localhost:5601/app/management/kibana/dataViews directly.
Click + Create data view.
Under the Create data view page, enter filebeat-*
into the Name field. A
message should appear stating that "Your index pattern matches 1 source."
Note
If a data view is not automatically discovered, check the
permissions on the learn-boundary-event-logging/auditlogs/
directory. Execute
chmod 777 auditlogs/
and refresh the data views page.
Leave the Timestamp field set to @timestamp
. Click Create data view
when finished.
You will be redirected to the filebeat-*
Management page.
Open a new browser tab, and navigate to http://localhost:5601/app/discover#.
The discover dashboard shows recent events, allowing you to inspect their details and search for events over a specific time period.
Visualize audit logs
Earlier the following file sinks were configured:
- auth-sink
- controller-audit-sink
- worker-audit-sink
- session-sink
These sinks resulted in the creation of the following files in the auditlogs/
directory:
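For example, listing the directory should show something like the following (a sketch; exact contents depend on which sinks have received events):

```shell
$ ls auditlogs/
auth.log  controller.log  sessions.log  worker.log
```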
This data has been imported into Kibana, and can be searched for in the Discover dashboard.
Similar to how the sink was written based on the content of the log entry,
common search queries can be constructed by examining the request_info.method
or request_info.path
json data from the log.
Click on the Search box and then enter the following query:
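The query below is a sketch based on the field names shown later in this tutorial:

```
json.data.request_info.method: "/controller.servers.services.v1.SessionService/ActivateSession"
```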
Click Update after the search query has been entered.
This view shows all log entries that describe a request made to the
v1/SessionService/ActivateSession
endpoint.
Unlike Boundary's filtering syntax, KQL requires exact matching for search values (although fields can use wildcard and fuzzy matching). This means you cannot easily search for all entries containing a substring such as "SessionService", the way the event sink filter does.
By default, Kibana uses the Kibana Query Language (KQL) to parse queries. KQL supports boolean and, or, and not operators to create complex queries.
For KQL, this means searching for all the following events directly:
json.data.request_info.path: "/v1/targets/postgres:authorize-session"
json.data.request_info.method: "/controller.servers.services.v1.SessionService/ActivateSession"
json.data.request_info.method: "/controller.servers.services.v1.SessionService/AuthorizeConnection"
json.data.request_info.method: "/controller.servers.services.v1.SessionService/CancelSession"
json.data.request_info.method: "/controller.servers.services.v1.SessionService/CloseConnection"
json.data.request_info.method: "/controller.servers.services.v1.SessionService/ConnectConnection"
json.data.request_info.method: "/controller.servers.services.v1.SessionService/LookupSession"
To search for all the events collected by the session-sink
file sink using
KQL, enter the following search query:
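A sketch of the combined query, joining each exact value with or (shown across multiple lines for readability; enter it as a single query in the search bar):

```
json.data.request_info.path: "/v1/targets/postgres:authorize-session" or
json.data.request_info.method: "/controller.servers.services.v1.SessionService/ActivateSession" or
json.data.request_info.method: "/controller.servers.services.v1.SessionService/AuthorizeConnection" or
json.data.request_info.method: "/controller.servers.services.v1.SessionService/CancelSession" or
json.data.request_info.method: "/controller.servers.services.v1.SessionService/CloseConnection" or
json.data.request_info.method: "/controller.servers.services.v1.SessionService/ConnectConnection" or
json.data.request_info.method: "/controller.servers.services.v1.SessionService/LookupSession"
```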
This query may seem awkward. A simpler query can be made using Lucene query syntax, which accepts regular expressions in queries. Lucene is also built into Kibana, and can be enabled by disabling KQL.
Locate the KQL button to the right of the search bar. Click on it, and then toggle the KQL switch off to enable Lucene.
The KQL button should now be replaced with Lucene.
Enter the following Lucene query to search for the session-sink events:
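A sketch of an equivalent Lucene query using regular expressions (this assumes the fields are mapped as keyword fields; adjust as needed):

```
json.data.request_info.method:/.*SessionService.*/ OR json.data.request_info.path:/.*authorize-session.*/
```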
Both the KQL and Lucene searches will return all the events created by the session-sink filter:
"/data/request_info/path" contains ":authorize-session"
"/data/request_info/method" contains "SessionService"
Other useful information can also be gathered with Kibana, such as metrics related to health checks.
For example, the controller container has a healthcheck defined in the
compose/docker-compose.yml
file. It queries the http://boundary:9203/health
endpoint every 10 seconds to determine if the controller is healthy:
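A sketch of what such a healthcheck looks like in Compose syntax (the exact test command, timeout, and retries in compose/docker-compose.yml may differ):

```yaml
healthcheck:
  test: ["CMD", "wget", "-q", "-O", "/dev/null", "http://boundary:9203/health"]
  interval: 10s
  timeout: 5s
  retries: 5
```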
To view these requests, perform the following search with Lucene:
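A sketch using Lucene regex matching on the request path (field mapping assumptions as above):

```
json.data.request_info.path:/.*health.*/
```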
Click Update after the search query has been entered.
This query can also be performed using the following KQL query:
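A sketch of the equivalent KQL, assuming the request path is recorded as /health:

```
json.data.request_info.path: "/health"
```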
Cleanup and teardown
The Boundary cluster containers and network resources can be cleaned up
using the provided deploy
script.
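For example:

```shell
./deploy cleanup
```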
Check your work with a quick docker ps
and ensure there are no more containers
with the boundary-
prefix leftover. If unexpected containers still exist,
execute docker rm -f CONTAINER_NAME
against each to remove them.