Filebeat To Graylog
If you run the audit daemon on your Linux distribution, you might notice that some
of the most valuable information produced by auditd is not transmitted when you
enable syslog forwarding to Graylog. By default, these messages are written to
/var/log/audit/audit.log, a file that the auditd process writes directly without
going through syslog. In this post, we will walk through the steps to capture this
information and bring it into your Graylog instance, so you can get insight into
what users do on your Linux servers. This is similar to our earlier blog post,
"Back to Basics: Enhance Windows Security with Sysmon and Graylog", but now for Linux.
Because /var/log/audit/audit.log is normally readable only by root, running the
collector as root is by far the simplest solution. Please check first that this
does not violate any policies in your environment.
FILEBEAT 5.X
As with any other log file that should be shipped with Filebeat, the best solution
is to use a dedicated prospector that contains the configuration specific to that
file. More details can be found in the Filebeat documentation.
filebeat:
  prospectors:
  - encoding: plain
    fields:
      collector_node_id: c00010.lan
      type: auditd
    ignore_older: 0
    paths:
    - /var/log/audit/audit.log
    scan_frequency: 10s
    tail_files: true
    type: log
output:
  logstash:
    hosts:
    - graylog001.lan:5044
    - graylog002.lan:5044
    - graylog003.lan:5044
    loadbalance: true
    #
    # to enhance the security of this sensitive data, enable client certificates
    # and certificate verification
    # ssl.certificate_authorities: ["/etc/ca.crt"]
    # ssl.certificate: "/etc/client.crt"
    # ssl.key: "/etc/client.key"
    # ssl.verification_mode: full
FILEBEAT 6.X
In version 6, Filebeat introduced the concept of modules. Modules are designed to
work in an Elastic Stack environment and provide pre-built parsers for Logstash and
dashboards for Kibana. However, since Graylog does the parsing, analysis and
visualization in place of Logstash and Kibana, neither of those two components
applies.
The modules also create a dedicated index in Elasticsearch, but Graylog already
manages all indices in Elasticsearch, so for most Graylog users these modules are
of little benefit.
The configuration file settings stay the same with Filebeat 6 as they were for
Filebeat 5.
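If you are on Filebeat 6.3 or newer, you can also use the renamed inputs section instead of prospectors (the older syntax still works in 6.x but is deprecated). A minimal sketch, mirroring the configuration above:
filebeat.inputs:
- type: log
  encoding: plain
  paths:
  - /var/log/audit/audit.log
  scan_frequency: 10s
  tail_files: true
  fields:
    collector_node_id: c00010.lan
    type: auditd
output.logstash:
  hosts:
  - graylog001.lan:5044
  - graylog002.lan:5044
  - graylog003.lan:5044
  loadbalance: true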
GRAYLOG COLLECTOR-SIDECAR
Use the Collector-Sidecar to configure Filebeat if you already run it in your
environment. Just add a new configuration, with a matching tag, that includes the
audit log file. Remember to add type auditd to the fields of that configuration,
so that the rules below will work.
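The file input part of the filebeat.yml that the Sidecar renders should then end up looking roughly like the following fragment (the exact layout depends on your Sidecar and Filebeat versions); the important pieces are the path and the custom type field:
filebeat:
  prospectors:
  - paths:
    - /var/log/audit/audit.log
    encoding: plain
    fields:
      type: auditd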
Use your own certificate authority to create a certificate for the Graylog input
that Filebeat can verify when it connects to the input.
In addition, you could create client certificates, so that Graylog accepts messages
only from clients that authenticate with a certificate.
Graylog and the collector would each need their own certificate, plus the
certificate authority certificate to verify the certificate of the other side.
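On the Filebeat side, such a mutual TLS setup would look roughly like the sketch below (the file paths are placeholders); on the Graylog side, the Beats input offers matching options for the TLS certificate and key files, for requiring TLS client authentication, and for the trusted client certificates.
output:
  logstash:
    hosts:
    - graylog001.lan:5044
    loadbalance: true
    # CA that signed the certificate of the Graylog Beats input
    ssl.certificate_authorities: ["/etc/filebeat/ca.crt"]
    # client certificate and key presented to Graylog
    ssl.certificate: "/etc/filebeat/client.crt"
    ssl.key: "/etc/filebeat/client.key"
    ssl.verification_mode: full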
RULES
rule "auditd_identify_and_tag"
?
// we use only one rule to identify if this is an auditd log file
// in all following rules it is possible to check just this single field.
//
// following rules can just check for:
// has_field("is_auditd")
?
when
?
// put any identifier you have for the auditd log file
// in this rule
has_field("facility") AND
to_string($message.facility) == "filebeat" AND
//
// the following rule only work if the auditd log file is
// in the default location
//
// has_field("file") AND
// to_string($message.file) == "/var/log/audit/audit.log" AND
// you need to adjust that if you change the field in the collector
configuration!
has_field("type") AND
to_string($message.type) == "auditd"
then
set_field("is_auditd", true);
end
rule "auditd_kv_ex_prefix"
when
has_field("is_auditd")
then
end
rule "auditd_extract_time_sequence"
when
has_field("is_auditd") AND
has_field("auditd_msg")
then
set_fields(
grok(
pattern: "audit\\(%{NUMBER:auditd_log_epoch}:%
{NUMBER:auditd_log_sequence}\\):",
value: to_string($message.auditd_msg),
only_named_captures: true
)
);
// if the epoch was extracted successfully, create a human readable timestamp
// be aware that the milliseconds will be cut-off as a bug in the lib that is
used
// the time zone might be adjusted to your wanted timezone, default UTC
set_field("auditd_log_time",
flex_parse_date(
value: to_string($message.auditd_log_epoch),
default: now(),
timezone: "UTC"
)
);
remove_field("auditd_msg");
end
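To illustrate what the three rules produce, here is a simplified, made-up audit.log line of the kind auditd writes for an SSH session:
type=CRYPTO_SESSION msg=audit(1523880001.234:567): pid=2811 uid=0 ses=3 op=start cipher=chacha20-poly1305@openssh.com laddr=192.0.2.1 lport=22 exe="/usr/sbin/sshd" res=success
The key-value extraction turns every key=value pair into a field such as auditd_type, auditd_pid or auditd_cipher, and the grok pattern above then splits the audit(...) part of auditd_msg into auditd_log_epoch and auditd_log_sequence.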
PIPELINE
Next, create a new processing pipeline with three stages. In the first stage, place
the rule auditd_identify_and_tag; in the second stage, auditd_kv_ex_prefix; and in
the third, auditd_extract_time_sequence. After this pipeline is connected to a
stream of messages (System > Pipelines > Manage Pipelines > Edit > Edit Connections),
it will start working. It should look similar to the following picture.
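If you prefer to look at the pipeline as source rather than clicking through the UI, the three stages described above would correspond to something like this (the pipeline name and stage numbers are arbitrary):
pipeline "Auditd"
stage 0 match all
  rule "auditd_identify_and_tag";
stage 1 match all
  rule "auditd_kv_ex_prefix";
stage 2 match all
  rule "auditd_extract_time_sequence";
end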
For instance, one possibly useful bit of information you might want to monitor is
which ciphers are used when connecting to the system (run QuickValues on
auditd_cipher, or search for _exists_:auditd_cipher).