Logstash configuration examples
You'll find some Logstash configuration examples below. To use some of these examples you need to add the Logstash grok expressions.
Configuration file(s)
Logstash can have many configuration files.
It is recommended to have 1 file per log index.
Depending on your taste you can choose between the following setups:
- 1 index per log file ==> 1 Logstash configuration file per log file
- 1 index for all ==> only 1 Logstash configuration file, then you rely on tags
In any case, configuration files must be placed in /etc/logstash/conf.d/*.conf
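For instance, with the "1 index for all" setup, a single configuration file can route every source into one shared index and tell the sources apart with tags. A minimal sketch, assuming placeholder paths, tag names and Elasticsearch host (adjust to your environment):
input {
    file {
        # One input block per log source; the tag identifies the source later on
        path => "/var/log/apache2/access.log"
        type => "all-logs"
        tags => [ "apache" ]
    }
    file {
        path => "/var/log/syslog"
        type => "all-logs"
        tags => [ "system" ]
    }
}
filter {
    if "apache" in [tags] {
        # apache-specific grok filters go here
    }
}
output {
    elasticsearch {
        protocol => "http"
        host => "localhost"
        # Every source ends up in the same daily index
        index => "all-logs-%{+YYYY.MM.dd}"
    }
}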
Basic config file
A configuration file must always have 3 sections: input, filter, output.
Example:
vim /etc/logstash/conf.d/logstash.conf
The following example processes a set of Log4j log files.
## List of complete inputs | filters | output available on the official website:
## http://logstash.net/docs/latest/index
## Configuration syntax: http://logstash.net/docs/latest/configuration
###### Data sources to process #####
input {
file {
path => [ "/home/qa1/catalina.base/logs/vehco/*.log" ]
type => "vehco-qa1"
}
}
filter {
# REMINDERS:
# >> you can check in Kibana which field name to use for each filter.
# >> you can find the list of GROK pattern over here: https://github.com/elasticsearch/logstash/blob/v1.4.2/patterns/grok-patterns
# All lines that do not start with %{TIMESTAMP} or ' ' + %{TIMESTAMP} belong to the previous event
multiline {
pattern => "(([\s]+)20[0-9]{2}-)|20[0-9]{2}-"
negate => true
what => "previous"
}
# QA1
if [type] == "vehco-qa1" {
grok {
patterns_dir => ["/etc/logstash/grok"]
match => [ "message", "%{LOG4J}" ]
add_tag => "vehco-log-qa1"
}
# Something went wrong: grok could not parse the line!
if "_grokparsefailure" in [tags] {
grok {
patterns_dir => "/etc/logstash/grok"
match => [ "message", "(?<content>(.|\r|\n)*)" ]
add_tag => "vehco-log-qa1-grok_error"
}
}
}
}
output {
elasticsearch {
cluster => "VEHCO"
protocol => "http"
# port => ""
host => "172.16.50.223"
node_name => "vehco-qa"
index => "vehco-qa-%{+YYYY.MM.dd}"
}
}
[!] Note 1:
Grok will normally break on rule match: it stops processing after the 1st pattern that matches and returns success.
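This behaviour is controlled by the grok option 'break_on_match', which defaults to true. A short illustration (the pattern names are placeholders):
grok {
    # break_on_match defaults to 'true': grok stops at the first pattern that matches
    break_on_match => true
    match => [
        "message", "%{MY_SPECIFIC_PATTERN}",
        "message", "%{MY_GENERIC_PATTERN}"
    ]
}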
[!] Note 2:
You can use generic glob expressions (wildcards) in the input file paths.
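For example (the path is illustrative; the file input resolves glob patterns, including recursive ones):
input {
    file {
        # Glob expression: every '.log' file, including files in sub-directories
        path => [ "/var/log/myApp/**/*.log" ]
        type => "myApp"
    }
}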
Multi-lines
Some log events can span N lines; these are called multi-line events. The multiline filter must always come before any GROK filter!
Handle spaces
A new event must NOT start with a space; a line that does belongs to the previous event.
# All lines starting with a space belong to the previous event
multiline {
pattern => "^\s"
negate => false
what => "previous"
}
Java exceptions
This makes all exception lines and stack traces belong to the previous event.
# All exceptions belong to the previous event
multiline {
pattern => "(([^\s]+)Exception.+)|(at:.+)"
negate => false
what => "previous"
}
LOG4J trick
If you only expect Log4j logs then you know that each line that does NOT start with a %{TIMESTAMP} is NOT a new event.
# All lines that do not start with %{TIMESTAMP} or ' ' + %{TIMESTAMP} belong to the previous event
multiline {
pattern => "(([\s]+)20[0-9]{2}-)|20[0-9]{2}-"
negate => true
what => "previous"
}
Grok failure
If your Grok expression is wrong, the line will be tagged as '_grokparsefailure'.
Since you know how to detect errors, you can attempt to apply an alternate filter to such lines.
filter {
# myApplication
if [type] == "myApp" {
grok {
...
}
# Something went wrong! :O Do something else instead!
if "_grokparsefailure" in [tags] {
grok {
patterns_dir => "/etc/logstash/grok"
match => [
"message", "(?<content>(.|\r|\n)*)"
]
}
}
}
}
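If the fallback pattern matches, you may also want to remove the '_grokparsefailure' tag so the event is no longer reported as an error. A possible addition inside the failure branch, using the standard mutate filter:
mutate {
    # The fallback grok matched after all, so drop the failure tag
    remove_tag => [ "_grokparsefailure" ]
}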
Common Logstash configurations
Apache2
Requirements:
- Make sure your logs are in "/var/log/apache2" or adjust the paths
- Make sure you're using the COMBINED log format (the default in Apache 2.4+)
Logstash configuration extract:
input {
file {
path => [ "/var/log/apache2/access.log", "/var/log/apache2/ssl_access.log", "/var/log/apache2/other_vhosts_access.log" ]
type => "apache-access"
}
file {
path => "/var/log/apache2/error.log"
type => "apache-error"
}
}
filter {
# ------------------------ Parse services logs into fields ---------------------------
# APACHE 2
if [type] == "apache-access" {
# To process log data (message's content) using some regex or precompiled GROK pattern
grok {
match => [ "message", "%{COMBINEDAPACHELOG}"]
}
# To extract the log's timestamp according to a date pattern
date {
match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z"]
}
# Extract browser information, if available.
if [agent] != "" {
useragent {
source => "agent"
}
}
if [clientip] != "" {
geoip {
source => "clientip"
target => "apache_clientip"
add_tag => [ "geoip" ]
}
}
}
if [type] == "apache-error" {
grok {
match => [ "message", "%{APACHEERRORLOG}"]
# Directory where to find the custom patterns
patterns_dir => ["/etc/logstash/grok"]
}
if [clientip] != "" {
geoip {
source => "clientip"
target => "apache_clientip"
add_tag => [ "geoip" ]
}
}
}
}
output {
...
}
IpTables
Requirements:
- Make sure you are logging dropped packets into a dedicated file. See Firewall log dropped
Logstash configuration extract:
input {
file {
path => "/var/log/iptables.log"
type => "iptables"
}
}
filter {
# IPTABLES
if [type] == "iptables" {
grok {
match => [
"message", "%{IPTABLES_IP}",
"message", "%{IPTABLES_ICMP}",
"message", "%{IPTABLES_GENERIC}"
]
patterns_dir => ["/etc/logstash/grok"]
}
# Something went wrong! :O
if "_grokparsefailure" in [tags] {
grok {
patterns_dir => "/etc/logstash/grok"
match => [ "message", "%{IPTABLES_ERROR}" ]
add_tag => "iptables-grok_error"
}
}
# By default 'geoip' is based on src_ip, which makes it easy to display the DROPPED INPUT packets :)
if [src_ip] != "" {
geoip {
source => "src_ip"
add_tag => [ "geoip" ]
target => "src_geoip"
}
}
if [dst_ip] != "" {
geoip {
source => "dst_ip"
add_tag => [ "geoip" ]
target => "dst_geoip"
}
}
}
}
output {
...
}
Fail2ban
Logstash configuration extract:
input {
file {
path => "/var/log/fail2ban.log"
type => "fail2ban"
}
}
filter {
# Fail2ban
if [type] == "fail2ban" {
grok {
match => ["message", "%{FAIL2BAN}"]
patterns_dir => ["/etc/logstash/grok"]
}
if [ban_ip] != "" {
geoip {
source => "ban_ip"
add_tag => [ "geoip" ]
target => "ban_geoip"
}
}
}
}
output {
...
}
Syslog
Logstash configuration extract:
input {
file {
path => [ "/var/log/syslog", "/var/log/auth.log", "/var/log/mail.info" ]
type => "syslog"
}
}
filter {
# SYSLOG
if [type] == "syslog" {
grok {
match => ["message", "%{SYSLOGBASE}"]
}
}
}
output {
...
}
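%{SYSLOGBASE} extracts the log time into a 'timestamp' field; a date filter can then use it as the event's own time. A possible addition to the syslog filter block (note that syslog timestamps carry no year):
date {
    # Day-of-month may be space-padded, hence the two patterns
    match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
}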
Tomcat
... To be done ...
Log4J
input {
file {
path => [ "/home/beta3/catalina.base/logs/vehco/*.log" ]
type => "myApp"
}
}
filter {
# All lines that do not start with %{TIMESTAMP} or ' ' + %{TIMESTAMP} belong to the previous event
multiline {
pattern => "(([\s]+)20[0-9]{2}-)|20[0-9]{2}-"
negate => true
what => "previous"
}
# myApplication
if [type] == "myApp" {
grok {
patterns_dir => ["/etc/logstash/grok"]
match => [
"message", "%{LOG4J}"
]
add_tag => "myApp-log"
}
# Something went wrong! :O
if "_grokparsefailure" in [tags] {
grok {
patterns_dir => "/etc/logstash/grok"
match => [
"message", "(?<content>(.|\r|\n)*)"
]
}
}
}
}
output {
...
}
VEHCO specific patterns
Now that you have created some application-specific GROK patterns, you need to update your Logstash configuration to use them.
input {
file {
path => [ "/var/log/vehco/*.log" ]
type => "vehco-rtd"
}
}
filter {
# VEHCO-RTD
if [type] == "vehco-rtd" {
grok {
patterns_dir => ["/etc/logstash/grok"]
match => [
"message", "%{RTD_TERMINAL}",
"message", "%{RTD_AUTH_START}",
"message", "%{RTD_AUTH_DONE}"
]
}
}
}
output {
...
}