# Logging RoR events into separate log files
It is possible to log RoR events into separate files, which helps with analysis and with keeping an eye on who accesses what.
## Requirements

- Elasticsearch > 2.4.x
- readonlyrest (RoR) > 1.16.13
- Careful YAML editing (indentation matters)
- Yes, log4j 1.x is end of life, but Elasticsearch 2.x users have no choice
## Story

As you know, Elasticsearch 2.x uses log4j (l4j), while Elasticsearch 5.x uses log4j2.

This means you will need to define your logging settings by following log4j's rules. Don't worry: if you have no clue about log4j and you just want to log who accesses what, this guide is here for that.
## logging.yml

This file is located in your Elasticsearch config folder.

The first setting to look at is:

```yaml
es.logger.level: INFO
```

Switching it to DEBUG or TRACE will increase the verbosity of your logging; WARN will reduce it.
In the `logger:` section, add the line:

```yaml
tech.beshu.ror: INFO, access_log_file
```

This redirects RoR logs to the appender `access_log_file`, defined later. You can change the verbosity from INFO to DEBUG, WARN, or ERROR as you wish.
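In context (indentation matters, as noted in the requirements), the section ends up looking like this; all other logger entries are omitted here:

```yaml
logger:
  # route RoR events to the dedicated access_log_file appender
  tech.beshu.ror: INFO, access_log_file
```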
In the `additivity:` section, add the line:

```yaml
tech.beshu.ror: false
```

This tells the logging subsystem to stop writing RoR logs into the main elasticsearch.log file.
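Again in context, nested under the section header:

```yaml
additivity:
  # without this, RoR events would still be duplicated into the main log
  tech.beshu.ror: false
```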
In the `appender:` section, add the block:

```yaml
access_log_file:
  type: dailyRollingFile
  file: ${path.logs}/${cluster.name}_access.log
  datePattern: "'.'yyyy-MM-dd"
  layout:
    type: pattern
    conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
  filter:
    1:
      type: org.apache.log4j.filter.ExpressionFilter
      Expression: "MSG ~= USR:elkg_kibana_test"
      acceptOnMatch: False
    2:
      type: org.apache.log4j.filter.ExpressionFilter
      Expression: "MSG ~= USR:logstash_BE_TEST"
      acceptOnMatch: False
```
This block acts like the log4j2 properties described in the documentation.

It will create files in your logs folder like:

- current day: Clustername_access.log
- past days: Clustername_access.log.YYYY-MM-DD

A new log file is created every day.
Now, about these filters: I wrote two so you can adjust them as you need. In this example, we tell log4j not to log lines that contain the strings "USR:elkg_kibana_test" or "USR:logstash_BE_TEST".

The objective of this example is to log INFO events coming from `tech.beshu.ror` to a separate file named Clustername_access.log for the current day, excluding all events related to the users elkg_kibana_test and logstash_BE_TEST. Yes, that is a long sentence, but I could not summarize it better than this.
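If you need the opposite behaviour (keep only one user's lines and drop everything else), here is a sketch along the same lines. `some_user` is a placeholder for a user of yours, and the trailing DenyAllFilter (part of standard log4j 1.x) rejects whatever the first filter did not explicitly accept:

```yaml
filter:
  1:
    # accept log lines for this (placeholder) user...
    type: org.apache.log4j.filter.ExpressionFilter
    Expression: "MSG ~= USR:some_user"
    acceptOnMatch: True
  2:
    # ...and deny everything else
    type: org.apache.log4j.varia.DenyAllFilter
```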
Here is a complete logging.yml example to give you an overall idea:
```yaml
# you can override this using by setting a system property, for example -Des.logger.level=DEBUG
es.logger.level: INFO
rootLogger: ${es.logger.level}, console, file
logger:
  # log action execution errors for easier debugging
  action: INFO
  # deprecation logging, turn to DEBUG to see them
  deprecation: INFO, deprecation_log_file
  # reduce the logging for aws, too much is logged under the default INFO
  com.amazonaws: WARN
  # aws will try to do some sketchy JMX stuff, but its not needed.
  com.amazonaws.jmx.SdkMBeanRegistrySupport: ERROR
  com.amazonaws.metrics.AwsSdkMetrics: ERROR
  org.apache.http: INFO
  org.elasticsearch.http: INFO
  # gateway
  #gateway: DEBUG
  #index.gateway: DEBUG
  # peer shard recovery
  #indices.recovery: DEBUG
  # discovery
  #discovery: TRACE
  index.search.slowlog: TRACE, index_search_slow_log_file
  index.indexing.slowlog: TRACE, index_indexing_slow_log_file
  # Plugin readonlyrest logging
  #plugin.readonlyrest.acl.blocks.rules.impl: INFO, access_log_file
  #plugin.readonlyrest.acl: DEBUG, access_log_file
  #tech.beshu.ror.acl: INFO, access_log_file
  #tech.beshu.ror.es: INFO, access_log_file
  #tech.beshu.ror.requestcontext: INFO, access_log_file
  tech.beshu.ror: INFO, access_log_file
additivity:
  index.search.slowlog: false
  index.indexing.slowlog: false
  deprecation: false
  #plugin.readonlyrest.acl.blocks.rules.impl: false
  #plugin.readonlyrest.acl: false
  #tech.beshu.ror.acl: false
  #tech.beshu.ror.es: false
  tech.beshu.ror: false
appender:
  console:
    type: console
    layout:
      type: consolePattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
  file:
    type: dailyRollingFile
    file: ${path.logs}/${cluster.name}.log
    datePattern: "'.'yyyy-MM-dd"
    layout:
      type: pattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %.10000m%n"
  # Use the following log4j-extras RollingFileAppender to enable gzip compression of log files.
  # For more information see https://logging.apache.org/log4j/extras/apidocs/org/apache/log4j/rolling/RollingFileAppender.html
  #file:
  #  type: extrasRollingFile
  #  file: ${path.logs}/${cluster.name}.log
  #  rollingPolicy: timeBased
  #  rollingPolicy.FileNamePattern: ${path.logs}/${cluster.name}.log.%d{yyyy-MM-dd}.gz
  #  layout:
  #    type: pattern
  #    conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
  deprecation_log_file:
    type: dailyRollingFile
    file: ${path.logs}/${cluster.name}_deprecation.log
    datePattern: "'.'yyyy-MM-dd"
    layout:
      type: pattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
  index_search_slow_log_file:
    type: dailyRollingFile
    file: ${path.logs}/${cluster.name}_index_search_slowlog.log
    datePattern: "'.'yyyy-MM-dd"
    layout:
      type: pattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
  index_indexing_slow_log_file:
    type: dailyRollingFile
    file: ${path.logs}/${cluster.name}_index_indexing_slowlog.log
    datePattern: "'.'yyyy-MM-dd"
    layout:
      type: pattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
  # Plugin readonlyrest logging
  access_log_file:
    type: dailyRollingFile
    file: ${path.logs}/${cluster.name}_access.log
    datePattern: "'.'yyyy-MM-dd"
    layout:
      type: pattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
    filter:
      1:
        type: org.apache.log4j.filter.ExpressionFilter
        Expression: "MSG ~= USR:elkg_kibana_test"
        acceptOnMatch: False
      2:
        type: org.apache.log4j.filter.ExpressionFilter
        Expression: "MSG ~= USR:logstash_BE_TEST"
        acceptOnMatch: False
```
If you need more details about log4j properties or functions, see the Apache log4j documentation.