Access log in a separate file

Hi all,

**ROR Version**: Enterprise 1.66.1_es8.18.3

**Kibana Version**: 8.18.3

**Elasticsearch Version**: 8.18.3

Steps to reproduce the issue
I tried using the old config:

appender.access_log_rolling.type = RollingFile
appender.access_log_rolling.name = access_log_rolling
appender.access_log_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_access.log
appender.access_log_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.-10000m%n
appender.access_log_rolling.layout.type = PatternLayout
appender.access_log_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_access-%d{yyyy-MM-dd}-%i.log.gz
appender.access_log_rolling.policies.type = Policies
appender.access_log_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.access_log_rolling.policies.time.interval = 1
appender.access_log_rolling.policies.time.modulate = true
appender.access_log_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.access_log_rolling.policies.size.size = 1024MB
appender.access_log_rolling.strategy.type = DefaultRolloverStrategy
appender.access_log_rolling.strategy.action.type = Delete
appender.access_log_rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.access_log_rolling.strategy.action.condition.type = IfFileName
appender.access_log_rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}_access-*
appender.access_log_rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.access_log_rolling.strategy.action.condition.nested_condition.exceeds = 2GB
logger.access_log_rolling.name = tech.beshu.ror
logger.access_log_rolling.level = info
logger.access_log_rolling.appenderRef.access_log_rolling.ref = access_log_rolling
logger.access_log_rolling.additivity = false
logger.access_log_rolling.filter.regex.type = RegexFilter
logger.access_log_rolling.filter.regex.regex = .*(USR:(kibana|beat|logstash)),.*|.*(name:( \'LOCALHOST\-only\ access\')),.*
logger.access_log_rolling.filter.regex.onMatch = DENY
logger.access_log_rolling.filter.regex.onMismatch = ACCEPT
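The RegexFilter above drops matching lines (`onMatch = DENY`), i.e. requests from the kibana/beat/logstash service users never reach the file. The same filtering idea can be sketched with `grep -Ev` (pattern simplified for illustration):

```shell
# Illustration only: lines matching the service-user pattern are dropped,
# mirroring the RegexFilter's onMatch = DENY / onMismatch = ACCEPT behaviour.
printf '%s\n' 'USR:kibana, ALLOWED' 'USR:alice, ALLOWED' \
  | grep -Ev 'USR:(kibana|beat|logstash),'
# Prints only: USR:alice, ALLOWED
```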

And I tried using the new config:

appender.readonlyrest_audit_rolling.type = RollingFile
appender.readonlyrest_audit_rolling.name = readonlyrest_audit_rolling
appender.readonlyrest_audit_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}readonlyrest_audit.log
appender.readonlyrest_audit_rolling.layout.type = PatternLayout
appender.readonlyrest_audit_rolling.layout.pattern = [%d{ISO8601}] %m%n
appender.readonlyrest_audit_rolling.filePattern = readonlyrest_audit-%i.log.gz
appender.readonlyrest_audit_rolling.policies.type = Policies
appender.readonlyrest_audit_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.readonlyrest_audit_rolling.policies.size.size = 3GB
appender.readonlyrest_audit_rolling.strategy.type = DefaultRolloverStrategy
appender.readonlyrest_audit_rolling.strategy.max = 4

logger.readonlyrest_audit.name = readonlyrest_audit   # required logger name, must be the same as the one defined in `readonlyrest.yml` 
logger.readonlyrest_audit.appenderRef.ror_audit.ref = readonlyrest_audit_rolling    
logger.readonlyrest_audit.additivity = false 

readonlyrest.yml

readonlyrest:
    audit:
      enabled: true
      outputs: 
      - type: index
        cluster: ["http://1.1.1.1:9100", "http://2.2.2.2:9100", "http://3.3.3.3:9100"]
        index_template: "'xcs-readonlyrest'-yyyy-MM-dd"
        serializer: tech.beshu.ror.requestcontext.QueryAuditLogSerializer
      - type: index # local cluster index
        index_template: "'.readonlyrest-audit'-yyyy-MM-dd"
        serializer: tech.beshu.ror.requestcontext.QueryAuditLogSerializer
      - type: log
        logger_name: readonlyrest_audit

Expected result:

Access logs should be written to a separate file, for example: /var/log/elasticsearch-1/cluster_name_access.log

Actual Result:

All logs are written to a single file, for example: /var/log/elasticsearch-1/cluster_name.log

Maybe I’m doing something wrong?

Thank you for your answers and help.

{"customer_id": "6c4a385b-2ae8-4f02-a9cd-ef24addfb5b3", "subscription_id": "32d4073f-dc2f-4056-a868-842727c637cd"}

Hello,

Thanks for the report. We’ve recently noticed similar issues with logging and we’ve just updated the configuration example in our documentation: Audit configuration | ReadonlyREST

Could you please try the updated example and let us know if it resolves the issue?

# Logger name, required, must be the same as the one defined in `readonlyrest.yml` audit configuration.
# If a custom logger name is not defined there, then the default logger name is "readonlyrest_audit"
logger.readonlyrest_audit.name = readonlyrest_audit
logger.readonlyrest_audit.appenderRef.readonlyrest_audit_rolling.ref = readonlyrest_audit_rolling
# set to false to use only desired appenders
logger.readonlyrest_audit.additivity = false

(Changes: removed comments at the end of the lines, changed appender ref, removed trailing spaces)
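A hedged aside on why the trailing comments mattered: log4j2's properties format is loaded via `java.util.Properties`, which treats `#` as a comment only when it starts a line, so an inline comment (and any trailing spaces) become part of the value. A minimal shell sketch of the same split shows the leak:

```shell
# '#' mid-line is NOT a comment in .properties files; everything after '= '
# (including the comment text and trailing spaces) ends up in the value.
line='logger.readonlyrest_audit.name = readonlyrest_audit   # required logger name'
value="${line#*= }"
printf '[%s]\n' "$value"
# Prints: [readonlyrest_audit   # required logger name]
```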

Regards,
Michał

Hi,
Thank you for your help.

No, it didn’t help me.

After I specified the lines in the readonlyrest config:

      - type: log
        logger_name: readonlyrest_audit

The access log started being duplicated into the file, and it looks like this:

[2025-11-25T09:01:48,543][INFO ][t.b.r.a.l.AccessControlListLoggingDecorator] [host] ...
[2025-11-25T09:01:48,547][INFO ][readonlyrest_audit       ] [host] ...

Maybe the problem is in the spaces after the name?

I tried to specify it in the config

logger.readonlyrest_audit.name = "readonlyrest_audit       "

But that didn’t help either. Now each log is written twice to the file.

Hello,

In that case, could you please provide the full log4j2.properties file (anonymized if needed — although this file should not contain sensitive data)? I will try to reproduce the issue. The default logger name is just “readonlyrest_audit”. It is displayed with trailing spaces because this segment of the logger is padded to a minimum width of 25 characters ([%-25c{1.}]).
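The padding effect can be reproduced with `printf` (a minimal sketch; `%-25s` mirrors log4j2's `%-25c` left-justified 25-character field):

```shell
# "readonlyrest_audit" is 18 characters, so a 25-wide left-justified field
# pads it with 7 trailing spaces -- the spaces seen in the log lines.
printf '[%-25s]\n' readonlyrest_audit
# Prints: [readonlyrest_audit       ]
```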

Best regards,
Michał

/etc/elasticsearch-1/log4j2.properties:

status = error

# log action execution errors for easier debugging
logger.action.name = org.elasticsearch.action
logger.action.level = debug

appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker%m%n

appender.readonlyrest_audit_rolling.type = RollingFile
appender.readonlyrest_audit_rolling.name = readonlyrest_audit_rolling
appender.readonlyrest_audit_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}readonlyrest_audit.log
appender.readonlyrest_audit_rolling.layout.type = PatternLayout
appender.readonlyrest_audit_rolling.layout.pattern = [%d{ISO8601}] %m%n
appender.readonlyrest_audit_rolling.filePattern = readonlyrest_audit-%i.log.gz
appender.readonlyrest_audit_rolling.policies.type = Policies
appender.readonlyrest_audit_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.readonlyrest_audit_rolling.policies.size.size = 1GB
appender.readonlyrest_audit_rolling.strategy.type = DefaultRolloverStrategy
appender.readonlyrest_audit_rolling.strategy.max = 4

# Logger name, required, must be the same as the one defined in `readonlyrest.yml` audit configuration.
# If a custom logger name is not defined there, then the default logger name is "readonlyrest_audit"
logger.readonlyrest_audit.name = "readonlyrest_audit       "
logger.readonlyrest_audit.appenderRef.readonlyrest_audit_rolling.ref = readonlyrest_audit_rolling
# set to false to use only desired appenders
logger.readonlyrest_audit.additivity = false

appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker%.-10000m%n
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 1024MB

#appender.rolling.strategy.max = 50
#appender.rolling.strategy.type = DefaultRolloverStrategy
#appender.rolling.strategy.action.type = Delete
#appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
#appender.rolling.strategy.action.condition.type = IfLastModified
#appender.rolling.strategy.action.condition.age = 7D
#appender.rolling.strategy.action.PathConditions.type = IfFileName
#appender.rolling.strategy.action.PathConditions.glob = ${sys:es.logs.cluster_name}-*

rootLogger.level = info
rootLogger.appenderRef.console.ref = console
rootLogger.appenderRef.rolling.ref = rolling

appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.log
appender.deprecation_rolling.layout.type = PatternLayout
appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker%.-10000m%n
appender.deprecation_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation-%i.log.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 1GB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 4
logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = warn
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
logger.deprecation.additivity = false

appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog.log
appender.index_search_slowlog_rolling.layout.type = PatternLayout
appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker%.-10000m%n
appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog-%d{yyyy-MM-dd}.log.gz
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.time.interval = 1
appender.index_search_slowlog_rolling.policies.time.modulate = true

logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false

appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog.log
appender.index_indexing_slowlog_rolling.layout.type = PatternLayout
appender.index_indexing_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker%.-10000m%n
appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog-%d{yyyy-MM-dd}.log.gz
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling.policies.time.interval = 1
appender.index_indexing_slowlog_rolling.policies.time.modulate = true

logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity = false

#appender.rolling.strategy.max = 50
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.PathConditions.type = IfFileName
appender.rolling.strategy.action.PathConditions.glob = ${sys:es.logs.cluster_name}*
                                                                                        
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}*
#delete old logs after 7 days or when exceeding 39.5 GB
appender.rolling.strategy.action.condition.nested_condition.type = IfAny
appender.rolling.strategy.action.condition.nested_condition.fileSize.type = IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.fileSize.exceeds = 39.5GB
appender.rolling.strategy.action.condition.nested_condition.lastMod.type = IfLastModified
appender.rolling.strategy.action.condition.nested_condition.lastMod.age = 7D

I’m still in the process of tweaking the config for version 8.

Hello,

I analysed the issue using the full log4j2 config provided in the previous answer. It seems that one line needs to be fixed.

Instead of the incorrect:
logger.readonlyrest_audit.name = "readonlyrest_audit "

we need this (without trailing spaces and without quotes):
logger.readonlyrest_audit.name = readonlyrest_audit
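Properties values are taken literally, so the quotes in the earlier attempt did not delimit the name; they (and the spaces inside them) became part of the logger name, which then never matches `readonlyrest_audit`. A minimal shell sketch:

```shell
# The quotes are kept verbatim in the parsed value, not stripped.
line='logger.readonlyrest_audit.name = "readonlyrest_audit       "'
printf '[%s]\n' "${line#*= }"
# Prints: ["readonlyrest_audit       "]
```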

Here is the full fixed file that works correctly on my test environment:

status = error

# log action execution errors for easier debugging
logger.action.name = org.elasticsearch.action
logger.action.level = debug

appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker%m%n

appender.readonlyrest_audit_rolling.type = RollingFile
appender.readonlyrest_audit_rolling.name = readonlyrest_audit_rolling
appender.readonlyrest_audit_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}readonlyrest_audit.log
appender.readonlyrest_audit_rolling.layout.type = PatternLayout
appender.readonlyrest_audit_rolling.layout.pattern = [%d{ISO8601}] %m%n
appender.readonlyrest_audit_rolling.filePattern = readonlyrest_audit-%i.log.gz
appender.readonlyrest_audit_rolling.policies.type = Policies
appender.readonlyrest_audit_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.readonlyrest_audit_rolling.policies.size.size = 1GB
appender.readonlyrest_audit_rolling.strategy.type = DefaultRolloverStrategy
appender.readonlyrest_audit_rolling.strategy.max = 4

# Logger name, required, must be the same as the one defined in `readonlyrest.yml` audit configuration.
# If a custom logger name is not defined there, then the default logger name is "readonlyrest_audit"
logger.readonlyrest_audit.name = readonlyrest_audit
logger.readonlyrest_audit.appenderRef.readonlyrest_audit_rolling.ref = readonlyrest_audit_rolling
# set to false to use only desired appenders
logger.readonlyrest_audit.additivity = false

appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker%.-10000m%n
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 1024MB

#appender.rolling.strategy.max = 50
#appender.rolling.strategy.type = DefaultRolloverStrategy
#appender.rolling.strategy.action.type = Delete
#appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
#appender.rolling.strategy.action.condition.type = IfLastModified
#appender.rolling.strategy.action.condition.age = 7D
#appender.rolling.strategy.action.PathConditions.type = IfFileName
#appender.rolling.strategy.action.PathConditions.glob = ${sys:es.logs.cluster_name}-*

rootLogger.level = info
rootLogger.appenderRef.console.ref = console
rootLogger.appenderRef.rolling.ref = rolling

appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.log
appender.deprecation_rolling.layout.type = PatternLayout
appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker%.-10000m%n
appender.deprecation_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation-%i.log.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 1GB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 4
logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = warn
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
logger.deprecation.additivity = false

appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog.log
appender.index_search_slowlog_rolling.layout.type = PatternLayout
appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker%.-10000m%n
appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog-%d{yyyy-MM-dd}.log.gz
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.time.interval = 1
appender.index_search_slowlog_rolling.policies.time.modulate = true

logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false

appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog.log
appender.index_indexing_slowlog_rolling.layout.type = PatternLayout
appender.index_indexing_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker%.-10000m%n
appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog-%d{yyyy-MM-dd}.log.gz
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling.policies.time.interval = 1
appender.index_indexing_slowlog_rolling.policies.time.modulate = true

logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity = false

#appender.rolling.strategy.max = 50
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.PathConditions.type = IfFileName
appender.rolling.strategy.action.PathConditions.glob = ${sys:es.logs.cluster_name}*

appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}*
#delete old logs after 7 days or when exceeding 39.5 GB
appender.rolling.strategy.action.condition.nested_condition.type = IfAny
appender.rolling.strategy.action.condition.nested_condition.fileSize.type = IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.fileSize.exceeds = 39.5GB
appender.rolling.strategy.action.condition.nested_condition.lastMod.type = IfLastModified
appender.rolling.strategy.action.condition.nested_condition.lastMod.age = 7D

Using that fixed file, audit logs are written to readonlyrest_audit.log and are no longer written to the main log file.

Please let me know whether the fixed log4j2 file works OK.

Regards,
Michał

I already tried this option, and it didn’t work.

But I copied your config anyway to test it. So far, the result is the same.

The log file in use is: /var/log/elasticsearch-1/cluster.log

#du -sh /var/log/elasticsearch-1/readonlyrest_audit.log 
0 /var/log/elasticsearch-1/readonlyrest_audit.log

I tried using a different name than readonlyrest_audit, but the result is the same.

Hello,

Here’s the link to the ror-sandbox PR [RORDEV-1872] Reproduce issue using log4j2 config provided by customer by mgoworko · Pull Request #89 · beshu-tech/ror-sandbox · GitHub . It contains the fixed log4j2 configuration from my previous answer. It can be started by executing ror-demo-cluster/run.sh script. The script will ask for ES version and ROR version (I used ES 8.18.3 and ROR 1.66.1, as specified in the issue report).

After logging in to Elastic (admin:admin on local port 15601), I checked the log files using the command:
docker exec -it $(docker ps -q --filter "expose=9200") ls -l /usr/share/elasticsearch/logs

With result:

-rw-rw-r-- 1 elasticsearch elasticsearch  91351 Nov 27 17:08 gc.log
-rw-rw-r-- 1 elasticsearch elasticsearch   2902 Nov 27 17:04 gc.log.00
-rw-rw-r-- 1 elasticsearch elasticsearch   2902 Nov 27 17:04 gc.log.01
-rw-rw-r-- 1 elasticsearch elasticsearch  89170 Nov 27 17:08 readonlyrest_audit.log
-rw-rw-r-- 1 elasticsearch elasticsearch 618726 Nov 27 17:08 ror-es-cluster.log
-rw-rw-r-- 1 elasticsearch elasticsearch    999 Nov 27 17:04 ror-es-cluster_deprecation.log
-rw-rw-r-- 1 elasticsearch elasticsearch      0 Nov 27 17:04 ror-es-cluster_index_indexing_slowlog.log
-rw-rw-r-- 1 elasticsearch elasticsearch      0 Nov 27 17:04 ror-es-cluster_index_search_slowlog.log

The audit logs are correctly written only to the readonlyrest_audit.log file. Please compare this example with your configuration and let me know if the issue still persists.

Regards,
Michał


I found warnings in the logs when starting:

[2025-11-27T17:57:26,488][WARN ][stderr                   ] [host-client]SLF4J(W): No SLF4J providers were found.
[2025-11-27T17:57:26,488][WARN ][stderr                   ] [host-client]SLF4J(W): Defaulting to no-operation (NOP) logger implementation
[2025-11-27T17:57:26,489][WARN ][stderr                   ] [host-client]SLF4J(W): See https://www.slf4j.org/codes.html#noProviders for further details.
[2025-11-27T17:57:26,489][WARN ][stderr                   ] [host-client]SLF4J(W): Class path contains SLF4J bindings targeting slf4j-api versions 1.7.x or earlier.
[2025-11-27T17:57:26,490][WARN ][stderr                   ] [host-client]SLF4J(W): Ignoring binding found at [jar:file:/usr/share/elasticsearch/plugins/readonlyrest/log4j-slf4j-impl-2.11.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
[2025-11-27T17:57:26,490][WARN ][stderr                   ] [host-client]SLF4J(W): See https://www.slf4j.org/codes.html#ignoredBindings for an explanation.

Could this be the reason?

I also have Java version 17.0.15, but changing the Java version does not affect the problem.

curl -k 'https://localhost:9100/_cat/nodes?h=n,v,j'
host-client 8.18.3 17.0.15

No, it should not be the problem.

Can you confirm you used the log4j2 config provided by Michał in your env?

Do you see any differences between your env and the one in the Docker Compose setup?

And please show us what this command prints:

cd /usr/share/elasticsearch && find . -name "*log4j*"

Empty. But I did:

# find /usr/share/elasticsearch | grep log4j
/usr/share/elasticsearch/lib/log4j2-ecs-layout-1.2.0.jar
/usr/share/elasticsearch/lib/elasticsearch-log4j-8.18.3.jar
/usr/share/elasticsearch/lib/log4j-api-2.19.0.jar
/usr/share/elasticsearch/modules/repository-gcs/log4j-1.2-api-2.19.0.jar
/usr/share/elasticsearch/modules/repository-s3/log4j-1.2-api-2.19.0.jar
/usr/share/elasticsearch/modules/repository-url/log4j-1.2-api-2.19.0.jar
/usr/share/elasticsearch/modules/x-pack-core/log4j-1.2-api-2.19.0.jar
/usr/share/elasticsearch/modules/x-pack-ent-search/log4j-slf4j-impl-2.19.0.jar
/usr/share/elasticsearch/plugins/readonlyrest/log4j-slf4j-impl-2.11.2.jar
/usr/share/elasticsearch/plugins/readonlyrest/log4j-api-scala_3-13.1.0.jar

Yes, I confirm that I used the config provided by Michal.
For my part, I am also looking for possible reasons for the abnormal behavior.

Could you please show us what permissions your ES logs folder has? Maybe the file cannot be created for some reason.

We tested it using JDK 17.0.15. It works on that Java version on our test environment too.

Can you please re-check and confirm the directories of the config, binaries and logs? In the first message in this thread the log directory was /var/log/elasticsearch-1 (with -1 suffix). Can you please re-check that all directories are consistent and corresponding to the same instance of ES?

Also, the command provided by Mateusz (cd /usr/share/elasticsearch && find . -name "*log4j*") should be executed in the context of the ES instance we are interested in. Maybe it is elasticsearch-1 (I’m guessing the directory based on the path from the first message).

And let us know the value of ES_PATH_CONF, or (preferred) show us the logs from your ES instance startup.
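Gathering that could be as simple as the sketch below (the fallback path is an assumption: it is the packaged default used when ES_PATH_CONF is unset):

```shell
# Print the config dir the service would use; /etc/elasticsearch is the
# assumed packaged default when ES_PATH_CONF is not set.
echo "config dir: ${ES_PATH_CONF:-/etc/elasticsearch}"
# Startup logs (systemd installs; shown as a hint, adjust the unit name):
#   journalctl -u elasticsearch --since "15 min ago"
```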

At the moment we don’t have enough information to help you with the diagnostic. We are pretty sure it’s not a problem with the plugin but with your setup.

I took a new server and installed a fresh Elasticsearch without our custom settings.

yum install elasticsearch-8.18.3

# cat /etc/elasticsearch/elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
xpack.security.enabled: false
xpack.security.enrollment.enabled: false
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

# cat /etc/elasticsearch/log4j2.properties

status = error

appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%consoleException%n

######## Server JSON ############################
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.json
appender.rolling.layout.type = ECSJsonLayout
appender.rolling.layout.dataset = elasticsearch.server

appender.readonlyrest_audit_rolling.type = RollingFile
appender.readonlyrest_audit_rolling.name = readonlyrest_audit_rolling
appender.readonlyrest_audit_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}readonlyrest_audit.log
appender.readonlyrest_audit_rolling.layout.type = PatternLayout
appender.readonlyrest_audit_rolling.layout.pattern = [%d{ISO8601}] %m%n
appender.readonlyrest_audit_rolling.filePattern = readonlyrest_audit-%i.log.gz
appender.readonlyrest_audit_rolling.policies.type = Policies
appender.readonlyrest_audit_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.readonlyrest_audit_rolling.policies.size.size = 3GB
appender.readonlyrest_audit_rolling.strategy.type = DefaultRolloverStrategy
appender.readonlyrest_audit_rolling.strategy.max = 4

logger.readonlyrest_audit.name = readonlyrest_audit   
logger.readonlyrest_audit.appenderRef.ror_audit.ref = readonlyrest_audit_rolling    
logger.readonlyrest_audit.additivity = false 


appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.json.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 128MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.fileIndex = nomax
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-*
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB
################################################
######## Server -  old style pattern ###########
appender.rolling_old.type = RollingFile
appender.rolling_old.name = rolling_old
appender.rolling_old.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling_old.layout.type = PatternLayout
appender.rolling_old.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n

appender.rolling_old.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling_old.policies.type = Policies
appender.rolling_old.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling_old.policies.time.interval = 1
appender.rolling_old.policies.time.modulate = true
appender.rolling_old.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling_old.policies.size.size = 128MB
appender.rolling_old.strategy.type = DefaultRolloverStrategy
appender.rolling_old.strategy.fileIndex = nomax
appender.rolling_old.strategy.action.type = Delete
appender.rolling_old.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling_old.strategy.action.condition.type = IfFileName
appender.rolling_old.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-*
appender.rolling_old.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling_old.strategy.action.condition.nested_condition.exceeds = 2GB
################################################

rootLogger.level = info
rootLogger.appenderRef.console.ref = console
rootLogger.appenderRef.rolling.ref = rolling
rootLogger.appenderRef.rolling_old.ref = rolling_old

######## Deprecation JSON #######################
appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.json
appender.deprecation_rolling.layout.type = ECSJsonLayout
# Intentionally follows a different pattern to above
appender.deprecation_rolling.layout.dataset = deprecation.elasticsearch
appender.deprecation_rolling.filter.rate_limit.type = RateLimitingFilter

appender.deprecation_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation-%i.json.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 1GB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 4

appender.header_warning.type = HeaderWarningAppender
appender.header_warning.name = header_warning
#################################################

logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = WARN
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
logger.deprecation.appenderRef.header_warning.ref = header_warning
logger.deprecation.additivity = false

######## Search slowlog JSON ####################
appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs\
  .cluster_name}_index_search_slowlog.json
appender.index_search_slowlog_rolling.layout.type = ECSJsonLayout
appender.index_search_slowlog_rolling.layout.dataset = elasticsearch.index_search_slowlog

appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs\
  .cluster_name}_index_search_slowlog-%i.json.gz
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.size.size = 1GB
appender.index_search_slowlog_rolling.strategy.type = DefaultRolloverStrategy
appender.index_search_slowlog_rolling.strategy.max = 4
#################################################

#################################################
logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false

######## Indexing slowlog JSON ##################
appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}\
  _index_indexing_slowlog.json
appender.index_indexing_slowlog_rolling.layout.type = ECSJsonLayout
appender.index_indexing_slowlog_rolling.layout.dataset = elasticsearch.index_indexing_slowlog


appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}\
  _index_indexing_slowlog-%i.json.gz
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling.policies.size.size = 1GB
appender.index_indexing_slowlog_rolling.strategy.type = DefaultRolloverStrategy
appender.index_indexing_slowlog_rolling.strategy.max = 4
#################################################


logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity = false


logger.org_apache_pdfbox.name = org.apache.pdfbox
logger.org_apache_pdfbox.level = off

logger.org_apache_poi.name = org.apache.poi
logger.org_apache_poi.level = off

logger.org_apache_fontbox.name = org.apache.fontbox
logger.org_apache_fontbox.level = off

logger.org_apache_xmlbeans.name = org.apache.xmlbeans
logger.org_apache_xmlbeans.level = off

logger.entitlements_ingest_attachment.name = org.elasticsearch.entitlement.runtime.policy.PolicyManager.ingest-attachment.ALL-UNNAMED
logger.entitlements_ingest_attachment.level = error


logger.entitlements_repository_gcs.name = org.elasticsearch.entitlement.runtime.policy.PolicyManager.repository-gcs.ALL-UNNAMED
logger.entitlements_repository_gcs.level = error


logger.com_amazonaws.name = com.amazonaws
logger.com_amazonaws.level = warn

logger.com_amazonaws_jmx_SdkMBeanRegistrySupport.name = com.amazonaws.jmx.SdkMBeanRegistrySupport
logger.com_amazonaws_jmx_SdkMBeanRegistrySupport.level = error

logger.com_amazonaws_metrics_AwsSdkMetrics.name = com.amazonaws.metrics.AwsSdkMetrics
logger.com_amazonaws_metrics_AwsSdkMetrics.level = error

logger.com_amazonaws_auth_profile_internal_BasicProfileConfigFileLoader.name = com.amazonaws.auth.profile.internal.BasicProfileConfigFileLoader
logger.com_amazonaws_auth_profile_internal_BasicProfileConfigFileLoader.level = error

logger.com_amazonaws_services_s3_internal_UseArnRegionResolver.name = com.amazonaws.services.s3.internal.UseArnRegionResolver
logger.com_amazonaws_services_s3_internal_UseArnRegionResolver.level = error

logger.entitlements_repository_s3.name = org.elasticsearch.entitlement.runtime.policy.PolicyManager.repository-s3.ALL-UNNAMED
logger.entitlements_repository_s3.level = error



appender.audit_rolling.type = RollingFile
appender.audit_rolling.name = audit_rolling
appender.audit_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_audit.json
appender.audit_rolling.layout.type = PatternLayout
appender.audit_rolling.layout.pattern = {\
                "type":"audit", \
                "timestamp":"%d{yyyy-MM-dd'T'HH:mm:ss,SSSZ}"\
                %varsNotEmpty{, "cluster.name":"%enc{%map{cluster.name}}{JSON}"}\
                %varsNotEmpty{, "cluster.uuid":"%enc{%map{cluster.uuid}}{JSON}"}\
                %varsNotEmpty{, "node.name":"%enc{%map{node.name}}{JSON}"}\
                %varsNotEmpty{, "node.id":"%enc{%map{node.id}}{JSON}"}\
                %varsNotEmpty{, "host.name":"%enc{%map{host.name}}{JSON}"}\
                %varsNotEmpty{, "host.ip":"%enc{%map{host.ip}}{JSON}"}\
                %varsNotEmpty{, "event.type":"%enc{%map{event.type}}{JSON}"}\
                %varsNotEmpty{, "event.action":"%enc{%map{event.action}}{JSON}"}\
                %varsNotEmpty{, "authentication.type":"%enc{%map{authentication.type}}{JSON}"}\
                %varsNotEmpty{, "user.name":"%enc{%map{user.name}}{JSON}"}\
                %varsNotEmpty{, "user.run_by.name":"%enc{%map{user.run_by.name}}{JSON}"}\
                %varsNotEmpty{, "user.run_as.name":"%enc{%map{user.run_as.name}}{JSON}"}\
                %varsNotEmpty{, "user.realm":"%enc{%map{user.realm}}{JSON}"}\
                %varsNotEmpty{, "user.realm_domain":"%enc{%map{user.realm_domain}}{JSON}"}\
                %varsNotEmpty{, "user.run_by.realm":"%enc{%map{user.run_by.realm}}{JSON}"}\
                %varsNotEmpty{, "user.run_by.realm_domain":"%enc{%map{user.run_by.realm_domain}}{JSON}"}\
                %varsNotEmpty{, "user.run_as.realm":"%enc{%map{user.run_as.realm}}{JSON}"}\
                %varsNotEmpty{, "user.run_as.realm_domain":"%enc{%map{user.run_as.realm_domain}}{JSON}"}\
                %varsNotEmpty{, "user.roles":%map{user.roles}}\
                %varsNotEmpty{, "apikey.id":"%enc{%map{apikey.id}}{JSON}"}\
                %varsNotEmpty{, "apikey.name":"%enc{%map{apikey.name}}{JSON}"}\
                %varsNotEmpty{, "authentication.token.name":"%enc{%map{authentication.token.name}}{JSON}"}\
                %varsNotEmpty{, "authentication.token.type":"%enc{%map{authentication.token.type}}{JSON}"}\
                %varsNotEmpty{, "cross_cluster_access":%map{cross_cluster_access}}\
                %varsNotEmpty{, "origin.type":"%enc{%map{origin.type}}{JSON}"}\
                %varsNotEmpty{, "origin.address":"%enc{%map{origin.address}}{JSON}"}\
                %varsNotEmpty{, "realm":"%enc{%map{realm}}{JSON}"}\
                %varsNotEmpty{, "realm_domain":"%enc{%map{realm_domain}}{JSON}"}\
                %varsNotEmpty{, "url.path":"%enc{%map{url.path}}{JSON}"}\
                %varsNotEmpty{, "url.query":"%enc{%map{url.query}}{JSON}"}\
                %varsNotEmpty{, "request.method":"%enc{%map{request.method}}{JSON}"}\
                %varsNotEmpty{, "request.body":"%enc{%map{request.body}}{JSON}"}\
                %varsNotEmpty{, "request.id":"%enc{%map{request.id}}{JSON}"}\
                %varsNotEmpty{, "action":"%enc{%map{action}}{JSON}"}\
                %varsNotEmpty{, "request.name":"%enc{%map{request.name}}{JSON}"}\
                %varsNotEmpty{, "indices":%map{indices}}\
                %varsNotEmpty{, "opaque_id":"%enc{%map{opaque_id}}{JSON}"}\
                %varsNotEmpty{, "trace.id":"%enc{%map{trace.id}}{JSON}"}\
                %varsNotEmpty{, "x_forwarded_for":"%enc{%map{x_forwarded_for}}{JSON}"}\
                %varsNotEmpty{, "transport.profile":"%enc{%map{transport.profile}}{JSON}"}\
                %varsNotEmpty{, "rule":"%enc{%map{rule}}{JSON}"}\
                %varsNotEmpty{, "put":%map{put}}\
                %varsNotEmpty{, "delete":%map{delete}}\
                %varsNotEmpty{, "change":%map{change}}\
                %varsNotEmpty{, "create":%map{create}}\
                %varsNotEmpty{, "invalidate":%map{invalidate}}\
                }%n
# "node.name" node name from the `elasticsearch.yml` settings
# "node.id" node id which should not change between cluster restarts
# "host.name" unresolved hostname of the local node
# "host.ip" the local bound ip (i.e. the ip listening for connections)
# "origin.type" a received REST request is translated into one or more transport requests. This indicates which processing layer generated the event "rest" or "transport" (internal)
# "event.action" the name of the audited event, eg. "authentication_failed", "access_granted", "run_as_granted", etc.
# "authentication.type" one of "realm", "api_key", "token", "anonymous" or "internal"
# "user.name" the subject name as authenticated by a realm
# "user.run_by.name" the original authenticated subject name that is impersonating another one.
# "user.run_as.name" if this "event.action" is of a run_as type, this is the subject name to be impersonated as.
# "user.realm" the name of the realm that authenticated "user.name"
# "user.realm_domain" if "user.realm" is under a domain, this is the name of the domain
# "user.run_by.realm" the realm name of the impersonating subject ("user.run_by.name")
# "user.run_by.realm_domain" if "user.run_by.realm" is under a domain, this is the name of the domain
# "user.run_as.realm" if this "event.action" is of a run_as type, this is the realm name the impersonated user is looked up from
# "user.run_as.realm_domain" if "user.run_as.realm" is under a domain, this is the name of the domain
# "user.roles" the roles array of the user; these are the roles that are granting privileges
# "apikey.id" this field is present if and only if the "authentication.type" is "api_key"
# "apikey.name" this field is present if and only if the "authentication.type" is "api_key"
# "authentication.token.name" this field is present if and only if the authenticating credential is a service account token
# "authentication.token.type" this field is present if and only if the authenticating credential is a service account token
# "cross_cluster_access" this field is present if and only if the associated authentication occurred cross cluster
# "event.type" informs about what internal system generated the event; possible values are "rest", "transport", "ip_filter" and "security_config_change"
# "origin.address" the remote address and port of the first network hop, i.e. a REST proxy or another cluster node
# "realm" name of a realm that has generated an "authentication_failed" or an "authentication_successful"; the subject is not yet authenticated
# "realm_domain" if "realm" is under a domain, this is the name of the domain
# "url.path" the URI component between the port and the query string; it is percent (URL) encoded
# "url.query" the URI component after the path and before the fragment; it is percent (URL) encoded
# "request.method" the method of the HTTP request, i.e. one of GET, POST, PUT, DELETE, OPTIONS, HEAD, PATCH, TRACE, CONNECT
# "request.body" the content of the request body entity, JSON escaped
# "request.id" a synthetic identifier for the incoming request, this is unique per incoming request, and consistent across all audit events generated by that request
# "action" an action is the most granular operation that is authorized and this identifies it in a namespaced way (internal)
# "request.name" if the event is in connection to a transport message this is the name of the request class, similar to how rest requests are identified by the url path (internal)
# "indices" the array of indices that the "action" is acting upon
# "opaque_id" opaque value conveyed by the "X-Opaque-Id" request header
# "trace_id" an identifier conveyed by the part of "traceparent" request header
# "x_forwarded_for" the addresses from the "X-Forwarded-For" request header, as a verbatim string value (not an array)
# "transport.profile" name of the transport profile in case this is a "connection_granted" or "connection_denied" event
# "rule" name of the applied rule if the "origin.type" is "ip_filter"
# the "put", "delete", "change", "create", "invalidate" fields are only present
# when the "event.type" is "security_config_change" and contain the security config change (as an object) taking effect

appender.audit_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_audit-%d{yyyy-MM-dd}-%i.json.gz
appender.audit_rolling.policies.type = Policies
appender.audit_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.audit_rolling.policies.time.interval = 1
appender.audit_rolling.policies.time.modulate = true
appender.audit_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.audit_rolling.policies.size.size = 1GB
appender.audit_rolling.strategy.type = DefaultRolloverStrategy
appender.audit_rolling.strategy.fileIndex = nomax

logger.xpack_security_audit_logfile.name = org.elasticsearch.xpack.security.audit.logfile.LoggingAuditTrail
logger.xpack_security_audit_logfile.level = info
logger.xpack_security_audit_logfile.appenderRef.audit_rolling.ref = audit_rolling
logger.xpack_security_audit_logfile.additivity = false

logger.xmlsig.name = org.apache.xml.security.signature.XMLSignature
logger.xmlsig.level = error
logger.samlxml_decrypt.name = org.opensaml.xmlsec.encryption.support.Decrypter
logger.samlxml_decrypt.level = fatal
logger.saml2_decrypt.name = org.opensaml.saml.saml2.encryption.Decrypter
logger.saml2_decrypt.level = fatal

logger.entitlements_xpack_security.name = org.elasticsearch.entitlement.runtime.policy.PolicyManager.x-pack-security.org.elasticsearch.security
logger.entitlements_xpack_security.level = error


logger.entitlements_inference.name = org.elasticsearch.entitlement.runtime.policy.PolicyManager.x-pack-inference.software.amazon.awssdk.profiles
logger.entitlements_inference.level = error
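
Since the ALLOWED/FORBIDDEN access lines currently reach elasticsearch.log through the root logger, I suppose the old-style alternative is to capture the plugin's `tech.beshu.ror` logger directly and route it to the dedicated appender, as in my previous config; a sketch (the `logger.ror` property key is arbitrary, only the `name` and the appender ref matter):

```properties
# Sketch: route ReadonlyREST's own log lines (the ALLOWED/FORBIDDEN
# access entries are logged under tech.beshu.ror) to the dedicated
# file appender, mirroring the old working configuration.
logger.ror.name = tech.beshu.ror
logger.ror.level = info
logger.ror.appenderRef.ror_file.ref = readonlyrest_audit_rolling
logger.ror.additivity = false
```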

# cat /etc/elasticsearch/readonlyrest.yml

readonlyrest:
    access_control_rules:
    - name: "LOCALHOST-only access"
      hosts: ["127.0.0.1", "localhost"]

# ls -la /etc/elasticsearch/

total 64
drwxr-s---.   3 root          elasticsearch  4096 Dec  3 18:00 .
drwxr-xr-x. 115 root          root           8192 Dec  3 17:59 ..
-rw-rw----.   1 root          elasticsearch   199 Dec  1 17:54 elasticsearch.keystore
-rw-rw----.   1 root          elasticsearch  1042 Jun 18 22:13 elasticsearch-plugins.example.yml
-rw-rw----.   1 root          elasticsearch  2817 Dec  1 18:08 elasticsearch.yml
-rw-rw----.   1 root          elasticsearch  3074 Jun 18 22:13 jvm.options
drwxr-s---.   2 root          elasticsearch     6 Jun 18 22:16 jvm.options.d
-rw-rw----.   1 root          elasticsearch 19875 Dec  3 17:57 log4j2.properties
-rw-r-----.   1 elasticsearch elasticsearch   117 Dec  2 16:06 readonlyrest.yml
-rw-rw----.   1 root          elasticsearch   473 Jun 18 22:13 role_mapping.yml
-rw-rw----.   1 root          elasticsearch   197 Jun 18 22:13 roles.yml
-rw-rw----.   1 root          elasticsearch     0 Jun 18 22:13 users
-rw-rw----.   1 root          elasticsearch     0 Jun 18 22:13 users_roles

Configuration file used at startup:

# cat /etc/sysconfig/elasticsearch

################################
# Elasticsearch
################################

# Elasticsearch home directory
#ES_HOME=/usr/share/elasticsearch

# Elasticsearch Java path
#ES_JAVA_HOME=

# Elasticsearch configuration directory
# Note: this setting will be shared with command-line tools
ES_PATH_CONF=/etc/elasticsearch

# Elasticsearch PID directory
#PID_DIR=/var/run/elasticsearch

# Additional Java OPTS
#ES_JAVA_OPTS=

# Configure restart on package upgrade (true, every other setting will lead to not restarting)
#RESTART_ON_UPGRADE=true

Command to run:

# systemctl start elasticsearch

# ls -la /var/log/elasticsearch/

total 212
drwxr-s---.  2 elasticsearch elasticsearch  4096 Dec  3 17:57 .
drwxr-xr-x. 16 root          root          12288 Dec  3 00:00 ..
-rw-r--r--.  1 elasticsearch elasticsearch     0 Dec  3 17:57 elasticsearch_audit.json
-rw-r--r--.  1 elasticsearch elasticsearch  1591 Dec  3 17:57 elasticsearch_deprecation.json
-rw-r--r--.  1 elasticsearch elasticsearch     0 Dec  3 17:57 elasticsearch_index_indexing_slowlog.json
-rw-r--r--.  1 elasticsearch elasticsearch     0 Dec  3 17:57 elasticsearch_index_search_slowlog.json
-rw-r--r--.  1 elasticsearch elasticsearch 42669 Dec  3 17:59 elasticsearch.log
-rw-r--r--.  1 elasticsearch elasticsearch 97966 Dec  3 17:59 elasticsearch_server.json
-rw-r--r--.  1 elasticsearch elasticsearch 38852 Dec  3 17:57 gc.log
-rw-r--r--.  1 elasticsearch elasticsearch  3047 Dec  3 17:57 gc.log.00
-rw-r--r--.  1 elasticsearch elasticsearch  3056 Dec  3 17:57 gc.log.01
-rw-r--r--.  1 elasticsearch elasticsearch     0 Dec  3 17:57 readonlyrest_audit.log

Access logs should be written to the readonlyrest_audit.log file, but that file is empty, as the listing above shows.

Instead, the access logs currently end up in elasticsearch.log.

Before starting Elasticsearch, I deleted all files from this folder.

API output:

curl localhost:9200
{
  "name" : "host",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "aPTYDyjUSbCu9G4xP9sMLQ",
  "version" : {
    "number" : "8.18.3",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "28fc77664903e7de48ba5632e5d8bfeb5e3ed39c",
    "build_date" : "2025-06-18T22:08:41.171261054Z",
    "build_snapshot" : false,
    "lucene_version" : "9.12.1",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
# curl 'localhost:9200/_cat/nodes?h=n,v,j&v'
node version  java
host 8.18.3   24
# curl 'localhost:9200/_cat/plugins'
host readonlyrest 1.67.3

Apologies: this is a secondary task for me, so I can't find time for it every day.

Maybe the log file can tell us something?

[2025-12-03T17:57:24,642][INFO ][o.e.n.j.JdkVectorLibrary ] [host] vec_caps=1
[2025-12-03T17:57:24,667][INFO ][o.e.n.NativeAccess       ] [host] Using native vector library; to disable start with -Dorg.elasticsearch.nativeaccess.enableVectorLibrary=false
[2025-12-03T17:57:24,687][INFO ][o.e.n.NativeAccess       ] [host] Using [jdk] native provider and native methods for [Linux]
[2025-12-03T17:57:24,780][INFO ][o.a.l.i.v.PanamaVectorizationProvider] [host] Java vector incubator API enabled; uses preferredBitSize=256; FMA enabled
[2025-12-03T17:57:24,857][INFO ][o.e.b.Elasticsearch      ] [host] Bootstrapping Entitlements
[2025-12-03T17:57:28,653][INFO ][o.e.n.Node               ] [host] version[8.18.3], pid[1433833], build[rpm/28fc77664903e7de48ba5632e5d8bfeb5e3ed39c/2025-06-18T22:08:41.171261054Z], OS[Linux/5.15.0-309.180.4.el9uek.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/24/24+36-3646]
[2025-12-03T17:57:28,653][INFO ][o.e.n.Node               ] [host] JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]
[2025-12-03T17:57:28,653][INFO ][o.e.n.Node               ] [host] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=CLDR, -Dorg.apache.lucene.vectorization.upperJavaFeatureVersion=24, -Des.distribution.type=rpm, -Des.java.type=null, --enable-native-access=org.elasticsearch.nativeaccess,org.apache.lucene.core, --enable-native-access=ALL-UNNAMED, --illegal-native-access=deny, -XX:ReplayDataFile=/var/log/elasticsearch/replay_pid%p.log, -Des.entitlements.enabled=true, -XX:+EnableDynamicAgentLoading, -Djdk.attach.allowAttachSelf=true, --patch-module=java.base=lib/entitlement-bridge/elasticsearch-entitlement-bridge-8.18.3.jar, --add-exports=java.base/org.elasticsearch.entitlement.bridge=org.elasticsearch.entitlement,java.logging,java.net.http,java.naming,jdk.net, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-480381667597147711, --add-modules=jdk.incubator.vector, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,level,pid,tags:filecount=32,filesize=64m, -Xms3884m, -Xmx3884m, -XX:MaxDirectMemorySize=2036334592, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, --module-path=/usr/share/elasticsearch/lib, --add-modules=jdk.net, --add-modules=jdk.management.agent, --add-modules=ALL-MODULE-PATH, -Djdk.module.main=org.elasticsearch.server]
[2025-12-03T17:57:28,654][INFO ][o.e.n.Node               ] [host] Default Locale [en_US]
[2025-12-03T17:57:31,482][INFO ][t.b.r.b.LogPluginBuildInfoMessage$] [host] Starting ReadonlyREST plugin v1.67.3 on Elasticsearch v8.18.3
[2025-12-03T17:57:31,501][WARN ][o.e.e.r.p.P.r.ALL-UNNAMED] [host] Not entitled: component [readonlyrest], module [ALL-UNNAMED], class [class tech.beshu.ror.tools.core.utils.EsDirectory$], entitlement [file], operation [read], path [/usr/share/elasticsearch]
org.elasticsearch.entitlement.runtime.api.NotEntitledException: component [readonlyrest], module [ALL-UNNAMED], class [class tech.beshu.ror.tools.core.utils.EsDirectory$], entitlement [file], operation [read], path [/usr/share/elasticsearch]
	at org.elasticsearch.entitlement.runtime.policy.PolicyManager.notEntitled(PolicyManager.java:690) ~[elasticsearch-entitlement-8.18.3.jar:?]
	at org.elasticsearch.entitlement.runtime.policy.PolicyManager.checkFileRead(PolicyManager.java:511) ~[elasticsearch-entitlement-8.18.3.jar:?]
	at org.elasticsearch.entitlement.runtime.policy.PolicyManager.checkFileRead(PolicyManager.java:475) ~[elasticsearch-entitlement-8.18.3.jar:?]
	at org.elasticsearch.entitlement.runtime.policy.PolicyManager.checkFileRead(PolicyManager.java:454) ~[elasticsearch-entitlement-8.18.3.jar:?]
	at org.elasticsearch.entitlement.runtime.api.ElasticsearchEntitlementChecker.check$java_io_File$exists(ElasticsearchEntitlementChecker.java:1451) ~[elasticsearch-entitlement-8.18.3.jar:?]
	at java.io.File.exists(File.java) ~[?:?]
	at tech.beshu.ror.tools.core.utils.EsDirectory$.verifyEsLocation(EsDirectory.scala:69) ~[ror-tools-core.jar:?]
	at tech.beshu.ror.tools.core.utils.EsDirectory$.from(EsDirectory.scala:59) ~[ror-tools-core.jar:?]
	at tech.beshu.ror.tools.core.patches.PatchingVerifier$.createEsPatchExecutor$$anonfun$1(PatchingVerifier.scala:44) ~[ror-tools-core.jar:?]
	at scala.util.Try$.apply(Try.scala:217) ~[scala-library-2.13.14.jar:?]
	at tech.beshu.ror.tools.core.patches.PatchingVerifier$.createEsPatchExecutor(PatchingVerifier.scala:44) ~[ror-tools-core.jar:?]
	at tech.beshu.ror.tools.core.patches.PatchingVerifier$.verify(PatchingVerifier.scala:32) ~[ror-tools-core.jar:?]
	at tech.beshu.ror.es.utils.EsPatchVerifier$.verify$$anonfun$1$$anonfun$1(EsPatchVerifier.scala:28) ~[readonlyrest-1.67.3_es8.18.3.jar:?]
	at scala.util.Either.flatMap(Either.scala:360) ~[scala-library-2.13.14.jar:?]
	at tech.beshu.ror.es.utils.EsPatchVerifier$.verify$$anonfun$1(EsPatchVerifier.scala:28) ~[readonlyrest-1.67.3_es8.18.3.jar:?]
	at tech.beshu.ror.es.utils.EsPatchVerifier$.verify$$anonfun$adapted$1(EsPatchVerifier.scala:33) ~[readonlyrest-1.67.3_es8.18.3.jar:?]
	at tech.beshu.ror.utils.AccessControllerHelper$$anon$1.run(AccessControllerHelper.scala:28) ~[core-1.67.3.jar:?]
	at java.security.AccessController.doPrivileged(AccessController.java:74) ~[?:?]
	at tech.beshu.ror.utils.AccessControllerHelper$.doPrivileged(AccessControllerHelper.scala:29) ~[core-1.67.3.jar:?]
	at tech.beshu.ror.es.utils.EsPatchVerifier$.verify(EsPatchVerifier.scala:33) ~[readonlyrest-1.67.3_es8.18.3.jar:?]
	at tech.beshu.ror.es.ReadonlyRestPlugin.<init>(ReadonlyRestPlugin.scala:91) ~[readonlyrest-1.67.3_es8.18.3.jar:?]
	at jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:62) ~[?:?]
	at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499) ~[?:?]
	at java.lang.reflect.Constructor.newInstance(Constructor.java:483) ~[?:?]
	at org.elasticsearch.plugins.PluginsService.loadPlugin(PluginsService.java:512) ~[elasticsearch-8.18.3.jar:?]
	at org.elasticsearch.plugins.PluginsService.loadBundle(PluginsService.java:427) ~[elasticsearch-8.18.3.jar:?]
	at org.elasticsearch.plugins.PluginsService.lambda$loadPluginBundles$2(PluginsService.java:219) ~[elasticsearch-8.18.3.jar:?]
	at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:186) ~[?:?]
	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:215) ~[?:?]
	at java.util.Iterator.forEachRemaining(Iterator.java:133) ~[?:?]
	at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1939) ~[?:?]
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:570) ~[?:?]
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:560) ~[?:?]
	at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:153) ~[?:?]
	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:176) ~[?:?]
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:265) ~[?:?]
	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:636) ~[?:?]
	at org.elasticsearch.plugins.PluginsService.loadPluginBundles(PluginsService.java:219) ~[elasticsearch-8.18.3.jar:?]
	at org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:94) ~[elasticsearch-8.18.3.jar:?]
	at org.elasticsearch.node.NodeServiceProvider.newPluginService(NodeServiceProvider.java:57) ~[elasticsearch-8.18.3.jar:?]
	at org.elasticsearch.node.NodeConstruction.createEnvironment(NodeConstruction.java:480) ~[elasticsearch-8.18.3.jar:?]
	at org.elasticsearch.node.NodeConstruction.prepareConstruction(NodeConstruction.java:274) ~[elasticsearch-8.18.3.jar:?]
	at org.elasticsearch.node.Node.<init>(Node.java:201) ~[elasticsearch-8.18.3.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch$2.<init>(Elasticsearch.java:385) ~[elasticsearch-8.18.3.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:385) ~[elasticsearch-8.18.3.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:97) ~[elasticsearch-8.18.3.jar:?]
[2025-12-03T17:57:31,513][WARN ][t.b.r.e.u.EsPatchVerifier$] [host] Cannot verify if the ES was patched. component [readonlyrest], module [ALL-UNNAMED], class [class tech.beshu.ror.tools.core.utils.EsDirectory$], entitlement [file], operation [read], path [/usr/share/elasticsearch]
[2025-12-03T17:57:31,569][INFO ][t.b.r.c.RorProperties$   ] [host] No 'com.readonlyrest.settings.maxSize' property found. Using default: 3.0 MB
[2025-12-03T17:57:31,869][INFO ][t.b.r.c.RorSsl$          ] [host] Cannot find SSL configuration in /etc/elasticsearch/elasticsearch.yml ...
[2025-12-03T17:57:31,871][INFO ][t.b.r.c.RorSsl$          ] [host] ... trying: /etc/elasticsearch/readonlyrest.yml
[2025-12-03T17:57:31,926][INFO ][t.b.r.c.FipsConfiguration$] [host] Cannot find FIPS configuration in /etc/elasticsearch/elasticsearch.yml ...
[2025-12-03T17:57:31,926][INFO ][t.b.r.c.FipsConfiguration$] [host] ... trying: /etc/elasticsearch/readonlyrest.yml
[2025-12-03T17:57:31,929][INFO ][t.b.r.b.EsInitListener   ] [host] ReadonlyREST is waiting for full Elasticsearch init
[2025-12-03T17:57:32,119][INFO ][o.e.p.PluginsService     ] [host] loaded module [repository-url]
[2025-12-03T17:57:32,120][INFO ][o.e.p.PluginsService     ] [host] loaded module [rest-root]
[2025-12-03T17:57:32,120][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-core]
[2025-12-03T17:57:32,120][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-redact]
[2025-12-03T17:57:32,121][INFO ][o.e.p.PluginsService     ] [host] loaded module [ingest-user-agent]
[2025-12-03T17:57:32,121][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-async-search]
[2025-12-03T17:57:32,121][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-monitoring]
[2025-12-03T17:57:32,121][INFO ][o.e.p.PluginsService     ] [host] loaded module [repository-s3]
[2025-12-03T17:57:32,121][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-analytics]
[2025-12-03T17:57:32,122][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-esql-core]
[2025-12-03T17:57:32,122][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-ent-search]
[2025-12-03T17:57:32,122][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-autoscaling]
[2025-12-03T17:57:32,122][INFO ][o.e.p.PluginsService     ] [host] loaded module [lang-painless]
[2025-12-03T17:57:32,123][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-ml]
[2025-12-03T17:57:32,123][INFO ][o.e.p.PluginsService     ] [host] loaded module [lang-mustache]
[2025-12-03T17:57:32,123][INFO ][o.e.p.PluginsService     ] [host] loaded module [legacy-geo]
[2025-12-03T17:57:32,124][INFO ][o.e.p.PluginsService     ] [host] loaded module [logsdb]
[2025-12-03T17:57:32,124][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-ql]
[2025-12-03T17:57:32,124][INFO ][o.e.p.PluginsService     ] [host] loaded module [rank-rrf]
[2025-12-03T17:57:32,125][INFO ][o.e.p.PluginsService     ] [host] loaded module [analysis-common]
[2025-12-03T17:57:32,125][INFO ][o.e.p.PluginsService     ] [host] loaded module [health-shards-availability]
[2025-12-03T17:57:32,125][INFO ][o.e.p.PluginsService     ] [host] loaded module [transport-netty4]
[2025-12-03T17:57:32,125][INFO ][o.e.p.PluginsService     ] [host] loaded module [aggregations]
[2025-12-03T17:57:32,126][INFO ][o.e.p.PluginsService     ] [host] loaded module [ingest-common]
[2025-12-03T17:57:32,127][INFO ][o.e.p.PluginsService     ] [host] loaded module [frozen-indices]
[2025-12-03T17:57:32,127][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-identity-provider]
[2025-12-03T17:57:32,128][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-shutdown]
[2025-12-03T17:57:32,129][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-text-structure]
[2025-12-03T17:57:32,129][INFO ][o.e.p.PluginsService     ] [host] loaded module [snapshot-repo-test-kit]
[2025-12-03T17:57:32,129][INFO ][o.e.p.PluginsService     ] [host] loaded module [ml-package-loader]
[2025-12-03T17:57:32,129][INFO ][o.e.p.PluginsService     ] [host] loaded module [kibana]
[2025-12-03T17:57:32,129][INFO ][o.e.p.PluginsService     ] [host] loaded module [constant-keyword]
[2025-12-03T17:57:32,130][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-logstash]
[2025-12-03T17:57:32,130][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-ccr]
[2025-12-03T17:57:32,130][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-graph]
[2025-12-03T17:57:32,130][INFO ][o.e.p.PluginsService     ] [host] loaded module [rank-vectors]
[2025-12-03T17:57:32,131][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-esql]
[2025-12-03T17:57:32,131][INFO ][o.e.p.PluginsService     ] [host] loaded module [parent-join]
[2025-12-03T17:57:32,132][INFO ][o.e.p.PluginsService     ] [host] loaded module [counted-keyword]
[2025-12-03T17:57:32,133][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-enrich]
[2025-12-03T17:57:32,133][INFO ][o.e.p.PluginsService     ] [host] loaded module [repositories-metering-api]
[2025-12-03T17:57:32,133][INFO ][o.e.p.PluginsService     ] [host] loaded module [transform]
[2025-12-03T17:57:32,135][INFO ][o.e.p.PluginsService     ] [host] loaded module [repository-azure]
[2025-12-03T17:57:32,135][INFO ][o.e.p.PluginsService     ] [host] loaded module [dot-prefix-validation]
[2025-12-03T17:57:32,136][INFO ][o.e.p.PluginsService     ] [host] loaded module [repository-gcs]
[2025-12-03T17:57:32,139][INFO ][o.e.p.PluginsService     ] [host] loaded module [spatial]
[2025-12-03T17:57:32,139][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-otel-data]
[2025-12-03T17:57:32,140][INFO ][o.e.p.PluginsService     ] [host] loaded module [apm]
[2025-12-03T17:57:32,140][INFO ][o.e.p.PluginsService     ] [host] loaded module [mapper-extras]
[2025-12-03T17:57:32,140][INFO ][o.e.p.PluginsService     ] [host] loaded module [mapper-version]
[2025-12-03T17:57:32,142][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-rollup]
[2025-12-03T17:57:32,142][INFO ][o.e.p.PluginsService     ] [host] loaded module [percolator]
[2025-12-03T17:57:32,142][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-migrate]
[2025-12-03T17:57:32,142][INFO ][o.e.p.PluginsService     ] [host] loaded module [data-streams]
[2025-12-03T17:57:32,143][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-stack]
[2025-12-03T17:57:32,143][INFO ][o.e.p.PluginsService     ] [host] loaded module [rank-eval]
[2025-12-03T17:57:32,143][INFO ][o.e.p.PluginsService     ] [host] loaded module [reindex]
[2025-12-03T17:57:32,143][INFO ][o.e.p.PluginsService     ] [host] loaded module [systemd]
[2025-12-03T17:57:32,143][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-security]
[2025-12-03T17:57:32,143][INFO ][o.e.p.PluginsService     ] [host] loaded module [blob-cache]
[2025-12-03T17:57:32,143][INFO ][o.e.p.PluginsService     ] [host] loaded module [searchable-snapshots]
[2025-12-03T17:57:32,144][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-slm]
[2025-12-03T17:57:32,144][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-geoip-enterprise-downloader]
[2025-12-03T17:57:32,144][INFO ][o.e.p.PluginsService     ] [host] loaded module [snapshot-based-recoveries]
[2025-12-03T17:57:32,144][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-watcher]
[2025-12-03T17:57:32,144][INFO ][o.e.p.PluginsService     ] [host] loaded module [old-lucene-versions]
[2025-12-03T17:57:32,144][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-ilm]
[2025-12-03T17:57:32,144][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-inference]
[2025-12-03T17:57:32,144][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-voting-only-node]
[2025-12-03T17:57:32,145][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-deprecation]
[2025-12-03T17:57:32,145][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-fleet]
[2025-12-03T17:57:32,145][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-aggregate-metric]
[2025-12-03T17:57:32,145][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-downsample]
[2025-12-03T17:57:32,145][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-profiling]
[2025-12-03T17:57:32,145][INFO ][o.e.p.PluginsService     ] [host] loaded module [ingest-geoip]
[2025-12-03T17:57:32,145][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-write-load-forecaster]
[2025-12-03T17:57:32,146][INFO ][o.e.p.PluginsService     ] [host] loaded module [search-business-rules]
[2025-12-03T17:57:32,146][INFO ][o.e.p.PluginsService     ] [host] loaded module [ingest-attachment]
[2025-12-03T17:57:32,146][INFO ][o.e.p.PluginsService     ] [host] loaded module [wildcard]
[2025-12-03T17:57:32,146][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-apm-data]
[2025-12-03T17:57:32,146][INFO ][o.e.p.PluginsService     ] [host] loaded module [unsigned-long]
[2025-12-03T17:57:32,146][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-sql]
[2025-12-03T17:57:32,146][INFO ][o.e.p.PluginsService     ] [host] loaded module [runtime-fields-common]
[2025-12-03T17:57:32,146][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-async]
[2025-12-03T17:57:32,147][INFO ][o.e.p.PluginsService     ] [host] loaded module [vector-tile]
[2025-12-03T17:57:32,147][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-kql]
[2025-12-03T17:57:32,147][INFO ][o.e.p.PluginsService     ] [host] loaded module [lang-expression]
[2025-12-03T17:57:32,147][INFO ][o.e.p.PluginsService     ] [host] loaded module [x-pack-eql]
[2025-12-03T17:57:32,147][INFO ][o.e.p.PluginsService     ] [host] loaded plugin [readonlyrest]
[2025-12-03T17:57:33,462][INFO ][o.e.e.NodeEnvironment    ] [host] using [1] data paths, mounts [[/ (/dev/mapper/vg_system-lv_root)]], net usable_space [12.1gb], net total_space [17.9gb], types [xfs]
[2025-12-03T17:57:33,462][INFO ][o.e.e.NodeEnvironment    ] [host] heap size [3.7gb], compressed ordinary object pointers [true]
[2025-12-03T17:57:33,534][INFO ][o.e.n.Node               ] [host] node name [host], node ID [ufyJRjJMS5C1d1TquekSLQ], cluster name [elasticsearch], roles [data_warm, master, remote_cluster_client, data, data_cold, ingest, data_frozen, ml, data_hot, transform, data_content]
[2025-12-03T17:57:37,802][INFO ][o.e.i.r.RecoverySettings ] [host] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
[2025-12-03T17:57:38,077][INFO ][o.e.f.FeatureService     ] [host] Registered local node features [cluster.stats.source_modes, data_stream.auto_sharding, data_stream.lifecycle.global_retention, data_stream.rollover.lazy, desired_node.version_deprecated, esql.agg_values, esql.async_query, esql.base64_decode_encode, esql.casting_operator, esql.counter_types, esql.disable_nullable_opts, esql.from_options, esql.metadata_fields, esql.metrics_counter_fields, esql.mv_ordering_sorted_ascending, esql.mv_sort, esql.resolve_fields_api, esql.spatial_points_from_source, esql.spatial_shapes, esql.st_centroid_agg, esql.st_contains_within, esql.st_disjoint, esql.st_intersects, esql.st_x_y, esql.string_literal_auto_casting, esql.string_literal_auto_casting_extended, esql.timespan_abbreviations, features_supported, file_settings, flattened.ignore_above_support, geoip.downloader.database.configuration, get_database_configuration_action.multi_node, health.dsl.info, health.extended_repository_indicator, knn_retriever_supported, license-trial-independent-version, linear_retriever_supported, logsdb_telemetry, logsdb_telemetry_stats, mapper.boolean_dimension, mapper.flattened.ignore_above_with_arrays_support, mapper.ignore_above_index_level_setting, mapper.index_sorting_on_nested, mapper.keyword_dimension_ignore_above, mapper.keyword_normalizer_synthetic_source, mapper.pass_through_priority, mapper.query_index_mode, mapper.range.null_values_off_by_one_fix, mapper.segment_level_fields_stats, mapper.source.synthetic_source_copy_to_fix, mapper.source.synthetic_source_copy_to_inside_objects_fix, mapper.source.synthetic_source_fallback, mapper.source.synthetic_source_stored_fields_advance_fix, mapper.source.synthetic_source_with_copy_to_and_doc_values_false, mapper.subobjects_auto, mapper.subobjects_auto_fixes, mapper.synthetic_source_keep, mapper.track_ignored_source, mapper.vectors.bbq, mapper.vectors.bit_vectors, mapper.vectors.int4_quantization, put_database_configuration_action.ipinfo, query_rule_list_types, query_rule_retriever_supported, query_rules.test, random_reranker_retriever_supported, repositories.supports_usage_stats, rest.capabilities_action, rest.local_only_capabilities, retrievers_supported, routing.boolean_routing_path, routing.multi_value_routing_path, rrf_retriever_composition_supported, rrf_retriever_supported, script.hamming, script.term_stats, search.vectors.k_param_supported, security.migration_framework, security.queryable_built_in_roles, security.role_mapping_cleanup, security.roles_metadata_flattened, semantic_text.default_elser_2, semantic_text.search_inference_id, simulate.component.template.substitutions, simulate.ignored.fields, simulate.index.template.substitutions, simulate.mapping.addition, simulate.mapping.validation, simulate.mapping.validation.templates, simulate.support.non.template.mapping, slm.interval_schedule, snapshot.repository_verify_integrity, standard_retriever_supported, stats.include_disk_thresholds, text_similarity_reranker_retriever_composition_supported, text_similarity_reranker_retriever_supported, tsdb.ts_routing_hash_doc_value_parse_byte_ref, unified_highlighter_matched_fields, usage.data_tiers.precalculate_stats]
[2025-12-03T17:57:38,122][INFO ][o.e.c.m.DataStreamGlobalRetentionSettings] [host] Updated default factory retention to [null]
[2025-12-03T17:57:38,123][INFO ][o.e.c.m.DataStreamGlobalRetentionSettings] [host] Updated max factory retention to [null]
[2025-12-03T17:57:38,459][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [host] [controller/1433854] [Main.cc@123] controller (64 bit): Version 8.18.3 (Build ac992c24cfaf6c) Copyright (c) 2025 Elasticsearch BV
[2025-12-03T17:57:38,840][INFO ][o.e.x.o.OTelPlugin       ] [host] OTel ingest plugin is enabled
[2025-12-03T17:57:38,892][INFO ][o.e.x.c.t.YamlTemplateRegistry] [host] OpenTelemetry index template registry is enabled
[2025-12-03T17:57:38,898][INFO ][o.e.t.a.APM              ] [host] Sending apm metrics is disabled
[2025-12-03T17:57:38,899][INFO ][o.e.t.a.APM              ] [host] Sending apm tracing is disabled
[2025-12-03T17:57:38,956][INFO ][o.e.x.s.Security         ] [host] Security is disabled
[2025-12-03T17:57:39,270][INFO ][o.e.x.w.Watcher          ] [host] Watcher initialized components at 2025-12-03T17:57:39.270Z
[2025-12-03T17:57:39,402][INFO ][o.e.x.p.ProfilingPlugin  ] [host] Profiling is enabled
[2025-12-03T17:57:39,435][INFO ][o.e.x.p.ProfilingPlugin  ] [host] profiling index templates will not be installed or reinstalled
[2025-12-03T17:57:39,445][INFO ][o.e.x.a.APMPlugin        ] [host] APM ingest plugin is enabled
[2025-12-03T17:57:39,509][INFO ][o.e.x.c.t.YamlTemplateRegistry] [host] apm index template registry is enabled
[2025-12-03T17:57:40,328][INFO ][o.e.t.n.NettyAllocator   ] [host] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
[2025-12-03T17:57:40,394][INFO ][o.e.d.DiscoveryModule    ] [host] using discovery type [multi-node] and seed hosts providers [settings]
[2025-12-03T17:57:41,300][WARN ][o.e.t.TransportService   ] [host] invalid action name [upgrade_action] must start with one of: [cluster:monitor, indices:data/write, indices:admin, indices:monitor, indices:data/read, indices:internal, internal:, cluster:internal, cluster:admin]
[2025-12-03T17:57:41,627][WARN ][o.e.t.TransportService   ] [host] invalid action name [cat_action] must start with one of: [cluster:monitor, indices:data/write, indices:admin, indices:monitor, indices:data/read, indices:internal, internal:, cluster:internal, cluster:admin]
[2025-12-03T17:57:42,358][INFO ][o.e.n.Node               ] [host] initialized
[2025-12-03T17:57:42,359][INFO ][o.e.n.Node               ] [host] starting ...
[2025-12-03T17:57:42,381][INFO ][o.e.x.s.c.f.PersistentCache] [host] persistent cache index loaded
[2025-12-03T17:57:42,382][INFO ][o.e.x.d.l.DeprecationIndexingComponent] [host] deprecation component started
[2025-12-03T17:57:42,475][INFO ][o.e.t.TransportService   ] [host] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2025-12-03T17:57:42,856][WARN ][o.e.b.BootstrapChecks    ] [host] the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured; for more information see [https://www.elastic.co/guide/en/elasticsearch/reference/8.18/bootstrap-checks-discovery-configuration.html]
[2025-12-03T17:57:42,858][INFO ][o.e.c.c.ClusterBootstrapService] [host] this node is locked into cluster UUID [aPTYDyjUSbCu9G4xP9sMLQ] and will not attempt further cluster bootstrapping
[2025-12-03T17:57:42,868][INFO ][o.e.c.c.ClusterBootstrapService] [host] no discovery configuration found, will perform best-effort cluster bootstrapping after [3s] unless existing master is discovered
[2025-12-03T17:57:43,013][INFO ][o.e.c.s.MasterService    ] [host] elected-as-master ([1] nodes joined in term 11)[_FINISH_ELECTION_, {host}{ufyJRjJMS5C1d1TquekSLQ}{M-kCwhT3S4-_PeFDG-sZ6Q}{host}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{8.18.3}{7000099-8525000} completing election], term: 11, version: 159, delta: master node changed {previous [], current [{host}{ufyJRjJMS5C1d1TquekSLQ}{M-kCwhT3S4-_PeFDG-sZ6Q}{host}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{8.18.3}{7000099-8525000}]}
[2025-12-03T17:57:43,095][INFO ][o.e.c.s.ClusterApplierService] [host] master node changed {previous [], current [{host}{ufyJRjJMS5C1d1TquekSLQ}{M-kCwhT3S4-_PeFDG-sZ6Q}{host}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{8.18.3}{7000099-8525000}]}, term: 11, version: 159, reason: Publication{term=11, version=159}
[2025-12-03T17:57:43,165][INFO ][o.e.h.AbstractHttpServerTransport] [host] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2025-12-03T17:57:43,179][INFO ][o.e.c.c.NodeJoinExecutor ] [host] node-join: [{host}{ufyJRjJMS5C1d1TquekSLQ}{M-kCwhT3S4-_PeFDG-sZ6Q}{host}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{8.18.3}{7000099-8525000}] with reason [completing election]
[2025-12-03T17:57:43,191][INFO ][o.e.x.w.LicensedWriteLoadForecaster] [host] license state changed, now [valid]
[2025-12-03T17:57:43,222][INFO ][o.e.n.Node               ] [host] started {host}{ufyJRjJMS5C1d1TquekSLQ}{M-kCwhT3S4-_PeFDG-sZ6Q}{host}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{8.18.3}{7000099-8525000}{xpack.installed=true, transform.config_version=10.0.0, ml.machine_memory=8147161088, ml.allocated_processors=2, ml.allocated_processors_double=2.0, ml.max_jvm_size=4072669184, ml.config_version=12.0.0}
[2025-12-03T17:57:43,243][INFO ][o.e.n.j.JdkPosixCLibrary ] [host] Sending 7 bytes to socket
[2025-12-03T17:57:43,252][INFO ][t.b.r.b.EsInitListener   ] [host] Elasticsearch fully initiated. ReadonlyREST can continue ...
[2025-12-03T17:57:43,304][INFO ][t.b.r.c.l.ConfigLoadingInterpreter$] [host] Loading Elasticsearch settings from file: /etc/elasticsearch/elasticsearch.yml
[2025-12-03T17:57:43,320][WARN ][o.e.x.i.s.e.a.ElasticInferenceServiceAuthorizationHandler] [host] Failed to revoke access to default inference endpoint IDs: [rainbow-sprinkles], error: org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
[2025-12-03T17:57:43,314][WARN ][o.e.e.r.p.P.r.ALL-UNNAMED] [host] Not entitled: component [readonlyrest], module [ALL-UNNAMED], class [class tech.beshu.ror.es.EsEnv], entitlement [file], operation [read], path [/usr/share/elasticsearch/modules/x-pack-security]
org.elasticsearch.entitlement.runtime.api.NotEntitledException: component [readonlyrest], module [ALL-UNNAMED], class [class tech.beshu.ror.es.EsEnv], entitlement [file], operation [read], path [/usr/share/elasticsearch/modules/x-pack-security]
	at org.elasticsearch.entitlement.runtime.policy.PolicyManager.notEntitled(PolicyManager.java:690) ~[elasticsearch-entitlement-8.18.3.jar:?]
	at org.elasticsearch.entitlement.runtime.policy.PolicyManager.checkFileRead(PolicyManager.java:511) ~[elasticsearch-entitlement-8.18.3.jar:?]
	at org.elasticsearch.entitlement.runtime.policy.PolicyManager.checkFileRead(PolicyManager.java:475) ~[elasticsearch-entitlement-8.18.3.jar:?]
	at org.elasticsearch.entitlement.runtime.policy.PolicyManager.checkFileRead(PolicyManager.java:454) ~[elasticsearch-entitlement-8.18.3.jar:?]
	at org.elasticsearch.entitlement.runtime.api.ElasticsearchEntitlementChecker.check$java_io_File$exists(ElasticsearchEntitlementChecker.java:1451) ~[elasticsearch-entitlement-8.18.3.jar:?]
	at java.io.File.exists(File.java) ~[?:?]
	at tech.beshu.ror.es.EsEnv.isOssDistribution$$anonfun$1(EsEnv.scala:28) ~[?:?]
	at scala.util.Try$.apply(Try.scala:217) ~[?:?]
	at tech.beshu.ror.es.EsEnv.isOssDistribution(EsEnv.scala:29) ~[?:?]
	at tech.beshu.ror.configuration.EsConfig$.from$$anonfun$3(EsConfig.scala:44) ~[?:?]
	at cats.data.EitherT.flatMap$$anonfun$1(EitherT.scala:446) ~[?:?]
	at monix.eval.internal.TaskRunLoop$.startFull(TaskRunLoop.scala:189) ~[?:?]
	at monix.eval.internal.TaskRestartCallback.syncOnSuccess(TaskRestartCallback.scala:101) ~[?:?]
	at monix.eval.internal.TaskRestartCallback.onSuccess(TaskRestartCallback.scala:74) ~[?:?]
	at monix.execution.Callback.apply(Callback.scala:116) ~[?:?]
	at monix.eval.internal.TaskFromFuture$.startSimple$$anonfun$1(TaskFromFuture.scala:114) ~[?:?]
	at scala.runtime.function.JProcedure1.apply(JProcedure1.java:15) ~[?:?]
	at scala.runtime.function.JProcedure1.apply(JProcedure1.java:10) ~[?:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[?:?]
	at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.compute(ForkJoinTask.java:1735) ~[?:?]
	at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.compute(ForkJoinTask.java:1726) ~[?:?]
	at java.util.concurrent.ForkJoinTask$InterruptibleTask.exec(ForkJoinTask.java:1650) ~[?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:507) ~[?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1394) ~[?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1970) ~[?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:187) ~[?:?]
[2025-12-03T17:57:43,348][INFO ][t.b.r.c.RorSsl$          ] [host] Cannot find SSL configuration in /etc/elasticsearch/elasticsearch.yml ...
[2025-12-03T17:57:43,348][INFO ][t.b.r.c.RorSsl$          ] [host] ... trying: /etc/elasticsearch/readonlyrest.yml
[2025-12-03T17:57:43,367][INFO ][t.b.r.c.FipsConfiguration$] [host] Cannot find FIPS configuration in /etc/elasticsearch/elasticsearch.yml ...
[2025-12-03T17:57:43,368][INFO ][t.b.r.c.FipsConfiguration$] [host] ... trying: /etc/elasticsearch/readonlyrest.yml
[2025-12-03T17:57:43,386][INFO ][t.b.r.c.RorProperties$   ] [host] No 'com.readonlyrest.settings.loading.delay' property found. Using default: 5 seconds
[2025-12-03T17:57:43,388][INFO ][t.b.r.c.RorProperties$   ] [host] No 'com.readonlyrest.settings.loading.attempts.count' property found. Using default: 5
[2025-12-03T17:57:43,389][INFO ][t.b.r.c.RorProperties$   ] [host] No 'com.readonlyrest.settings.loading.attempts.interval' property found. Using default: 5 seconds
[2025-12-03T17:57:43,398][INFO ][t.b.r.c.l.ConfigLoadingInterpreter$] [host] [CLUSTERWIDE SETTINGS] Loading ReadonlyREST settings from index (.readonlyrest) ...
[2025-12-03T17:57:44,003][INFO ][o.e.x.m.MlIndexRollover  ] [host] ML legacy indices rolled over
[2025-12-03T17:57:44,004][INFO ][o.e.x.m.MlAnomaliesIndexUpdate] [host] legacy ml anomalies indices rolled over and aliases updated
[2025-12-03T17:57:44,199][INFO ][o.e.l.ClusterStateLicenseService] [host] license [8d82f73b-f85d-410e-b0d0-f33b64f28978] mode [basic] - valid
[2025-12-03T17:57:44,201][INFO ][o.e.c.f.AbstractFileWatchingService] [host] starting file watcher ...
[2025-12-03T17:57:44,207][INFO ][o.e.g.GatewayService     ] [host] recovered [3] indices into cluster_state
[2025-12-03T17:57:44,214][INFO ][o.e.c.f.AbstractFileWatchingService] [host] file settings service up and running [tid=66]
[2025-12-03T17:57:44,214][INFO ][o.e.r.s.FileSettingsService] [host] setting file [/etc/elasticsearch/operator/settings.json] not found, initializing [file_settings] as empty
[2025-12-03T17:57:44,256][INFO ][o.e.x.w.LicensedWriteLoadForecaster] [host] license state changed, now [not valid]
[2025-12-03T17:57:44,642][INFO ][o.e.h.n.s.HealthNodeTaskExecutor] [host] Node [{host}{ufyJRjJMS5C1d1TquekSLQ}] is selected as the current health node.
[2025-12-03T17:57:44,648][INFO ][o.e.c.r.a.AllocationService] [host] current.health="GREEN" message="Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.ds-ilm-history-7-2025.12.01-000001][0], [.ds-.logs-deprecation.elasticsearch-default-2025.12.01-000001][0], [.security-7][0]]])." previous.health="RED" reason="shards started [[.ds-ilm-history-7-2025.12.01-000001][0], [.ds-.logs-deprecation.elasticsearch-default-2025.12.01-000001][0], [.security-7][0]]"
[2025-12-03T17:57:48,450][INFO ][t.b.r.c.l.ConfigLoadingInterpreter$] [host] Loading ReadonlyREST settings from index failed: cannot find index
[2025-12-03T17:57:48,453][INFO ][t.b.r.c.l.ConfigLoadingInterpreter$] [host] [CLUSTERWIDE SETTINGS] Loading ReadonlyREST settings from index (.readonlyrest) ...
[2025-12-03T17:57:53,456][INFO ][t.b.r.c.l.ConfigLoadingInterpreter$] [host] Loading ReadonlyREST settings from index failed: cannot find index
[2025-12-03T17:57:53,457][INFO ][t.b.r.c.l.ConfigLoadingInterpreter$] [host] [CLUSTERWIDE SETTINGS] Loading ReadonlyREST settings from index (.readonlyrest) ...
[2025-12-03T17:57:58,463][INFO ][t.b.r.c.l.ConfigLoadingInterpreter$] [host] Loading ReadonlyREST settings from index failed: cannot find index
[2025-12-03T17:57:58,463][INFO ][t.b.r.c.l.ConfigLoadingInterpreter$] [host] [CLUSTERWIDE SETTINGS] Loading ReadonlyREST settings from index (.readonlyrest) ...
[2025-12-03T17:58:03,467][INFO ][t.b.r.c.l.ConfigLoadingInterpreter$] [host] Loading ReadonlyREST settings from index failed: cannot find index
[2025-12-03T17:58:03,467][INFO ][t.b.r.c.l.ConfigLoadingInterpreter$] [host] [CLUSTERWIDE SETTINGS] Loading ReadonlyREST settings from index (.readonlyrest) ...
[2025-12-03T17:58:08,470][INFO ][t.b.r.c.l.ConfigLoadingInterpreter$] [host] Loading ReadonlyREST settings from index failed: cannot find index
[2025-12-03T17:58:08,472][INFO ][t.b.r.c.l.ConfigLoadingInterpreter$] [host] Loading ReadonlyREST settings from file from: /etc/elasticsearch, because index not exist
[2025-12-03T17:58:08,488][INFO ][t.b.r.c.RorProperties$   ] [host] No 'com.readonlyrest.settings.loading.delay' property found. Using default: 5 seconds
[2025-12-03T17:58:08,488][INFO ][t.b.r.c.RorProperties$   ] [host] No 'com.readonlyrest.settings.loading.attempts.count' property found. Using default: 5
[2025-12-03T17:58:08,488][INFO ][t.b.r.c.RorProperties$   ] [host] No 'com.readonlyrest.settings.loading.attempts.interval' property found. Using default: 5 seconds
[2025-12-03T17:58:08,495][INFO ][t.b.r.c.l.TestConfigLoadingInterpreter$] [host] [CLUSTERWIDE SETTINGS] Loading ReadonlyREST test settings from index (.readonlyrest) ...
[2025-12-03T17:58:13,856][INFO ][t.b.r.a.f.RawRorConfigBasedCoreFactory] [host] ADDING BLOCK:	 { name: 'LOCALHOST-only access', policy: ALLOW, rules: [hosts]
[2025-12-03T17:58:13,888][INFO ][t.b.r.b.RorInstance      ] [host] ReadonlyREST was loaded ...
[2025-12-03T17:58:13,891][INFO ][t.b.r.c.RorProperties$   ] [host] No 'com.readonlyrest.settings.refresh.interval' property found. Using default: 5 seconds
[2025-12-03T17:58:13,906][INFO ][t.b.r.b.e.MainConfigBasedReloadableEngine] [host] ROR main engine (id=031037ac9f475607d8e9a1b2cbd645ccd37a3b0d) was initiated (Enabled ROR ACL).
[2025-12-03T17:59:22,669][INFO ][t.b.r.a.l.AccessControlListLoggingDecorator] [host] ALLOWED by { name: 'LOCALHOST-only access', policy: ALLOW, rules: [hosts] req={ ID:ba56c475-4af2-4199-829f-920a36339a7c-1025379724#119, TYP:MainRequest, CGR:<N/A>, USR:[no info about user], BRS:true, KDX:null, ACT:cluster:monitor/main, OA:127.0.0.1/32, XFF:null, DA:127.0.0.1/32, IDX:<N/A>, MET:GET, PTH:/, CNT:<N/A>, HDR:Accept=*/*, Host=localhost:9200, User-Agent=curl/7.76.1, content-length=0, HIS:[LOCALHOST-only access-> RULES:[hosts->true]], }
[2025-12-03T18:08:45,976][INFO ][t.b.r.a.l.AccessControlListLoggingDecorator] [host] ALLOWED by { name: 'LOCALHOST-only access', policy: ALLOW, rules: [hosts] req={ ID:a58a9b44-8aa4-4282-abbe-bf59f6d91a3c-617723621#566, TYP:MainRequest, CGR:<N/A>, USR:[no info about user], BRS:true, KDX:null, ACT:cluster:monitor/main, OA:127.0.0.1/32, XFF:null, DA:127.0.0.1/32, IDX:<N/A>, MET:GET, PTH:/, CNT:<N/A>, HDR:Accept=*/*, Host=localhost:9200, User-Agent=curl/7.76.1, content-length=0, HIS:[LOCALHOST-only access-> RULES:[hosts->true]], }
[2025-12-03T18:09:50,879][INFO ][t.b.r.a.l.AccessControlListLoggingDecorator] [host] ALLOWED by { name: 'LOCALHOST-only access', policy: ALLOW, rules: [hosts] req={ ID:6d3ba32a-be36-4151-9e29-990ad91b3fc3-537901821#618, TYP:ClusterStateRequest, CGR:<N/A>, USR:[no info about user], BRS:true, KDX:null, ACT:cluster:monitor/state, OA:127.0.0.1/32, XFF:null, DA:127.0.0.1/32, IDX:*, MET:GET, PTH:/_cat/nodes, CNT:<N/A>, HDR:Accept=*/*, Host=localhost:9200, User-Agent=curl/7.76.1, content-length=0, HIS:[LOCALHOST-only access-> RULES:[hosts->true] RESOLVED:[indices=*]], }
[2025-12-03T18:10:00,229][INFO ][t.b.r.a.l.AccessControlListLoggingDecorator] [host] ALLOWED by { name: 'LOCALHOST-only access', policy: ALLOW, rules: [hosts] req={ ID:cc722966-5692-46ec-a8ce-c4504bf12f6d-673362588#628, TYP:ClusterStateRequest, CGR:<N/A>, USR:[no info about user], BRS:true, KDX:null, ACT:cluster:monitor/state, OA:127.0.0.1/32, XFF:null, DA:127.0.0.1/32, IDX:*, MET:GET, PTH:/_cat/nodes, CNT:<N/A>, HDR:Accept=*/*, Host=localhost:9200, User-Agent=curl/7.76.1, content-length=0, HIS:[LOCALHOST-only access-> RULES:[hosts->true] RESOLVED:[indices=*]], }
[2025-12-03T18:10:58,539][INFO ][t.b.r.a.l.AccessControlListLoggingDecorator] [host] ALLOWED by { name: 'LOCALHOST-only access', policy: ALLOW, rules: [hosts] req={ ID:5d69d1fc-48f2-4423-a90f-04fef1ed1e2f-788071571#678, TYP:ClusterStateRequest, CGR:<N/A>, USR:[no info about user], BRS:true, KDX:null, ACT:cluster:monitor/state, OA:127.0.0.1/32, XFF:null, DA:127.0.0.1/32, IDX:*, MET:GET, PTH:/_cat/plugins, CNT:<N/A>, HDR:Accept=*/*, Host=localhost:9200, User-Agent=curl/7.76.1, content-length=0, HIS:[LOCALHOST-only access-> RULES:[hosts->true] RESOLVED:[indices=*]], }

Just to confirm - is the logger configuration present in the readonlyrest.yml file?

It should look something like this:

readonlyrest:
    audit:
      enabled: true
      outputs: 
      - type: log
        logger_name: readonlyrest_audit

(I ask because there is no such config in the /etc/elasticsearch/readonlyrest.yml file you've just provided.)

After adding your lines, each access log entry began to be written twice to the elasticsearch.log file:

[2025-12-03T19:46:17,006][INFO ][t.b.r.a.l.AccessControlListLoggingDecorator] [host] ALLOWED by { name: 'LOCALHOST-only access', policy: ALLOW, rules: [hosts] req={ ID:76368665-fa97-48e8-9115-36910b6660fc-1627188978#237, TYP:ClusterStateRequest, CGR:<N/A>, USR:[no info about user], BRS:true, KDX:null, ACT:cluster:monitor/state, OA:127.0.0.1/32, XFF:null, DA:127.0.0.1/32, IDX:*, MET:GET, PTH:/_cat/plugins, CNT:<N/A>, HDR:Accept=*/*, Host=localhost:9200, User-Agent=curl/7.76.1, content-length=0, HIS:[LOCALHOST-only access-> RULES:[hosts->true] RESOLVED:[indices=*]], }
[2025-12-03T19:46:17,074][INFO ][readonlyrest_audit       ] [host] {"headers":["Accept","Host","User-Agent","content-length"],"es_cluster_name":"elasticsearch","es_node_name":"host","acl_history":"[LOCALHOST-only access-> RULES:[hosts->true] RESOLVED:[indices=*]]","origin":"127.0.0.1/32","final_state":"ALLOWED","match":true,"destination":"127.0.0.1/32","task_id":237,"req_method":"GET","type":"ClusterStateRequest","path":"/_cat/plugins","indices":["*"],"@timestamp":"2025-12-03T19:46:16Z","content_len_kb":0,"processingMillis":160,"correlation_id":"76368665-fa97-48e8-9115-36910b6660fc","action":"cluster:monitor/state","block":"{ name: 'LOCALHOST-only access', policy: ALLOW, rules: [hosts]","id":"76368665-fa97-48e8-9115-36910b6660fc-1627188978#237","content_len":0}

The access log is still being written to the wrong file, not to the separate access log file.

New ROR config:

readonlyrest:
    audit:
      enabled: true
      outputs: 
      - type: log
        logger_name: readonlyrest_audit

    access_control_rules:
    - name: "LOCALHOST-only access"
      hosts: ["127.0.0.1", "localhost"]
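
A possible explanation (an assumption, not confirmed in this thread): the Log4j2 logger in the original log4j2.properties is named `tech.beshu.ror`, while the new audit output emits under `logger_name: readonlyrest_audit`. With no Log4j2 logger matching that name, the audit entries fall through to the root logger, which writes to elasticsearch.log, and the `tech.beshu.ror` ACL decorator line is still printed there as well, giving the duplication seen above. A minimal sketch of the matching Log4j2 side, reusing the `access_log_rolling` appender from the original config:

```properties
# Sketch only: assumes the "access_log_rolling" appender defined in the
# original log4j2.properties is still present. The logger name must match
# the logger_name set in the ROR audit output ("readonlyrest_audit").
logger.ror_audit.name = readonlyrest_audit
logger.ror_audit.level = info
logger.ror_audit.appenderRef.access_log_rolling.ref = access_log_rolling
# additivity = false stops these entries from also propagating to the
# root logger, which would write them to elasticsearch.log again.
logger.ror_audit.additivity = false
```

If the duplicate `AccessControlListLoggingDecorator` lines in elasticsearch.log are also unwanted, a separate logger entry for `tech.beshu.ror` with a raised level or a filter would be needed; that part is a guess and worth confirming with the ROR team.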