Elasticsearch ReadonlyREST plugin taking a very long time to load

I have an issue with Elasticsearch taking a very long time to load the ReadonlyREST plugin.

My setup
OS: centos-release-7-6.1810.2.el7.centos.x86_64
Elasticsearch version: 6.5.4
ROR version: readonlyrest-1.16.33_es6.5.4

Elasticsearch config
The only line I added to elasticsearch.yml is:

xpack.security.enabled: false

Working ROR Configuration
This config works fine:

readonlyrest:
    access_control_rules:
    - name: "::KIBANA-SRV::"
      auth_key: kibana:kibana

I have verified this is working by running the following and getting a 200 response.

curl -vvv -u kibana:kibana "http://localhost:9200/_cat/indices"

And running the following gives 401 as expected:

curl -vvv "http://localhost:9200/_cat/indices"

ROR Configuration that causes the issue

readonlyrest:
    access_control_rules:
    - name: "::KIBANA-SRV::"
      auth_key: kibana:kibana

    - name: adminuser
      kibana_access: admin
      jwt_auth:
        name: "webseal"
        roles: ["kibana_admin"]

    jwt:
    - name: webseal
      signature_algo: RSA
      signature_key: "your_signature_min_256_chars"
      user_claim: sub
      roles_claim: group
      header_name: jwt
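
For context, if I understand the header_name option correctly, a request matching the adminuser block would carry the signed token in the custom jwt header, something like this (the token value here is just a placeholder, not a real signed JWT):

curl -vvv -H "jwt: <signed-jwt-token>" "http://localhost:9200/_cat/indices"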

When I started writing this post I thought Elasticsearch wasn't starting at all, but it turns out it just takes an extremely long time to load the ROR plugin. Adding the jwt block to the config costs 24 minutes of startup time. You can see this in the Elasticsearch log below: the first ACL block is added at 11:08 and the next one at 11:32. Does anyone know why this is happening? Thanks.

[2019-02-04T11:08:00,444][INFO ][o.e.p.PluginsService     ] [NpgnpOt] loaded module [x-pack-watcher]
[2019-02-04T11:08:00,444][INFO ][o.e.p.PluginsService     ] [NpgnpOt] loaded plugin [readonlyrest]
[2019-02-04T11:08:04,943][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [NpgnpOt] [controller/5934] [Main.cc@109] controller (64 bit): Version 6.5.4 (Build b616085ef32393) Copyright (c) 2018 Elasticsearch BV
[2019-02-04T11:08:05,416][INFO ][t.b.r.e.IndexLevelActionFilter] [NpgnpOt] Settings observer refreshing...
[2019-02-04T11:08:05,589][INFO ][t.b.r.a.ACL              ] [NpgnpOt] ADDING BLOCK:     { name: '::KIBANA-SRV::', policy: ALLOW, rules: [auth_key]}
[2019-02-04T11:32:14,003][INFO ][t.b.r.a.ACL              ] [NpgnpOt] ADDING BLOCK:     { name: 'adminuser', policy: ALLOW, rules: [kibana_access, jwt_auth]}
[2019-02-04T11:32:14,005][INFO ][t.b.r.e.IndexLevelActionFilter] [NpgnpOt] Configuration reloaded - ReadonlyREST enabled
[2019-02-04T11:32:14,005][INFO ][t.b.r.e.IndexLevelActionFilter] [NpgnpOt] Readonly REST plugin was loaded...
[2019-02-04T11:32:14,203][DEBUG][o.e.a.ActionModule       ] [NpgnpOt] Using REST wrapper from plugin tech.beshu.ror.es.ReadonlyRestPlugin
[2019-02-04T11:32:14,452][INFO ][o.e.d.DiscoveryModule    ] [NpgnpOt] using discovery type [zen] and host providers [settings]
[2019-02-04T11:32:15,017][INFO ][t.b.r.c.s.SettingsPoller ] [NpgnpOt] [CLUSTERWIDE SETTINGS] Cluster not ready...
[2019-02-04T11:32:15,149][INFO ][o.e.n.Node               ] [NpgnpOt] initialized
[2019-02-04T11:32:15,149][INFO ][o.e.n.Node               ] [NpgnpOt] starting ...
[2019-02-04T11:32:15,323][INFO ][o.e.t.TransportService   ] [NpgnpOt] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2019-02-04T11:32:16,019][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [NpgnpOt] no known master node, scheduling a retry
[2019-02-04T11:32:18,399][INFO ][o.e.c.s.MasterService    ] [NpgnpOt] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {NpgnpOt}{NpgnpOthT0uY9LfTc4IWeQ}{gRrLsmpmQfCfx_7niN31Ew}{127.0.0.1}{$
[2019-02-04T11:32:18,404][INFO ][o.e.c.s.ClusterApplierService] [NpgnpOt] new_master {NpgnpOt}{NpgnpOthT0uY9LfTc4IWeQ}{gRrLsmpmQfCfx_7niN31Ew}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=8201478144, xpack.$
[2019-02-04T11:32:18,423][INFO ][t.b.r.c.s.SettingsPoller ] [NpgnpOt] [CLUSTERWIDE SETTINGS] Cluster not ready...
[2019-02-04T11:32:18,436][INFO ][o.e.h.n.Netty4HttpServerTransport] [NpgnpOt] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2019-02-04T11:32:18,436][INFO ][o.e.n.Node               ] [NpgnpOt] started
[2019-02-04T11:32:19,031][INFO ][o.e.l.LicenseService     ] [NpgnpOt] license [0648a842-df13-4c99-8153-522cea30358a] mode [basic] - valid
[2019-02-04T11:32:19,042][INFO ][o.e.g.GatewayService     ] [NpgnpOt] recovered [4] indices into cluster_state
[2019-02-04T11:32:19,425][INFO ][t.b.r.c.s.SettingsPoller ] [NpgnpOt] [CLUSTERWIDE SETTINGS] Cluster not ready...
[2019-02-04T11:32:19,492][INFO ][o.e.c.r.a.AllocationService] [NpgnpOt] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.kibana_1][0], [kibana_sample_data_ecommerce][0], [kiba$
[2019-02-04T11:32:20,429][INFO ][t.b.r.e.SettingsObservableImpl] [NpgnpOt] [CLUSTERWIDE SETTINGS] index settings not found. Will keep on using the local YAML file. Learn more about clusterwide settings at h$
[2019-02-04T12:23:15,656][INFO ][t.b.r.a.ACL              ] [NpgnpOt] ALLOWED by { name: '::KIBANA-SRV::', policy: ALLOW, rules: [auth_key]} req={ ID:116243600-247446465#1658, TYP:ClusterStateRequest,$
[2019-02-04T12:24:33,796][INFO ][t.b.r.a.ACL              ] [NpgnpOt] FORBIDDEN by default req={ ID:196642771-1551202143#1706, TYP:ClusterStateRequest, CGR:N/A, USR:[no basic auth header], BRS:true, K$

It seems this only happens after restarting the Elasticsearch service by running:

sudo systemctl stop elasticsearch.service
sudo systemctl start elasticsearch.service

So my current workaround is to reboot the whole server instead of just restarting the service.

Hi @anon39196365,

Have you seen this?

Thanks for the quick reply. It now takes about a minute to start, which is acceptable.


@sscarduzio
I now see that adding more ACL blocks with jwt_auth takes about one minute per block. This means that every role we create adds a minute of Elasticsearch startup time, which is a bit inconvenient.

Do you know if this issue will be fixed, or will it always take one minute per block?

I just fixed this in master.


Now it does not generate a new random seed every time we use the encrypted in-memory cache (that's where we keep the credential hashes for external authentication connectors).

It should generate the seed only once now.
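
If you want to confirm that entropy starvation is what slows a given box down, you can watch the kernel's entropy pool while Elasticsearch starts (this is a standard Linux interface, nothing ROR-specific):

cat /proc/sys/kernel/random/entropy_avail

If that number sits near zero during startup, reads from /dev/random will block, which is exactly the kind of stall you saw.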

To make it additionally faster in Docker, you can use /dev/urandom as the source of randomness:

 -Djava.security.egd=file:/dev/./urandom
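
If you try this, one place to set it (assuming the standard package install layout) is Elasticsearch's jvm.options file, or alternatively the ES_JAVA_OPTS environment variable:

# append to /etc/elasticsearch/jvm.options (path assumes the RPM/DEB package layout)
-Djava.security.egd=file:/dev/./urandom

# or, equivalently, via the environment before starting Elasticsearch:
export ES_JAVA_OPTS="$ES_JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"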

By the way, are you aware of Haveged? To make it even faster, you can run it as a daemon in another container, just to provide a faster /dev/urandom.
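
Since you are on CentOS 7, a sketch of the non-Docker route (this assumes the EPEL repository, which is where the haveged package lives):

sudo yum install -y epel-release
sudo yum install -y haveged
sudo systemctl enable haveged
sudo systemctl start haveged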
