ReadonlyREST PRO - Kibana LDAP issue

Elastic/kibana version: 6.5.4
I’ve set up our Elastic cluster with Kibana and ReadonlyREST and everything works fine. We have LDAP authentication and, still, everything works fine.
However, if I dare to edit the ReadonlyREST config through the Kibana interface, everything seems to work except that it appears to completely discard the LDAP configuration, requiring a full Elasticsearch restart after updating the local readonlyrest.yml file (restarting Kibana changes nothing).
Is it possible this is a bug in the ReadonlyREST module and its reloading of the LDAP connection after config changes, or should I just never use the Kibana interface to update the config? (I always update the file as well; I only use the Kibana interface to avoid restarting Elasticsearch.)

Hi @BirkirFreyr, the settings editor included in our Kibana plugin writes settings to the “.readonlyrest” index and refreshes the plugin settings. It does not update readonlyrest.yml on the filesystem.

Please check whether you have any “FORBIDDEN” log lines in the ES logs that prevent the settings from being stored, and check in ES whether the “.readonlyrest” index is present and has been populated with any document.

Once the settings are written to the index, they take priority: the in-index settings override the YAML file, even after an ES cluster restart.
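Both checks can be done from the shell. A minimal sketch, assuming ES listens on https://localhost:9200, an admin basic-auth user exists, and logs live under /var/log/elasticsearch (all of these are assumptions, adjust to your cluster):

```shell
# Look for FORBIDDEN lines that could have blocked the settings write
# (log path is an assumption; adjust to your installation)
grep "FORBIDDEN" /var/log/elasticsearch/*.log | tail

# Confirm the .readonlyrest index exists and holds a settings document
# (-k skips TLS verification; credentials are placeholders)
curl -sk -u admin:password "https://localhost:9200/.readonlyrest/_search?pretty"
```

If the search returns a hit, the in-index settings are what the cluster is actually enforcing, not the YAML file.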

.readonlyrest index:

readonlyrest:
  ssl:
    enable: true
    keystore_file: "/usr/share/elasticsearch/plugins/readonlyrest/keystore.jks"
    keystore_pass: readonlyrest
    key_pass: readonlyrest

  response_if_req_forbidden: Forbidden by ReadonlyREST ES Plugin

  # IMPORTANT FOR LOGIN/LOGOUT TO WORK
  prompt_for_basic_auth: false
  audit_collector: true

  access_control_rules:
  - name: "::LOCALHOST::"
    hosts: [127.0.0.1]
    verbosity: error

  - name: "::CURATOR::"
    verbosity: error
    auth_key: "curator:curator"
    actions: [
      "cluster:monitor/main",
      "cluster:monitor/state",
      "indices:monitor/stats",
      "indices:monitor/settings/get",
      "indices:admin/types/exists",
      "indices:admin/close",
      "indices:admin/open",
      "indices:admin/exists",
      "indices:admin/synced_flush",
      "indices:admin/delete",
      "indices:data/read/*",
      ]
    indices: ["*"]

  - name: "::ADMINS::"
    verbosity: error
    kibana_access: admin
    ldap_auth:
      name: "ldap"
      groups: ["group1", "group2", "group3"]

  - name: "::KIBANA-SRV::"
    verbosity: error
    auth_key: "kibana:logstash"

  - name: "::LOGSTASH::"
    verbosity: error
    auth_key: "logstash:logstash"
    actions: [
      "cluster:monitor/main",
      "indices:admin/types/exists",
      "indices:data/read/*",
      "indices:data/write/*",
      "indices:admin/template/*",
      "indices:admin/create",
      ]
    indices: ["logstash-*"]

  ldaps:
  - name: ldap
    host: "ldap.example.com"
    port: 3268
    ssl_enabled: false
    ssl_trust_all_certs: true
    bind_dn: "CN=LDAPauth,CN=Users,DC=example,DC=com"
    bind_password: "somepass"
    search_user_base_DN: "CN=Users,DC=example,DC=com"
    user_id_attribute: "sAMAccountName"
    search_groups_base_DN: "OU=Groups,DC=example,DC=com"
    unique_member_attribute: "member"
    group_search_filter: "(objectClass=group)"
    connection_pool_size: 10
    connection_timeout_in_sec: 10
    request_timeout_in_sec: 10
    cache_ttl_in_sec: 600

Added an extra index to the ::CURATOR:: config, saved it, and it was instantly updated in the .readonlyrest index. But as soon as I hit the save button and the new config takes effect, I can no longer log in with my LDAP credentials.

The “acl_history” line from readonlyrest_audit:
"acl_history": "[::LOCALHOST::->[hosts->false]], [::CURATOR::->[indices->true, auth_key->true, actions->false]], [::ADMINS::->[ldap_authentication->false]], [::KIBANA-SRV::->[auth_key->false]], [::LOGSTASH::->[auth_key->false]]",

As you can clearly see, there is a block for LDAP in the configuration, and yet after saving it no longer seems to be checked.

And it’s getting weirder…
After testing a change, saving it, and waiting a few minutes, I sometimes authenticate and sometimes don’t.
Entered my credentials -> failed. Tried again -> success, I’m logged in.
Went to look at an index (Discover) and got a message saying “Forbidden by ROR ES plugin” - hit Kibana’s index “Refresh” button and I got the results…
What the heck is happening? haha - is my configuration wrong, or is my cluster freaking out?

It should be mentioned that my Kibana talks to a load balancer, which in turn balances between 3 nodes - so perhaps one of them has reloaded its configuration and dropped the LDAP connection, but not all of them?
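One way to test that suspicion is to bypass the load balancer and try the same LDAP login against each node directly. A sketch, where the node hostnames, port, and password are placeholders, not the real values:

```shell
# Hit each node directly, bypassing the LB; a node still holding stale
# settings should answer 401/403 while the reloaded nodes answer 200.
for node in elastic01 elastic02 elastic03; do
  code=$(curl -sk -o /dev/null -w '%{http_code}' \
    -u 'birkir:ldap-password' "https://${node}:9200/_cluster/health")
  echo "${node}: HTTP ${code}"
done
```

A node that consistently rejects the LDAP user while its siblings accept it would pinpoint which instance failed to re-read the .readonlyrest index.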

Edit: Tested some more while tailing the ES logs on all 3 machines - after updating the ROR config, only 2 of them re-read the index; the 3rd logged no sign of re-reading it and now never tries the ::ADMINS:: block:
FORBIDDEN by default req={ ID:153818657--134005668#3223, TYP:MultiSearchRequest, CGR:N/A, USR:birkir(?), BRS:false, KDX:null, ACT:indices:data/read/msearch, OA:10.170.100.115, DA:0.0.0.0, IDX:logstash-prod-*, MET:POST, PTH:/_msearch, CNT:<OMITTED, LENGTH=802>, HDR:{authorization=<OMITTED>, content-length=802, x-forwarded-proto=https, Connection=close, x-forwarded-port=51612, content-type=application/x-ndjson, Host=elastic01.vedur.is, x-forwarded-for=10.170.100.99}, HIS:[::LOCALHOST::->[hosts->false]], [::CURATOR::->[auth_key->false]], [::KIBANA-SRV::->[auth_key->false]], [::LOGSTASH::->[auth_key->false]], [::BEATS::->[groups->false]], [Grafana read-only access->[auth_key->false]] }

@BirkirFreyr, the settings refresher polls every 5 seconds. The poller runs on every ES instance with ROR installed.
You can see the poller in action if you turn on debug logs in ES.
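Debug logging can be raised at runtime through the cluster settings API rather than editing log4j2.properties and restarting. A sketch, assuming localhost:9200 with an admin user; the logger name tech.beshu.ror matches the plugin’s Java package, but verify it against your installed ROR version:

```shell
# Raise the ReadonlyREST logger to debug on all nodes (transient: resets
# on full cluster restart). Credentials and endpoint are placeholders.
curl -sk -u admin:password -X PUT "https://localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"transient": {"logger.tech.beshu.ror": "debug"}}'
```

With this in place you should see the 5-second poller logging on every node, which makes it easy to spot the node that is not refreshing.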