[undefined] Forbidden, with { due_to={ 0="OPERATION_NOT_ALLOWED" } } on Kibana when accessing ILM

Hi,
We’re using RoR plugin together with the ELK stack. We’re unable to access the Index Management Kibana UI Page due to the following popup error:
[undefined] Forbidden, with { due_to={ 0="OPERATION_NOT_ALLOWED" } }

I already tried to add a kibana_adm role, with no success. After inserting it, the ELK stack is not able to come up, and we see many "Encountered a retryable error. Will Retry with exponential backoff" messages in syslog.

Versioning is:

  • readonlyrest-1.18.2_es6.7.1

  • Kibana and Elasticsearch 6.7.1

The elasticsearch.yml file is as default except for:

    http.cors.enabled: true
    http.cors.allow-origin: "*"
    http.cors.allow-headers: Authorization
    http.type: ssl_netty4
    xpack.security.enabled: false

The readonlyrest file is as follows:

readonlyrest:
      enable: true # optional, default=true if at least one "access_control_rules" block is present
      audit_collector: true

      ssl:
        enable: true
        keystore_file: "/etc/elasticsearch/ssl/server.pfx"
        keystore_pass: fihpqfd32d23f4
        key_pass: fih4re3e23d4

      response_if_req_forbidden: Forbidden

      access_control_rules:

      - name: "::LOGSTASH::"
        # auth_key is good for testing, but replace it with `auth_key_sha256`!
        auth_key_sha256: r34t35g4gfdwsdewdqwdqwdqw8eec190aecdfc48b1cc37e728fb
        actions: ["cluster:monitor/main","indices:admin/types/exists","indices:data/read/*","indices:data/write/*","indices:admin/template/*","indices:admin/create"]
        indices: ["logstash*","security*","stores*","mobileapps*","jmeter*","auditbeat*"]
        verbosity: error # don't log successful request

      # We trust Kibana's server side process, full access granted via HTTP authentication
      - name: "::KIBANA-SRV::"
        # auth_key is good for testing, but replace it with `auth_key_sha256`!
        auth_key_sha256: 47e40r42rf34g35feqdwdsafb68c8b96
        verbosity: error # don't log successful request

      - name: "::CURATOR::"
        # auth_key is good for testing, but replace it with `auth_key_sha256`!
        auth_key_sha256: 08a79916e5t53wewewerwrewrer30f115a33
        verbosity: error # don't log successful request

      - name: "::GRAFANA-SRV::"
        # auth_key is good for testing, but replace it with `auth_key_sha256`!
        auth_key_sha256: 7103f686fd43r43r2er23r24r4r2r43a0fe5981f6
        actions: ["indices:data/read/*","indices:admin/get","cluster:monitor/main","indices:admin/mappings/get"]
        verbosity: error # don't log successful request

      - name: "::monitoring::"
        auth_key_sha256: bbd543r34r2ed354t3rd2e314a8971695
        verbosity: error
        actions: ["cluster:monitor/*","indices:monitor/*"]

      - name: "::elastalert_on_own_index::"
        auth_key_sha256: d10916704r43r2d4r4rwewer653fe38862
        actions: ["cluster:monitor/main","indices:admin/types/exists","indices:data/read/*","indices:data/write/*","indices:admin/template/*","indices:admin/create"]
        verbosity: error
        indices: ["elastalert*"]

      - name: "::elastalert_on_logstash::"
        auth_key_sha256: d10914trwf4tr3f4r34r34tr34rd43r3fe38862
        verbosity: error
        actions: ["indices:data/read/*","cluster:monitor/*"]
        indices: ["logstash*","security*","auditbeat*"]

      - name: "::es2logs_on_logstash::"
        auth_key_sha256: fa71r35t434rd43rf34tr3f4r43tf34f34f34rf3684f176
        verbosity: error
        actions: ["indices:data/read/*","cluster:monitor/*"]
        indices: ["logstash*"]

      - name: "::SALTSTACK-SRV::"
        # auth_key is good for testing, but replace it with `auth_key_sha256`!
        auth_key_sha256: 0a47dr43r43r34r34r35t465t3rr4r34r34r779b172b127c38a92
        verbosity: error # don't log successful request
        indices: ["releases"]

      - name: "::SALTSTACK-SRV::MONITOR"
        # auth_key is good for testing, but replace it with `auth_key_sha256`!
        auth_key_sha256: 0a47d4r34r34r3r3ty65t34r34r3t4r23e24re72b127c38a92
        verbosity: error # don't log successful request
        actions: ["cluster:monitor/*"]

      - name: "field_caps stuff"
        verbosity: error # don't log successful request
        type: allow
        actions: ["indices:data/read/field_caps"]
        hosts: ["OURDEVSUBNET/24","OURPRIVATESUBNET24/24"]

      - name: "internal access to kibana index"
        verbosity: error # don't log successful request
        type: allow
        hosts: ["OURDEVSUBNET/24","OURPRIVATESUBNET24/24"]
        actions: ["indices:data/read/search","indices:data/read/get","indices:data/read/mget"]
        indices: [".kibana"]

      - name: "kibana_rw"
        kibana_access: rw
        indices: [".kibana", "app0*", "logstash*", "app1*", "app2*"]
        ldap_authentication:
          name: "ldap1"
          cache_ttl_in_sec: 60
        ldap_authorization:
          name: "ldap1"
          groups: ["kibana_rw"]
          cache_ttl_in_sec: 60

      - name: "elasticsearch_adm"
        ldap_authentication:
          name: "ldap1"
          cache_ttl_in_sec: 60
        ldap_authorization:
          name: "ldap1"
          groups: ["elasticsearch_adm"]
          cache_ttl_in_sec: 60
          
      - name: "kibana_ro"
        kibana_access: ro
        indices: [".kibana", "app0*", "logstash*", "app1*", "app2*"]
        ldap_authentication:
          name: "ldap1"
          cache_ttl_in_sec: 60
        ldap_authorization:
          name: "ldap1"
          groups: ["kibana_ro"]
          cache_ttl_in_sec: 60
         
      # This is the doomed section; whenever I add it, the ELK stack doesn't come up anymore.
      # - name: "kibana_admin"
      #   kibana_access: rw
      #   cluster: ["manage-ilm", "manage_index_templates"]
      #   ldap_authentication:
      #     name: "ldap1"
      #     cache_ttl_in_sec: 60
      #   ldap_authorization:
      #     name: "ldap1"
      #     groups: ["kibana_admin"]
      #     cache_ttl_in_sec: 60

      ldaps:
      - name: ldap1
        host: "ourldaphost.ourdevdomain"
        port:  636                                                # default 389
        ssl_enabled: true                                    # default true
        ssl_trust_all_certs: true                                 # default false
        search_user_base_DN: "ou=users,dc=dev,dc=ourdomain"
        user_id_attribute: "uid"                                  # default "uid"
        search_groups_base_DN: "ou=groups,dc=dev,dc=ourdomain"
        unique_member_attribute: "member"                   # default "uniqueMember"
        connection_pool_size: 10                                  # default 30
        connection_timeout_in_sec: 10                             # default 1
        request_timeout_in_sec: 10                                # default 1
        cache_ttl_in_sec: 60

I don’t see any errors in logstash.log

Thank you in advance for supporting.

Luigi

If the Kibana session is being allowed by an ACL block containing kibana_access: <any value>, then it's normal that you don't have access to Index Management. When this rule is set, it prevents the user from modifying any index other than their own kibana_index. The idea behind this is that Kibana should be for exploring the data, and the user should be protected from accidentally changing it (this rule is much older than the introduction of all those cluster and index management tools in Kibana).

More info about kibana_access rule: https://github.com/beshu-tech/readonlyrest-docs/blob/master/elasticsearch.md#kibana_access
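For illustration, a minimal sketch of such a block (the names and credentials here are placeholders, not taken from your configuration):

```yaml
# Minimal sketch of an ACL block using kibana_access (placeholder credentials).
# When this block matches, the user can only write to their own kibana_index;
# cluster-management calls such as the ILM APIs are rejected by design.
- name: "::EXAMPLE-KIBANA-USER::"
  kibana_access: rw          # ro|rw|admin
  kibana_index: ".kibana"    # the only index this user may modify
  auth_key: example_user:example_password
```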

We’ve tried editing the ACL block by removing the kibana_access rule:

        - name: "kibana_admin"
          cluster: ["manage-ilm", "manage_index_templates"]
          ldap_authentication:
            name: "ldap1"
            cache_ttl_in_sec: 60     
          ldap_authorization:
            name: "ldap1"
            groups: ["kibana_admin"]
            cache_ttl_in_sec: 60  

After this change, we restarted the ELK stack but then kibana is not able to come up correctly and we see the following in syslog:

Mar 31 07:55:35 monitor01 kibana[28421]: {"type":"log","@timestamp":"2020-03-31T07:55:35Z","tags":["warning","task_manager"],"pid":28421,"message":"PollError [undefined] ReadonlyREST failed to start"

Commenting the whole block leads to the ELK coming back online.

Make sure your cluster is in a green state. Also, go read the Elasticsearch logs; there should be more error messages.

Among the Elasticsearch log files, in nameofthecluster_access.log it seems the cluster rule is not valid:

[2020-03-30T08:32:04,846][ERROR][tech.beshu.ror.es.IndexLevelActionFilter] [monitor01] ROR starting failure:
tech.beshu.ror.es.StartingFailureException: Errors:
Unknown rules: cluster
	at tech.beshu.ror.es.StartingFailureException$.from(StartingFailureException.scala:34) ~[readonlyrest-1.19.3_es6.7.1.jar:?]
	at tech.beshu.ror.es.IndexLevelActionFilter.$anonfun$startingTaskCancellable$1(IndexLevelActionFilter.scala:66) ~[readonlyrest-1.19.3_es6.7.1.jar:?]
	at tech.beshu.ror.es.IndexLevelActionFilter.$anonfun$startingTaskCancellable$1$adapted(IndexLevelActionFilter.scala:61) ~[readonlyrest-1.19.3_es6.7.1.jar:?]
	at monix.execution.Callback$$anon$2.tryApply(Callback.scala:296) ~[monix-execution_2.12-3.0.0.jar:3.0.0]
	at monix.execution.Callback$$anon$2.apply(Callback.scala:289) ~[monix-execution_2.12-3.0.0.jar:3.0.0]
	at monix.execution.Callback$$anon$2.onSuccess(Callback.scala:285) ~[monix-execution_2.12-3.0.0.jar:3.0.0]
	at monix.eval.internal.TaskRunLoop$.startFull(TaskRunLoop.scala:165) ~[monix-eval_2.12-3.0.0.jar:3.0.0]
	at monix.eval.internal.TaskRestartCallback.syncOnSuccess(TaskRestartCallback.scala:101) ~[monix-eval_2.12-3.0.0.jar:3.0.0]
	at monix.eval.internal.TaskRestartCallback$$anon$1.run(TaskRestartCallback.scala:118) ~[monix-eval_2.12-3.0.0.jar:3.0.0]
	at monix.execution.internal.Trampoline.monix$execution$internal$Trampoline$$immediateLoop(Trampoline.scala:66) ~[monix-execution_2.12-3.0.0.jar:3.0.0]
	at monix.execution.internal.Trampoline.startLoop(Trampoline.scala:32) ~[monix-execution_2.12-3.0.0.jar:3.0.0]
	at monix.execution.schedulers.TrampolineExecutionContext$JVMNormalTrampoline.super$startLoop(TrampolineExecutionContext.scala:163) ~[monix-execution_2.12-3.0.0.jar:3.0.0]
	at monix.execution.schedulers.TrampolineExecutionContext$JVMNormalTrampoline.$anonfun$startLoop$1(TrampolineExecutionContext.scala:163) ~[monix-execution_2.12-3.0.0.jar:3.0.0]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) [scala-library-2.12.9.jar:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85) [scala-library-2.12.9.jar:?]
	at monix.execution.schedulers.TrampolineExecutionContext$JVMNormalTrampoline.startLoop(TrampolineExecutionContext.scala:163) [monix-execution_2.12-3.0.0.jar:3.0.0]
	at monix.execution.internal.Trampoline.execute(Trampoline.scala:40) [monix-execution_2.12-3.0.0.jar:3.0.0]
	at monix.execution.schedulers.TrampolineExecutionContext.execute(TrampolineExecutionContext.scala:64) [monix-execution_2.12-3.0.0.jar:3.0.0]
	at monix.execution.schedulers.BatchingScheduler.execute(BatchingScheduler.scala:50) [monix-execution_2.12-3.0.0.jar:3.0.0]
	at monix.execution.schedulers.BatchingScheduler.execute$(BatchingScheduler.scala:47) [monix-execution_2.12-3.0.0.jar:3.0.0]
	at monix.execution.schedulers.AsyncScheduler.execute(AsyncScheduler.scala:31) [monix-execution_2.12-3.0.0.jar:3.0.0]
	at monix.eval.internal.TaskRestartCallback.start(TaskRestartCallback.scala:56) [monix-eval_2.12-3.0.0.jar:3.0.0]
	at monix.eval.internal.TaskRunLoop$.executeAsyncTask(TaskRunLoop.scala:592) [monix-eval_2.12-3.0.0.jar:3.0.0]
	at monix.eval.internal.TaskRunLoop$.startFull(TaskRunLoop.scala:120) [monix-eval_2.12-3.0.0.jar:3.0.0]
	at monix.eval.internal.TaskRestartCallback.syncOnSuccess(TaskRestartCallback.scala:101) [monix-eval_2.12-3.0.0.jar:3.0.0]
	at monix.eval.internal.TaskRestartCallback.onSuccess(TaskRestartCallback.scala:74) [monix-eval_2.12-3.0.0.jar:3.0.0]
	at monix.eval.internal.TaskSleep$SleepRunnable.run(TaskSleep.scala:66) [monix-eval_2.12-3.0.0.jar:3.0.0]
	at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402) [?:1.8.0_242]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) [?:1.8.0_242]
	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) [?:1.8.0_242]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) [?:1.8.0_242]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157) [?:1.8.0_242]
[2020-03-31T07:51:07,171][ERROR][tech.beshu.ror.es.IndexLevelActionFilter] [monitor01] ROR starting failure:
tech.beshu.ror.es.StartingFailureException: Errors:
Unknown rules: cluster
	(same stack trace as above)

Removing the cluster row and adding an actions rule leads to RoR starting successfully. Unfortunately, we still receive [undefined] Forbidden, with { due_to={ 0="OPERATION_NOT_ALLOWED" } } when accessing the ILM page.

readonlyrest.yml ACL block:

    - name: "kibana_admin"
        actions: ["cluster:manage-ilm","cluster:manage_index_templates"]  
        ldap_authentication:
          name: "ldap1"
          cache_ttl_in_sec: 60     
        ldap_authorization:
          name: "ldap1"
          groups: ["kibana_admin"]
          cache_ttl_in_sec: 60
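As an aside, the actions rule matches Elasticsearch transport action names, and the audit line later in this thread shows the ILM page arriving as "cluster:admin/ilm/get". A hedged sketch of a block covering those actions (the wildcard patterns are an assumption based on ES 6.x transport action naming, not a verified fix):

```yaml
# Hedged sketch: ILM requests arrive as transport actions such as
# "cluster:admin/ilm/get" (GET /_ilm/policy), so the actions rule must use
# those names rather than "cluster:manage-ilm"-style privilege names.
- name: "kibana_admin"
  actions: ["cluster:admin/ilm/*", "indices:admin/template/*", "cluster:monitor/*"]
  ldap_authentication:
    name: "ldap1"
    cache_ttl_in_sec: 60
  ldap_authorization:
    name: "ldap1"
    groups: ["kibana_admin"]
    cache_ttl_in_sec: 60
```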

The indentation in your last snippet is off.

When you see "Forbidden" in Kibana logs, it means that the ACL in Elasticsearch is rejecting some request. This comes down to how the ACL is configured, so when it happens you should immediately switch to elasticsearch.log and look for the string "FORBIDDEN" or any other interesting lines in the Elasticsearch logs. Do you see any? Paste any interesting line here.
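For example, a quick way to pull out the most recent rejections (the log path is an assumption based on the default Debian layout; the file name follows your cluster.name setting):

```shell
# Sketch: list recent ACL rejections from the ES server log.
# LOG_FILE is an assumed default path; override it for your setup.
LOG_FILE="${LOG_FILE:-/var/log/elasticsearch/nameofthecluster.log}"
grep -n "FORBIDDEN" "$LOG_FILE" 2>/dev/null | tail -n 20
```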

I had to raise the logging level with "logger.org.elasticsearch.transport": "trace" in order to see anything meaningful in /var/log/elasticsearch/nameofourcluster.log. The only FORBIDDEN logs I see are probably related to the matching of the subnet filtering rule:

tication->false]]","origin":"MYINTERNALIP/32","match":false,"final_state":"FORBIDDEN","destination":"OURNODEIP/32","task_id":179365,"type":"GetAliasesRequest","req_method":"GET","path":"/_cat/aliases","indices":[],"@timestamp":"2020-03-31T13:27:15Z","content_len_kb":0,"processingMillis"

@lsambolino that log line is truncated, all the initial part is missing. We need that to have a complete trace of the execution of the ACL.

I am reporting the full line here. From it, it appears to me that we are failing LDAP authentication when trying to access the path /_ilm/policy:

----------------ES.........:....\......x-pack.indices:data/write/bulk[s][r].yKX6lYeLTb2l0o53ezA88Q..rKk6QWZ7SpOQLpg5d2fkqQ.....a....readonlyrest_audit-2020-04-01.SbJCpkgnRTKCtf64DAkAjg........readonlyrest_audit-2020-04-01...............readonlyrest_audit-2020-04-01....ror_audit_evt..1961861025-1#6403713....{"headers":["Connection","Content-Length","Host"],"acl_history":"[::LOGSTASH::-> RULES:[auth_key_sha256->false]], [::KIBANA-SRV::-> RULES:[auth_key_sha256->false]], [::CURATOR::-> RULES:[auth_key_sha256->false]], [::GRAFANA-SRV::-> RULES:[auth_key_sha256->false]], [::monitoring::-> RULES:[auth_key_sha256->false]], [::elastalert_on_own_index::-> RULES:[auth_key_sha256->false]], [::elastalert_on_logstash::-> RULES:[auth_key_sha256->false]], [::es2logs_on_logstash::-> RULES:[auth_key_sha256->false]], [::SALTSTACK-SRV::-> RULES:[auth_key_sha256->false]], [::SALTSTACK-SRV::MONITOR-> RULES:[auth_key_sha256->false]], [field_caps stuff-> RULES:[hosts->true, actions->false]], [internal access to kibana index-> RULES:[hosts->true, actions->false]], [kibana_rw-> RULES:[ldap_authentication->false]], [elasticsearch_adm-> RULES:[ldap_authentication->false]], [kibana_ro-> RULES:[ldap_authentication->false]], [kibana_admin-> RULES:[ldap_authentication->false]]","origin":"MYIP/32","match":false,"final_state":"FORBIDDEN","destination":"OURNODEIP/32","task_id":6403713,"type":"Request","req_method":"GET","path":"/_ilm/policy","indices":[],"@timestamp":"2020-04-01T08:29:20Z","content_len_kb":0,"processingMillis":1,"action":"cluster:admin/ilm/get","block":"default","id":"1961861025-1#6403713","content_len":0}............_none.....................readonlyrest_audit-2020-04-01.SbJCpkgnRTKCtf64DAkAjg..ror_audit_evt.1961861025-1#6403713.6....4. ----------------

I tried to check whether my username can access the Elasticsearch API correctly:

curl -vv -u myusername -X GET "https://monitornode.hostname.domain:9200"
Enter host password for user 'myusername':
Note: Unnecessary use of -X or --request, GET is already inferred.
*   Trying OUR.MONITOR.NODE.IP...
* TCP_NODELAY set
* Connected to ournodehostname (OUR.MONITOR.NODE.IP) port 9200 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-SHA
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: C=IT; CN=monitornode.hostname.domain; L=ourCity; ST=OURCOUNTRYCODE
*  start date: Jan 16 08:26:55 2020 GMT
*  expire date: Sep  7 08:26:55 2021 GMT
*  subjectAltName: host "monitornode.hostname.domain" matched cert's "monitornode.hostname.domain"
*  issuer: C=IT; CN=ca.development.ourdomain; L=ourCity; ST=OURCOUNTRYCODE
*  SSL certificate verify ok.
* Server auth using Basic with user 'myusername'
> GET / HTTP/1.1
> Host: ournodehostname:9200
> Authorization: Basic bHNhbWJvbGlubzohdHhiMi1FPTNkNDk4IQ==
> User-Agent: curl/7.64.1
> Accept: */*
> 
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 490
< 
{
  "name" : "monitor01",
  "cluster_name" : "ourclustername",
  "cluster_uuid" : "vkfLJF213456pELG05g",
  "version" : {
    "number" : "6.7.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "2f32220",
    "build_date" : "2019-04-02T15:59:27.961366Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
* Connection #0 to host ournodehostname left intact
* Closing connection 0

I would say my user is able to query the Elasticsearch API correctly, but it still cannot get through the ILM page.
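One way to narrow this down might be to call the same endpoint the ILM page uses directly, so the basic-auth credentials definitely reach the ACL (host and user are the same placeholders as in the curl trace above; this is a diagnostic sketch, not a verified fix):

```shell
# Sketch: hit the endpoint the ILM page calls (GET /_ilm/policy) with basic auth.
# If this succeeds while the Kibana page fails, the credentials are being lost
# somewhere between Kibana and Elasticsearch.
curl -sk -u myusername "https://monitornode.hostname.domain:9200/_ilm/policy?pretty"
```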

From your log, what I can see is that no basic auth credentials are being forwarded in the request:

"headers":["Connection","Content-Length","Host"]

The “Authorization” header is completely missing here. It might be a bug on our side. Will test.

UPDATE: works in 7.6.2, going to test for 6.7.1

Works for me, I think it’s a conf problem then. For a test, try to login with the kibana server credentials and test if you can use index lifecycle tools.

Hi Simone,

Thank you for taking the time to test. We are prompted to log in on the Discover page, where we entered the Kibana server credentials. As soon as we click on the Management > Index Lifecycle Policies page (we are not asked for a password again, so I suppose the credentials are carried over), we receive this popup:

Error loading policies

401: Unauthorized. [undefined] Forbidden, with { due_to={ 0="OPERATION_NOT_ALLOWED" } }

By checking /usr/share/elasticsearch/plugins/readonlyrest our plugin version is readonlyrest-1.19.3_es6.7.1.jar

We fixed a major multi-tenancy bug in 1.19.4; could we start from that common baseline? When you're done, can you please attach the whole readonlyrest.yml (with necessary omissions) and the full ES log line that says "FORBIDDEN" at the same timestamp as the "401" in Kibana?

Hi Simone, thank you for the support so far. We are currently going through a major upgrade of the whole ELK cluster. We will get back to you when possible.
