Permission denied for Kibana admin account

this was fixed recently - what ROR version is this?

The versions I received via email yesterday:

readonlyrest_kbn_pro-1.17.5_es6.6.0.zip
readonlyrest-1.17.5_es6.6.0.zip

This morning kibana.log was complaining about the .iz1kibana_1 index, which didn’t exist (though the log claimed it was an alias); I did have .iz1kibana_1_2 and .iz1kibana_1_3 indices. Kibana would start but wouldn’t get past “Kibana server is not ready yet”, so I removed that bit of config and restarted - and now it’s working. So it’s “fixed”, I guess.

Oops, not so fast: I was logged in with my LDAP account, which has kibana_access: admin, so I assumed it would obviously work with the local Kibana admin account as well. It did not. Logged in as “admin”, I tried to create a new index pattern, and at the “Creating index pattern…” step a basic-authentication pop-up appeared. Submitting the admin credentials doesn’t work, and neither do my LDAP credentials; the ES error messages all say USR:admin.
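
Going back to the alias confusion for a second: checking where an alias actually points is a one-liner against the cluster (index name is from my setup; output omitted):

curl -sk "https://localhost:9200/_cat/aliases/.iz1kibana*?v"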

Hi Chris, it’s very difficult to help you without more details.

Can you expand on this? Do you refer to the “connectorService” bug?

I removed this from my kibana.yml

kibana.index: ".iz1kibana_1"

and reset it to what it had been originally:

kibana.index: ".iz1kibana"

No idea what the connectorService bug is, so no.

More info - something must be wrong with my whole config, as LDAP auth seems to be the only thing working. Logstash is now unable to log to the ELK cluster using another local account.

Here’s my (sanitized) readonlyrest.yml file:

# yamllint disable rule:line-length
# THIS FILE IS PROVISIONED BY PUPPET
# However, once it gets loaded into the .readonlyrest index,
#  you might need to use an admin account to log into Kibana
#  and choose "Load default" from the "ReadonlyREST" tab.
# Alternately, you can use the "update-ror" script in ~cmh/bin/
readonlyrest:
  enable: true
  response_if_req_forbidden: Forbidden by ReadonlyREST plugin
  ssl:
    enable: true
    keystore_file: "elasticsearch.jks"
    keystore_pass: "pass"
    key_pass: "pass"
  access_control_rules:
    # LOCAL: Kibana admin account
    - name: "local-admin"
      auth_key: "admin:pass"
      kibana_access: admin
    # LOCAL: Logstash servers inbound access
    - name: "local-logstash"
      auth_key: "logstash:pass"
      # Local accounts for routine access should have less verbosity
      #  to keep the amount of logfile noise down
      verbosity: error
    # LOCAL: Kibana server
    - name: "local-kibana"
      auth_key: "kibana:pass"
      verbosity: error
    # LOCAL: Puppet communication
    - name: "local-puppet"
      auth_key: "puppet:pass"
      verbosity: error
    # LOCAL: Elastalert
    - name: "elastalert"
      auth_key: "elastalert:pass"
      verbosity: error
    # LDAP: kibana-admin group
    - name: "ldap-admin"
      kibana_access: admin
      kibana_hide_apps: [""]
      ldap_auth:
        name: "ldap1"
        groups: ["kibana-admin"]
      type: allow
    # LDAP for everyone else
    - name: "ldap-all"
      # possibly include: "kibana:dev_tools",
      kibana_hide_apps: ["readonlyrest_kbn", "timelion", "kibana:management", "apm"]
      ldap_auth:
        name: "ldap1"
        groups: ["kibana-admin", "admins", "prod-admins", "devqa", "development", "ipausers"]
      type: allow
    # Allow localhost
    - name: "localhost"
      hosts: ["127.0.0.1"]
  # Define the LDAP connection
  ldaps:
    - name: ldap1
      host: "freeipa.example.com"
      port: 636
      bind_dn: "uid=system,cn=stuff,dc=localdomain"
      bind_password: "pass"
      ssl_enabled: true
      ssl_trust_all_certs: true
      search_user_base_DN: "cn=users,cn=accounts,dc=stuff,dc=localdomain"
      search_groups_base_DN: "cn=groups,cn=accounts,dc=stuff,dc=localdomain"
      user_id_attribute: "uid"
      unique_member_attribute: "member"
      connection_pool_size: 10
      connection_timeout_in_sec: 30
      request_timeout_in_sec: 30
      cache_ttl_in_sec: 60
      group_search_filter: "(objectclass=top)"
      group_name_attribute: "cn"
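
As a sanity check, the local-admin block can be exercised directly with curl, bypassing Kibana entirely (password is the placeholder from the file above; any authenticated read will do):

curl -sk -u admin:pass "https://localhost:9200/_cat/indices?v"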

The logstash boxes have the same username/password combination found in the readonlyrest.yml file, but I’m still getting denied.

[2019-04-26T16:51:23,623][INFO ][t.b.r.a.ACL              ] [bz1elasticsearch2-1.bb.internal.maas360.com] FORBIDDEN by default req={ ID:1271538866-900160031#11003, TYP:MainRequest, CGR:N/A, USR:logstash(?), BRS:true, KDX:null, ACT:cluster:monitor/main, OA:{logstash box}, DA:{es node}, IDX:<N/A>, MET:HEAD, PTH:/, CNT:<N/A>, HDR:{Authorization=<OMITTED>, content-length=0, Connection=Keep-Alive, User-Agent=Manticore 0.6.4, Host=elasticsearch:9200, Accept-Encoding=gzip,deflate, Content-Type=application/json}, HIS:[local admin->[auth_key->false]], [kibana-admin ldap->[ldap_authentication->false]], [devqa ldap->[ldap_authentication->false]], [readonly test->[auth_key->false]], [kibana server->[auth_key->false]], [logstash->[auth_key->false]], [localhost->[hosts->false]] }
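
For reference, that health check (a HEAD on / as the logstash user) can be replayed by hand from a logstash box to rule Logstash itself out (hostname and password are placeholders from the configs; -I makes curl send HEAD):

curl -skI -u logstash:pass "https://eshost1:9200/"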

Anything obvious that’s wrong with this config? I’ve set it up according to the instructions - at least I think I have, but something’s obviously wrong here.

what does your logstash output conf look like?

output {
  elasticsearch {
    index           => "%{targetindex}-%{+YYYY.MM.dd}-bbdevqa"
    id              => "elasticsearch_output"
    hosts           => ["eshost1", "eshost2"...]
    # Templates have been moved to elasticsearch
    manage_template => false
    user            => "logstash"
    password        => "pass"
    cacert          => "/etc/puppetlabs/puppet/ssl/certs/ca.pem"
    ssl             => true
    ssl_certificate_verification => true
  }
}

just a stab, but you’ve got SSL enabled on the Logstash side while the readonlyrest side shows basic auth.

That’s nothing new - been running the cluster like that with success. The SSL enablement happens on the Elasticsearch side, in elasticsearch.yml:

...
http.enabled: true
http.port: 9200
http.type: ssl_netty4
...

SSL communication is working, and with the Puppet CA clients connect without complaining about the cert, which is nice.
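
For the record, this is roughly how I check the endpoint against the Puppet CA (same CA file the logstash output uses):

openssl s_client -connect localhost:9200 -CAfile /etc/puppetlabs/puppet/ssl/certs/ca.pem </dev/null 2>/dev/null | grep "Verify return code"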

Mind you, I’m not saying that I’m doing it right - but I think that’s what the doc said and that part at least has been working for a while - just like the admin account used to work.

Thanks for looking, though; I really do appreciate it. Seems like the harder I try, the worse I’m making this.

Well, I just found an interesting thing: I noticed a bunch of auth rules in the log output that shouldn’t still be there, early stuff like a test account and such. Using the “readonlyrest_kbn” app in the Kibana GUI I see exactly what I expect: the current contents of the /etc/elasticsearch/bbdevqa/readonlyrest.yml file (I had recently clicked “Load Default” and “Save” to reload the contents of the YAML file).

However, when I extract the JSON-encoded contents of the .readonlyrest index:

curl -sk https://localhost:9200/.readonlyrest/_search | jq '.hits.hits[0]._source.settings'

what I get is similar but not the same. In fact, several old accounts are listed in the output extracted from the .readonlyrest index. Stuff that had been configured and then removed is still in the index, such as this:

     - name: "readonly test"
       auth_key: ro:pass
       kibana_access: ro
       kibana_hide_apps: ["readonlyrest_kbn", "timelion", "kibana:dev_tools", "kibana:management", "monitoring", "apm"]
...

That account hasn’t been part of the YAML file for a while.
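
A quick way to see the drift is to diff the file against the raw string stored in the index (jq -r emits the value with real newlines instead of \n escapes):

diff /etc/elasticsearch/bbdevqa/readonlyrest.yml \
     <(curl -sk https://localhost:9200/.readonlyrest/_search | jq -r '.hits.hits[0]._source.settings')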

I was told that I could update the config in the index with the following command:

curl -ks -X POST https://localhost:9200/_readonlyrest/admin/config -H "Content-Type: application/json" -d '{JSON-encoded config file}'

which I have done, but all those old entries are still in there.
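
In case quoting was the problem, a fully-quoted form of that command would look like this; it assumes the endpoint wants the YAML wrapped as a JSON string under a "settings" key, which is my reading of “JSON-encoded config file” (jq -Rs does the string encoding):

curl -ks -X POST "https://localhost:9200/_readonlyrest/admin/config" \
  -H "Content-Type: application/json" \
  -d "{\"settings\": $(jq -Rs . < /etc/elasticsearch/bbdevqa/readonlyrest.yml)}"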

I think I now understand why the password auth isn’t working for the admin account, as this is what I extracted from the .readonlyrest index:

...
  access_control_rules:
    - name: "local admin"
      type: allow
      auth_key: "admin:derp"
      kibana_access: admin
...

That password does not match what I’ve subsequently set in the readonlyrest.yml file.

So, now I’m going to delete the .readonlyrest index completely and see if that helps solve the problem.
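
The deletion itself is just:

curl -sk -X DELETE "https://localhost:9200/.readonlyrest"

followed by a full cluster restart.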

Is there another way to actually update the index config and have it only remember what it gets from the YAML config?

I cleared the .readonlyrest index completely, shut down the whole cluster, restarted it, and reloaded the proper config into the index. Finally, I can auth as “admin” with the proper credentials - but I still can’t manage index patterns; I get the basic-auth pop-up prompt again.

Also, my LDAP admin account stopped working, so I’ll have to figure that out.

However, I did try logging in as the logstash user (without kibana_access: admin) and was able to manage index patterns. Huh.

hmm, that’s not a bad feature request: a setting to rely on the yml file instead (such as removing the save button in the interface to force it). Looks like someone talks about that here… A JSON export button would be nice also…

@mdnuts what feature request? Disabling the save button?

@mdnuts, @cmh I have the impression that all this bad UX around in-index settings is because people don’t know how it works. But once it’s explained, they generally say it makes sense (please tell me if that’s not your case, and if not, what would make the most sense to you?).

My proposal would be adding the info in a notification when saving settings from the Kibana app, like:

“This will write in-index settings that will OVERRIDE the readonlyrest.yml in all the nodes. If you want to keep using the file-based settings, change them in every node and restart the cluster.”

I can only speak for myself.

The way it works makes perfect sense to me, but if I make changes in the plugin it’s not easy to copy those changes back into the yml to back up my current configuration. Typically you could just copy and paste, but when you try that in vim it screws up the formatting.

All I want is for it to work, and it’s not doing that consistently. The config I see in the UI does not match the config I get when I dump the index with this command:

eval echo -ne "$(curl -sk https://localhost:9200/.readonlyrest/_search | jq '.hits.hits[0]._source.settings')" > index.config
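
(In hindsight, jq’s -r flag produces the raw unescaped string directly and avoids the eval/echo dance:)

curl -sk https://localhost:9200/.readonlyrest/_search | jq -r '.hits.hits[0]._source.settings' > index.config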

Completely deleting the .readonlyrest index and restarting the entire cluster was the only way I could figure out to get rid of the old data.

There is old information in that dump which doesn’t show up in the UI, and I’m at a loss to know what is stored where. We’ve got a YAML file on the filesystem, we’ve got that YAML file loaded into the .readonlyrest index, and then there seems to be a third place where the config is stored, because somehow when I reload the config, old parts of it remain active.

Add to that that the LDAP config that was working last week (and works with the same binddn and bindpassword on other hosts) has stopped working, and I’m at a loss to discover why. The “admin” account can’t manage index patterns even though it has kibana_access: admin, while the logstash user (which has no stated kibana_access) can.

Then wrap it all up with the fact that I have to use a forum to get support, where I only get a quick reply every day or so. I’m just a bit frustrated right now.

To recap where things stand:
  1. The inconsistency between the readonlyrest.yml file, the ReadonlyREST tab in Kibana, and what I retrieve from the .readonlyrest index via curl - this was strange; it’s fixed now, but I’d like to understand why it happened so I don’t run into it again.
  2. I’d like to know why the “admin” account can’t manage index patterns but the logstash account can.
  3. I’m still working on the LDAP auth problem. When I pass the exact values specified in readonlyrest.yml to ldapsearch (sketched below, after the logs), it works. In the Elasticsearch logs, though:
[2019-04-29T17:54:38,977][ERROR][t.b.r.a.d.l.u.UnboundidAuthenticationLdapClient] [bz1elasticsearch2-0.bb.internal.maas360.com] LDAP authenticate operation failed: invalid credentials
[2019-04-29T17:54:38,983][INFO ][t.b.r.a.ACL              ] [elasticsearch1.example.com] FORBIDDEN by default req={ ID:321072270-211053875#60867062, TYP:RRAdminRequest, CGR:N/A, USR:cheerschap(?), BRS:false, KDX:null, ACT:cluster:admin/rradmin/refreshsettings, OA:1.2.3.4, DA:0.0.0.0, IDX:<N/A>, MET:GET, PTH:/_readonlyrest/metadata/current_user, CNT:<N/A>, HDR:{authorization=<OMITTED>, Connection=close, content-length=0, Host=elasticsearch1.example.com:9200}, HIS:[local-admin->[auth_key->false]], [local-logstash->[auth_key->false]], [local-kibana->[auth_key->false]], [local-puppet->[auth_key->false]], [elastalert->[auth_key->false]], [ldap-admin->[ldap_authentication->false]], [ldap-all->[ldap_authentication->false]], [localhost->[hosts->false]] }
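
For reference, the ldapsearch that succeeds is built from the same values as the ldaps section above (bind password is a placeholder; cheerschap is my LDAP uid):

ldapsearch -x -H ldaps://freeipa.example.com:636 \
  -D "uid=system,cn=stuff,dc=localdomain" -w 'pass' \
  -b "cn=users,cn=accounts,dc=stuff,dc=localdomain" \
  "(uid=cheerschap)"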

@cmh sorry about your experience with support. The frustration you experienced looks like a bad mix of:

  1. You probably discovered a bug with the settings refresh; we will look into this. For the time being, I agree it’s best for you to experiment with readonlyrest.yml alone to avoid ambiguity.

  2. During the past week we’ve shipped the biggest code change in the history of this product (sorry we’ve not been as responsive as we normally are).

I have seen from the download logs that you are not using 1.17.6 yet. We’d like you to update to that version, as the codebase is radically different and more stable than the old one. It should also produce more useful logs.

For debugging the cause of the LDAP failures and getting more info about why logstash can’t log in, I’d recommend enabling debug logs as described in the troubleshooting guide.
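
For example, something along these lines in Elasticsearch’s log4j2.properties should do it (the tech.beshu.ror package name is inferred from the t.b.r.* logger names in your log excerpts; the guide has the exact incantation):

logger.ror.name = tech.beshu.ror
logger.ror.level = debug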

Forum vs email
We feel using the forum for support adds value: it gives everyone in the community the opportunity to help each other and to passively learn from other people’s experience. It also creates a secondary form of documentation.

However, feel free to communicate with us via email if you feel it’s best for you (support at readonlyrest dot com). No problem :slight_smile:

Just to clarify - I was at 1.17.5. You’re saying there is a radically different codebase between 1.17.5 and 1.17.6?

Tried to update today and now my cluster won’t start. Contacting support via email.

@cmh do you have any error messages?