Permission denied for Kibana admin account

output {
  elasticsearch {
    index           => "%{targetindex}-%{+YYYY.MM.dd}-bbdevqa"
    id              => "elasticsearch_output"
    hosts           => ["eshost1", "eshost2"...]
    # Templates have been moved to elasticsearch
    manage_template => false
    user            => "logstash"
    password        => "pass"
    cacert          => "/etc/puppetlabs/puppet/ssl/certs/ca.pem"
    ssl             => true
    ssl_certificate_verification => true
  }
}

Just a stab, but you’ve got SSL enabled on the Logstash side while the readonlyrest side shows basic auth.

That’s nothing new - I’ve been running the cluster like that with success. SSL is enabled on the Elasticsearch side, in elasticsearch.yml:

...
http.enabled: true
http.port: 9200
http.type: ssl_netty4
...
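
For context, that ssl_netty4 http type pairs with an ssl: section in readonlyrest.yml. A minimal sketch from my reading of the ROR docs (keystore file name and passwords are placeholders):

readonlyrest:
  ssl:
    keystore_file: "keystore.jks"
    keystore_pass: "changeme"
    key_pass: "changeme"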

SSL communication is working, and with the Puppet CA it works without complaining about the cert, which is nice.

Mind you, I’m not saying that I’m doing it right - but I think that’s what the doc said and that part at least has been working for a while - just like the admin account used to work.

Thanks for looking, though, I do really appreciate it. Seems like the harder I try, the worse I’m making this.

Well, I just found an interesting thing - I noticed a bunch of auth rules in the log output that shouldn’t still be there, early-on stuff like a test account and such. Using the “readonlyrest_kbn” app in the Kibana GUI I see exactly what I expect - the current contents of the /etc/elasticsearch/bbdevqa/readonlyrest.yml file (I had recently clicked “Load Default” and “Save” to reload the contents of the YAML file).

However, when I extracted the JSON-encoded contents of the .readonlyrest index:

curl -sk https://localhost:9200/.readonlyrest/_search | jq '.hits.hits[0]._source.settings'

What I got was similar but not the same. In fact, several old accounts were listed in the output I extracted from the .readonlyrest index. Stuff that had been configured and then removed was still in the index, such as this:

     - name: "readonly test"
       auth_key: ro:pass
       kibana_access: ro
       kibana_hide_apps: ["readonlyrest_kbn", "timelion", "kibana:dev_tools", "kibana:management", "monitoring", "apm"]
...

That account hasn’t been part of the YAML file for a while.
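
For comparison, the admin API also appears to support reading the currently active settings directly (and, if I recall the docs correctly, a /file variant for the on-disk copy):

curl -sk https://localhost:9200/_readonlyrest/admin/config
curl -sk https://localhost:9200/_readonlyrest/admin/config/file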

I was told that I could update the config in the index with the following command:

curl -ks -X POST https://localhost:9200/_readonlyrest/admin/config -H "Content-Type: application/json" -d '{JSON-encoded config file}'

which I have done, but all those old entries are still in there.
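
For the record, a sketch of one way to build that request body, assuming (unverified) that the endpoint accepts the same {"settings": "..."} shape stored in the .readonlyrest document above:

jq -Rs '{settings: .}' /etc/elasticsearch/bbdevqa/readonlyrest.yml | curl -ks -X POST https://localhost:9200/_readonlyrest/admin/config -H "Content-Type: application/json" -d @-

jq’s -Rs slurps the whole file into a single JSON string; curl’s -d @- reads the body from stdin.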

I think I now understand why the password auth isn’t working for the admin account, as this is what I extracted from the .readonlyrest index:

...
  access_control_rules:
    - name: "local admin"
      type: allow
      auth_key: "admin:derp"
      kibana_access: admin
...

That password does not match what I’ve subsequently set in the readonlyrest.yml file.

So, now I’m going to delete the .readonlyrest index completely and see if that helps solve the problem.
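
That delete is just the standard index-delete API:

curl -ks -X DELETE https://localhost:9200/.readonlyrest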

Is there another way to actually update the index config and have it only remember what it gets from the YAML config?

I completely cleared the .readonlyrest index, shut the cluster all the way down, restarted it, and reloaded the proper index. Finally, I can auth to “admin” via the proper credentials - but I still can’t manage index patterns; I get the basic auth pop-up prompt again.

Also, my LDAP admin account stopped working, so I’ll have to figure that out.

However, I did try logging in as the logstash user (without kibana_access: admin) and was able to manage index patterns. Huh.

Hmm, that’s not a bad feature request: a setting to rely on the yml file instead (such as removing the save button on the interface to force it). Looks like someone talks about that here… A JSON export button would be nice also…

@mdnuts what feature request? Disabling the save button?

@mdnuts, @cmh I have the impression that all this bad UX when using in-index settings is because people don’t know how it works. But once it’s explained how it works, they generally say it makes sense (please tell me if that’s not your case, and if not, what would make the most sense to you?).

My proposal would be adding the info in a notification when saving settings from the Kibana app, like:

“This will write in-index settings that will OVERRIDE the readonlyrest.yml in all the nodes. If you want to keep using the file-based settings, change them on every node and restart the cluster.”

I can only speak for myself.

The way it works makes perfect sense to me, but if I make changes in the plugin it’s not easy to get those changes into the yml to back up my current configuration. Typically you could just copy and paste, but when you try that in vim it screws up the formatting.
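
(For what it’s worth, the vim mangling is usually auto-indent fighting the paste: running :set paste before pasting, then :set nopaste afterwards, avoids it.)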

All I want is for it to work, and it’s not doing that consistently. The config I was seeing in the UI did not match the config I was getting when I dumped it via the API using this command:

eval echo -ne "$(curl -sk https://localhost:9200/.readonlyrest/_search | jq '.hits.hits[0]._source.settings')" > index.config
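
Side note: jq’s -r flag prints the raw string with the JSON escapes already interpreted, so an equivalent without the eval/echo -ne trick would be:

curl -sk https://localhost:9200/.readonlyrest/_search | jq -r '.hits.hits[0]._source.settings' > index.config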

Completely deleting the .readonlyrest index and restarting the entire cluster was the only way I could figure out to get rid of the old data.

There was old information in the config dumped this way that doesn’t show up in the UI, and I’m at a loss to know what is stored where. We’ve got a YAML file on the filesystem, we’ve got that YAML file in the .readonlyrest index, and then there seems to be a third place where the config is stored - and somehow, when I reload the config, old parts of it remain active.

Add to that that the LDAP config that was working last week (and works with the same binddn and bindpassword on other hosts) has stopped working, and I’m at a loss to discover why. The “admin” account can’t manage index patterns even though it has kibana_access: admin, although the logstash user (which has no stated kibana_access) can.

Then wrap it all up in the fact that I have to use a forum for support, where I only get a quick reply every day or so. I’m just a bit frustrated right now.

  1. Inconsistency between the readonlyrest.yml file, the ReadonlyREST tab in Kibana, and what I see when retrieving the config from the .readonlyrest index via the curl command - this is strange. It’s fixed now, but I’d like to understand why it happened so I don’t run into it in the future.
  2. I’d like to know why the “admin” account can’t manage index patterns but the logstash account can.
  3. I’m working on figuring out the problem with the LDAP auth; passing the exact values specified in readonlyrest.yml to an ldapsearch works (a sketch of that check follows the logs below). In the Elasticsearch logs, though:
[2019-04-29T17:54:38,977][ERROR][t.b.r.a.d.l.u.UnboundidAuthenticationLdapClient] [elasticsearch1.example.com] LDAP authenticate operation failed: invalid credentials
[2019-04-29T17:54:38,983][INFO ][t.b.r.a.ACL              ] [elasticsearch1.example.com] FORBIDDEN by default req={ ID:321072270-211053875#60867062, TYP:RRAdminRequest, CGR:N/A, USR:cheerschap(?), BRS:false, KDX:null, ACT:cluster:admin/rradmin/refreshsettings, OA:1.2.3.4, DA:0.0.0.0, IDX:<N/A>, MET:GET, PTH:/_readonlyrest/metadata/current_user, CNT:<N/A>, HDR:{authorization=<OMITTED>, Connection=close, content-length=0, Host=elasticsearch1.example.com:9200}, HIS:[local-admin->[auth_key->false]], [local-logstash->[auth_key->false]], [local-kibana->[auth_key->false]], [local-puppet->[auth_key->false]], [elastalert->[auth_key->false]], [ldap-admin->[ldap_authentication->false]], [ldap-all->[ldap_authentication->false]], [localhost->[hosts->false]] }
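
For reference, that ldapsearch check was along these lines, with placeholder host and DNs standing in for the real values from readonlyrest.yml:

ldapsearch -x -H ldaps://ldap.example.com -D "cn=binduser,dc=example,dc=com" -w 'bindpassword' -b "ou=users,dc=example,dc=com" "(uid=cheerschap)"

Here -x requests a simple bind, -D and -w supply the bind DN and password, and -b sets the search base.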

@cmh sorry about your experience with support. The frustration you experienced looks like a bad mix of:

  1. You probably discovered a bug with the settings refresh; we will look into this. For the time being, I agree it’s best for you to experiment with readonlyrest.yml alone to avoid ambiguity.

  2. During the past week we’ve shipped the biggest code change in the history of this product (sorry we’ve not been as responsive as we normally are).

I have seen from the download logs that you are not using 1.17.6 yet. We’d like you to update to that version, as the codebase is radically different and more stable than the old one. It should also produce more logs.
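
The usual in-place upgrade is to remove and re-install the plugin on each node with the standard plugin tool (the zip path below is a placeholder for the build matching your ES version):

bin/elasticsearch-plugin remove readonlyrest
bin/elasticsearch-plugin install file:///path/to/readonlyrest-1.17.6_esX.Y.Z.zip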

For debugging the cause of the LDAP failures and getting more info about why the logstash user is not logging in, I’d recommend enabling debug logs as described in the troubleshooting guide.
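
That boils down to a log4j2.properties entry on each node; a sketch, with the logger name inferred from the t.b.r.* prefixes in the log lines above:

logger.ror.name = tech.beshu.ror
logger.ror.level = debug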

Forum vs email
We feel using the forum for support creates added value as the community has an opportunity for everyone to help each other and passively learn from other people’s experience as well. Also, it creates a secondary form of documentation.

However, feel free to communicate with us via email if you feel that’s best for you (support at readonlyrest dot com). No problem 🙂

Just to clarify - I was at 1.17.5. You’re saying there is a radically different codebase between 1.17.5 and 1.17.6?

Tried to update today and now my cluster won’t start. Contacting support via email.

@cmh do you have any error messages?

Just sent to support email:

...
[2019-04-30T18:43:24,624][INFO ][o.e.p.PluginsService     ] [elastic.example.com] loaded plugin [readonlyrest]
[2019-04-30T18:43:28,935][WARN ][t.b.r.e.ReadonlyRestPlugin] [elastic.example.com][ReadonlyRestPlugin] could not check if had remote ES clusters: Failed to get setting group for [] setting prefix and setting [pidfile] because of a missing '.'
[2019-04-30T18:43:28,956][INFO ][t.b.r.e.IndexLevelActionFilter] [elastic.example.com] Settings observer refreshing...
[2019-04-30T18:43:31,746][ERROR][o.e.b.Bootstrap          ] [elastic.example.com] Exception
java.lang.IllegalArgumentException: Predicate isEmpty() did not fail.
    at eu.timepit.refined.api.RefinedType.unsafeRefine(RefinedType.scala:32) ~[?:?]
    at eu.timepit.refined.api.RefinedType.unsafeRefine$(RefinedType.scala:29) ~[?:?]
    at eu.timepit.refined.api.RefinedType$$anon$1.unsafeRefine(RefinedType.scala:55) ~[?:?]
    at eu.timepit.refined.api.RefinedTypeOps.unsafeFrom(RefinedTypeOps.scala:41) ~[?:?]
...

Looks like the relevant part is:

could not check if had remote ES clusters: Failed to get setting group for [] setting prefix and setting [pidfile] because of a missing '.'

(The refined-types error in the stack trace - “Predicate isEmpty() did not fail” - points the same way: somewhere a value that is required to be non-empty is being read as an empty string.)

Ugh - just discovered one of my clusters was running a mix of 1.17.5 (one node) and 1.16.34. I guess that’s the “radically different codebase”.


@cmh it’d be nice if you could show more logs (e.g. the error log is truncated) or the configuration you’re trying to run (it’d be simpler for us to tell what is wrong, replicate the error, or fix the error message).

The full readonlyrest.yml is posted above in comment #7 (Permission denied for Kibana admin account - #7 by cmh) and is still the same.

I could post more of the logfiles, but it’s troublesome to go through and sanitize them and hope I got everything. I looked through them, and that “failed to get setting group” line looked to be the only really relevant part; I didn’t think the full stack was necessary.

@cmh we should provide a better error message, but I see that this is the problem:

This value cannot be an empty string.


So that config works in 1.17.5 but prevents the cluster from starting in 1.17.6?

@cmh we’ve improved config validation at startup time. Before 1.17.6, some validations were done at the request-handling level. Now ROR does them when the config is loaded (before ROR starts). If ROR finds an invalid configuration, it won’t start.
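
For illustration, a hypothetical case (not necessarily the one here): a rule carrying an empty string, which 1.17.6 now rejects at load time instead of at request time:

access_control_rules:
- name: ""            # empty string - fails the non-empty validation at startup
  type: allow
  auth_key: "admin:pass"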
