No login failure feedback on RoR login page

Am using RoR 1.18.4 with local and LDAP auth on Elastic Stack 6.8.2. Everything works, but one small annoyance is that I don’t get any feedback when a login fails - it just returns to the login page. I’ve swapped out the icon, but other than that, that’s all the customization I’ve done to the login page. I’ve searched here and in the docs and didn’t see anything that looked relevant. Any suggestions?

Relevant config files below - let me know if I can provide anything else.


# File managed by Puppet.
elasticsearch.password: derp
elasticsearch.requestTimeout: '60000'
elasticsearch.ssl.certificateAuthorities: "/etc/puppetlabs/puppet/ssl/certs/ca.pem"
elasticsearch.username: kibana
kibana.defaultAppId: discover
kibana.index: ".iz1kibana"
logging.dest: "/var/log/elk/kibana.log"
readonlyrest_kbn.login_custom_logo: "/plugins/readonlyrest_kbn/img/elk-logo.jpg"
readonlyrest_kbn.whitelistedPaths: [".*/api/status$"]
server.port: '443'
server.ssl.certificate: "/etc/kibana/ssl.crt"
server.ssl.enabled: true
server.ssl.key: "/etc/kibana/ssl.key"
xpack.graph.enabled: false
xpack.monitoring.enabled: false
xpack.watcher.enabled: false


# yamllint disable rule:line-length
# However, once it gets loaded into the .readonlyrest index,
#  you might need to use an admin account to log into Kibana
#  and choose "Load default" from the "ReadonlyREST" tab.
# Alternately, you can use the "update-ror" script in ~cheerschap/bin/
readonlyrest:
  enable: true
  prompt_for_basic_auth: false
  response_if_req_forbidden: Forbidden by ReadonlyREST plugin
  ssl:
    enable: true
    keystore_file: "elasticsearch.jks"
    keystore_pass: {redacted}
    key_pass: {redacted}
  access_control_rules:
    # LOCAL: Kibana admin account
    - name: "local-admin"
      auth_key_unix: {redacted}
      kibana_access: admin
    # LOCAL: Logstash servers inbound access
    - name: "local-logstash"
      auth_key_unix: {redacted}
      # Local accounts for routine access should have less verbosity
      #  to keep the amount of logfile noise down
      verbosity: error
    # LOCAL: Kibana server
    - name: "local-kibana"
      auth_key_unix: {redacted}
      verbosity: error
    # LOCAL: Puppet communication
    - name: "local-puppet"
      auth_key_unix: {redacted}
      verbosity: error
    # LOCAL: Elastalert
    - name: "elastalert"
      auth_key_unix: {redacted}
      verbosity: error
    # LDAP: kibana-admin group
    - name: "ldap-admin"
      kibana_access: admin
      ldap_auth:
        name: "ldap1"
        groups: ["kibana-admin"]
      type: allow
    # LDAP for everyone else
    - name: "ldap-all"
      # possibly include: "kibana:dev_tools",
      kibana_hide_apps: ["readonlyrest_kbn", "timelion", "kibana:management", "apm", "infra:home", "infra:logs"]
      ldap_auth:
        name: "ldap1"
        groups: ["kibana-admin", "admins", "prod-admins", "devqa", "development", "ipausers"]
      type: allow
    # Allow localhost
    - name: "localhost"
      hosts: []
      verbosity: error
  # Define the LDAP connection
  ldaps:
    - name: ldap1
      hosts: ["", ""]
      ha: "FAILOVER"
      port: 636
      bind_dn: {redacted}
      bind_password: {redacted}
      ssl_enabled: true
      ssl_trust_all_certs: true
      search_user_base_DN: {redacted}
      search_groups_base_DN: {redacted}
      user_id_attribute: "uid"
      unique_member_attribute: "member"
      connection_pool_size: 10
      connection_timeout_in_sec: 30
      request_timeout_in_sec: 30
      cache_ttl_in_sec: 60
      group_search_filter: "(objectclass=top)"
      group_name_attribute: "cn"

In the latest ROR for Kibana we fixed the absence of feedback on the login form. You can see it in action if you run our Docker test environment.


Yep, that did it, thank you! I figured I was missing something in my config.

Okay, so 1.18.8 fixed my lack of feedback on login failures, but this morning I noticed that _cat/indices no longer worked. I spent quite a bit of time diving into it - all the other API bits worked just fine and Kibana was working fine - I could even see a list of indices when creating a new index pattern in the Kibana management tab - but _cat/indices would take a long time and return nothing. Initially I was concerned that something had happened and all the data was gone, but Kibana was behaving fine. Debugging further, I discovered the problem was present in my two dev clusters but not in the prod cluster. I had pushed template updates yesterday, but those went to all three. The only thing I did yesterday that was consistent between the two dev clusters was upgrading RoR from 1.18.4 to 1.18.8.

Sure enough, when I took the smaller dev cluster down, removed 1.18.8, and reinstalled 1.18.4, _cat/indices returned output again. I can say with a fair degree of certainty that 1.18.8 caused the issue.

I looked at the logs; nothing showed up while I was testing with 1.18.8 installed.

So, to summarize:

  • _cat/indices stopped working when I installed 1.18.8
  • _cat/master worked, as did _cat/nodes
  • I was able to get info on an index directly using the index name, such as curl -sk https://localhost:9200/syslog-2019.11.14-clustername
  • My testing was using curl -sk https://localhost:9200 without a password, but switching to the hostname and providing the logstash credentials still didn’t return any indices.
  • I tried the same using my LDAP credentials (I’m in the “kibana-admin” LDAP group, so I have full admin access) - that didn’t work either.
  • Nothing showed up in the logfiles as I wasn’t getting rejected, I was just getting no output.
  • Removing 1.18.8 and reinstalling 1.18.4 caused _cat/indices to start working again.
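For reference, the checks above can be sketched as a small dry-run shell script. The base URL and index name are placeholders from my setup; the script just prints the curl commands rather than executing them, so you can review them before piping each line to sh (or removing the echo) against a live cluster:

```shell
#!/bin/sh
# Dry-run sketch of the _cat diagnostics listed above. $ES and the
# index name are placeholders; no credentials are included here.
ES="https://localhost:9200"

cmd() { printf 'curl -sk %s/%s\n' "$ES" "$1"; }

CHECKS=$(
  cmd "_cat/indices"                    # hung and returned nothing on 1.18.8
  cmd "_cat/master"                     # worked
  cmd "_cat/nodes"                      # worked
  cmd "syslog-2019.11.14-clustername"   # direct index lookup also worked
)
echo "$CHECKS"
```

Swapping the base URL for the real hostname (and adding -u user:pass) reproduces the authenticated tests described above.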

The config files remain the same as above, so I’m not bothering to repost them.

I also have a test cluster running 7.4.0 and 1.18.8 where _cat/indices returns no values, but I don’t have anything logging to that cluster, so I wasn’t surprised - though I did expect to at least see some system indices. I went to the download page but don’t see any way to request a previous version, so I couldn’t test whether _cat/indices works for me there on 1.18.4 - and I don’t have a Kibana instance on that cluster to view anything.

Please let me know if you need any more information from me. I’ve left the larger dev cluster at 1.18.8 so I can run tests if necessary.

We’ve already fixed it. Please try:


Thanks, confirmed that this fixed it - any word on when 1.18.9 will be available?

We’re finalizing the last changes we’d like to include in this release, so really soon. I will let you know.


@cmh I’d like to let you know that ROR 1.18.9 has been released.


I downloaded 1.18.9 the usual way and put it on my ES cluster - cool.

Got the KBN Pro plugin the usual way as well (to my registered email) and when I start kibana I get:

 FATAL  Error: ReadonlyREST for Kibana halted because your trial has expired, or this build is way too old. Get the latest build, or get assistance at at

This is on a new Kibana/ES cluster install, haven’t tried on existing clusters yet.

Going back to 1.18.4, Kibana starts again.


Hi Chris,

Is this the file we are talking about?

Sorry for the delay in responding, yes.

This issue should be resolved by now. Did the old plugin uninstall without errors, and did the new one install without errors?

Finally got 1.18.9 installed on our clusters and it seems to be working now.
