LDAP connectivity failures, a few questions

Hello,

We use LDAP authentication with our ReadonlyREST plugin.
Our infrastructure has several LDAP servers. In the YML file we use a hostname that resolves to several servers, so we can connect to any of them.

A few days ago we had issues with one of our LDAP servers.
During that time the connectivity details cached on the Elasticsearch node remained the same: even though other LDAP servers were up, that Elasticsearch node kept trying to use the LDAP server which was down for a few minutes.
So I have some questions about this:

  1. If a domain controller fails, does the mechanism that sends queries to LDAP servers try another server in the domain? If so, which parameter should we set?
  2. Does the cache_ttl_in_sec parameter cache only the user credentials?
  3. Is there a relationship between cache_ttl_in_sec and other parameters?
    What value do you recommend for it?
  4. Can you please send me a link to the documentation that lists all the settings we can configure in the readonlyrest.yml file, especially in the LDAP section?

Thanks.

Hi,

Can you please take a look at this?
We would like to check whether we need to move to the more complicated SAML solution, or whether we can keep the LDAP configuration with a few changes.

Thanks again.

We do have LDAP HA.

  # High availability LDAP settings (using "hosts", rather than "host")
    - name: ldap2
      hosts:                                                        # HA style, alternative to "host"
      - "ldaps://ssl-ldap2.foo.com:636"                             # can use ldap:// or ldaps:// (for ssl)
      - "ldaps://ssl-ldap3.foo.com:636"                             # the port is declared in line
      ha: "ROUND_ROBIN"                                             # optional, default "FAILOVER"
      search_user_base_DN: "ou=People,dc=example2,dc=com"
      search_groups_base_DN: "ou=Groups,dc=example2,dc=com"
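
Regarding the cache questions: cache_ttl_in_sec sits on the same connector and controls how long an authentication result is kept before ReadonlyREST queries LDAP again. A sketch extending the connector above (the 60-second value is only an illustration, not a recommendation):

    - name: ldap2
      hosts:
      - "ldaps://ssl-ldap2.foo.com:636"
      - "ldaps://ssl-ldap3.foo.com:636"
      ha: "ROUND_ROBIN"
      search_user_base_DN: "ou=People,dc=example2,dc=com"
      search_groups_base_DN: "ou=Groups,dc=example2,dc=com"
      cache_ttl_in_sec: 60                                          # re-query LDAP at most once per minute per user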

Hi Simone,

Thanks for the update.
Is there a link you can share with the readonlyrest.yml parameters?
I guess there are lots of parameters we are not aware of and may want to look into.

Sure: readonlyrest-docs/elasticsearch.md at master · beshu-tech/readonlyrest-docs · GitHub

Hi,

I configured readonlyrest.yml as suggested above, with the HA section.
Then I took down one of our DCs while the others remained up.
I tried to connect to the cluster through all the nodes.
Some of them returned "Rejected by ROR", status 401.
On some of them the connection was successful.
On the nodes which returned status 401, I had to restart the Elasticsearch service, and only then could I connect to the cluster.
Can you please advise if there's something else I need to configure?
Our configuration file looks like this:
  ldaps:
  - name: ldap_name
    hosts:
    - "ldap://dc_name:port_number"
    - "ldap://other_dc_name:port_number"
    ha: "ROUND_ROBIN"

Thanks.

We are going to release a rewritten LDAP connector in a matter of days. I have added your concern to the ticket, so it will be tested.

Thank you.
We have ES version 6.1, but we also need ROR for future installations of ES 6.5.4, 6.5.1 and 6.6.0.
Will you have a patch for those versions as well?

Another thing, just to make sure:
when we install ES we prepare a readonlyrest.yml file for each node.
Once the cluster is up, all changes are made through Kibana, using the ROR plugin.
That's how it's supposed to work, right?

Yes, the new core will be integrated with all the currently supported ES versions.

Yes, the idea is that you leave a static readonlyrest.yml on all your nodes as a failsafe for the eventuality that the .readonlyrest index gets deleted or corrupted (by the way, protect that index from direct user access using the ACL!).

Then, for practical reasons, it’s easier to use the ReadonlyREST PRO/Enterprise GUI in Kibana, as you would just press save and the security settings would be reloaded in all the nodes without needing a full cluster restart.
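
As a sketch of what protecting the .readonlyrest index can look like (the block names, credentials, group and index patterns below are placeholders, not a definitive configuration): since the ACL is allow-based, simply never listing .readonlyrest in the indices rule of ordinary users keeps it out of their reach:

  readonlyrest:
    access_control_rules:

    # Placeholder admin block: no "indices" rule, so it can reach ".readonlyrest"
    - name: "Admins"
      auth_key: admin:s3cr3t                  # hypothetical credentials

    # LDAP users are confined to their own indices; ".readonlyrest" is not listed
    - name: "LDAP users"
      ldap_auth:
        name: "ldap2"
        groups: ["team1"]                     # hypothetical group
      indices: ["logstash-*"]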

Thank you Simone,

I'll wait for the new release.

Hello Simone,

Can you please let me know whether the patch has been released?

Thanks.

The PR has finally landed. Reviewing now, then a bit of manual testing (in addition to all the unit and integration tests already present) and we can merge.

Hi Simone,

If we download ROR for versions 6.5.1, 6.5.4 and 6.6.0 now, will we get the latest release, including the bug fixes mentioned above?

We’re preparing a new stable version, out in 1-2 days. Releases are coming quickly at the moment, as we are progressively stabilising the product with the new Kibana API.

Hi,

I’ve just downloaded version 6.1.1 and will download the other relevant versions later today.
Can you please confirm that this is the latest release?
Also, I sent you an email asking to change my login account.
It would be great if you could send the confirmation to my new e-mail address.

Thank you.

@coutoPL can you comment?
For the email change, it should be already taken care of.

Hi Simone,

I still can’t log in with my new email.
I had to connect with the old credentials to reply.
The new email request was sent via the “contact us” link.
Please update me, and also let me know whether the version I downloaded yesterday is the latest release.

Thanks.

I think @Ferran changed the email in the downloads entitlement database, but not in Chargebee.

@sdba2, fixed that in Chargebee too. Please try again :slight_smile:

This is good timing! I’m just about to upgrade our ES from 6.6.1 to 6.6.2 to overcome a small Kibana UI problem (missing “plus and minus magnifying glass” filter buttons on bar chart legend). So I was going to download the ES and Pro plug-ins to match the new ES version, and I see you have rewritten the core of the LDAP connector, and it now has round-robin HA of multiple DNS-named LDAP servers, also lovely.

I’m slightly concerned that I can’t download an older build version of either of the RoR plug-ins anymore. I don’t expect anything bad to happen, of course, but upgrading ES itself is a one-way operation because the shards get “stamped” as belonging to a particular version, so they will never migrate back to an older ES node. So if I go from ES 6.6.1 + RoR 1.17.0 to ES 6.6.2 + RoR 1.17.3, and something weird happens to my LDAP, It would be desirable to go back to the RoR code logic of 1.17.0, but I think I would have to have RoR 1.17.0 built for 6.6.2, which I may not be able to retrieve now. Anyway, please let me know if this seemingly-minor upgrade is risky, or I should do some very large cluster backup before contemplating it. Thanks.