LDAP connectivity failures, a few questions


#1

Hello,

we use LDAP authentication with our ReadonlyREST plugin.
Our infrastructure has several LDAP servers. In the YML file we use a hostname that resolves to several of them, so we can connect to any one.

A few days ago we had issues with one of our LDAP servers.
During that time the connection details cached on the Elasticsearch node stayed the same: even though other LDAP servers were up, that specific Elasticsearch node kept trying to use the LDAP server that was down for several minutes.
So I have some questions about that:

  1. If a domain controller fails, does the mechanism that sends queries to the LDAP servers try to query another server in the domain? If so, which parameter should we set?
  2. Does the cache_ttl_in_sec parameter cache only the user credentials?
  3. Is there a relationship between the cache_ttl_in_sec parameter and other parameters?
    What value do you recommend for it? (A sketch of where this parameter sits follows this list.)
  4. Can you please send me a link to the documentation that lists all the relevant settings we can configure in the readonlyrest.yml file, especially the LDAP section?
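
For reference, a connector definition with that parameter might look roughly like this — a sketch only, with placeholder host, port and DN values, following the documented LDAP settings:

  ldaps:
    - name: ldap1
      host: "ldap.example.com"                              # placeholder hostname
      port: 389
      search_user_base_DN: "ou=People,dc=example,dc=com"
      search_groups_base_DN: "ou=Groups,dc=example,dc=com"
      cache_ttl_in_sec: 60                                  # the cache parameter from questions 2 and 3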

Thanks.


#2

Hi,

can you please take a look at this?
We would like to check whether we need to test the more complicated SAML solution, or whether we can still live with the LDAP configuration with a few changes.

Thanks again.


(Simone Scarduzio) #3

We do have LDAP HA.

  # High availability LDAP settings (using "hosts", rather than "host")
    - name: ldap2
      hosts:                                                        # HA style, alternative to "host"
      - "ldaps://ssl-ldap2.foo.com:636"                             # can use ldap:// or ldaps:// (for ssl)
      - "ldaps://ssl-ldap3.foo.com:636"                             # the port is declared in line
      ha: "ROUND_ROBIN"                                             # optional, default "FAILOVER"
      search_user_base_DN: "ou=People,dc=example2,dc=com"
      search_groups_base_DN: "ou=Groups,dc=example2,dc=com"
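
An ACL block then refers to the connector by its name. A minimal sketch (block and group names are placeholders):

  access_control_rules:
    - name: "LDAP users"                                    # placeholder block name
      ldap_auth:
        name: "ldap2"                                       # must match the connector name above
        groups: ["team1"]                                   # placeholder LDAP group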

#4

Hi Simone,

thanks for the update.
Is there a link you can share with the readonlyrest.yml parameters?
I guess there are lots of parameters we are not aware of and might want to check.


(Simone Scarduzio) #5

Sure: https://github.com/beshu-tech/readonlyrest-docs/blob/master/elasticsearch.md#ldap-connector


#6

Hi,

I configured readonlyrest.yml as suggested above, with the HA section.
Then I took down one of our domain controllers while the others stayed up.
I tried to connect to the cluster through all the nodes.
Some of them returned “Rejected by ROR”, status 401.
Others were fine and the connection succeeded.
On the nodes that returned status 401 I had to restart the Elasticsearch service, and only then could I connect to the cluster.
Can you please advise if there is something else I need to configure?
Our configuration file looks like this:
  ldaps:
    - name: ldap_name
      hosts:
      - "ldap://dc_name:port_number"
      - "ldap://other_dc_name:port_number"
      ha: "ROUND_ROBIN"

thanks


(Simone Scarduzio) #7

We are going to release a rewritten LDAP connector in a matter of days. I added your concern to the ticket, so it will be tested.


#8

thank you,
we have ES version 6.1, but we also need ROR for future installations of ES 6.5.4, 6.5.1 and 6.6.0.
Will you have a patch for those versions as well?

Another thing, just to make sure:
when we install ES we prepare a readonlyrest.yml file for each node.
Once the cluster is up, all changes are made through Kibana, using the ROR plugin.
That’s how it’s supposed to work, right?


(Simone Scarduzio) #9

Yes, the new core will be integrated with all the currently supported ES versions.

Yes, the idea is that you leave a static readonlyrest.yml on all your nodes as a failsafe in case the .readonlyrest index gets deleted or corrupted (btw, protect direct access to that index from users using the ACL — see the sketch below!).
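
For example, since the "indices" rule acts as a whitelist, a user-facing block along these lines never exposes the settings index (connector, group and index names are placeholders):

  access_control_rules:
    - name: "Regular users"
      ldap_auth:
        name: "ldap1"                                       # placeholder connector name
        groups: ["users"]                                   # placeholder LDAP group
      indices: ["logstash-*"]                               # whitelist: .readonlyrest stays out of reach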

Then, for practical reasons, it’s easier to use the ReadonlyREST PRO/Enterprise GUI in Kibana, as you would just press save and the security settings would be reloaded in all the nodes without needing a full cluster restart.


#10

Thank you, Simone,

I’ll wait for the new release.


#11

Hello Simone,

can you please update us on whether the patch has been released?

Thanks.


(Simone Scarduzio) #12

The PR has finally landed. Reviewing now, then a bit of manual testing (in addition to all the unit and integration tests already present) and we can merge.


#13

Hi Simone,

if we download ROR now for versions 6.5.1, 6.5.4 and 6.6.0, will we get the latest release including the bug fixes mentioned above?


(Simone Scarduzio) #14

We’re preparing a new stable version, out in 1-2 days. Releases are coming a bit fast as we are progressively stabilising the product with the new Kibana API.