LDAP Connection Leak Issue?

Hi,

We’re currently using RoR v1.18.4 with ELK 6.8.2. We’ve noticed that, over time, the number of connections RoR makes to our LDAP server keeps growing, eventually maxing out the number of LDAP connections allowed for the user. The only way to clear the connections seems to be to restart Elasticsearch. The count doesn’t increase linearly; it grows in steps, and this appears to happen when we edit the RoR config in Kibana and save it: each save seems to create a new set of LDAP connections on top of the existing ones. The total number of connections exceeds the configured LDAP connection pool size - see the settings below, and the rough monitoring sketch after them.

We’ve also briefly tested this with v1.19.0 and it seems like the same behaviour exists.

Have you ever seen an issue like this? Any ideas on how to resolve it, or at least how to investigate further?

Thanks,

Adrian

ldaps:
  - name: ldap1
    host: "xxxx"
    ssl_enabled: false
    bind_dn: "domain\user"
    bind_password: "password"
    search_user_base_DN: "OU=XXX Users,DC=XXX,DC=ie"
    search_groups_base_DN: "DC=XXX,DC=ie"
    user_id_attribute: "cn"
    # groups_from_user: true
    groups_from_user_attribute: "memberOf"
    unique_member_attribute: "member"
    cache_ttl_in_sec: 60
    connection_pool_size: 10
    connection_timeout_in_sec: 2
    request_timeout_in_sec: 2
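
For reference, this is roughly how the growth can be watched from the ES node - just a sketch, not part of RoR, with placeholder host/port values; it assumes the ss utility is available and that LDAP listens on 389 (636 for LDAPS):

# Rough diagnostic sketch, not part of RoR: counts established TCP
# connections from the ES node to the LDAP server so the step increases
# after each config save become visible. LDAP_HOST and LDAP_PORT are
# placeholders for your environment.
import subprocess
import time

LDAP_HOST = "10.0.0.5"   # placeholder: your LDAP server address
LDAP_PORT = 389          # 636 if ssl_enabled: true

def count_ldap_connections() -> int:
    out = subprocess.run(
        ["ss", "-tn", "state", "established", "dst", f"{LDAP_HOST}:{LDAP_PORT}"],
        capture_output=True, text=True, check=True,
    ).stdout
    # The first line of `ss` output is a header; every other line is one connection.
    return max(len(out.strip().splitlines()) - 1, 0)

if __name__ == "__main__":
    # With connection_pool_size: 10 the count should stay around 10;
    # each step above that after a config save suggests an orphaned pool.
    while True:
        print(time.strftime("%H:%M:%S"), count_ldap_connections())
        time.sleep(30)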

Hi @aidofitz, thanks for reporting this.
@coutoPL, is this related to this PR? [RORDEV-182] cache improvements - parallel calls for the same key will be… by coutoPL · Pull Request #551 · sscarduzio/elasticsearch-readonlyrest-plugin · GitHub

I don’t think so. IMO it’s related to the graceful close of the old core on reload. It seems to be related to the Jira ticket Wojtek is working on at the moment.
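
To illustrate what I mean - just a hypothetical sketch, not the actual plugin code: if a settings reload builds a new LDAP connection pool but the pool owned by the old core is never closed, every save in Kibana adds another set of pooled connections. All names below (ConnectionPool, Core, the reload functions) are made up for the example.

# Illustrative sketch only -- not RoR's implementation.
class ConnectionPool:
    def __init__(self, size: int):
        # In the real plugin these would be sockets to the LDAP server.
        self.connections = [object() for _ in range(size)]

    def close(self):
        self.connections.clear()


class Core:
    def __init__(self, pool_size: int):
        self.ldap_pool = ConnectionPool(pool_size)


current_core = Core(pool_size=10)

def reload_settings_leaky(pool_size: int):
    # Leaky variant: the old core is simply replaced, so the LDAP
    # connections its pool holds are never closed explicitly.
    global current_core
    current_core = Core(pool_size)

def reload_settings_graceful(pool_size: int):
    # Fixed variant: gracefully shut down the old core's pool before
    # swapping in the new one, so the connection count stays bounded.
    global current_core
    old = current_core
    current_core = Core(pool_size)
    old.ldap_pool.close()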


Thanks for looking at this, guys. Is there an open issue I can track to follow progress on this?

It’s in our internal Jira. We’ll notify you here when it’s done.

Great, thanks for that!

Could you please test this version?

https://readonlyrest-data.s3-eu-west-1.amazonaws.com/build/1.19.3-pre5/readonlyrest-1.19.3-pre5_es6.8.2.zip?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA5SJIWBO54AGBERLX/20200310/eu-west-1/s3/aws4_request&X-Amz-Date=20200310T060851Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=b1a39bc4a0158deefb4acbdb40449cfc509d392fc2cb224f586d0306e1d9d581

We’ve found and fixed the leak.


Hi Mateusz,

Thanks very much for this. However, since we originally reported the issue, we’ve upgraded our main test ES cluster to v7.6. Would it be possible to get a 7.6-compatible RoR build for testing?

Regards,

Adrian

Yes, sure:

https://readonlyrest-data.s3-eu-west-1.amazonaws.com/build/1.19.3-pre5/readonlyrest-1.19.3-pre5_es7.6.0.zip?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA5SJIWBO54AGBERLX/20200313/eu-west-1/s3/aws4_request&X-Amz-Date=20200313T192549Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=d085f8ee16eef28ee76f0373d51bd0f412df06392748fcba96550a3df2de8e9c

https://readonlyrest-data.s3-eu-west-1.amazonaws.com/build/1.19.3-pre5/readonlyrest-1.19.3-pre5_es7.6.1.zip?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA5SJIWBO54AGBERLX/20200313/eu-west-1/s3/aws4_request&X-Amz-Date=20200313T192603Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=9ace53de7b1bd125b10d81dfc0dd99c16c59cbe7bad00f026e8e53149759243c

Hi, sorry, I just realised I forgot to respond to this. Just to confirm: the fix seems to have resolved the issue we were seeing. Thanks!
