LDAP with the ReadonlyREST plugin for Elasticsearch

Hello,
I've tried basic authentication with the ReadonlyREST plugin for Elasticsearch and it works fine.
Now I want to use LDAP authentication, and I'm facing some problems.
What I want to do is make a prompt appear where users can enter their LDAP credentials. Do I have to change something in the configuration files for Logstash or Kibana?
Here is an example of my readonlyrest.yml:


readonlyrest:

prompt_for_basic_auth: true

access_control_rules:
-
indices:
- my_indice-*
ldap_auth:
groups:
- MyGroup
name: ldap1
name: “Accept requests from users in group team1 on Filebeat”
type: allow

ldaps:
-
bind_dn: “cn=Directory Manager”
bind_password: myPwd
cache_ttl_in_sec: 60
connection_pool_size: 10
connection_timeout_in_sec: 10
host: “ldah_host”
port: 389
name: ldap1
request_timeout_in_sec: 10
search_groups_base_DN: “cn=groups,ou=clients,dc=something,dc=be”
search_user_base_DN: “cn=users,ou=clients,dc=something,dc=be”
ssl_enabled: false
ssl_trust_all_certs: true

Thanks for your attention.

Hi @Spierre, welcome to the forum!
ROR for ES will accept LDAP credentials as basic HTTP auth, exactly as it did when you had it configured with hardcoded credentials using "auth_key". No Kibana configuration change is required.
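To make the mechanics concrete, here is a minimal sketch (the username and password below are placeholders, not anything from your setup) of what "basic HTTP auth" means on the wire: the client joins user and password with ":" and base64-encodes the pair into the Authorization header, which ROR then decodes and checks against LDAP:

```python
import base64

# Placeholders: any LDAP username/password pair
user, password = "myLdapUser", "secret"

# This is the header the browser builds after you answer the basic-auth prompt
token = base64.b64encode(f"{user}:{password}".encode()).decode()
auth_header = f"Authorization: Basic {token}"
print(auth_header)
```

So from ROR's point of view an LDAP login and a hardcoded auth_key login arrive in exactly the same shape.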

The configuration you pasted has been flattened out because you didn’t use the code button (</>) in the forum’s editor, so it’s very difficult to comment on your current settings.

However, it all boils down to getting the LDAP connector settings right for your LDAP server. I highly recommend setting Elasticsearch to debug mode, reading the debug logs from the LDAP connector, and troubleshooting from there.
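If you want debug output from the plugin only, rather than flipping the whole root logger, something like this in log4j2.properties should work (the logger name here is an assumption on my part, based on the plugin's package visible in its log lines, tech.beshu.ror; it uses standard Log4j2 properties syntax):

```properties
# Hypothetical targeted logger for the ReadonlyREST plugin only
logger.ror.name = tech.beshu.ror
logger.ror.level = debug
```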

Hi, thanks for responding.

Sorry for the format; here are my actual ReadonlyREST settings:

--- 
readonlyrest:

  prompt_for_basic_auth: true
  audit_collector: true

  access_control_rules:
    - 
      indices: ["filebeat-something-*"]
      ldap_auth: 
        groups: 
          - MyGroup
        name: "ldap1"
      name: "Accept requests from users in group team1 on Filebeat"
      type: allow
      kibana_access: rw

  ldaps: 
    - 
      bind_dn: "cn=Directory Manager"
      bind_password: "myPassword"
      cache_ttl_in_sec: 60
      connection_pool_size: 10
      connection_timeout_in_sec: 10
      host: "10.0.228.23"
      port: 389
      name: ldap1
      request_timeout_in_sec: 10
      search_groups_base_DN: "cn=groups,ou=clients,dc=something"
      search_user_base_DN: "cn=users,ou=clients,dc=something"
      ssl_enabled: false
      ssl_trust_all_certs: true

Now what I'm facing is that the connection with the LDAP server seems to work, but I can't get an LDAP user logged in.
In the log file, I have this:

[2019-08-08T09:54:25,056][INFO ][t.b.r.a.l.AclLoggingDecorator] [node-23] FORBIDDEN by default req={ ID:1777138561-597548455#3390, TYP:MainRequest, CGR:N/A, USR:[user not logged], BRS:true, KDX:null, ACT:cluster:monitor/main, OA:10.0.228.23/32, XFF:null, DA:10.0.228.23/32, IDX:<N/A>, MET:HEAD, PTH:/, CNT:<N/A>, HDR:Accept-Encoding=gzip,deflate, Authorization=<OMITTED>, Connection=Keep-Alive, Content-Type=application/json, Host=10.0.228.23:9200, User-Agent=Manticore 0.6.4, content-length=0, HIS:[Accept requests from users in group team1 on Filebeat-> RULES:[ldap_auth->false], RESOLVED:[]] }

ldap_auth is still not matching (it returns false). What do you mean when you say LDAP seems to work?
Maybe authentication is OK but authorization is not?
Maybe the LDAP connector settings for group searching are not correct?
Are you seeing in the debug logs the list of groups the LDAP connector resolves for your user?

What I meant is that at least the host of the LDAP server is correct.
Yes, I think the problem must be with bind_dn or group_search_filter, but I don't know where to find the correct info for these. I have tried so many options, always with the same issue. I receive a status 401, which means the credentials provided are wrong, but they are correct when tested with phpLDAPadmin…

And I'm not seeing the groups the LDAP connector resolves for my user in the log…
It just shows lines like this multiple times:

"stacktrace": ["tech.beshu.ror.es.IndexLevelActionFilter$1$1: forbidden",
"at tech.beshu.ror.es.IndexLevelActionFilter$1.onForbidden(IndexLevelActionFilter.java:290) [readonlyrest-1.18.1_es7.1.0.jar:?]",
"at tech.beshu.ror.acl.helpers.AclResultCommitter$.$anonfun$commit$1(AclResultCommitter.scala:41) [core-1.18.1.jar:?]",
"at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) [scala-library-2.12.8.jar:?]",
"at scala.util.Try$.apply(Try.scala:213) [scala-library-2.12.8.jar:?]"
......

The LDAP server used here is OpenLDAP; do you know if there is anything particular about it?

OpenLDAP is what we use in our integration tests. Definitely nothing wrong with it.

Disregard that useless, noisy stack trace (it has now been removed from our code, BTW). The log lines I would like you to search for in elasticsearch.log are different:

Try to grep for "returned for user", or also "Fetching LDAP user", "Trying to fetch", or similar strings. These are the interesting log lines: they reveal the dialogue between the LDAP server and the connector.
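As a sketch of what that grep looks like (the file path and the log lines here are illustrative stand-ins, not output from your cluster):

```shell
# Two illustrative connector lines, written to a scratch file standing in for elasticsearch.log
cat > /tmp/es_sample.log <<'EOF'
[2019-08-20T10:00:00,000][DEBUG][t.b.r.a.b.d.l.LoggableLdapAuthorizationServiceDecorator] Trying to fetch user [id=bob] groups from LDAP [ldap1]
[2019-08-20T10:00:00,001][DEBUG][t.b.r.a.b.d.l.LoggableLdapAuthorizationServiceDecorator] LDAP [ldap1] returned for user [bob] following groups: [group1]
EOF

# The actual troubleshooting command; point it at your real elasticsearch.log
grep -E "Trying to fetch|Fetching LDAP user|returned for user" /tmp/es_sample.log
```

The "returned for user … following groups" line is the one that tells you whether authorization can ever match your `groups` rule.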

Hello dear Simone Scarduzio,

I'm still working on it, and the problem is that despite setting the rootLogger to debug in the log4j2.properties file in the Elasticsearch config directory, I am still totally unable to see the log lines you are talking about.

Here are my properties:

status = error

# log action execution errors for easier debugging
logger.action.name = org.elasticsearch.action
logger.action.level = debug

appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n

######## Server JSON ############################
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.json
appender.rolling.layout.type = ESJsonLayout
appender.rolling.layout.type_name = server

appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.json.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 128MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.fileIndex = nomax
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-*
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB
################################################
######## Server -  old style pattern ###########
appender.rolling_old.type = RollingFile
appender.rolling_old.name = rolling_old
appender.rolling_old.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling_old.layout.type = PatternLayout
appender.rolling_old.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n

appender.rolling_old.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling_old.policies.type = Policies
appender.rolling_old.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling_old.policies.time.interval = 1
appender.rolling_old.policies.time.modulate = true
appender.rolling_old.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling_old.policies.size.size = 128MB
appender.rolling_old.strategy.type = DefaultRolloverStrategy
appender.rolling_old.strategy.fileIndex = nomax
appender.rolling_old.strategy.action.type = Delete
appender.rolling_old.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling_old.strategy.action.condition.type = IfFileName
appender.rolling_old.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-*
appender.rolling_old.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling_old.strategy.action.condition.nested_condition.exceeds = 2GB
################################################

rootLogger.level = debug
rootLogger.appenderRef.console.ref = console
rootLogger.appenderRef.rolling.ref = rolling
rootLogger.appenderRef.rolling_old.ref = rolling_old

######## Deprecation JSON #######################
appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.json
appender.deprecation_rolling.layout.type = ESJsonLayout
appender.deprecation_rolling.layout.type_name = deprecation

appender.deprecation_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation-%i.json.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 1GB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 4
#################################################
######## Deprecation -  old style pattern #######
appender.deprecation_rolling_old.type = RollingFile
appender.deprecation_rolling_old.name = deprecation_rolling_old
appender.deprecation_rolling_old.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.log
appender.deprecation_rolling_old.layout.type = PatternLayout
appender.deprecation_rolling_old.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n

appender.deprecation_rolling_old.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}\
  _deprecation-%i.log.gz
appender.deprecation_rolling_old.policies.type = Policies
appender.deprecation_rolling_old.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling_old.policies.size.size = 1GB
appender.deprecation_rolling_old.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling_old.strategy.max = 4
#################################################
logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = warn
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
logger.deprecation.appenderRef.deprecation_rolling_old.ref = deprecation_rolling_old
logger.deprecation.additivity = false

######## Search slowlog JSON ####################
appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs\
  .cluster_name}_index_search_slowlog.json
appender.index_search_slowlog_rolling.layout.type = ESJsonLayout
appender.index_search_slowlog_rolling.layout.type_name = index_search_slowlog

appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs\
  .cluster_name}_index_search_slowlog-%i.json.gz
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.size.size = 1GB
appender.index_search_slowlog_rolling.strategy.type = DefaultRolloverStrategy
appender.index_search_slowlog_rolling.strategy.max = 4
#################################################
######## Search slowlog -  old style pattern ####
appender.index_search_slowlog_rolling_old.type = RollingFile
appender.index_search_slowlog_rolling_old.name = index_search_slowlog_rolling_old
appender.index_search_slowlog_rolling_old.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}\
  _index_search_slowlog.log
appender.index_search_slowlog_rolling_old.layout.type = PatternLayout
appender.index_search_slowlog_rolling_old.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n

appender.index_search_slowlog_rolling_old.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}\
  _index_search_slowlog-%i.log.gz
appender.index_search_slowlog_rolling_old.policies.type = Policies
appender.index_search_slowlog_rolling_old.policies.size.type = SizeBasedTriggeringPolicy
appender.index_search_slowlog_rolling_old.policies.size.size = 1GB
appender.index_search_slowlog_rolling_old.strategy.type = DefaultRolloverStrategy
appender.index_search_slowlog_rolling_old.strategy.max = 4
#################################################
logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling_old.ref = index_search_slowlog_rolling_old
logger.index_search_slowlog_rolling.additivity = false

######## Indexing slowlog JSON ##################
appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}\
  _index_indexing_slowlog.json
appender.index_indexing_slowlog_rolling.layout.type = ESJsonLayout
appender.index_indexing_slowlog_rolling.layout.type_name = index_indexing_slowlog

appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}\
  _index_indexing_slowlog-%i.json.gz
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling.policies.size.size = 1GB
appender.index_indexing_slowlog_rolling.strategy.type = DefaultRolloverStrategy
appender.index_indexing_slowlog_rolling.strategy.max = 4
#################################################
######## Indexing slowlog -  old style pattern ##
appender.index_indexing_slowlog_rolling_old.type = RollingFile
appender.index_indexing_slowlog_rolling_old.name = index_indexing_slowlog_rolling_old
appender.index_indexing_slowlog_rolling_old.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}\
  _index_indexing_slowlog.log
appender.index_indexing_slowlog_rolling_old.layout.type = PatternLayout
appender.index_indexing_slowlog_rolling_old.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n

appender.index_indexing_slowlog_rolling_old.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}\
  _index_indexing_slowlog-%i.log.gz
appender.index_indexing_slowlog_rolling_old.policies.type = Policies
appender.index_indexing_slowlog_rolling_old.policies.size.type = SizeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling_old.policies.size.size = 1GB
appender.index_indexing_slowlog_rolling_old.strategy.type = DefaultRolloverStrategy
appender.index_indexing_slowlog_rolling_old.strategy.max = 4
#################################################

logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling_old.ref = index_indexing_slowlog_rolling_old
logger.index_indexing_slowlog.additivity = false


appender.audit_rolling.type = RollingFile
appender.audit_rolling.name = audit_rolling
appender.audit_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_audit.json
appender.audit_rolling.layout.type = PatternLayout
appender.audit_rolling.layout.pattern = {\
                "@timestamp":"%d{ISO8601}"\
                %varsNotEmpty{, "node.name":"%enc{%map{node.name}}{JSON}"}\
                %varsNotEmpty{, "node.id":"%enc{%map{node.id}}{JSON}"}\
                %varsNotEmpty{, "host.name":"%enc{%map{host.name}}{JSON}"}\
                %varsNotEmpty{, "host.ip":"%enc{%map{host.ip}}{JSON}"}\
                %varsNotEmpty{, "event.type":"%enc{%map{event.type}}{JSON}"}\
                %varsNotEmpty{, "event.action":"%enc{%map{event.action}}{JSON}"}\
                %varsNotEmpty{, "user.name":"%enc{%map{user.name}}{JSON}"}\
                %varsNotEmpty{, "user.run_by.name":"%enc{%map{user.run_by.name}}{JSON}"}\
                %varsNotEmpty{, "user.run_as.name":"%enc{%map{user.run_as.name}}{JSON}"}\
                %varsNotEmpty{, "user.realm":"%enc{%map{user.realm}}{JSON}"}\
                %varsNotEmpty{, "user.run_by.realm":"%enc{%map{user.run_by.realm}}{JSON}"}\
                %varsNotEmpty{, "user.run_as.realm":"%enc{%map{user.run_as.realm}}{JSON}"}\
                %varsNotEmpty{, "user.roles":%map{user.roles}}\
                %varsNotEmpty{, "origin.type":"%enc{%map{origin.type}}{JSON}"}\
                %varsNotEmpty{, "origin.address":"%enc{%map{origin.address}}{JSON}"}\
                %varsNotEmpty{, "realm":"%enc{%map{realm}}{JSON}"}\
                %varsNotEmpty{, "url.path":"%enc{%map{url.path}}{JSON}"}\
                %varsNotEmpty{, "url.query":"%enc{%map{url.query}}{JSON}"}\
                %varsNotEmpty{, "request.method":"%enc{%map{request.method}}{JSON}"}\
                %varsNotEmpty{, "request.body":"%enc{%map{request.body}}{JSON}"}\
                %varsNotEmpty{, "request.id":"%enc{%map{request.id}}{JSON}"}\
                %varsNotEmpty{, "action":"%enc{%map{action}}{JSON}"}\
                %varsNotEmpty{, "request.name":"%enc{%map{request.name}}{JSON}"}\
                %varsNotEmpty{, "indices":%map{indices}}\
                %varsNotEmpty{, "opaque_id":"%enc{%map{opaque_id}}{JSON}"}\
                %varsNotEmpty{, "x_forwarded_for":"%enc{%map{x_forwarded_for}}{JSON}"}\
                %varsNotEmpty{, "transport.profile":"%enc{%map{transport.profile}}{JSON}"}\
                %varsNotEmpty{, "rule":"%enc{%map{rule}}{JSON}"}\
                %varsNotEmpty{, "event.category":"%enc{%map{event.category}}{JSON}"}\
                }%n
# "node.name" node name from the `elasticsearch.yml` settings
# "node.id" node id which should not change between cluster restarts
# "host.name" unresolved hostname of the local node
# "host.ip" the local bound ip (i.e. the ip listening for connections)
# "event.type" a received REST request is translated into one or more transport requests. This indicates which processing layer generated the event "rest" or "transport" (internal)
# "event.action" the name of the audited event, eg. "authentication_failed", "access_granted", "run_as_granted", etc.
# "user.name" the subject name as authenticated by a realm
# "user.run_by.name" the original authenticated subject name that is impersonating another one.
# "user.run_as.name" if this "event.action" is of a run_as type, this is the subject name to be impersonated as.
# "user.realm" the name of the realm that authenticated "user.name"
# "user.run_by.realm" the realm name of the impersonating subject ("user.run_by.name")
# "user.run_as.realm" if this "event.action" is of a run_as type, this is the realm name the impersonated user is looked up from
# "user.roles" the roles array of the user; these are the roles that are granting privileges
# "origin.type" it is "rest" if the event is originating (is in relation to) a REST request; possible other values are "transport" and "ip_filter"
# "origin.address" the remote address and port of the first network hop, i.e. a REST proxy or another cluster node
# "realm" name of a realm that has generated an "authentication_failed" or an "authentication_successful"; the subject is not yet authenticated
# "url.path" the URI component between the port and the query string; it is percent (URL) encoded
# "url.query" the URI component after the path and before the fragment; it is percent (URL) encoded
# "request.method" the method of the HTTP request, i.e. one of GET, POST, PUT, DELETE, OPTIONS, HEAD, PATCH, TRACE, CONNECT
# "request.body" the content of the request body entity, JSON escaped
# "request.id" a synthetic identifier for the incoming request, this is unique per incoming request, and consistent across all audit events generated by that request
# "action" an action is the most granular operation that is authorized and this identifies it in a namespaced way (internal)
# "request.name" if the event is in connection to a transport message this is the name of the request class, similar to how rest requests are identified by the url path (internal)
# "indices" the array of indices that the "action" is acting upon
# "opaque_id" opaque value conveyed by the "X-Opaque-Id" request header
# "x_forwarded_for" the addresses from the "X-Forwarded-For" request header, as a verbatim string value (not an array)
# "transport.profile" name of the transport profile in case this is a "connection_granted" or "connection_denied" event
# "rule" name of the applied rule if the "origin.type" is "ip_filter"
# "event.category" fixed value "elasticsearch-audit"

appender.audit_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_audit-%d{yyyy-MM-dd}.json
appender.audit_rolling.policies.type = Policies
appender.audit_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.audit_rolling.policies.time.interval = 1
appender.audit_rolling.policies.time.modulate = true

logger.xpack_security_audit_logfile.name = org.elasticsearch.xpack.security.audit.logfile.LoggingAuditTrail
logger.xpack_security_audit_logfile.level = info
logger.xpack_security_audit_logfile.appenderRef.audit_rolling.ref = audit_rolling
logger.xpack_security_audit_logfile.additivity = false

logger.xmlsig.name = org.apache.xml.security.signature.XMLSignature
logger.xmlsig.level = error
logger.samlxml_decrypt.name = org.opensaml.xmlsec.encryption.support.Decrypter
logger.samlxml_decrypt.level = fatal
logger.saml2_decrypt.name = org.opensaml.saml.saml2.encryption.Decrypter
logger.saml2_decrypt.level = fatal

Am I missing something?

OK, some small improvements.

When I add this to my kibana.yml:

elasticsearch.username: myLdapUser
elasticsearch.password: myPassword

I obtain this in the log:

[2019-08-21T08:26:19,852][DEBUG][t.b.r.a.l.AclLoggingDecorator] [node-23] checking request: 1499885724-1990058277#518129
[2019-08-21T08:26:19,852][DEBUG][t.b.r.a.b.r.LdapAuthenticationRule] [node-23] Attempting Login as: myLdapUser rc: 1499885724-1990058277#518129
[2019-08-21T08:26:19,852][DEBUG][t.b.r.a.b.d.l.LoggableLdapAuthenticationServiceDecorator] [node-23] Trying to authenticate user [myLdapUser] with LDAP [ldap1]
[2019-08-21T08:26:19,852][DEBUG][t.b.r.a.b.d.l.LoggableLdapAuthenticationServiceDecorator] [node-23] User [myLdapUser]  authenticated by LDAP [ldap1]
[2019-08-21T08:26:19,852][DEBUG][t.b.r.a.b.d.l.LoggableLdapAuthorizationServiceDecorator] [node-23] Trying to fetch user [id=myLdapUser] groups from LDAP [ldap1]
[2019-08-21T08:26:19,852][DEBUG][t.b.r.a.b.d.l.LoggableLdapAuthorizationServiceDecorator] [node-23] LDAP [ldap1] returned for user [myLdapUser] following groups: []

It seems that ROR uses the credentials written in the Kibana config to establish the connection with LDAP…
How do I change that behaviour and make ROR ask for credentials instead of reading them from the Kibana config?

Your error is that you forgot to create the first ACL block with a pair of static credentials for the Kibana server.

readonlyrest:

  prompt_for_basic_auth: true
  audit_collector: true

  access_control_rules:

  # JUST ADD THIS BLOCK
  - name: "::KIBANA-SRV::"
    auth_key: kibana:kibana # <-- add these credentials to kibana.yml as elasticsearch.username, elasticsearch.password

  - name: "Accept requests from users in group team1 on Filebeat"
    indices: ["filebeat-something-*"]
    ldap_auth: 
        groups: ["MyGroup"]
        name: "ldap1"
    type: allow
    kibana_access: rw
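And the Kibana side of that pairing goes in kibana.yml, with the same literal values as in the auth_key rule:

```yaml
elasticsearch.username: kibana
elasticsearch.password: kibana
```

This way the Kibana server process authenticates with its own static credentials, and the basic-auth prompt is left free for real LDAP users.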

OK, thanks for your help. I can now write a config that runs, and it looks like this:

https://github.com/sscarduzio/elasticsearch-readonlyrest-plugin/blob/master/integration-tests/src/test/resources/ldap_separate_authc_authz_mixed_local/elasticsearch.yml

The only remaining issue is that I am still not able to exploit the groups already created in LDAP.
In readonlyrest.yml, I have to redefine the groups in Elasticsearch's scope, and I am able to redistribute the permissions from there. I can go on like that, but it means I have to add each new user in our LDAP groups to my readonlyrest file…

If you have a simple example of a configuration that would help me avoid that, it would be nice.

In fact, ideally I would like to use a configuration of this form:
https://github.com/sscarduzio/elasticsearch-readonlyrest-plugin/blob/master/integration-tests/src/test/resources/ldap_integration_2nd/ldap_second_option_test_elasticsearch.yml

mmm sorry I did not understand what you want to do :confused:

Well… I finally found a way to do what I want :grin:

What I meant is that I was unable to link permissions or restrictions to the groups already existing in LDAP. I was only able to do it with groups created in readonlyrest.yml.

The issue was in fact with my LDAP configuration in phpLDAPadmin.

To link a group with some users in phpLDAPadmin, we have to add a memberUid attribute for each user, and this attribute must have as its value the complete distinguished name of the user, not just the user name.
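For example, a group entry then looks like this (a hypothetical LDIF sketch: the DNs mirror the ones from my config, and gidNumber is just an illustrative value):

```ldif
dn: cn=groupeOne,cn=groups,ou=clients,dc=example,dc=be
objectClass: posixGroup
cn: groupeOne
gidNumber: 5000
memberUid: uid=myLdapUser,cn=users,ou=clients,dc=example,dc=be
```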

Then in my readonlyrest.yml:

--- 
readonlyrest:

  prompt_for_basic_auth: true
  audit_collector: true

  access_control_rules:

  - name: "::KIBANA-SRV::" 
    auth_key: kibana:kibana # <-- add these credentials to kibana.yml as elasticsearch.username, elasticsearch.password
    indices: [".kibana*"]

  - name: "::Envi1::"
    type: allow
    indices: ["indice1*",".kibana*"]
    ldap_auth:
      name: "ldap1"
      groups: ["groupeOne"]
    kibana_access: rw

  - name: "::Envi2::"
    type: allow
    indices: ["indice2*",".kibana*"]
    ldap_auth:
      name: "ldap1"
      groups: ["groupTwo"]
    kibana_access: rw

  ######### LDAP1 SERVER CONFIGURATION ########################

  #############################################################
  ldaps:
    - 
      bind_dn: "cn=Directory Manager"
      bind_password: "managerPwd"
      cache_ttl_in_sec: 60
      connection_pool_size: 10
      connection_timeout_in_sec: 10
      hosts:
      - "ldap://ldap.host"
      name: ldap1
      user_id_attribute: "uid"
      request_timeout_in_sec: 10
      search_groups_base_DN: "cn=groups,ou=clients,dc=example,dc=be"
      search_user_base_DN: "cn=users,ou=clients,dc=example,dc=be"
      group_search_filter: "(objectClass=posixGroup)"
      unique_member_attribute: "memberUid"
      ssl_enabled: false
      ssl_trust_all_certs: true

For unique_member_attribute, we have to use the value "memberUid"; groupeOne and groupTwo are groups in LDAP.

I'm posting it in case other people face the same issue.


Thank you @Spierre, this will be a good reference for others for sure!