I am trying to filter so that the only records returned are those matching a cardid of aa-bb-cc-dd-ee. If I log in and go to the Discover tab and enter the following in the search field, I get only the record I am expecting:
cardid : aa-bb-cc-dd-ee
Once I save my config and go into Kibana, it takes me straight to the Management tab with the message "In order to visualize and explore data in Kibana, you'll need to create an index pattern to retrieve data from Elasticsearch."
In there I can't see any of my index patterns, and it asks me to create a new one. In step 1 it tells me it can see 13 indices, although I have 100 (and when I remove the filter it does tell me I have 100 indices).
I am using only a single-node cluster.
When I try to use the fields filter, that works, so FLS is fine, but DLS is not.
I also tried adjusting it to this, expecting it to filter out those records, but it did nothing and showed all records, including those with cardid aa-bb-cc-dd.
The issue is that this breaks the Visualize and Dashboard pages. When I go to either of those screens, it throws me over to the Management page and shows me no indices. I'm guessing this has something to do with the filter also being applied to the .kibana index.
How do we force the filter to exclude the .kibana index? Or, if that isn't possible, can we write a filter with an OR: type=index-pattern OR type=dashboard OR type=visualization OR type=search OR type=config?
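For reference, that kind of OR can be expressed in the Elasticsearch query DSL as a bool query with should clauses. This is only a sketch, and it assumes the .kibana documents record their saved-object type in a top-level type field (true in recent Kibana versions, but worth verifying against your mappings):

```json
{
  "bool": {
    "should": [
      { "term": { "type": "index-pattern" } },
      { "term": { "type": "dashboard" } },
      { "term": { "type": "visualization" } },
      { "term": { "type": "search" } },
      { "term": { "type": "config" } }
    ],
    "minimum_should_match": 1
  }
}
```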
Hi Paul, you have to compose ACL blocks, leveraging the fact that they are evaluated sequentially.
Look at this example:
- name: "for this user, kibana index should never be filtered"
  auth_key: b:b
  indices: [".kibana"]

- name: "for this user, read operations on other indices should be filtered"
  auth_key: b:b
  filter: '{"bool": { "must_not": { "match": { "final_state": "ALLOWED" }}}}'

- name: "for this user, non-read operations should normally be allowed under the kibana_access: rw policy"
  auth_key: b:b
  kibana_access: rw
This helped me, and I was able to configure authentication and authorization with the filter I'm passing.
I wanted to know how to include a dynamic user in this scenario. I'm working on a POC where I have to restrict users by tenant; I have multiple tenants, multiple users per tenant, and a few users that are created after the cluster is set up.
For a single user, my configuration is as follows:
- name: "RDW_user accessing all the indices"
  auth_key: rdwadmin:rdwadmin
  indices: [".kibana*","log-*","testindex"]

- name: "Allow access to only rdw tenant logs for RDW_user"
  auth_key: rdwadmin:rdwadmin
  filter: '{"bool": { "must": { "match": { "TenantId": "rdw" }}}}'

- name: "Allow non-read operations under kibana_access for RDW_user"
  auth_key: rdwadmin:rdwadmin
  kibana_access: rw
This requires a restart of the cluster every time we add a new user, and as mentioned earlier, some users are created after the environment is set up and in use.
Note: I'm still exploring all the options in the open-source version of ROR.
Be careful that read requests to the ".kibana" index don't get filtered by the second block, as they would then return no documents. I suggest you add an indices rule to that block that does not include the ".kibana" index.
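Applied to the configuration above, the second block would become something like this (a sketch reusing the index names from that example):

```yaml
- name: "Allow access to only rdw tenant logs for RDW_user"
  auth_key: rdwadmin:rdwadmin
  indices: ["log-*","testindex"]   # .kibana deliberately left out, so those requests match the first block unfiltered
  filter: '{"bool": { "must": { "match": { "TenantId": "rdw" }}}}'
```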
About the need to refresh the settings without restarting the whole ES cluster, you have two ways to achieve this:
1. Use an external connector like LDAP, or any identity provider that emits JWT tokens, and use dynamic variables in your readonlyrest.yml.
2. Buy ReadonlyREST PRO, so you can edit the settings, press save, and all nodes will reload them with no downtime.
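For the dynamic-variables route, ROR can substitute the authenticated username into a rule at request time via @{user}, so one block can serve every user instead of one block per user. A sketch, assuming usernames match the TenantId values (e.g. user "rdw" owns TenantId "rdw") and assuming an LDAP connector named "ldap1" is defined elsewhere in the file:

```yaml
- name: "Filter tenant logs by the logged-in user"
  ldap_authentication: "ldap1"   # hypothetical connector name; any auth rule that sets the user works
  indices: ["log-*","testindex"]
  filter: '{"bool": { "must": { "match": { "TenantId": "@{user}" }}}}'
```

Users created after setup then need no config change, since the filter is resolved per request from whoever authenticated.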