Hosts_local question

Hi @sscarduzio, one more question please. I am unable to create index patterns in Kibana with following rule:

- name: "::KIBANA RW DEVELOPER::"
  auth_key: kibanaadmin:admin0nly
  kibana_access: rw
  indices: [".kibana", "log_index-*"]

What am I missing?

I tried adding a read actions rule like below, but it did not get through…

actions: ["indices:data/read/*"]

Thanks,

@sairamvla

Might be related, might not. I had weird problems when I did not specify a Kibana index. You might try that.

kibana_index: ".kibana"

kibana_index: is which index Kibana uses to store its settings, and kibana_access: is what permission level to grant to that index. These two deal with the inner workings of Kibana.

indices: is which indexes to grant permission to, and actions: is what permissions to grant on them. From the perspective of a Kibana user, these two deal with the indexes that you see in Kibana, not the .kibana index itself.
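Putting the four settings side by side may make the distinction clearer. A minimal sketch (the credentials are placeholders, not from this thread):

```yaml
- name: "::EXAMPLE::"
  auth_key: someuser:somepass        # placeholder credentials
  kibana_index: ".kibana"            # inner workings: where Kibana stores its settings
  kibana_access: rw                  # inner workings: permission level on that index
  indices: ["log_index-*"]           # user-facing: data indexes this block exposes
  actions: ["indices:data/read/*"]   # user-facing: operations allowed on those indexes
```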

This looks OK to me, even for writing the index pattern. Not sure what the rest of the settings look like, though.

Anyway, please show us the Elasticsearch log lines where you get the “FORBIDDEN” request when you try to create the index pattern.

(for future people searching, since the title of this thread is descriptive)

This feature is absolutely amazing, especially when dealing with more than one service connecting to localhost:9200, such as Kibana, Kibana X-Pack monitoring, CLI access, Curator, etc., all running on the same host.

Also, many thanks @sscarduzio. It took some time to fully embrace the flexibility of the permission architecture using ACL blocks, but I’m starting to gain an appreciation for it.

Take Kibana, for instance: without the hosts_local: feature, it is very difficult to do the following all at the same time:

  1. Kibana needs full access and can log in with a user:pass
  2. Kibana X-Pack needs to connect without a user or pass due to a limitation in X-Pack
  3. You want to allow CLI access without authentication for basic RO troubleshooting

What you do is use a different loopback IP for each service and then use the hosts_local: variable to filter on it. That way, multiple services all running on the same server can be distinguished from each other. In short, you can say “this ACL block applies only to this particular service”. The RoR ACL is evaluated top->down, first match wins, so being able to distinguish one service from another is incredibly powerful. In the example below, the main Kibana service connects to 127.1.1.1, the Kibana X-Pack service connects to 127.1.1.2, and the CLI at localhost:9200 allows basic view-only troubleshooting with zero authentication.

# Local command line
- name: "::CLI::"
  actions: ["cluster:monitor/*","indices:monitor/*"]
  hosts_local: ["127.0.0.1"]

# Kibana service
- name: "::KIBANA::"
  auth_key: kibanauser:kibanapass
  verbosity: error
  hosts_local: ["127.1.1.1"]

# Allow Kibana monitoring to work
- name: "::KIBANA::XPACK::"
  type: allow
  actions: ["cluster:monitor/*","cluster:admin/xpack/*","indices:data/read/*","indices:data/write/*","indices:admin/*"]
  indices: [".kibana*",".monitoring*"]
  hosts: ["127.1.1.2"]      
  verbosity: error

And then in kibana.yml:

elasticsearch.url: "http://127.1.1.1:9200"
elasticsearch.username: "kibanauser"
elasticsearch.password: "kibanapass"

xpack.monitoring.elasticsearch.url: "http://127.1.1.2:9200"

Theoretically, a person on the CLI could access 127.1.1.2:9200 and gain higher-level privileges on the .kibana and .monitoring* indexes with zero authentication. The importance of this is debatable, though: once a person has access to the server, the user/password for full access are sitting in kibana.yml anyway.

Another common usage would be to secure ONLY Kibana. Many users of ELK only use Kibana, couldn’t care less what is happening on the back end, and only want security in Kibana itself. Adding security on the back end is an unnecessary nuisance, since only administrators have access at that level anyway. Remove the actions: restriction from the above example and this will allow full access for everything except Kibana. In short, it makes RoR secure only Kibana, nothing else.

# Local command line
- name: "::CLI::"
  hosts_local: ["127.0.0.1"]

# Kibana service
- name: "::KIBANA::"
  auth_key: kibana:readonlyrest
  verbosity: error
  hosts_local: ["127.1.1.1"]

[2018-04-23T02:19:03,598][INFO ][t.b.r.a.ACL ] FORBIDDEN by default req={ ID:1878695106–301404706#1022, TYP:FieldCapabilitiesRequest, CGR:N/A, USR:[no basic auth header], BRS:false, KDX:null, ACT:indices:data/read/field_caps, OA:172.21.32.159, DA:172.21.32.159, IDX:log_index-, MET:POST, PTH:/log_index-/_field_caps?fields=*&ignore_unavailable=true&allow_no_indices=false, CNT:<N/A>, HDR:{Connection=keep-alive, Content-Length=0, Host=172.21.32.159:9200}, HIS:[::LOGSTASH::->[auth_key->false]], [::DOCSHOUND::->[auth_key->false]], [::ES READONLY::->[auth_key->false]], [::ES ADMIN::->[auth_key->false]], [::KIBANA-SRV::->[auth_key->false]], [::KIBANA RW DEVELOPER::->[auth_key->false]], [::KIBANA RO DEVELOPER::->[auth_key->false]] }

I suspect you are not using the ROR Kibana plugin (PRO/Enterprise) and you are facing that old (but still valid) Kibana issue where some requests don’t carry the Authorization header.

If this is the case, I provided explanation and a workaround in this thread:

It worked. Thanks @sscarduzio.

Hi @sscarduzio, is it possible to skip authentication screen for kibana RO access? Thanks

Not really: the ROR PRO/Enterprise Kibana plugin always requires an identity for the current user. The only way to skip the login is to have the (default) identity injected via the x-forwarded-for header, or via JWT. Read about this in the docs.
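For reference, a rough sketch of header-injected identity using ROR’s proxy_auth rule (the block name, user, and header name here are assumptions for illustration; check the docs for the exact syntax in your version):

```yaml
- name: "::VIA PROXY::"
  proxy_auth:
    proxy_auth_config: "proxy1"
    users: ["viewer"]                  # assumed user name
  kibana_access: ro

proxy_auth_configs:
  - name: "proxy1"
    user_id_header: "X-Forwarded-User" # header that carries the injected identity
```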

Hi @sscarduzio,

We have seen client response times increase after installing the ROR plugin on Elasticsearch 6.2.1; it’s a 3x increase. I installed it on all 3 nodes in our 3-node dev cluster. All of them are master-eligible, data, and client nodes. Generally, what is the recommendation for the plugin and ACL rules? Does it need to be only on the master? Please help.

Thanks,
Sai

Hello @sairamvla!

  1. It’s not normal to have a significant performance hit with ROR. Are you calling external auth systems? How long is your ACL?

  2. You should install ROR as a stateless security “filter” only on the ES nodes that receive HTTP connections, which - unless your deployment is really simplistic - are not the same nodes that hold data and are master eligible.
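On ES 6.x, such an HTTP-facing “coordinating only” node is declared by switching off the other roles in elasticsearch.yml. A sketch (these are standard Elasticsearch settings, not ROR-specific):

```yaml
node.master: false   # not master eligible
node.data: false     # holds no shards
node.ingest: false   # runs no ingest pipelines
```

ROR (and its ACL evaluation cost) then lives only on this node, while the data and master-eligible nodes stay plugin-free.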

Thanks @sscarduzio for your response. No, I am not calling external auth systems. I configured 9 ACL rules on all 3 nodes…

Can we see your settings?

Here is yml:

readonlyrest:
    enable: true
    response_if_req_forbidden: Sorry, your request is forbidden.

    access_control_rules:

#    - name: "Accept all requests from localhost"
#      hosts_local: [172.21.32.159]
#      actions: ["cluster:monitor/main","indices:admin/types/exists","indices:data/read/*","indices:admin/template/*"]
    - name: "::LOGSTASH::"
      auth_key: xxx:xxx
      actions: ["cluster:monitor/main","indices:admin/types/exists","indices:data/read/*","indices:data/write/*","indices:admin/template/*","indices:admin/create"]
      indices: ["log_index-*"]
    - name: "::DOCSHOUND::"
      auth_key: xxx:xxx
      actions: ["cluster:monitor/health","cluster:monitor/main","cluster:monitor/state","cluster:monitor/stats","indices:admin/types/exists","indices:data/read/*","indices:data/write/*","indices:admin/template/*","indices:admin/create"]
      indices: ["docshound*"]
    - name: "::ES READONLY::"
      auth_key: xxx:xxx
      actions: ["indices:data/read/*","indices:monitor/*","cluster:monitor/health","cluster:monitor/main","cluster:monitor/state","cluster:monitor/stats"]
      indices: ["log_index-*",".kibana"]
    - name: "::ES SNAPSHOTRESTORE::"
      auth_key: xxx:xxx
      actions: ["cluster:admin/repository/*","cluster:admin/snapshot/*"]
      indices: ["log_index-*","docshound*",".kibana"]
    - name: "::ES ADMIN::"
      auth_key: xxx:xxx
      actions: ["cluster:monitor/*","cluster:admin/*","indices:admin/*","indices:data/*","indices:monitor/*"]
      indices: ["log_index-*","docshound*",".kibana"]
    - name: "::KIBANA-SRV::"
      auth_key: kibana:kibana
      verbosity: error
    - name: "::KIBANA RW DEVELOPER::"
      auth_key: xxx:xxx
      kibana_access: rw
      kibana_index: ".kibana"
      indices: ["log_index-*","docshound*",".kibana"]
    - name: "::KIBANA RO DEVELOPER::"
      auth_key: xxx:xxx
      kibana_access: ro
      indices: ["log_index-*","docshound*",".kibana"]
    - name: "workaround"
      actions: [ "indices:data/read/field_caps*", "indices:data/read/msearch", "indices:data/read/search" ]

Is it always this slow, or does the slowness build up after it’s been running for a while?

After 1-2 hours it starts slowing down, and it continues from then on as requests come in

is the memory allocation growing during that time?

No, memory usage is under control, at ~16%…

Hi @sscarduzio, any recommendations? thank you.

@sairamvla could you connect a profiler like JVisualVM? In the past this has been the most useful tool to detect bottlenecks.

  1. Install JVisualVM
  2. Connect it to the Elasticsearch JVM instance when it’s slow
  3. Fire up the CPU “Sampler” (not the profiler, as it’s super slow)
  4. Send some traffic if it’s not there already, for 20-30 seconds
  5. Save the result, and send it here or at info AT readonlyrest DOT com
  6. Also, take a thread dump (there is a button in JVisualVM) and send that across too.