Hi,
We’re using the ReadonlyREST (RoR) plugin together with the ELK stack. We’re unable to access the Index Management page in the Kibana UI because of the following popup error:
[undefined] Forbidden, with { due_to={ 0="OPERATION_NOT_ALLOWED" } }
I already tried adding a kibana_adm role, with no success: after inserting it, the ELK stack no longer comes up, and syslog fills with "Encountered a retryable error. Will Retry with exponential backoff." messages.
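(For what it's worth, a stack that refuses to start right after a readonlyrest.yml edit is often just a YAML indentation or quoting slip in the new block. A minimal sketch of a pre-restart sanity check, assuming PyYAML is installed; the inline sample and rule name are illustrative, and the same `yaml.safe_load` call works on the real file:)

```python
# Sanity-check a readonlyrest.yml fragment before restarting Elasticsearch.
# Assumes PyYAML is available (pip install pyyaml); for the real file use
# yaml.safe_load(open("/etc/elasticsearch/readonlyrest.yml")) instead.
import yaml

sample = """
readonlyrest:
  access_control_rules:
  - name: "kibana_admin"
    kibana_access: rw
"""

config = yaml.safe_load(sample)  # raises yaml.YAMLError on bad indentation/quoting
rules = config["readonlyrest"]["access_control_rules"]
print([r["name"] for r in rules])  # → ['kibana_admin']
```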
The versions are:
- readonlyrest-1.18.2_es6.7.1
- Kibana and Elasticsearch 6.7.1
The elasticsearch.yml file is left at its defaults except for:
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
http.type: ssl_netty4
xpack.security.enabled: false
The readonlyrest.yml file is as follows:
readonlyrest:
  enable: true # optional; defaults to true if at least one "access_control_rules" block is present
  audit_collector: true

  ssl:
    enable: true
    keystore_file: "/etc/elasticsearch/ssl/server.pfx"
    keystore_pass: fihpqfd32d23f4
    key_pass: fih4re3e23d4

  response_if_req_forbidden: Forbidden

  access_control_rules:

  - name: "::LOGSTASH::"
    # auth_key is good for testing, but replace it with `auth_key_sha256`!
    auth_key_sha256: r34t35g4gfdwsdewdqwdqwdqw8eec190aecdfc48b1cc37e728fb
    actions: ["cluster:monitor/main","indices:admin/types/exists","indices:data/read/*","indices:data/write/*","indices:admin/template/*","indices:admin/create"]
    indices: ["logstash*","security*","stores*","mobileapps*","jmeter*","auditbeat*"]
    verbosity: error # don't log successful requests

  # We trust Kibana's server-side process; full access granted via HTTP authentication.
  - name: "::KIBANA-SRV::"
    auth_key_sha256: 47e40r42rf34g35feqdwdsafb68c8b96
    verbosity: error

  - name: "::CURATOR::"
    auth_key_sha256: 08a79916e5t53wewewerwrewrer30f115a33
    verbosity: error

  - name: "::GRAFANA-SRV::"
    auth_key_sha256: 7103f686fd43r43r2er23r24r4r2r43a0fe5981f6
    actions: ["indices:data/read/*","indices:admin/get","cluster:monitor/main","indices:admin/mappings/get"]
    verbosity: error

  - name: "::monitoring::"
    auth_key_sha256: bbd543r34r2ed354t3rd2e314a8971695
    verbosity: error
    actions: ["cluster:monitor/*","indices:monitor/*"]

  - name: "::elastalert_on_own_index::"
    auth_key_sha256: d10916704r43r2d4r4rwewer653fe38862
    actions: ["cluster:monitor/main","indices:admin/types/exists","indices:data/read/*","indices:data/write/*","indices:admin/template/*","indices:admin/create"]
    verbosity: error
    indices: ["elastalert*"]

  - name: "::elastalert_on_logstash::"
    auth_key_sha256: d10914trwf4tr3f4r34r34tr34rd43r3fe38862
    verbosity: error
    actions: ["indices:data/read/*","cluster:monitor/*"]
    indices: ["logstash*","security*","auditbeat*"]

  - name: "::es2logs_on_logstash::"
    auth_key_sha256: fa71r35t434rd43rf34tr3f4r43tf34f34f34rf3684f176
    verbosity: error
    actions: ["indices:data/read/*","cluster:monitor/*"]
    indices: ["logstash*"]

  - name: "::SALTSTACK-SRV::"
    auth_key_sha256: 0a47dr43r43r34r34r35t465t3rr4r34r34r779b172b127c38a92
    verbosity: error
    indices: ["releases"]

  - name: "::SALTSTACK-SRV::MONITOR"
    auth_key_sha256: 0a47d4r34r34r3r3ty65t34r34r3t4r23e24re72b127c38a92
    verbosity: error
    actions: ["cluster:monitor/*"]

  - name: "field_caps stuff"
    verbosity: error
    type: allow
    actions: ["indices:data/read/field_caps"]
    hosts: ["OURDEVSUBNET/24","OURPRIVATESUBNET24"]

  - name: "internal access to kibana index"
    verbosity: error
    type: allow
    hosts: ["OURDEVSUBNET/24","OURPRIVATESUBNET24/24"]
    actions: ["indices:data/read/search","indices:data/read/get","indices:data/read/mget"]
    indices: [".kibana"]

  - name: "kibana_rw"
    kibana_access: rw
    indices: [".kibana", "app0*", "logstash*", "app1*", "app2*"]
    ldap_authentication:
      name: "ldap1"
      cache_ttl_in_sec: 60
    ldap_authorization:
      name: "ldap1"
      groups: ["kibana_rw"]
      cache_ttl_in_sec: 60

  - name: "elasticsearch_adm"
    ldap_authentication:
      name: "ldap1"
      cache_ttl_in_sec: 60
    ldap_authorization:
      name: "ldap1"
      groups: ["elasticsearch_adm"]
      cache_ttl_in_sec: 60

  - name: "kibana_ro"
    kibana_access: ro
    indices: [".kibana", "app0*", "logstash*", "app1*", "app2*"]
    ldap_authentication:
      name: "ldap1"
      cache_ttl_in_sec: 60
    ldap_authorization:
      name: "ldap1"
      groups: ["kibana_ro"]
      cache_ttl_in_sec: 60

  # This is the doomed section: whenever I add it, the ELK stack doesn't come up anymore.
  # - name: "kibana_admin"
  #   kibana_access: rw
  #   cluster: ["manage-ilm", "manage_index_templates"]
  #   ldap_authentication:
  #     name: "ldap1"
  #     cache_ttl_in_sec: 60
  #   ldap_authorization:
  #     name: "ldap1"
  #     groups: ["kibana_admin"]
  #     cache_ttl_in_sec: 60

  ldaps:
  - name: ldap1
    host: "ourldaphost.ourdevdomain"
    port: 636                         # default 389
    ssl_enabled: true                 # default true
    ssl_trust_all_certs: true         # default false
    search_user_base_DN: "ou=users,dc=dev,dc=ourdomain"
    user_id_attribute: "uid"          # default "uid"
    search_groups_base_DN: "ou=groups,dc=dev,dc=ourdomain"
    unique_member_attribute: "member" # default "uniqueMember"
    connection_pool_size: 10          # default 30
    connection_timeout_in_sec: 10     # default 1
    request_timeout_in_sec: 10        # default 1
    cache_ttl_in_sec: 60
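(As an aside on the auth_key_sha256 values above: per the ReadonlyREST docs the digest is computed over the whole "user:password" string. A small sketch for generating one; the credentials below are made-up placeholders:)

```python
# Generate a value for ReadonlyREST's auth_key_sha256 rule.
# The digest is the SHA-256 hex of the whole "user:password" string;
# the credentials here are made-up placeholders.
import hashlib

def ror_auth_key_sha256(user: str, password: str) -> str:
    return hashlib.sha256(f"{user}:{password}".encode("utf-8")).hexdigest()

digest = ror_auth_key_sha256("kibana", "s3cret")
print(digest)  # paste this 64-char hex string into the rule
```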
I don’t see any errors in logstash.log either.
Thank you in advance for your support.
Luigi