I’ve configured ReadOnlyREST on my cluster with ROR Pro on Kibana, and user/admin access to the cluster works fine. I now need to ensure that monitoring and ingest apps work. I’ve added access to the cluster for my subnet with the following block:
name: "Allow my subnet"
hosts: ["192.168.1.0/24"]
actions: allow
But this is not enough for Cerebro, sitting on 192.168.1.140, to access my cluster for monitoring and to make curl requests. What am I doing wrong here? I assume I’m going to have the same problem with my [homebrew] application that ingests data into my indices as well… but I want to beat the Cerebro problem first.
You can analyse the ES log lines that say "FORBIDDEN" when you use Cerebro: look at the "OA" (origin address) field to see if the requests are coming from a different IP than you expect.
Also, I strongly recommend avoiding the hosts rule in favour of setting up SSL and just passing some credentials, i.e. auth_key (and eventually auth_key_sha256).
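As a sketch, a credentials-based ACL block could replace the hosts rule like this; the "cerebro" user name and "secret" password here are placeholders, not values from this thread:

```yaml
# Hypothetical ROR ACL block using credentials instead of a hosts rule.
# "cerebro:secret" is a placeholder user:password pair.
- name: "Cerebro monitoring user"
  auth_key: "cerebro:secret"

# Or, to avoid a clear-text password in the settings file:
- name: "Cerebro monitoring user (hashed)"
  auth_key_sha256: "<sha256 of 'cerebro:secret'>"
```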
You can configure Cerebro to use credentials to connect to ES:
you should configure dedicated user authentication in Cerebro itself, then register your ES URL with credentials,
and those credentials should belong to an admin of your ES node/cluster.
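For reference, a minimal sketch of registering an ES host with credentials in Cerebro's application.conf might look like the following; the host URL, cluster name, and credentials are placeholder assumptions:

```hocon
# Hypothetical entry in Cerebro's application.conf (HOCON).
# Host URL and credentials are placeholders.
hosts = [
  {
    host = "https://my-es-node:9200"
    name = "my-cluster"
    auth = {
      username = "admin"
      password = "secret"
    }
  }
]
```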
That configuration didn’t work for me since I was using SSL: I was receiving "Promise already completed" errors. Instead, I commented that out and added this at the bottom of Cerebro’s application.conf:
play.ws.ssl {
  trustManager = {
    stores = [
      { type = "JKS", path = "C:\ProgramData\cerebro-0.8.1\conf\mykeystoreunlocked.jks", password = "secretpassforkey" }
    ]
  }
  loose = {
    acceptAnyCertificate = true // if your cert common name does not match your nginx server_name
  }
}
This will then allow Cerebro to authenticate against the Elasticsearch node with credentials that you can set up in ROR.
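If you go with auth_key_sha256 on the ROR side, the hash is (in the ROR versions I’ve seen; check your version’s docs) the SHA-256 of the whole literal "user:password" string. A quick way to compute it, with placeholder credentials:

```shell
# Compute the value for auth_key_sha256 from a placeholder user:password pair.
# printf avoids a trailing newline, which would change the hash.
printf 'cerebro:secret' | sha256sum | cut -d' ' -f1
```

The output is a 64-character lowercase hex digest that you paste into the auth_key_sha256 rule.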