Errors Starting RoR on ES 6.8.x

Hi,
I’m getting errors when starting RoR:

[2019-07-10T14:02:15,516][WARN ][r.suppressed             ] [elk-lab-zone0-es-master-001] path: /, params: {}
tech.beshu.ror.es.RorNotReadyResponse: ReadonlyREST is not ready
        at tech.beshu.ror.es.IndexLevelActionFilter.lambda$apply$1(IndexLevelActionFilter.java:139) [readonlyrest-1.18.2_es6.8.0.jar:?]
        at java.security.AccessController.doPrivileged(Native Method) [?:?]
        at tech.beshu.ror.es.IndexLevelActionFilter.apply(IndexLevelActionFilter.java:133) [readonlyrest-1.18.2_es6.8.0.jar:?]
        at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:165) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:139) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:81) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:87) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:76) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:403) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.rest.action.RestMainAction.lambda$prepareRequest$0(RestMainAction.java:54) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:115) [elasticsearch-6.8.0.jar:6.8.0]
        at tech.beshu.ror.es.ReadonlyRestPlugin.lambda$null$4(ReadonlyRestPlugin.java:217) [readonlyrest-1.18.2_es6.8.0.jar:?]
        at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:240) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:336) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:174) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.http.netty4.Netty4HttpServerTransport.dispatchRequest(Netty4HttpServerTransport.java:551) [transport-netty4-client-6.8.0.jar:6.8.0]

Elasticsearch version is 6.8.0 and I’m trying to start the plugin on a master node with a minimal config:

readonlyrest:
    ssl:
      enable: false

    access_control_rules:
    - name: "First Rule"
      type: allow

I have xpack security disabled in elasticsearch.yml:

xpack.security.enabled: false

If I curl to the server I get:

{
  "error": {
    "root_cause": [
      {
        "reason": "Waiting for ReadonlyREST start"
      }
    ],
    "reason": "Waiting for ReadonlyREST start"
  },
  "status": 503
}

Could someone explain to me what else I need to do in order for the plugin to start? Can the plugin be made to write some additional log output for troubleshooting/testing purposes?

Thx
D

Doesn’t this resolve itself after some time when the cluster goes green?

Cluster is green but still the same behaviour.

Are your settings saved via our Kibana YAML editor app? Or are you just using readonlyrest.yml?

Also, try setting the Elasticsearch root logger to debug, and see if we get more information about what it’s trying to do.

BTW, are you an Enterprise or PRO subscriber?

Is this the same as Cannot set property 'es.set.netty.runtime.available.processors' · Issue #464 · sscarduzio/elasticsearch-readonlyrest-plugin · GitHub ?

Do you also have this?

java.security.AccessControlException: access denied ("java.util.PropertyPermission" "es.set.netty.runtime.available.processors" "write")

I’m just using the readonlyrest.yml file. At the moment I’m just exploring it with a view to securing our interfaces. I’m not using the Kibana plugin.

I’m seeing the following:

[CLUSTERWIDE SETTINGS] Checking index config failed: Cannot find index with ROR configuration

OK, that message is expected: it should keep on checking the index, but it should not prevent ROR from starting. Did you check if you have that other exception?

The exception you referred to isn’t listed. There’s an elasticsearch exception though:

java.lang.UnsupportedOperationException: sun.misc.Unsafe unavailable
        at io.netty.util.internal.CleanerJava9.<clinit>(CleanerJava9.java:68) [netty-common-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:172) [netty-common-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.util.ConstantPool.<init>(ConstantPool.java:32) [netty-common-4.1.32.Final.jar:4.1.32.Final]

I’ve seen this before when logging at DEBUG and assume it’s unimportant?

Yeah that should not be a problem

Ok. Well, RoR is still not starting. Can RoR itself be configured to log in a more verbose fashion?

ROR goes into debug mode when ES is also configured for debug logging, i.e.

rootLogger.level = debug

in $ES_HOME/config/log4j2.properties

Can you please post your YAML settings? Do you have external authentication connectors?

Config is as simple as I can make it right now:

readonlyrest:
    #ssl:
    #  enable: false
    access_control_rules:
    - name: "Test Rule"
      type: allow

I want to start the service with the plugin enabled and doing nothing. And then enable functionality as I move forward…

Have you added xpack.security.enabled: false to elasticsearch.yml?

Try this please:

https://readonlyrest-data.s3-eu-west-1.amazonaws.com/tmp/readonlyrest-1.18.3-pre1_es6.8.0.zip?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJEKIPNTOTIVGQ4EQ/20190711/eu-west-1/s3/aws4_request&X-Amz-Date=20190711T083412Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=a93415840102d4ec44287419730d7228caad8993231a6fe60d059b46e5a66d1d

Yes, xpack security is disabled. The xpack settings look like this:

xpack.monitoring.collection.enabled: true
xpack.monitoring.elasticsearch.collection.enabled: true
xpack.monitoring.collection.cluster.stats.timeout: 20s
xpack.monitoring.collection.node.stats.timeout: 20s
xpack.monitoring.collection.index.stats.timeout: 20s
xpack.security.enabled: false
xpack.watcher.enabled: false
xpack.ml.enabled: false

Tried that new version and it’s still not happy.

Exactly the same thing? The same stack trace forever, and the cluster is green?

Yes, that’s correct. This is what I’m seeing:

[2019-07-11T09:29:56,086][DEBUG][t.b.r.b.Ror$             ] [lab-master-001] [CLUSTERWIDE SETTINGS] Loading ReadonlyREST config from index ...
[2019-07-11T09:29:56,087][DEBUG][t.b.r.b.RorInstance      ] [lab-master-001] [CLUSTERWIDE SETTINGS] Checking index config failed: Cannot find index with ROR configuration
[2019-07-11T09:29:56,087][DEBUG][t.b.r.b.RorInstance      ] [lab-master-001] [CLUSTERWIDE SETTINGS] Scheduling next in-index config check within 5 seconds
[2019-07-11T09:30:01,088][DEBUG][t.b.r.b.Ror$             ] [lab-master-001] [CLUSTERWIDE SETTINGS] Loading ReadonlyREST config from index ...
[2019-07-11T09:30:01,088][DEBUG][t.b.r.b.RorInstance      ] [lab-master-001] [CLUSTERWIDE SETTINGS] Checking index config failed: Cannot find index with ROR configuration
[2019-07-11T09:30:01,088][DEBUG][t.b.r.b.RorInstance      ] [lab-master-001] [CLUSTERWIDE SETTINGS] Scheduling next in-index config check within 5 seconds
[2019-07-11T09:30:02,006][WARN ][r.suppressed             ] [lab-master-001] path: /, params: {}
tech.beshu.ror.es.RorNotReadyResponse: ReadonlyREST is not ready
        at tech.beshu.ror.es.IndexLevelActionFilter.lambda$apply$1(IndexLevelActionFilter.java:139) [readonlyrest-1.18.3-pre1_es6.8.0.jar:?]
        at java.security.AccessController.doPrivileged(Native Method) [?:?]
        at tech.beshu.ror.es.IndexLevelActionFilter.apply(IndexLevelActionFilter.java:133) [readonlyrest-1.18.3-pre1_es6.8.0.jar:?]
        at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:165) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:139) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:81) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:87) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:76) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:403) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.rest.action.RestMainAction.lambda$prepareRequest$0(RestMainAction.java:54) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:115) [elasticsearch-6.8.0.jar:6.8.0]
        at tech.beshu.ror.es.ReadonlyRestPlugin.lambda$getRestHandlerWrapper$4(ReadonlyRestPlugin.java:217) [readonlyrest-1.18.3-pre1_es6.8.0.jar:?]
        at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:240) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:336) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:174) [elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.http.netty4.Netty4HttpServerTransport.dispatchRequest(Netty4HttpServerTransport.java:551) [transport-netty4-client-6.8.0.jar:6.8.0]
        at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:137) [transport-netty4-client-6.8.0.jar:6.8.0]
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]

OK, I can finally reproduce this. I can only reproduce it with your super small configuration; it behaves normally with a bigger one. Investigating…

edit: OK, as a workaround, add at least one more rule to your super basic ACL block, e.g. indices: ["*"]. Will have a fix soon and provide a new build.
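For clarity, applying that workaround to the minimal config from earlier in the thread would look roughly like this (the block name is just an example; the point is that the ACL block carries at least one matching rule in addition to type):

```yaml
readonlyrest:
    access_control_rules:
    - name: "Test Rule"
      type: allow
      # Workaround: add at least one matching rule alongside "type"
      indices: ["*"]
```

This keeps the block effectively permissive (it matches all indices), so it still lets everything through while you build up the ACL incrementally.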


Thx, that’s got me going. Will use the new build when that’s available.