Handling Inbound Traffic from K8S

Hi,
Most of our traffic comes from pods running in k8s clusters and traverses an HAProxy load balancer en route to Elasticsearch, so the XFF header shows the IP address of the inbound k8s node.

How do you recommend we tag the inbound pod traffic so we can easily identify it at the backend? Fluentd has the ability to set custom headers, so we can do whatever works best for RoR.

Thx
D

If I read this correctly, you would like an x_forwarded_for rule, but with a customizable header name, so you can inject the IP via a custom header from Fluentd?

Yes, that’s correct. But it doesn’t have to be an IP; that’s just one option. We could also use an identifying label from the ingestion pipeline that is unique to each pod. Using labels would be less expensive on the transmission side…

I realise that x_forwarded_for or similar would need IPs for filtering purposes. Human-readable labels/strings would be useful at the Kibana end, though.

Oh then it’s easy: have you seen the headers rule?
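Something along these lines, as a sketch: the x-pod-name header, its values and the index patterns are placeholders for whatever Fluentd injects in your setup, and I believe the value part of the headers rule accepts wildcards, but double-check the docs for your version:

readonlyrest:
  access_control_rules:

    # Match requests carrying the custom header injected by Fluentd.
    # "x-pod-name" and "fluentd-*" are placeholder name:value pairs.
    - name: "k8s ingestion pods (by custom header)"
      headers: ["x-pod-name:fluentd-*"]
      indices: ["logs-*"]

    # Alternative: filter on the client IP taken from X-Forwarded-For.
    - name: "k8s nodes (by XFF)"
      x_forwarded_for: ["10.0.0.0/8"]
      indices: ["logs-*"]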

If I send an XFF header, how will RoR treat it? By the time the request arrives, the XFF header will contain two IP addresses.

Also, if I send a custom header, I presume its value will not be indexed. That won’t be ideal from the audit perspective.

By “will not be indexed”, do you mean that you think the custom header won’t appear in the ROR audit logs?

Using the default audit log serializer, you will still see the header names in the JSON document:

{
  ...
  "headers": [
    "Accept",
    "Authorization",
    "content-length",
    "Host",
    "User-Agent",
    "x-your-custom-header"
  ],
  ...
}

If you want a first-class JSON field with the value, you can implement a custom audit log serializer, if you haven’t already done so.
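For reference, wiring a custom serializer into readonlyrest.yml looks roughly like this; the class name is a placeholder for your own implementation, and the exact setting names vary between ROR versions, so check the docs for the release you run:

readonlyrest:
  # Enable the audit log and point it at the custom serializer class
  # (compiled and made available to the ROR plugin).
  # "com.example.audit.PodHeaderAuditLogSerializer" is a placeholder.
  audit_collector: true
  audit_serializer: "com.example.audit.PodHeaderAuditLogSerializer"

  access_control_rules:
    # ... your existing ACL blocks ...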


Hey @cresta, did you end up using a custom serialiser?

Not yet. I haven’t set up a project to do it.