Hello,
I found this discussion on an older HAProxy forum thread, and I have a very similar “issue” to solve.
We have a fairly complex setup, so I won’t dive into the details. Simply put: we are running HAProxy in multiple containers/pods in Kubernetes. There are multiple clients, and one “type” of client is another HAProxy in a different Kubernetes cluster. Between these two HAProxy deployments there are also various L4 network components, but I will omit them for simplicity, as I don’t think they are relevant to this case.
Additional notes:
- There is higher latency between the HAProxy deployments, as they run in different geographical locations.
- Traffic is encrypted with HTTPS.
- The clients use keep-alive connections with the timeout set to 1 minute.
- We have a low number of active connections (10-30) but a high rate of reused connections (1000-3500) on the HAProxy that initiates the connections.
- When I mention HAProxy, it actually means multiple separate containers/pods.
The issue: I think that when an HAProxy pod (the receiving side) is terminating, keep-alive connections are left open (given the high reuse rate on the client side mentioned above) and are later killed forcefully, generating 5xx errors. I’m trying to find a way to terminate these long-lived connections gracefully, e.g. to tell the other side to close the connection once the current request is finished and open a new one for the next request. Is there anything like that?
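For reference, this is roughly the behaviour I’m hoping exists. If I understand the docs right, something like the sketch below in the `global` section of the receiving HAProxy, combined with a soft stop (SIGUSR1, e.g. from a Kubernetes `preStop` hook) instead of a hard kill, might be what I’m after - but I may be misreading what these keywords do, so please correct me:

```
global
    # during a soft stop, never take longer than 30s total
    # before forcibly closing whatever is left
    hard-stop-after 30s
    # spread the closing of idle keep-alive connections over 10s
    # instead of killing them all at once (HAProxy >= 2.5)
    close-spread-time 10s
```

My (possibly wrong) understanding is that during the soft-stop window HAProxy marks responses on keep-alive connections with `Connection: close`, so the client-side HAProxy would open fresh connections instead of reusing ones that are about to die. I’m not sure this fully covers the pod-termination case in Kubernetes, hence the question.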
Thanks