
Stops checking health


Hi,

I am using HAProxy in Kubernetes to do active/passive balancing on a StatefulSet with 2 nodes. This is the config I am using:

global
  log stdout format raw local0

defaults
  default-server init-addr libc,none
  log global
  mode http
  timeout client 20s
  timeout server 5s
  timeout connect 4s

frontend port_8080
  bind *:8080
  mode http
  default_backend repository

backend repository
  option log-health-checks
  mode http
  option httpchk GET /alfresco/api/-default-/public/alfresco/versions/1/probes/-live-
  default-server inter 10s downinter 5s
  server repo0 "${active}:8080" check fall 2 rise 4
  server repo1 "${passive}:8080" check backup fall 5 rise 2

${active} and ${passive} are environment variables pointing to the active and the passive node.
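For context, the StatefulSet pods get stable per-pod DNS names through a headless service, so presumably the two variables hold those names; the service and namespace below are made up for illustration:

active=repo-0.repo-headless.default.svc.cluster.local
passive=repo-1.repo-headless.default.svc.cluster.local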

This works well so far: all traffic is routed to the active node and nothing to the passive node.

When I then kill the active node on purpose, HAProxy notices this very quickly and all traffic gets routed to the passive node. But it then stops checking the active node, so it never notices when it comes back. I don’t see any failed checks in the logs; it seems HAProxy stops checking altogether.
When I then (after the active node is back) kill the passive node, HAProxy reports that no backend servers are available anymore and stops working altogether.

How can I configure HAProxy to keep running health checks and fail back to the active node once it is reachable again?
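
One thing I am wondering about: the pods get new IPs when they are recreated, and with init-addr libc,none the names are only resolved once at startup, so maybe HAProxy ends up checking a stale address. Would runtime DNS resolution via a resolvers section be the right direction? A rough sketch of what I mean (the resolvers name and the hold value are placeholders, I have not tested this):

resolvers kube-dns
  parse-resolv-conf   # use the nameservers from the pod's /etc/resolv.conf
  hold valid 10s      # keep a valid answer for 10s before re-resolving

backend repository
  option log-health-checks
  mode http
  option httpchk GET /alfresco/api/-default-/public/alfresco/versions/1/probes/-live-
  default-server inter 10s downinter 5s
  # resolvers makes HAProxy re-resolve the names at runtime, not only at startup
  server repo0 "${active}:8080" check resolvers kube-dns fall 2 rise 4
  server repo1 "${passive}:8080" check backup resolvers kube-dns fall 5 rise 2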

Thanks!


