@majid021 wrote:
Hi Everyone,
I have a simple load balancing scenario.
In my setup, I have an HA pair of HAProxy nodes (haproxy1, haproxy2) with a virtual IP (10.0.0.1), and keepalived running on both haproxy1 and haproxy2.
I am sending syslog from an ESXi host to the virtual IP (10.0.0.1). The logs reach haproxy1 and haproxy2 (checked via tcpdump), but from haproxy1/haproxy2 the traffic is not getting forwarded to the backend servers (logstash1, logstash2). All the relevant UDP and TCP ports are open.
I have seen in a few posts and blogs about HAProxy that UDP is not supported; to load balance UDP traffic, LVS has to be configured at the kernel level (driven by keepalived) instead.
I tried multiple ways of configuring LVS in keepalived to get the traffic forwarded to the backend servers, but could not make it work (roughly along the lines of the virtual_server sketch after the keepalived config below).
If anybody has run into a similar issue, could you please shed some light on it or share some configs?
The following are the HAProxy config and the keepalived config.
HAProxy config:
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s
user haproxy
group haproxy
daemon

# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private

# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL). This list is from:
# https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
# An alternative list with additional directives can be obtained from
# https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
ssl-default-bind-options no-sslv3

# Fine-tuned configuration:
nbproc 4 # equal to number of cores
# default maxconn is 2000 (approx)
maxconn 100000
cpu-map 1 0
cpu-map 2 1
cpu-map 3 2
cpu-map 4 3
defaults
log global
mode http
#option httplog
mode tcp
option tcplog
#option syslog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
timeout http-keep-alive 10s
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
frontend vip
bind 10.10.10.11:5601
stats enable
default_backend kibana_ui

backend kibana_ui
balance roundrobin
server prod-efk1.abc.com 10.10.10.1:5601 check
server prod-efk2.abc.com 10.10.10.2:5601 check backup

backend elasticsearch
balance roundrobin
server prod-es01.abc.com 10.10.10.3:9200 check
server prod-es02.abc.com 10.10.10.4:9200 check backup
server prod-es03.abc.com 10.10.10.5:9200 check backup

backend logstash
balance roundrobin
# mode tcp
server prod-fag01.abc.com 10.10.10.6:514 check
server prod-fag02.abc.com 10.10.10.7:514 check backup

listen stats
bind 10.10.10.152:1936
mode http
log global
stats enable
stats hide-version
stats realm LoadBalanced\ ELK\ Components
stats uri /haproxy?stats
stats auth haproxy:haproxy
maxconn 10
clitimeout 100s
srvtimeout 100s
contimeout 100s
timeout queue 100s
stats enable
stats hide-version
stats refresh 30s
stats show-node
stats uri /
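One thing I am unsure about: the backend logstash section above has no frontend or listen bound to port 514, so I do not see how even TCP syslog would reach it. A minimal sketch of what I think is missing is below; the bind address is my keepalived VIP, and the health checks assume the logstash nodes also accept TCP on 514 (this only covers syslog over TCP, not UDP):

listen syslog_tcp
    bind 10.10.10.152:514
    mode tcp
    option tcplog
    balance roundrobin
    server prod-fag01.abc.com 10.10.10.6:514 check
    server prod-fag02.abc.com 10.10.10.7:514 check backup

For UDP syslog, as mentioned above, my understanding is that HAProxy (at least the version I am running) cannot listen on a UDP port at all, which is why I was looking at LVS/keepalived.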
Keepalived config:
vrrp_script chk_haproxy {
#script "killall -0 haproxy"
#script "/usr/sbin/service haproxy status"
script "/usr/bin/pgrep haproxy"
interval 2
#weight 2
}

global_defs {
#router_id prod-haproxy01
#lvs_sync_daemon_interface eth0 VI_1
}

vrrp_instance VI_1 {
interface eth0
state MASTER
virtual_router_id 51
priority 110
unicast_src_ip 10.10.10.8
unicast_peer {
10.10.10.9
}
virtual_ipaddress {
10.10.10.152 # virtual ip address
}
track_script {
chk_haproxy
}
}
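For the UDP forwarding itself, what I was attempting was roughly a keepalived virtual_server (IPVS/LVS) block like the sketch below. This is only the direction I tried, not a working config: the VIP is the 10.10.10.152 address from the vrrp_instance above, the real servers are assumed to be the same 10.10.10.6/10.10.10.7 hosts as in the HAProxy logstash backend, and the TCP_CHECK assumes the logstash nodes also listen on 514/tcp:

virtual_server 10.10.10.152 514 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT    # or DR, depending on the network layout
    protocol UDP

    real_server 10.10.10.6 514 {
        weight 1
        TCP_CHECK {
            connect_port 514
            connect_timeout 3
        }
    }

    real_server 10.10.10.7 514 {
        weight 1
        TCP_CHECK {
            connect_port 514
            connect_timeout 3
        }
    }
}

From what I have read, NAT mode requires the logstash nodes to route their return traffic back through the active director, while DR mode requires the VIP on a loopback interface on the real servers with ARP for it suppressed; I am not sure which of the two fits my network, which may be part of why my attempts did not work.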