Is this also possible in haproxy?
location /.well-known/matrix/server {
return 200 '{"m.server": "chat.domain.com:443"}';
add_header Content-Type application/json;
}
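A rough HAProxy equivalent, assuming HAProxy 2.2+ (which introduced the http-request return directive), would be something like this untested sketch inside the HTTPS frontend (bind address and cert path are placeholders):

```haproxy
frontend https_in
    bind :443 ssl crt /etc/haproxy/cert.pem   # placeholder bind/cert
    # serve the Matrix delegation document directly from HAProxy
    http-request return status 200 content-type application/json \
        string '{"m.server": "chat.domain.com:443"}' \
        if { path /.well-known/matrix/server }
```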
1 post - 1 participant
Hi,
When I try to access the statistics page at http://135.250.207.105:9000/stats, the browser reports "ERR_CONNECTION_REFUSED".
Any input would be very helpful.
My config file looks like this:
global
log stdout format raw local0 debug
defaults
mode http
log global
option http-server-close
option httplog
option dontlognull
option redispatch
option forwardfor
retries 3
backlog 4096
timeout client 20s
timeout client-fin 20s
timeout connect 5s
timeout server 20s
timeout tunnel 15m
timeout http-keep-alive 30s
timeout http-request 10s
timeout queue 5s
timeout tarpit 60s
default-server inter 3s rise 2 fall 3
frontend web_gateway
bind *:449 ssl crt /usr/local/etc/haproxy/cert.crt
log 127.0.0.1 local0 debug
redirect scheme https if !{ ssl_fc }
http-request add-header X-Forwarded-Proto https
http-request set-query %[query,regsub(:449,g)]
http-request set-uri %[url,regsub(otntomcat,"$ipv4addr",g)]
acl otntomcat_path path_beg -i /oms1350
acl dojoroot_path path_beg -i /dojoroot
acl p1 path_beg -i /jointroot
acl p2 path_beg -i /jquery
acl p3 path_beg -i /lodash
acl p4 path_beg -i /openlayersroot
acl p5 path_beg -i /search
acl p6 path_beg -i /svgutilsroot
acl p7 path_beg -i /d3root
acl p8 path_beg -i /backbone
acl p9 path_beg -i /Cpb
use_backend otntomcat if otntomcat_path || dojoroot_path || p1 || p2 || p3 || p4 || p5 || p6 || p7 || p8 || p9
acl cas_path path_beg -i /cas
acl session_path path_beg -i /session-manager
use_backend nspos_serv if cas_path || session_path
acl fault_path path_beg -i /FaultManagement
use_backend nspos_serv1 if fault_path
acl int_path path_beg -i /internal
use_backend nspos_serv if int_path
default_backend nspos_serv
backend otntomcat
mode http
balance roundrobin
log stdout local0 debug
server serv01 otntomcat:8443 check ssl verify none
http-response replace-header Location (.)(\botntomcat\b)(.)$ \1"ipv4addr"\3
http-response replace-header Location (.*)(\b443\b)(.*) \1449\3
http-response replace-header Location (.)(\b8544\b)(.) \1449\3
http-response replace-header Location (.*)(\b8545\b)(.*) \1449\3
http-response replace-header Location (.)(\b8443\b)(.)$ \1449\3
http-response replace-header Location (.)(\b/oms1350/pages/otn\b)(.) \1:449/oms1350/pages/otn\3
http-request add-header X-Forwarded-Proto https if { ssl_fc }
backend nspos_serv
mode http
balance roundrobin
log stdout local0 debug
acl p1 path_beg -i /session-manager
acl p2 path_beg -i /cas
acl p3 path_beg -i /oms1350
acl c1 hdr(Location) urlp(service)
server srv01 nspos:443 check ssl verify none
http-response replace-header Location (.)(\botntomcat\b)(.)$ \1"ipv4addr"\3
http-response replace-header Location (.*)(\b\/cas\/login\b)(.*) \1:449\/cas\/login\3
http-response replace-header Location (.*)(\b\/oms1350\/login\b)(.*) \1:449\/oms1350\/login\3
http-response replace-header Location (.*)(\b\/session-manager\/login\b)(.*) \1:449\/session-manager\/login\3
http-response replace-header Location (.*)(\b:449:449\/session-manager\/login\b)(.*) \1:449\/session-manager\/login\3
http-response replace-header Location (.*)(\b8443\b)(.*) \1449\3
http-response replace-header Location (.)(\b/cas/logout\b)(.) \1:449/cas/logout\3
http-request add-header X-Forwarded-Proto https if { ssl_fc }
backend nspos_serv1
mode http
balance roundrobin
log 127.0.0.1 local0 debug
server srv01 nspos:8544 check ssl verify none
http-response replace-header Location (.)(\b8544\b)(.)$ \1449\3
http-request add-header X-Forwarded-Proto https if { ssl_fc }
listen stats
bind 0.0.0.0:9000 # listen on all IPs on port 9000
mode http
stats enable
stats uri /stats
stats refresh 10s
stats admin if TRUE
Thanks
2 posts - 1 participant
Hi,
we've recently found out that the timeout client and timeout server settings time out our gRPC streaming application's session, even though it sends a gRPC keepalive every 10 seconds while the timeouts on HAProxy are set to 30 seconds. Based on our understanding of gRPC, the keepalive uses an HTTP/2 PING, which, judging by the behavior, HAProxy simply consumes and does not forward to the client/server.
Is this observation correct? Is it possible to configure HAProxy to forward HTTP/2 PINGs to the client/server and have them extend the timeout timers? If not, do you have any other recommendation for configuring HAProxy so it doesn't time out our streaming session?
Edit: we are using HAProxy 2.1.7, and we've confirmed with tcpdump that the application sends the pings. The application also doesn't time out as long as it sends some gRPC data back and forth; it only times out when it is silent, sending just gRPC keepalives. Everything works correctly; the only problem is that HAProxy closes the connection after 30 seconds whenever the only data transferred during those 30 seconds was the gRPC keepalives (HTTP/2 PINGs).
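Assuming HAProxy indeed consumes the HTTP/2 PINGs without refreshing its timers, a common workaround is simply to raise the inactivity timeouts on the gRPC path. A sketch (frontend/backend names, addresses, and the cert path are placeholders):

```haproxy
frontend grpc_in
    bind :8443 ssl crt /etc/haproxy/cert.pem alpn h2
    timeout client 1h          # tolerate long-idle streams
    default_backend grpc_out

backend grpc_out
    timeout server 1h
    server app1 10.0.0.10:50051 proto h2 check
```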
Thank you
1 post - 1 participant
I am trying to configure HAProxy 2.2.1 with ADFS on the backend. It is working, but haproxy.log shows a warning telling me to use another syntax:
parsing [/etc/haproxy/haproxy.cfg:116]: 'option httpchk' : hiding headers or body at the end of the version string is deprecated. Please, consider to use 'http-check send' directive instead.
I was not able to get http-check working. Does someone have a syntax example? Below is my backend configuration:
backend bk_adfs
mode http
option forwardfor header X-Client
option http-keep-alive
log /dev/log local0 debug
timeout connect 30000
timeout server 30000
balance roundrobin
option httpchk GET /adfs/ls/IdpInitiatedSignon.aspx HTTP/1.0\r\n
http-check expect string Sign\ in
stick-table type ip size 200k expire 30m
stick on src
server adfs01 172.20.1.202:443 ssl verify none check-sni server1.mydomain.com sni ssl_fc_sni
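For what it's worth, the deprecated health-check line can usually be rewritten with the new directives along these lines (an untested sketch for HAProxy 2.2):

```haproxy
option httpchk
http-check send meth GET uri /adfs/ls/IdpInitiatedSignon.aspx ver HTTP/1.0
http-check expect string Sign\ in
```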
Thanks in advance for your help
1 post - 1 participant
I restart haproxy on configuration changes using a command like:
haproxy -L $PEERNAME -f /etc/haproxy/haproxy.cfg -p $PIDFILE -D -sf $(cat $PIDFILE)
and I’ve mostly gotten stick tables working, using information from the request to determine the sticky session.
However, during a reload, sessions seem to not be using the correct server for a short period of time. The documentation on peers states:
during a soft restart, the old process connects to the new one
using such a TCP connection to push all its entries before the new process
tries to connect to other peers… it typically takes a fraction of a second even for large tables
the table is not very large, but I wonder if during that fraction of a second, requests are getting routed to the wrong server.
I would hope that the new process doesn’t accept connections until after it has finished reading the state from the old process. But I can’t think of any other reason for this behavior, and the documentation isn’t super clear on this.
Has anyone else run into this? Or know how to ensure stickiness is preserved even during reloads?
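For reference, stick-table entries only survive a soft reload when the table is attached to a peers section whose local peer name matches the -L argument, so the old process can push its entries to the new one. A minimal sketch (peer name, port, and backend are placeholders):

```haproxy
peers mypeers
    peer lb1 127.0.0.1:10000      # name must match the value passed via -L

backend app
    stick-table type ip size 200k expire 30m peers mypeers
    stick on src
    server s1 10.0.0.1:80 check
```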
1 post - 1 participant
I am new to HAProxy and using a docker image from
Now, in the browser, I want to enter a URL like https://123-10.1.2.3:8443/index.html
so that the request first lands on the HAProxy IP on port 6443,
and then I want to route to my servers based on the Host header received:
say the Host is 123-10.1.2.3:8443: route to backend server 11.1.2.3:8443
say the Host is 456-10.4.5.6:8443: route to backend server 11.4.5.6:8443
Note: all IPs are just examples.
global
maxconn 50000
log /dev/log local0
log /dev/log local1 notice
user haproxy
group haproxy
ssl-default-bind-ciphers ECDHE+aRSA+AES256+GCM+SHA384:ECDHE+aRSA+AES128+GCM+SHA256:ECDHE+aRSA+AES256+SHA384:ECDHE+aRSA+AES128+SHA256:ECDHE+aRSA+RC4+SHA:ECDHE+aRSA+AES256+SHA:ECDHE+aRSA+AES128+SHA:AES256+GCM+SHA384:AES128+GCM+SHA256:AES128+SHA256:AES256+SHA256:DHE+aRSA+AES128+SHA:RC4+SHA:HIGH:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
ssl-dh-param-file /usr/local/etc/haproxy/dhparam.pem
ssl-default-bind-options ssl-min-ver TLSv1.2
ssl-server-verify none
stats socket /run/haproxy/admin.sock user haproxy group haproxy mode 660 level admin
defaults
timeout connect 60s
timeout client 120s
timeout server 120s
timeout tunnel 1h
timeout http-keep-alive 5s
timeout http-request 15s
timeout queue 15s
timeout tarpit 15s
default-server inter 3s rise 2 fall 3
log global
mode http
option httplog
frontend stats
bind *:8404
stats enable
stats uri /stats
stats refresh 10s
stats admin if LOCALHOST
frontend ssl
bind *:6443 ssl crt /usr/local/etc/haproxy/cert.pem # This is a Let's Encrypt certificate which contains cert + key together in one file
mode http
use_backend backend_123 if { hdr(host) -i 123-10.1.2.3:8443 }
use_backend backend_456 if { hdr(host) -i 456-10.4.5.6:8443 }
backend backend_123
mode http
server srvr_123 11.1.2.3:8443 check
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
backend backend_456
mode http
server srvr_456 11.4.5.6:8443 check
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
1 post - 1 participant
Trying to understand why this doesn't work: only the first condition (path_beg) is evaluated, and the second condition checking the hostname isn't.
I would expect the following, but it seems that only the first condition is checked and the second is ignored. I've tried with "&&", "AND", and "", but it doesn't make any difference.
https://example.com/crowd goes to crowd-backend
https://example.com/jira goes to jira-backend
https://auth.example.com/crowd goes to rancher-backend but it actually goes to jira-backend
# Atlassian Backends
use_backend crowd-backend if { path_beg /crowd && hdr(host) example.com }
use_backend jira-backend if { path_beg /jira && hdr(host) example.com }
# Rancher Backends
use_backend rancher-backend if { hdr(host) example.com }
use_backend keycloak-backend if { hdr(host) auth.example.com }
use_backend rancher-backend if { hdr(host) chat.example.com }
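In HAProxy's condition syntax, "&&" is not recognized inside { }; consecutive anonymous ACLs are implicitly ANDed simply by writing them side by side. The intended rules would look like this sketch:

```haproxy
use_backend crowd-backend if { path_beg /crowd } { hdr(host) -i example.com }
use_backend jira-backend  if { path_beg /jira }  { hdr(host) -i example.com }
```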
2 posts - 1 participant
Backend stays in MAINT and the logs say: synapse~ synapse/ 0/-1/-1/-1/0 503 222 - - SC-- 3/1/0/0/0 0/0 "GET /_matrix/static/ HTTP/1.1"
Where can I see what nameservers are being used by HAProxy?
Should I add some timeouts for fetching new DNS SRV records?
haproxy.cfg has
resolvers dnssvrs1
parse-resolv-conf
hold valid 10s
backend synapse
mode http
balance roundrobin
option forwardfor
default-server check resolvers dnssvrs1
server-template synapse 3 _https._synapse.dev._tcp.marathon.mesos weight 100 resolve-opts allow-dup-ip
While on the command line:
# dig +short srv _https._synapse.dev._tcp.marathon.mesos
0 1 31212 synapse.dev-7qz9q-s4.marathon.mesos.
I can also ping and open the backend server with lynx.
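To make the nameservers explicit (rather than inheriting them via parse-resolv-conf) and to tune retry behaviour, the resolvers section can list them directly. A sketch, where the nameserver address is a placeholder:

```haproxy
resolvers dnssvrs1
    nameserver ns1 10.0.0.2:53    # placeholder; or keep parse-resolv-conf
    resolve_retries 3
    timeout resolve 1s
    timeout retry   1s
    hold valid 10s
    hold obsolete 30s
```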
1 post - 1 participant
Can I define a 'network list' in global/defaults to be used in an ACL? Something like this:
localnets 192.168.10.0/24, 127.0.0.1
And then use an ACL like:
acl network_allowed src localnets
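There is no global network-list directive, but an ACL can carry several values, or load them from a file (the file path here is a placeholder):

```haproxy
acl network_allowed src 192.168.10.0/24 127.0.0.1
# or keep the list in a file, one network per line:
acl network_allowed src -f /etc/haproxy/localnets.lst
```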
1 post - 1 participant
I would like the server of my backend to point to a link-local IPv6 address (something like [fe80::020a:14ff:febe:36f8]). That way I could reach a device on the local network without having to care about its IPv4 configuration. However, it doesn't seem to work (I get a 503 Service Unavailable). With other IPv6 addresses for the same device (in my case [fd00:dead:beef:0:20a:14ff:febe:36f8]) all is fine, so it's not a problem with IPv6 in general. I found a bug report for wget here. It says there that for link-local IPv6 addresses the interface has to be specified, and wget doesn't support that. Could this be the problem in HAProxy as well, or can I specify the interface somehow? Adding %eth0 sadly didn't do the job.
Thanks a lot for any input
1 post - 1 participant
Hello all!
HAProxy can balance traffic using an HTTP parameter:
balance url_param userid
If that parameter is new or absent, then HAProxy will use round robin.
Can I configure HAProxy to use leastconn instead of round robin with url_param balancing?
If there's a known user_id: send the request to the known server.
If there's a new user_id: send the request to the server with the least connections?
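One possible approach, sketched here rather than offered as a drop-in answer: balance new users with leastconn and pin known user_ids with a stick table keyed on the URL parameter (backend name, table sizing, and server addresses are placeholders):

```haproxy
backend app
    balance leastconn
    stick-table type string len 36 size 100k expire 30m
    stick on url_param(userid)   # known userid -> same server; new userid -> leastconn
    server s1 10.0.0.1:80 check
    server s2 10.0.0.2:80 check
```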
Thanks!
1 post - 1 participant
Hi
I have a working HAProxy in transparent mode. I use it as a front end for several HTTPS servers sharing the same IP address.
My kernel has net.ipv4.ip_nonlocal_bind=1.
/etc/iproute2/rt_tables contains:
100 haproxy
I am using
ip rule add fwmark 1 lookup haproxy
ip route add local default dev lo table haproxy
My firewall rules have
iptables -t mangle -A PREROUTING -m socket --transparent -j MARK --set-mark 1
This works fine. But iptables is deprecated and will vanish at some point, so I'm trying to replace this with the new nftables system.
I tried this nft rule:
table inet haproxy {
chain prerouting {
type filter hook prerouting priority -150; policy accept;
socket transparent 1 mark set 0x00000001
}
}
It does work, but all traffic is routed to the HAProxy socket, including outbound masqueraded connections. I mean, when a box on the LAN side connects to a foreign HTTPS server, the connection is grabbed by HAProxy, which is not what I want.
Does anyone know the proper equivalent to
iptables -t mangle -A PREROUTING -m socket --transparent -j MARK --set-mark 1
using nft?
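One way to avoid catching LAN-originated traffic is to restrict the socket match, for example by inbound interface and destination port (both the interface name and the port here are assumptions; adjust to your setup):

```nft
table inet haproxy {
    chain prerouting {
        type filter hook prerouting priority -150; policy accept;
        # only mark HTTPS traffic arriving on the WAN interface, so
        # outbound masqueraded connections from the LAN are left alone
        iifname "eth0" tcp dport 443 socket transparent 1 meta mark set 0x1
    }
}
```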
1 post - 1 participant
The http-check send meth OPTIONS directive is not working correctly. I get an L7RSP with the response "TCPCHK got an empty response at step 2". However, if I use GET instead of OPTIONS, I get L7OK. This problem only occurs in the latest version.
Of course, if I use curl I get the correct response, so it's not an issue on the backend server side.
My HAProxy 2.1.8 config:
listen tcp_async_pg_cluster
bind 0.0.0.0:5432
mode tcp
default-server check check-ssl verify none port 8008 inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
http-check expect status 200
maxconn 100
option httpchk OPTIONS /async
option tcplog
option dontlog-normal
retries 2
server pg1 pg1-cluster.example.com:5432
server pg2 pg2-cluster.example.com:5432
server pg3 pg3-cluster.example.com:5432
and the config for HAProxy 2.2.2:
listen tcp_async_pg_cluster
bind 0.0.0.0:5432
mode tcp
default-server check check-ssl verify none port 8008 inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
maxconn 100
option httpchk
option tcplog
option dontlog-normal
retries 2
http-check send meth OPTIONS uri /async
http-check expect status 200
server pg1 pg1-cluster.example.com:5432
server pg2 pg2-cluster.example.com:5432
server pg3 pg3-cluster.example.com:5432
1 post - 1 participant
Good morning, I have a problem here.
The IP recorded in the Apache web server's access.log, when traffic passes through HAProxy, is HAProxy's private IP.
How do I make the IP recorded in access.log (Apache web server) be the client's IP?
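Assuming option forwardfor is enabled on the HAProxy side (so the client IP travels in the X-Forwarded-For header), Apache then has to be told to log that header instead of the peer address. A sketch (the log path and format nickname are placeholders):

```apache
# HAProxy side (haproxy.cfg):
#   option forwardfor
# Apache side: log X-Forwarded-For instead of the connecting peer's IP
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" proxied
CustomLog /var/log/httpd/access.log proxied
```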
Thank you
3 posts - 2 participants
Is there an option to generate a random string of a specific length?
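Not a random string of arbitrary length, as far as I know, but since 2.1 the uuid() sample fetch yields a 36-character random identifier; for other lengths a Lua sample fetch would be needed. A sketch (the header name is a placeholder):

```haproxy
# tag each request with a random UUID (HAProxy 2.1+)
http-request set-header X-Request-ID %[uuid()]
```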
1 post - 1 participant
I am using HAProxy 2.2 and have tried several redirects, including the one below with http-request as mentioned here[1], as well as splitting this into 2 different frontends.
Currently I have it configured as:
frontend https
bind 0.0.0.0:80
redirect scheme https code 301 if !{ ssl_fc }
bind 0.0.0.0:"$PORT2"
Logs are showing this
https https/<NOSRV> -1/-1/1 0 SC 4/2/0/0/0 0/0
https https/<NOSRV> -1/-1/0 0 SC 4/2/0/0/0 0/0
https https/<NOSRV> -1/-1/0 0 SC 4/2/0/0/0 0/0
https https/<NOSRV> -1/-1/0 0 SC 13/11/0/0/0 0/0
https https/<NOSRV> -1/-1/0 0 SC 9/7/0/0/0 0/0
https https/<NOSRV> -1/-1/0 0 SC 4/2/0/0/0 0/0
https https/<NOSRV> -1/-1/0 0 SC 4/2/0/0/0 0/0
If I just type test.example.com in the browser, I do not end up at https://test.example.com.
If I type https://test.example.com in the browser, I get the correct page (actually redirected to https://test.example.com/path/path/).
[1]
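For comparison, a common layout (an untested sketch; the frontend names, cert path, and backend are placeholders) keeps the plain-HTTP bind in its own frontend that does nothing but redirect, so no backend is ever needed there:

```haproxy
frontend http_in
    bind 0.0.0.0:80
    http-request redirect scheme https code 301

frontend https_in
    bind 0.0.0.0:443 ssl crt /path/to/cert.pem
    default_backend app
```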
1 post - 1 participant
I have a setup of HAProxy 2.2 with connection reuse.
From a single external IP I'm sending many non-keepalive requests to the proxy, but I'm getting reuse hits only when http-reuse is set to always. If I send keep-alive connections, it also works with the http-reuse safe option.
So I'm wondering: what are the conditions for a connection to be considered reusable in HAProxy?
The documentation states:
By default, a connection established between haproxy and the backend server which is considered safe for reuse is moved back to the server’s idle connections pool so that any other request can make use of it. This is the “safe” strategy below.
1 post - 1 participant
[root@haproxy_restored haproxy]# haproxy -c -f haproxy.cfg
Configuration file is valid
[root@haproxy_restored haproxy]#
[root@haproxy_restored haproxy]# systemctl restart haproxy.service
[root@haproxy_restored haproxy]# systemctl status haproxy.service
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2020-08-14 15:04:27 +0530; 4s ago
Process: 4310 ExecStart=/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid $OPTIONS (code=exited, status=1/FAILURE)
Main PID: 4310 (code=exited, status=1/FAILURE)
Aug 14 15:04:27 haproxy_restored haproxy-systemd-wrapper[4310]: [ALERT] 226/150427 (4311) : Starting proxy xcurrent_coinone1_l2: cannot bind socket…4:5000]
Aug 14 15:04:27 haproxy_restored haproxy-systemd-wrapper[4310]: [ALERT] 226/150427 (4311) : Starting proxy xcurrent_coinone2_l2: cannot bind socket…4:3999]
Aug 14 15:04:27 haproxy_restored haproxy-systemd-wrapper[4310]: [ALERT] 226/150427 (4311) : Starting proxy xcurrent_coinone3_l3: cannot bind socket…4:3002]
Aug 14 15:04:27 haproxy_restored haproxy-systemd-wrapper[4310]: [ALERT] 226/150427 (4311) : Starting proxy xcurrent_coinone4_l4: cannot bind socket…4:3009]
Aug 14 15:04:27 haproxy_restored haproxy-systemd-wrapper[4310]: [ALERT] 226/150427 (4311) : Starting proxy xcurrent_sabb1_l1: cannot bind socket [1…44:443]
Aug 14 15:04:27 haproxy_restored haproxy-systemd-wrapper[4310]: [ALERT] 226/150427 (4311) : Starting proxy kapruka: cannot bind socket [192.168.5.44:8080]
Aug 14 15:04:27 haproxy_restored haproxy-systemd-wrapper[4310]: haproxy-systemd-wrapper: exit, haproxy RC=1
Aug 14 15:04:27 haproxy_restored systemd[1]: haproxy.service: main process exited, code=exited, status=1/FAILURE
Aug 14 15:04:27 haproxy_restored systemd[1]: Unit haproxy.service entered failed state.
Aug 14 15:04:27 haproxy_restored systemd[1]: haproxy.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
[root@haproxy_restored haproxy]#
1 post - 1 participant
Hi everyone, we are seeing an issue wherein the number of active connections on the backend servers spikes from time to time (> 50K). This apparently happens on only one of the backend servers (out of the 9) at a time. The server with the spike keeps switching, and there is no consistency; all other servers show < 1K connections during this time. I do not have access to the logs or all the configs as of now (I will get these from the Ops team), but does anyone have any idea why this could be happening? Any help will be greatly appreciated.
The backend config looks like the following (I’ve obfuscated some parts):
backend CONFIG1
balance roundrobin
option http-keep-alive
timeout http-keep-alive 10m
option httpchk GET /xxxxx
http-check expect rstatus 200
http-reuse always
bind-process 11 12
server SERVER01 10.xx.y.zz:80 check weight 10
server SERVER02 10.xx.y.zz:80 check weight 10
server SERVER03 10.xx.y.zz:80 check weight 10
server SERVER04 10.xx.y.zz:80 check weight 10
server SERVER05 10.xx.y.zz:80 check weight 10
server SERVER06 10.xx.y.zz:80 check weight 10
server SERVER07 10.xx.y.zz:80 check weight 10
server SERVER08 10.xx.y.zz:80 check weight 10
1 post - 1 participant
I was testing a bit with scaling a backend, increasing instances and deleting them, and now I have been stuck in MAINT resolution for 20 minutes.
I am getting this in the logs
haproxy[2374513]: radosgw/radosgw21 changed its FQDN from (null) to rgw2.prod-meeoa-s3.marathon.mesos by 'SRV record'
haproxy[2374513]: radosgw/radosgw21 changed its FQDN from (null) to rgw2.prod-azzdy-s0.marathon.mesos by 'SRV record'
This is what I have in the config
server-template radosgw1 1 _https._rgw1.prod._tcp.marathon.mesos ssl weight 100 resolve-opts
server-template radosgw2 1 _https._rgw2.prod._tcp.marathon.mesos ssl weight 100 resolve-opts
It looks like it does not want to pick up the new IP. Is this because the SRV FQDN has changed?
Should I add options to the server-template backend, e.g. to allow this FQDN to change in the SRV record? Or is there a different SRV lookup I should use?
I can access both backends with lynx.
4 posts - 2 participants