Channel: HAProxy community - Latest topics
Viewing all 4731 articles

Haproxy: make a TCP tunnel to an HTTP CONNECT proxy


@jfeddersen wrote:

Hi,

I am looking for a very “special” configuration.
The clients connect to the company’s main HAProxy. From there, our SEC team runs a centralized Squid instance between the DMZ and the company network; no direct connections are allowed.

HAProxy :443 -> SNI-based routing, without SSL termination -> Squid web proxy that can connect to the DMZ Apache.

The clients get the TLS connection to the HAProxy IP and the SNI routing works, but I am not able to figure out the best way to inject the HTTP CONNECT handshake before passing the encrypted tunnel on to Squid.

I would like to do the following:
HAProxy gets the TLS connection, including SNI information.
Extract host from SNI.
Issue an HTTP CONNECT to Squid with that SNI information.
Stream the encrypted data from input to the backend squid.

Is there a way to do that in HAProxy / Lua?
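
For reference, the SNI-routing half of this is plain configuration; below is a minimal sketch (addresses are illustrative). The CONNECT injection itself has no stock config keyword as far as I know, so that part would indeed fall to Lua.

```haproxy
# Minimal sketch of the SNI half (addresses illustrative).
frontend tls_in
    mode tcp
    bind *:443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    default_backend bk_squid

backend bk_squid
    mode tcp
    # No stock keyword injects an HTTP CONNECT preamble here; a Lua
    # action would have to open the connection to Squid, send
    # "CONNECT <sni>:443 HTTP/1.1", wait for the "200" reply, and only
    # then start relaying the buffered TLS bytes.
    server squid 192.0.2.10:3128
```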

Thanks for your help

Posts: 1

Participants: 1



'http-request set-header': failed to parse sample expression : unknown fetch method 'ID'


@bvoros wrote:

Hello Haproxy People,

I am looking at a config that was inherited and appears to have some problems.
Upon restarting haproxy (1.6) the following warnings can be seen in the log:

Startup alert:
‘http-request set-header’: failed to parse sample expression : unknown fetch method ‘ID’

Relevant line from the configuration:
http-request set-header X-Request-Id %[ID]

Could someone point me in the direction of what I should be looking at?
Thank you in advance,

PS:
Interestingly, this results in a warning on version 1.6 but the service restarts with a little bit of delay.
The same config prevents version 1.8 from starting.
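
If the intent of `%[ID]` was a per-request ID, a sketch that should work on 1.6 is the built-in unique-ID support (the format string below is the example from the unique-id-format documentation):

```haproxy
# In the frontend (or defaults) section:
unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
unique-id-header X-Request-Id
```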

Posts: 1

Participants: 1


Long delay when restarting haproxy service


@bvoros wrote:

Hello HAProxy people,

When restarting haproxy a considerable delay is observed and the following is printed in the log:
Jan 30 15:05:02 dev-api-proxy haproxy-systemd-wrapper[9576]: haproxy-systemd-wrapper: SIGINT -> 9595
Jan 30 15:05:02 dev-api-proxy haproxy-systemd-wrapper[9576]: haproxy-systemd-wrapper: exit, haproxy RC=0
Jan 30 15:05:38 dev-api-proxy haproxy-systemd-wrapper[9688]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

Where should I look for the root cause of this delay?
The server isn’t used for anything else and is not really doing anything, so it is not likely to be caused by a resource bottleneck.

Haproxy version 1.6 on Ubuntu 16.04.
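
Note that the 36-second gap in the log is between the wrapper exiting and systemd launching the new process, so the journal timing around the unit is worth a look too. If the delay turns out to be the old process draining connections, newer releases (not 1.6) can cap that; a sketch:

```haproxy
global
    # Force the old process to exit at most 5s after a stop/reload,
    # instead of waiting for all connections to finish.
    hard-stop-after 5s
```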

Thank you in advance,

Posts: 1

Participants: 1


Session count in stats


@Vlad wrote:

Hi all,

I’m a bit confused about the haproxy status page. The numbers do not match.

Let’s imagine, we have 1 Frontend and 3 Backends. Now, the stats reports 50 current sessions on Frontend - and at the same time only 10 total current sessions on all 3 Backends.

Where are the rest of the sessions?

Posts: 1

Participants: 1


Detect code 200 (http-check expect)


@guif wrote:

Hi!

I have the following configuration to detect a status 200 after a script executes:

listen ReadWrite
    bind *:5000
    option httpchk
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server psql01 psql01:5432 maxconn 100 check port 23267
    server psql02 psql02:5432 maxconn 100 check port 23267

My script contains these lines to output the 200 code:

/bin/echo -e "HTTP/1.1 200 OK\r\n"
/bin/echo -e "Content-Type: Content-Type: text/plain\r\n"
/bin/echo -e "\r\n"
/bin/echo "Primary"
/bin/echo -e "\r\n"

Is this the correct way to print the output?
Currently I see the Statistics Report all in red.

Any idea to debug this error?
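
For comparison, here is a hedged sketch of what a well-formed response from such a script could look like: it drops the duplicated "Content-Type:" prefix, sends the header block exactly once followed by a single blank line, and uses printf so each header line ends in exactly CRLF (echo -e "...\r\n" emits CRLF plus echo's own trailing newline, i.e. a stray blank line after every header).

```shell
#!/bin/sh
# Hypothetical check responder: status line, headers, blank line, body.
# "Primary" plus the trailing newline is 8 bytes, hence Content-Length: 8.
printf 'HTTP/1.1 200 OK\r\n'
printf 'Content-Type: text/plain\r\n'
printf 'Content-Length: 8\r\n'
printf 'Connection: close\r\n'
printf '\r\n'
printf 'Primary\n'
```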

Thanks!

Posts: 3

Participants: 2


Redirect to remove trailing slash, with query string


@lucid_thayne wrote:

What is the best way to write a rule to drop a trailing slash in a redirect, but keep the query string if any?

http-request redirect location %[path,regsub(/$,)] code 301 if {path_end /} 

will remove the trailing slash, but will drop any query string.

http-request redirect location %[path,regsub(/$,)]?%[query] code 301 if {path_end /}

on the other hand, will add a “?” to the end even if there isn’t a query string.

And I don’t think I can use a regsub on the full url because of https://github.com/haproxy/haproxy/issues/200 which prevents me from using a capture group.
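
One workaround is to split the rule on whether a query string is present at all; a sketch (untested):

```haproxy
# Redirect without "?" when there is no query string...
http-request redirect location %[path,regsub(/$,)] code 301 if { path_end / } !{ query -m found }
# ...and keep the query string when there is one.
http-request redirect location %[path,regsub(/$,)]?%[query] code 301 if { path_end / } { query -m found }
```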

Posts: 2

Participants: 2


HAProxy between tcp output and logstash


@NPC_1337 wrote:

Hi,

we have some custom applications that send their logs (so far) via unencrypted tcp (simple JSON).
We would like to collect those using a Logstash in front of an Elastic Cluster.
Since we have multiple Logstashes we’re also using HAProxy in front of them.
Now, we would like to encrypt the whole log traffic using TLS.
What works:

  1. sending unencrypted via HAProxy (though undesired)
  2. using HAProxy as a transparent Layer 4 (tcp) Proxy and configure TLS on the Logstashes
  3. using HAProxy as a Layer 4 (tcp) Proxy and configure TLS on HAProxy as well as on the Logstashes

The problem with solutions 2 and 3 is that, while the transport encryption works fine, we only see our HAProxy as the source of our logs in Logstash.
A solution for this (so we thought) was enabling the PROXY Protocol in HAProxy and in Logstash (which, according to documentation, only seems to understand PROXY v1).
So in our HAProxy Config, we enabled send-proxy in the according backend server line and enabled proxy_protocol => true in the according logstash tcp input config.

Logstash config:

input {
tcp {
    port => 1234
    codec => "json_lines"
    proxy_protocol => true
    ssl_enable => true
    ssl_verify => false
    ssl_key => "/etc/ssl/certs/host.key"
    ssl_cert => "/etc/ssl/certs/host.crt"
}
...

relevant HAProxy config:

frontend logstash
    mode tcp
    option tcplog
    bind *:1234
    use_backend logstash

backend logstash
    balance roundrobin
    mode tcp
    server logstash1 logstash1.example.com send-proxy
    server logstash2 logstash2.example.com send-proxy

However, in both configurations (transparent and SSL bridge), Logstash gives the following error:

[2020-01-31T14:04:21,032][ERROR][logstash.inputs.tcp      ] Error in Netty pipeline: io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: error:100000f7:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER

It seems to me that HAProxy encapsulates the TLS message in a Proxy Protocol header and Logstash expects it the other way around.
Is there anything we could do on HAProxy side to configure the encapsulation order?
Maybe there are other ways to make the original log source known to logstash via HAProxy while retaining the benefits of a redundant loadbalancer.
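
One layout worth trying is terminating TLS on HAProxy and re-encrypting toward Logstash, so that on each fresh backend connection the PROXY line goes out in clear before the TLS handshake, which matches "PROXY first, then TLS" on the input side. A sketch (cert paths and hostnames illustrative; note the server lines carry explicit ports here):

```haproxy
frontend logstash
    mode tcp
    bind *:1234 ssl crt /etc/haproxy/certs/host.pem
    default_backend logstash

backend logstash
    mode tcp
    balance roundrobin
    # send-proxy emits the PROXY header before the TLS handshake
    # toward Logstash.
    server logstash1 logstash1.example.com:1234 ssl verify none send-proxy
    server logstash2 logstash2.example.com:1234 ssl verify none send-proxy
```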

Thanks for your ideas and help!

Posts: 1

Participants: 1


New To HAproxy Setting Up In a Test Lab


@paul1 wrote:

Hi,

I have a VirtualBox lab with two servers with IIS installed and a test website I found called guitarbugs, which is internally accessible and which is set up in IIS on both servers with working bindings for HTTP.

I have also added the site in DNS (on the DC), with A records for both servers pointing to the guitarbugs site.

I can access the site fine on a Windows 10 machine on the same LAN.

I have set up HAProxy for HTTP for the time being to see how it works, but for some reason I get no information on sessions on the stats page when I access the site from the Win 10 machine.

It’s almost as if it is not seeing the site.

If i turn off one of the servers the stats page registers that the server is down, but it is almost as if it doesn’t register actual sessions.

I’m not sure if there is some sort of special setup I have to do in IIS? I just copied the site’s folder to each server.

This is my config as well. I really do appreciate your help and assistance, as I’m stumped. Thanks again!

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    # An alternative list with additional directives can be obtained from
    #  https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend localnodes
    bind *:80
    mode tcp
    default_backend web_servers

backend web_servers
    mode tcp
    balance roundrobin
    server dc-1 192.168.100.2:80 check
    server sql-1 192.168.100.3:80 check

listen stats
    bind *:32700
    stats enable
    stats uri /
    stats auth paul:123456
    stats admin if TRUE
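
One thing that stands out in this config: the frontend and backend say `mode tcp`, which overrides the `mode http` from defaults. In TCP mode HAProxy counts one session per TCP connection and does no HTTP-level accounting, so a single keep-alive browser connection shows up as just one session. A sketch of the HTTP variant for comparison:

```haproxy
frontend localnodes
    bind *:80
    mode http
    default_backend web_servers

backend web_servers
    mode http
    balance roundrobin
    server dc-1 192.168.100.2:80 check
    server sql-1 192.168.100.3:80 check
```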

Posts: 1

Participants: 1



Haproxy 1.9.1 crashing


@ajaysingh wrote:

Hello All,

We are not able to figure out why the haproxy is getting killed.

Jan 31 15:58:16 ip-172-16-21-136.ca-central-1.compute.internal haproxy[5135]: Starting haproxy: [ OK ]
Jan 31 15:58:16 ip-172-16-21-136.ca-central-1.compute.internal systemd[1]: Started SYSV: HA-Proxy is a TCP/HTTP reverse proxy which is particularly suited for high availability environments…
Feb 01 00:14:25 ip-172-16-21-136.ca-central-1.compute.internal systemd[1]: haproxy.service: main process exited, code=killed, status=6/ABRT
Feb 01 00:14:25 ip-172-16-21-136.ca-central-1.compute.internal haproxy[28927]: Shutting down haproxy: [ OK ]
Feb 01 00:14:25 ip-172-16-21-136.ca-central-1.compute.internal systemd[1]: Unit haproxy.service entered failed state.
Feb 01 00:14:25 ip-172-16-21-136.ca-central-1.compute.internal systemd[1]: haproxy.service failed.
Feb 01 01:05:32 ip-172-16-21-136.ca-central-1.compute.internal systemd[1]: Starting SYSV: HA-Proxy is a TCP/HTTP reverse proxy which is particularly suited for high availability environments…
Feb 01 01:05:32 ip-172-16-21-136.ca-central-1.compute.internal haproxy[31699]: Starting haproxy: [ OK ]
Feb 01 01:05:32 ip-172-16-21-136.ca-central-1.compute.internal systemd[1]: Started SYSV: HA-Proxy is a TCP/HTTP reverse proxy which is particularly suited for high availability environments…
Feb 01 05:10:41 ip-172-16-21-136.ca-central-1.compute.internal systemd[1]: Stopping SYSV: HA-Proxy is a TCP/HTTP reverse proxy which is particularly suited for high availability environments…
Feb 01 05:10:41 ip-172-16-21-136.ca-central-1.compute.internal haproxy[12228]: Shutting down haproxy: [ OK ]
Feb 01 05:10:41 ip-172-16-21-136.ca-central-1.compute.internal systemd[1]: Starting SYSV: HA-Proxy is a TCP/HTTP reverse proxy which is particularly suited for high availability environments…
Feb 01 05:10:41 ip-172-16-21-136.ca-central-1.compute.internal haproxy[12237]: Starting haproxy: [ OK ]
Feb 01 05:10:41 ip-172-16-21-136.ca-central-1.compute.internal systemd[1]: Started SYSV: HA-Proxy is a TCP/HTTP reverse proxy which is particularly suited for high availability environments…
Feb 01 08:35:54 ip-172-16-21-136.ca-central-1.compute.internal systemd[1]: haproxy.service: main process exited, code=killed, status=6/ABRT
Feb 01 08:35:54 ip-172-16-21-136.ca-central-1.compute.internal haproxy[22275]: Shutting down haproxy: [ OK ]
Feb 01 08:35:54 ip-172-16-21-136.ca-central-1.compute.internal systemd[1]: Unit haproxy.service entered failed state.
Feb 01 08:35:54 ip-172-16-21-136.ca-central-1.compute.internal systemd[1]: haproxy.service failed.
Feb 01 08:41:34 ip-172-16-21-136.ca-central-1.compute.internal systemd[1]: Starting SYSV: HA-Proxy is a TCP/HTTP reverse proxy which is particularly suited for high availability environments…

I am also posting my sample config snippet below:

global
    nbproc 1
    nbthread 4
    maxconn 2000
    spread-checks 50
    daemon
    nbproc 4
    cpu-map 1 0
    log 127.0.0.1 local2
    lua-load /etc/haproxy/lua/400.lua
    lua-load /etc/haproxy/lua/cors.lua
    lua-load /etc/haproxy/lua/callback_handler.lua
    #tune.chksize 1028
    #tune.maxrewrite 16384
    #tune.bufsize 32768
    ssl-server-verify none

############
# Defaults
############
defaults
    log global
    option log-separate-errors
    maxconn 2000
    timeout connect 60s
    timeout queue 2s
    timeout check 60s
    timeout client 60s
    timeout server 60s
    timeout http-keep-alive 1s
    timeout http-request 60s
    mode http
    retries 3
    balance static-rr
    timeout tunnel 60s
    option http-server-close
    option forwardfor
    option http-pretend-keepalive
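
One suspicious detail in the global section: nbproc appears twice (1, then 4) and is combined with nbthread 4. Mixing multiple processes with multiple threads was a fragile combination in the 1.9 era and is worth eliminating before digging further; a sketch of a threads-only global section built from the directives above:

```haproxy
global
    daemon
    nbthread 4
    maxconn 2000
    spread-checks 50
    log 127.0.0.1 local2
    lua-load /etc/haproxy/lua/400.lua
    lua-load /etc/haproxy/lua/cors.lua
    lua-load /etc/haproxy/lua/callback_handler.lua
    ssl-server-verify none
```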

Posts: 3

Participants: 2


Authentication by client certificate OR basic auth


@dan12345 wrote:

Hi,

I’m just getting started with HAProxy, and would be grateful for help with what’s probably a stupid question.

I have some users accessing from a desktop (for whom basic authentication is fine), and others from an iPhone, which doesn’t work well with basic authentication, so they use client-side certificates. I’ve achieved this with two different ports and frontends, one requiring basic auth and one requiring a client certificate (example cfg below). But it would be much neater to combine them, with one frontend permitting access with EITHER a client certificate OR basic authentication.

Is that possible?

Current config as follows:

frontend example_server_password
   bind *:443 ssl crt  /path/to/ssl_cert.pem
   acl Auth_Users http_auth(AuthUsers)
   http-request auth realm example_server_back if !Auth_Users
   default_backend example_server_back
 
frontend example_server_cert
   bind *:444 ssl crt  /path/to/ssl_cert.pem ca-file /path/to/client_cert.pem verify required 
   default_backend example_server_back

backend example_server_back
	http-request del-header Authorization
	server web01 localhost:8080 check
 
userlist AuthUsers
        user exampleuser insecure-password sillypassword 
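
This should be possible on a single bind line with `verify optional`, which requests a client certificate but still accepts connections without one; a sketch based on the config above (untested, ACL name illustrative):

```haproxy
frontend example_server
    bind *:443 ssl crt /path/to/ssl_cert.pem ca-file /path/to/client_cert.pem verify optional
    acl has_cert ssl_c_used
    acl Auth_Users http_auth(AuthUsers)
    # Ask for basic auth only when no client cert was presented
    # (optionally also require ssl_c_verify 0 for a *valid* cert).
    http-request auth realm example_server_back if !has_cert !Auth_Users
    default_backend example_server_back
```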

Posts: 1

Participants: 1


Haproxy hash table limit for consistent hashing


@egozkan wrote:

Hello all,

I have the configuration shown below. It works very well, but I want to know: is there a limit on the hash table? For example, what happens if I use this configuration with 10 million URLs? What is the resource consumption of the hash table?

http-request set-header x-lb %[req.hdr(x-cons),lower]%[capture.req.uri,lower]
balance hdr(x-lb)
hash-type consistent

Thanks in advance

Posts: 1

Participants: 1


Log-format %HU for http/2 requests


@joel-l wrote:

Hi!

I’m unsure whether this is a bug or me misunderstanding the documentation.

In haproxy 2.1.2:

According to the documentation, in log-format, %HU should be: HTTP request URI (ex: /foo?bar=baz)

But, for http/2 requests, the full url, including the protocol and hostname is logged instead (https://host/some/url, not /some/url, as I would expect).

For example:

log-format '{"host":"%[capture.req.hdr(0),json(ascii)]","url":"%{+E}HU"}'

Results in:

{"host":"example.com","url":"https://example.com/something"}

For HTTP/1.1 requests, the resulting url in the log is /something.

Is this something special with regards to how haproxy handles http/2 requests? And, what would be the easiest way to get the correct url into the log?
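
HTTP/2 clients typically send the absolute-form URI, which would explain the difference. If %HU logs that verbatim, one possible workaround is logging the `path` sample fetch directly, which should carry only the path component; a sketch (untested):

```haproxy
log-format '{"host":"%[capture.req.hdr(0),json(ascii)]","url":"%[path,json(ascii)]"}'
```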

Posts: 1

Participants: 1


Haproxy track bruteforce


@FrancisL wrote:

I’m evaluating some brute-force rules, i.e.:

I have this …

http-request track-sc0 hdr(x-forwarded-for) table Abuse if METH_POST url_wp_admin_test !is-whitelisted-ip

Is there a way to only add it to the track-sc0 table Abuse if the backend result is also 200
(which means invalid password…) ?
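
The status is only known at response time, so one approach is to move the tracking to an http-response rule and capture the key while the request is being processed; a sketch reusing the ACLs above (untested; capture slot 0 is assumed free, and http-response rules can only see request data that was captured or stored):

```haproxy
# Capture the client IP at request time for the requests of interest...
http-request capture hdr(x-forwarded-for) len 48 if METH_POST url_wp_admin_test !is-whitelisted-ip
# ...then track it only when the backend answered 200.
http-response track-sc0 capture.req.hdr(0) table Abuse if { status 200 }
```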

Posts: 1

Participants: 1


Metrics about usage of backup servers


@uves wrote:

I am using haproxy as a load balancer in front of an HTTP application that uses active-passive failover with active health checking. It works very well. Now I want to improve my internal monitoring dashboard by showing which server is currently active. The dashboard is based on Prometheus queries (actually it is Grafana).

A few more details on the current setup.

So basically I have a proxy definition that looks something like this:

    balance first
    option httpchk GET /health

    retries 1
    option redispatch

    # backend server
    default-server check fall 6 rise 4 inter 15s
    server backend-001 <ip1>:8080
    server backend-002 <ip2>:8080 backup
    server backend-003 <ip3>:8080 backup

Sometimes (mainly during deployment) it is changed by an external process to look like this:

    balance first
    option httpchk GET /health

    retries 1
    option redispatch

    # backend server
    default-server check fall 6 rise 4 inter 15s
    server backend-002 <ip1>:8080
    server backend-001 <ip2>:8080 backup
    server backend-003 <ip3>:8080 backup

So we are routing to the new version deployed on the second node first, but in case of failure, failover to the old version is an option. As a result, relying on a fixed order of server names is not really possible.

The information that I am now looking for is: which server is the one that haproxy considers healthy and hence will be served the next request? The stats page seems to convey that information somehow, because it displays which server is a backup and which is not, and it also knows about the order of the backup servers. But somehow this is all lost when exporting to Prometheus, or at least I can’t find it.

The best workaround so far is to measure which server actually is getting the requests, but that only shows me the information that I want in the presence of requests.

PS: After writing it down it kind of seems that I only need to know which server is the backup server.

Posts: 1

Participants: 1


Balance issue: redis failover. How to stay on master with longest available time?


@asp wrote:

Hi,

I am using redis with one master and one slave. I want to use haproxy to do the failover.
I must only forward data to the current master.

This is my current config:

frontend ft_redis
    bind 0.0.0.0:16380 name redis ssl crt /etc/haproxy/certs/loadbalancer_all_in_one.pem
    default_backend bk_redis

# Specifies the backend Redis proxy server TCP health settings
# Ensure it only forward incoming connections to reach a master.
backend bk_redis
    option tcp-check
    tcp-check connect
    tcp-check send AUTH\ qay\r\n
    tcp-check expect string +OK
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK

    server elastic01.internal.dtpublic.de_7000 localhost4:7000 check inter 2s
    server elastic02.internal.dtpublic.de_7001 localhost4:7001 check inter 2s

So only the server which tells me it is the master should get the traffic.

I have following issue:

  • server1 is master, server2 is slave
  • stop redis on server 1
  • server1 is unavailable
  • both servers are marked red in haproxy
  • sentinel is doing the failover
  • server 2 is master
  • haproxy shows green for server2.
  • starting redis on server1
  • server1 is starting as master (because it was master before the crash / shutdown)
  • haproxy is showing both servers as master
  • when I throw requests against haproxy, it is distributing requests to both servers, looks like round robin.
  • after some seconds sentinel will tell server1 to be slave
  • server1 will be shown as red.
  • server1 will get replication of server2
  • all changes done to server1 during the time where I had two masters are lost.

I just searched a lot, but I did not find a way to improve sentinel behavior (i.e. always ask sentinel who the master is).

Is there a way to tell haproxy to forward the traffic to the server which has the longest uptime? This would work around this behavior.

Thanks, Andreas

Posts: 2

Participants: 2



HAproxy in PfSense


@ce13 wrote:

Hi, I just set up HAProxy in pfSense for reverse proxy usage. I am able to access the webpage but I found some issues:

  1. The EdgeRouter GUI dashboard graph/chart cannot be loaded. When this was set up in the Sophos XG WAF, I needed to pass through WebSocket traffic, but I am not sure how to do this in pfSense HAProxy.

  2. The RouterOS GUI kicks me out to the login page and states “gateway timeout”.

Does anyone know how to solve this?

Posts: 1

Participants: 1


Haproxy itself talks proxy protocol


@hgruber wrote:

Using the send-proxy option on a server line allows the PROXY protocol header to be sent on behalf of the requester.
Is it possible for haproxy itself to speak a proxy protocol in the server section, so the requester does not know about the proxy?
Example:
server server_behind_proxy x.x.x.x:443 proxy y.y.y.y:3128 ssl …

Posts: 2

Participants: 2


Search documentation to setup a Reverse Proxy


@Sponge_B0B wrote:

Hi everyone,

I have a few virtual machines that are running web applications: web servers, WordPress, etc.

I would like to install HAProxy (on Linux Mint) and set up a simple reverse proxy with SSL termination (SSL from the client to HAProxy, and backends without SSL, which makes the certificates easier to manage).

Any idea where I could find a good tutorial or documentation to achieve this?
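
The core of such a setup is compact enough to sketch here (hostnames, paths, and backend addresses are placeholders): one HTTPS frontend holding the certificate, host-based routing, and plain-HTTP backends.

```haproxy
frontend https_in
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
    # Send plain-HTTP visitors to HTTPS.
    http-request redirect scheme https unless { ssl_fc }
    use_backend bk_wordpress if { hdr(host) -i blog.example.com }
    default_backend bk_webapp

backend bk_wordpress
    server wp1 192.168.1.10:80 check

backend bk_webapp
    server app1 192.168.1.11:80 check
```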

Thank you.

Posts: 1

Participants: 1


Client certificate verification for subdomain


@mxsxs2 wrote:

Hi.

I made a simple setup where I check for a given subdomain if a client certificate is valid or not.

The configuration below checks whether the dev.example.com subdomain is called. It uses a backend that checks whether a client cert exists and whether it is valid. For every other subdomain, or the domain itself, it uses another backend.

It works so far. However, it does not recognise any client certs; it just logs ssl_c_used as 0.

Here is a pastebin link for the config https://pastebin.com/4QjitbtL

Line 80: log-format "capture1: ssl_fc %{+Q}[ssl_fc], ssl_c_used %{+Q}[ssl_c_used], ssl_fc_has_crt %{+Q}[ssl_fc_has_crt], ssl_c_verify %{+Q}[ssl_c_verify], ssl_c_i_dn %{+Q}[ssl_c_i_dn]"

The above logs the following:
Feb 6 18:17:59 example haproxy[2613]: capture0: dev.exampl
Feb 6 18:17:59 example haproxy[2613]: capture1: ssl_fc "1", ssl_c_used "0", ssl_fc_has_crt "0", ssl_c_verify "0", ssl_c_i_dn ""

If I set the cert as required then it fails at SSL handshake.

/etc/haproxy/certs contains a wildcard cert for my domain. It is from DigiCert.

/etc/haproxy/certs/verify_certs/ca.pem was generated by me with:
openssl genrsa -des3 -out ca.key 4096
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt
cat ca.crt ca.key > ca.pem

The client cert then was generated with:
openssl genrsa -des3 -out client.key 4096
openssl req -new -key client.key -out client.csr
openssl x509 -req -days 3650 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt
cat client.crt client.key > client_unp.pem

I am trying to test it with:
curl https://dev.example.com --cert-type pem --cert client_unp.pem --cacert …/…/ca.crt

I also tried to import client.pem in chrome to personal certificates. It imports but then does not show up in the list.

If I do it in windows in mmc then it shows up but Chrome still does not display it.

I am not sure if my certs are wrong or my config.

Can someone spot the issue? Thanks!
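
One detail worth checking: ca.pem was built with `cat ca.crt ca.key > ca.pem`, but the ca-file handed to the bind line should contain only the CA certificate, never the private key, and the verification must be optional for the no-cert case to be logged as ssl_c_used 0 rather than rejected. A sketch of the bind line, reusing the paths from the post:

```haproxy
bind *:443 ssl crt /etc/haproxy/certs/ ca-file /etc/haproxy/certs/verify_certs/ca.crt verify optional
```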

Posts: 1

Participants: 1


External health check not updated with changed server IP address


@sandergr wrote:

We have a Docker Swarm setup in which we deploy HAProxy services (version 2.0). For our MariaDB database services we use an external health check script, because it needs to check the replication status of a Galera cluster setup.

When testing failover, we notice that the arguments HAProxy passes to the health check script (https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#external-check%20command) are not updated with the new IP address (which changes when scaling the MariaDB service down and up). The stats page and the show servers state command, however, ARE immediately updated to reflect the new IP address. After a very long time the IP arguments seem to be updated for the external health check as well, but I’m wondering whether this is a bug in HAProxy or some particular timeout setting. To be clear, we use a docker resolvers section so HAProxy can query Docker Swarm for the correct IPs. Below is our HAProxy configuration:

    global
    log          fd@2 local2
    pidfile      /var/run/haproxy.pid
    maxconn      4000
    user         haproxy
    group        haproxy
    stats socket /var/lib/haproxy/stats expose-fd listeners
    external-check




resolvers docker
    nameserver dns 127.0.0.11:53
    resolve_retries 3
    timeout resolve 1s
    timeout retry   1s
    hold other      10s
    hold refused    10s
    hold nx         10s
    hold timeout    10s
    hold valid      10s
    hold obsolete   10s

defaults
    timeout connect 10s
    timeout client 30s
    timeout server 30s
    
    timeout tunnel 1h
    load-server-state-from-file none
    option httplog

    default-server init-addr libc,none


 frontend fe_web
    bind *:80
    mode http
    use_backend stat if { path -i /my-stats }
    default_backend be_php
    option http-keep-alive

    acl host_php hdr(host) -i testcluster.cregora2.nmbtz.com
    acl host_phpmyadmin hdr(host) -i testcluster.phpmyadmin.nmbtz.com
    acl host_portainer hdr(host) -i testcluster.nagios.nmbtz.com
    acl host_stats hdr(host) -i proxystats.simbuka.nmbtz.com

    use_backend be_php if host_php
    use_backend stat if host_stats
    use_backend be_phpmyadmin if host_phpmyadmin
    use_backend be_portainer if host_portainer

frontend fe_mysql
    bind *:3306
    default_backend be_mysql

frontend fe_redis
    bind *:6379
    mode tcp
    default_backend be_redis

backend be_php
    balance leastconn
    balance roundrobin
    mode http
    option httpchk
    stick-table  type binary  len 8  size 100k  expire 10s  store http_req_rate(10s)

    # Track client by base32+src (Host header + URL path + src IP)
    http-request track-sc0 base32+src

    # Check map file to get rate limit for path
    http-request set-var(req.rate_limit)  path,map_beg(/etc/haproxy/rates.map,20)

    # Client's request rate is tracked
    http-request set-var(req.request_rate)  base32+src,table_http_req_rate()

    # Subtract the current request rate from the limit
    # If less than zero, set rate_abuse to true
    acl rate_abuse var(req.rate_limit),sub(req.request_rate) lt 0

    # Deny if rate abuse
    http-request deny deny_status 429 if rate_abuse

    timeout connect 120s
    timeout server 120s

    acl is_api path -i -m beg /api

    errorfile 429 /etc/haproxy/errorfiles/too_many_requests.http if is_api
    errorfile 503 /etc/haproxy/errorfiles/server_not_found.http if is_api

    server-template php- 4 php:80 check resolvers docker init-addr last,libc,none maxconn 50

backend be_phpmyadmin
    balance roundrobin
    mode http
    server-template phpmyadmin- 2 phpmyadmin:80 check resolvers docker init-addr last,libc,none maxconn 20

backend be_portainer
    balance roundrobin
    mode http
    server-template portainer- 2 portainer:9000 check resolvers docker init-addr last,libc,none maxconn 20
    option forwardfor
    option http-keep-alive

backend be_mysql
    balance leastconn
    option external-check
    external-check path "/etc/haproxy:/bin"
    external-check command /var/lib/haproxy/galerahealthcheck.sh
    #option log-health-checks
    timeout check 2s
    server-template mysql-1- 1 mysql1:3306 check inter 10s resolvers docker init-addr libc,none maxconn 200
    server-template mysql-2- 1 mysql2:3306 check inter 10s resolvers docker init-addr libc,none maxconn 200
    server-template mysql-3- 1 mysql3:3306 check inter 10s resolvers docker init-addr libc,none maxconn 200


backend be_redis
    balance roundrobin
    timeout connect     60000
    timeout server      60000
    retries         3
    mode tcp
    option tcp-check
    tcp-check connect port 6379
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server-template redis-1- 1 redis:6379 check port 6379 inter 2s resolvers docker init-addr libc,none maxconn 10
    server-template redis-2- 1 redis-slave:6379 check port 6379 inter 2s resolvers docker init-addr libc,none maxconn 10
    server-template redis-3- 1 redis-slave2:6379 check port 6379 inter 2s resolvers docker init-addr libc,none maxconn 10
    server-template redis-4- 1 redis-slave3:6379 check port 6379 inter 2s resolvers docker init-addr libc,none maxconn 10


 backend stat
    stats enable
    stats uri /my-stats
    stats refresh 15s
    stats show-legends
    stats show-node
    mode http

I added logging to the external health check. The IP address given in the arguments after scaling the mariadb service down and back up is the old IP from before it was taken down. The stats page and the show servers state command, however, immediately show the correct IP address.
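
To make the comparison visible, the check script can log both the argv values and the HAPROXY_SERVER_ADDR / HAPROXY_SERVER_PORT environment variables that HAProxy exports for external checks; if the env values move while argv lags, that narrows down where the staleness lives. A sketch (output format illustrative):

```shell
#!/bin/sh
# HAProxy invokes external checks as:
#   <command> <proxy_address> <proxy_port> <server_address> <server_port>
# and also exports HAPROXY_SERVER_ADDR / HAPROXY_SERVER_PORT.
argv_addr="$3"; argv_port="$4"
env_addr="${HAPROXY_SERVER_ADDR:-unset}"
env_port="${HAPROXY_SERVER_PORT:-unset}"
echo "argv=${argv_addr}:${argv_port} env=${env_addr}:${env_port}"
```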

Posts: 1

Participants: 1

