Channel: HAProxy community - Latest topics

How to maintain a persistent connection between my server and client

@souravsum wrote:

Hi Team,

I am running a test with 20 users from a load-testing tool.
The request flow is as follows:

LoadGenerator ----> haproxy(1.8) ------> tomcat.

Here is the configuration of haproxy

global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats timeout 30s
user haproxy
group haproxy
daemon

defaults
log global
mode http
option httplog
option dontlognull
option http-keep-alive
timeout connect 5000
timeout client 50000
timeout server 50000
timeout http-keep-alive 600000
max-keep-alive-queue 20
maxconn 40

frontend http_front
bind *:80
stats uri /haproxy?stats
default_backend http_back

backend http_back
balance roundrobin
http-reuse safe
server demo01 x.x.x.x:8080 check

What I want is to create a persistent connection between HAProxy and Tomcat.
What I observe on the Tomcat server is a socket in the ESTABLISHED state for each request coming from HAProxy to Tomcat.
The problem is that the foreign host port keeps changing, even though the HAProxy documentation says keep-alive is used by default.

If my understanding is correct, then with a persistent connection the port the client uses to talk to the server should not change between requests. If that is right, how can I solve this issue? If not, please shed some light on this.
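
For reference, a minimal sketch of the keep-alive and connection-reuse directives that influence the HAProxy-to-Tomcat side in 1.8 (the server address is a placeholder; whether idle connections actually stay open also depends on Tomcat's own keep-alive timeout):

defaults
    mode http
    option http-keep-alive        # keep idle connections open on both sides between requests
    timeout http-keep-alive 30s   # how long an idle connection is kept before being closed

backend http_back
    balance roundrobin
    http-reuse always             # allow idle server-side connections to be shared across requests
    option prefer-last-server     # try to send a client's next request over its previous server connection
    server demo01 192.0.2.10:8080 check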

Regards,
Sourav Suman

Posts: 3

Participants: 2

Read full topic


Problems getting rate limiting to work correctly with HTTPS

@ni3ls wrote:

Hi,
I am trying to set up an HAProxy load balancer with simple rate limiting to avoid obvious abuse.
See the drawing for a simple schematic, and note:

  • SNI is used to distinguish target “clusters”
  • SSL passthrough is needed so no content may be observed

The setup we have now works fine except for the rate limits.

We tried several of the suggested examples we found, but none seem to work properly.

What we need is a simple way to stop abusive connection rates from the same IP: let's say more than 20 attempts per second, and then ban that IP from further attempts for, say, 10 seconds.

We tried a lot of approaches, especially ones based on the examples listed below, but none work as expected. Many are very old, and almost all deal only with plain HTTP or with SSL being terminated on HAProxy.

Any hints, or a link to a working example config, would be very welcome.

List of links we looked at:
https://blog.serverfault.com/2010/08/26/1016491873/





Platform:
Ubuntu 18.04
HA-Proxy version 1.8.8-1ubuntu0.2 2018/10/02

Current setup:

global
log /var/lib/haproxy/dev/log local0 info
maxconn 65000
user haproxy
group haproxy
daemon
stats socket /run/haproxy/haproxy.sock mode 660 level admin
stats timeout 2m

defaults
log global
mode tcp
option tcplog
retries 2
timeout http-request 5s
timeout connect 5s
timeout client 60s
timeout server 60s
timeout http-keep-alive 10s
timeout check 10s
maxconn 65000

# Port 80: redirect immediately to the same HTTPS URL
frontend http-in
bind :80
log global
mode http
use_backend http-redirect if !{ ssl_fc }

frontend access_https
mode tcp
log global
bind :443
use_backend test1_cluster_https if { req_ssl_sni -i "test1.example.com" }
use_backend test2_cluster_https if { req_ssl_sni -i "test2.example.com" }
default_backend rate-limit

backend test1_cluster_https
mode tcp
balance roundrobin
option tcp-check
server test1-node1 10.1.1.10 send-proxy check
server test1-node2 10.1.1.20 send-proxy check

backend test2_cluster_https
mode tcp
balance roundrobin
option ssl-hello-chk
option tcp-check
server test2-node1 10.1.2.10 send-proxy check
server test2-node2 10.1.2.20 send-proxy check

backend rate-limit
server test1-node1 10.1.2.10 send-proxy check
server test1-node2 10.1.2.20 send-proxy check

backend http-redirect
mode http
redirect scheme https code 301 if !{ ssl_fc }

####################################################
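
For reference, a minimal sketch of source-IP connection-rate limiting that works in pure TCP mode (the 20-per-second threshold follows the question; the table size is an assumption, and the "ban" here lasts while the entry is still hot rather than for an exact 10 seconds):

frontend access_https
    mode tcp
    bind :443
    # Track each source IP and its connection rate over a 1-second window;
    # entries expire roughly 10 seconds after the last connection attempt.
    stick-table type ip size 100k expire 10s store conn_rate(1s)
    tcp-request connection track-sc0 src
    # Reject new connections from sources exceeding ~20 attempts per second.
    tcp-request connection reject if { sc0_conn_rate gt 20 }
    # Wait for the TLS ClientHello so SNI can be inspected for routing.
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend test1_cluster_https if { req_ssl_sni -i test1.example.com }
    use_backend test2_cluster_https if { req_ssl_sni -i test2.example.com }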

Posts: 1

Participants: 1

Read full topic

HAProxy, single domain name, multiple subdomains with SSL

@PunkIsDaFunk wrote:

I am trying to set up a farm of servers to serve the same website across all of them, including the subdomains. I posted my example configuration below.

When tested, the main website loaded, but any access to a subdomain just redirected to the main domain’s content. Is there a way, using the configuration below or some other configuration, to accomplish the goal of having, say, 4 servers all serving the same root domain and its subdomains, while still using an SSL certificate for all traffic? Each website also contains a 301 redirect to HTTPS to enforce the use of SSL.

global
        log 127.0.0.1   local0
        log 127.0.0.1   local1 notice
        maxconn 4096
        user haproxy
        group haproxy
        daemon

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        option forwardfor
        option http-server-close
        stats enable
        stats auth admin:Password!
        stats uri /haproxyStats

frontend http-in
        bind *:80

        # Define hosts
        acl host_website hdr(host) -i domain.com
        acl host_website hdr(host) -i c.domain.com
        acl host_website hdr(host) -i s.domain.com
        acl host_website hdr(host) -i i.domain.com
        acl host_website hdr(host) -i e.domain.com

        ## figure out which one to use
        use_backend website if host_website
        use_backend website if host_website
        use_backend website if host_website
        use_backend website if host_website
        use_backend website if host_website

backend website
        balance leastconn
        option httpclose
        option forwardfor
        cookie JSESSIONID prefix
        server node1 10.5.0.10 cookie A check
        server node2 10.5.0.11 cookie A check
        server node3 10.5.0.12 cookie A check
        server node4 10.5.0.13 cookie A check
        server node5 10.5.0.14 cookie A check
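
For comparison, a minimal sketch of host-based routing for an apex domain and its subdomains, with TLS terminated on HAProxy (hostnames, certificate path, and backend name are placeholders):

frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/domain.com.pem
    bind *:80
    redirect scheme https code 301 if !{ ssl_fc }
    # Match the apex and any subdomain of domain.com
    acl host_website hdr(host) -i domain.com
    acl host_website hdr_end(host) -i .domain.com
    use_backend website if host_website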

Posts: 1

Participants: 1

Read full topic

Server is going DOWN for maintenance (DNS timeout status)

@Hugo wrote:

Hi all,

Sometimes (about 5 times this month), I have the following errors in the haproxy logs and I don’t really understand why:

Server XXXXX is going DOWN for maintenance (DNS timeout status). 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Server XXXXX administratively READY thanks to valid DNS answer.
Server XXXXX ('XXXXXXX') is UP/READY (resolves again).

The HAProxy version is 1.7.9, and I use the nbproc parameter in the configuration.

I know there was a bug involving this parameter and DNS resolution (DNS Resolution *Sigh* v1.7.1), but it seems to have been resolved in 1.7.3 (https://www.haproxy.org/download/1.7/src/CHANGELOG).

The HAProxy servers use AWS DNS to resolve the backend names.
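
For reference, a minimal sketch of an explicit resolvers section of the kind usually paired with DNS-resolved server names in 1.7 (the nameserver address, timings, and section/backend names below are assumptions):

resolvers awsdns
    nameserver vpc 169.254.169.253:53   # placeholder: the VPC-provided DNS endpoint
    resolve_retries 3                   # retries before a resolution attempt is considered failed
    timeout retry   1s                  # delay between retries
    hold valid      30s                 # keep using a valid answer this long before re-resolving

backend app
    server app1 backend.internal.example:8080 check resolvers awsdns resolve-prefer ipv4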

Thanks for your help,

Posts: 1

Participants: 1

Read full topic

Send-proxy-v2 passing through 0.0.0.0 to appserver?

@moophone wrote:

Hi.

I have a send-proxy-aware app running on an app server that only allows TCP access on port 3336 from an HAProxy box (I have tested this from my remote machine and cannot see the open port; the IP is listed below and you are free to try it). On the HAProxy side I have firewall rules blocking a specific IP, yet I periodically see 0.0.0.0 show up in the app server log (there is nothing on the app server itself that would be hitting that port).

Ubuntu 16.04 LTS
HA-Proxy version 1.8.12-1ppa1~xenial 2018/06/27

See logs:

2018-10-22 09:33:15 [0.0.0.0] , x16r, no data
2018-10-22 09:32:46 [90.143.45.55] , x16r, no data

I am using mode tcp as follows:
global
        log /dev/log local0
        log /dev/lo local1 notice
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        maxconn 32768
        nbproc 1
        nbthread 4
        log /dev/log   local0
        log /dev/log   local1 notice
#       stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

defaults
        log     global
        mode    tcp
        option  tcplog
        balance leastconn
        timeout client 99m
        timeout connect 60s
        timeout server 99m

frontend rvn
         bind *:3636
         default_backend stratum_rvn

backend stratum_rvn
     server rvn1 198.27.74.99:3636 send-proxy-v2
     server rvn2 198.27.74.99:3635 send-proxy-v2

I have also enabled the “usual” anti-spoofing rules using iptables:
-N spoofing
-I spoofing -j LOG --log-prefix "Spoofed source IP"
-I spoofing -j DROP
-A INPUT -s 255.0.0.0/8 -j spoofing
-A INPUT -s 0.0.0.8/8 -j spoofing

which have not helped.

I'd appreciate any insight into how I can deal with this.

Posts: 1

Participants: 1

Read full topic

HAProxy Active-Standby

@LucasRey wrote:

Hello community,
I’m trying to set up a simple active/standby HAProxy configuration. I have two ports to handle and two servers. What I need to do is forward requests on ports 8123 and 1883 to the respective ports on the active server.

defaults
    log     global
    option  dontlognull
    option redispatch
    retries 3
frontend ft_app
 bind *:8123
 bind *:1883
 default_backend bk_app
backend bk_app
 server node1-gui 192.168.2.50:8123 check   
 server node1-sonoff 192.168.2.51:1883 check
 server node2-gui 192.168.2.51:8123 check backup   
 server node2-sonoff 192.168.2.51:1883 check backup

Unfortunately, the above configuration doesn’t work (it doesn’t forward to the server). If I configure the servers in the following way, it works:

 server node1-sonoff 192.168.2.50 check
 server node2-gui 192.168.2.50 check backup  

The problem is that if the primary node goes down, HAProxy never routes requests to the other node, and the log says:

[ALERT] 294/152920 (43307) : parsing [/usr/local/etc/haproxy.conf:11] : server s1 has neither service port nor check port nor tcp_check rule ‘connect’ with port information. Check has been disabled.
[ALERT] 294/152920 (43307) : parsing [/usr/local/etc/haproxy.conf:12] : server s2 has neither service port nor check port nor tcp_check rule ‘connect’ with port information. Check has been disabled.
Starting haproxy.
[ALERT] 294/152920 (43308) : parsing [/usr/local/etc/haproxy.conf:11] : server s1 has neither service port nor check port nor tcp_check rule ‘connect’ with port information. Check has been disabled.
[ALERT] 294/152920 (43308) : parsing [/usr/local/etc/haproxy.conf:12] : server s2 has neither service port nor check port nor tcp_check rule ‘connect’ with port information. Check has been disabled.
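
For comparison, a minimal sketch that keeps an explicit port on every server line (which is what those alerts are complaining about), using one frontend and backend per port so each service checks and fails over independently (addresses are taken from the question and may need adjusting):

frontend ft_gui
    mode tcp
    bind *:8123
    default_backend bk_gui

frontend ft_sonoff
    mode tcp
    bind *:1883
    default_backend bk_sonoff

backend bk_gui
    mode tcp
    server node1 192.168.2.50:8123 check
    server node2 192.168.2.51:8123 check backup

backend bk_sonoff
    mode tcp
    server node1 192.168.2.50:1883 check
    server node2 192.168.2.51:1883 check backup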

Could someone help me?

Thank you
Lucas

Posts: 1

Participants: 1

Read full topic

Long lived TCP connections and HAProxy reloads

@dsouza_jude wrote:

Hi,

With reference to this post [1], and having experienced exactly the same thing in the past, I’m wondering if you have any suggestions or best practices for dealing with it.

The problem, in summary, is that in a dynamic environment where many deploys happen every hour across a bunch of services in a microservices platform, HAProxy configurations need to be reloaded quite often. Usually there are many REST-based services that communicate over HTTP, but at times (as in my case) there are also a couple of services that require long-lived TCP connections. In that case, during reloads, you end up with more than the usual number of HAProxy processes running on the host, because there are still active long-lived TCP connections (they might be sending constant heartbeats). I understand this is by design, if I’m not mistaken.

As a result, the old process is left with an outdated version of the backend configuration. For some reason, in this state, we’ve observed HTTP 503s being returned from HAProxy.

I’d basically like your thoughts and recommendations on this, in particular with respect to the newest version (1.8) of HAProxy and whether we can expect any side effects. I’m also wondering if it would be better to run two HAProxy instances, one dedicated to HTTP connections and the other to long-lived TCP connections.
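
For reference, the 1.8 features usually involved here are the seamless reload (listening sockets handed to the new process over a stats socket, used together with haproxy's -x option on reload) and a cap on how long old processes may linger; a minimal sketch, with the socket path as a placeholder:

global
    # Expose listening sockets on the admin socket so a reloaded process can
    # take them over instead of re-binding.
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    # Force old processes to exit after this long even if long-lived TCP
    # connections are still draining.
    hard-stop-after 30m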

Please advise.

References:
[1] https://medium.com/@x_cray/don-t-balance-tcp-traffic-with-haproxy-d7923a6e379e

Posts: 1

Participants: 1

Read full topic

TLS1.3 not operating in V1.8.14

@AndyH wrote:

Hi All,

Hope I am not doing anything daft, but it seems that I cannot get TLS 1.3 to work in a self-compiled version of HAProxy 1.8.14 (with static OpenSSL 1.1.1).

When I try to connect using a browser, I just get TLS 1.2 reported.
Sure enough, when I run nmap --script ssl-enum-ciphers I get:

443/tcp open https
| ssl-enum-ciphers:
|   TLSv1.2:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
|       TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
|     compressors:
|       NULL
|     cipher preference: server

I also tried adding the force-tlsv13 directive and re-ran the above; the result was that no ciphers were offered at all, which I think demonstrates that the TLS 1.3 ciphers are not being selected.

A rough (redacted) version of the config I am running is as follows;

global
maxconn 4096
tune.ssl.default-dh-param 2048
ssl-default-bind-ciphers TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:TLS13-CHACHA20-POLY1305-SHA256:EECDH+AESGCM:EECDH+CHACHA20
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11

frontend gateway-main-secure-frontend
bind LBVIP:443 ssl crt /etc/haproxy/certs

Here’s the version dump:

haproxy -vv
HA-Proxy version 1.8.14-52e4d43 2018/09/20
Copyright 2000-2018 Willy Tarreau willy@haproxy.org

Build options :
TARGET = linux2628
CPU = generic
CC = gcc
CFLAGS = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -fno-strict-overflow -Wno-format-truncation -Wno-null-dereference -Wno-unused-label
OPTIONS = USE_OPENSSL=1 USE_LUA=1 USE_SYSTEMD=1 USE_STATIC_PCRE2=1 USE_PCRE2_JIT=1

Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.1 11 Sep 2018
Running on OpenSSL version : OpenSSL 1.1.1 11 Sep 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE2 version : 10.31 2018-02-12
PCRE2 library supports JIT : yes
Built without compression support (neither USE_ZLIB nor USE_SLZ are set).
Compression algorithms supported : identity("identity")
Built with network namespace support.

Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[SPOE] spoe
[COMP] compression
[TRACE] trace

And here are my build options:

sudo ./config --prefix=/blah/openssl/1.1.1/target --openssldir=/blah/openssl/1.1.1/target no-shared no-ssl2 no-ssl3 no-weak-ssl-ciphers
sudo make
sudo make install

sudo make clean
sudo make TARGET=linux2628 USE_OPENSSL=1 SSL_LIB=/blah/openssl/1.1.1/target/lib SSL_INC=/blah/openssl/1.1.1/target/include ADDLIB=-lpthread USE_STATIC_PCRE2=1 USE_PCRE2_JIT=1 PCRE2_LIB=/blah/pcre2/10.31/target/lib PCRE2_INC=/blah/pcre2/10.31/target/include USE_LUA=1 USE_SYSTEMD=1
sudo make install

Any ideas what I might be doing wrong? Thanks in advance

Posts: 1

Participants: 1

Read full topic


Total Time stat seems high on stats page

@waynewex wrote:

The total time on my stats seems a lot higher than it should be. Here, it seems to be telling me that the average request takes over 2 seconds.

[Screenshot of the stats page omitted]

I enabled my logs for a while and got results such as these:

Oct 23 11:59:42 MySiteLBNYC haproxy[25790]: CLIENT IP:36391 [23/Oct/2018:11:59:42.935] mysite~ mysite/MySite05 0/0/0/10/11 200 28119 - - --NI 567/567/0/1/0 0/0 “GET /file.php HTTP/1.1”

Oct 23 11:59:43 MySiteLBNYC haproxy[25790]: CLIENT IP:48547 [23/Oct/2018:11:59:43.281] mysite~ mysite/MySite07 0/0/0/9/9 200 28066 - - --NI 572/572/1/1/0 0/0 “GET /file.php HTTP/1.1”

Oct 23 12:00:54 MySiteLBNYC haproxy[25790]: CLIENT IP:37262 [23/Oct/2018:12:00:54.824] mysite~ mysite/MySite01 0/0/0/62/62 200 18515 - - --VN 515/515/0/1/0 0/0 “GET /file.php HTTP/1.1”

Oct 23 12:01:58 MySiteLBNYC haproxy[25790]: CLIENT IP:35693 [23/Oct/2018:12:01:58.562] mysite~ mysite/MySite08 0/0/0/71/71 200 31769 - - --VN 551/551/1/1/0 0/0 “GET /file.php HTTP/1.1”

Oct 23 12:02:45 MySiteLBNYC haproxy[25790]: CLIENT IP:56225 [23/Oct/2018:12:02:45.960] mysite~ mysite/MySite09 0/0/0/5/5 200 25056 - - --VN 522/522/0/1/0 0/0 “GET /file.php HTTP/1.1”
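
For reference, assuming the default HTTP log format (option httplog), the slash-separated timer field in these lines is Tq/Tw/Tc/Tr/Tt; decoding the first line:

    0/0/0/10/11  ->  Tq=0 ms  (time to receive the request from the client)
                     Tw=0 ms  (time spent waiting in queues)
                     Tc=0 ms  (time to establish the connection to the server)
                     Tr=10 ms (time for the server to send the response headers)
                     Tt=11 ms (total session duration, including data transfer)

For context, the stats page's total-time column is an average over the last 1024 requests, so a handful of slow or long-running requests can pull it well above the values seen in typical log lines.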

My config is as follows:

global
log 127.0.0.1 local0 notice
stats socket /var/run/haproxy.stat
maxconn 70000
tune.maxrewrite 16384
tune.bufsize 32768
tune.ssl.cachesize 10000000
user haproxy
group haproxy
nbproc 4
cpu-map 1 0
cpu-map 2 1
cpu-map 3 2
cpu-map 4 3

defaults
log global
mode http
maxconn 70000
option httplog
option dontlognull
option forwardfor
retries 5
option redispatch
timeout connect 25000
timeout client 25000
timeout server 25000

listen mysite
option httplog
option dontlog-normal
option dontlognull
option accept-invalid-http-request
log /dev/log local0
bind 0.0.0.0:80
bind :::80 v6only
bind :443 ssl crt /etc/ssl/mysite.com/mysite.com.pem
mode http
maxconn 70000
balance static-rr
option http-keep-alive
option forwardfor
http-request set-header X-Forwarded-Proto HTTPS_ON if { ssl_fc }
cookie SRVNAME insert
timeout connect 10s
timeout client 60s
timeout server 60s
reqidel ^X-Forwarded-For:.

redirect scheme https code 301 if !{ ssl_fc }

acl fb-img-acl hdr_dom(host) -i fb.mysite.com
use_backend varnish-backend if fb-img-acl

acl thumb-img-acl hdr_dom(host) -i thumbs.mysite.com
use_backend varnish-backend if thumb-img-acl

acl letsencrypt-acl path_beg /.well-known/acme-challenge/
use_backend letsencrypt-backend if letsencrypt-acl

server mysite01 10.136.109.25:80 cookie MS01 check
server mysite02 10.136.126.250:80 cookie MS02 check
server mysite04 10.136.127.19:80 cookie MS04 check
server mysite05 10.136.127.60:80 cookie MS05 check
server mysite06 10.136.63.133:80 cookie MS06 check

backend letsencrypt-backend
server letsencrypt 127.0.0.1:8888

backend varnish-backend
server varnish 127.0.0.1:6081

Does anyone have any idea on how to get to the bottom of this issue?

Posts: 1

Participants: 1

Read full topic

Haproxy reloads and total-to-server counter

@pavanrangain wrote:

We have an HAProxy config with two backend nodes and the balance algorithm set to "leastconn". If we open 10 connections, each backend server ends up with 5 connections. Now we add another node to the backend server list and run "systemctl reload haproxy". The existing 10 connections remain. But if we then make another 5 new frontend connections, they are supposed to go only to the 3rd backend server. However, we see that the first and second backend servers also receive connections along with the newly added third server.
In the HAProxy logs we see that the "total-to-server" counter has gone back to 0 on the previously existing nodes. It seems this stat is not persisted across reloads. We explored the option https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-load-server-state-from-file. Even after running
socat /tmp/socket - <<< "show servers state" > /tmp/server_state
before reloading the HAProxy service, the "total-to-server" counter still starts over from 0 for the existing servers when new connections arrive. I also do not see this stat stored in the /tmp/server_state file.

Is there something we are missing here, or is it working as expected? Please let us know.
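
For reference, a minimal sketch of the directives that go with that documentation section (the file path is a placeholder). Note that the state file covers operational state, weights, and addresses rather than cumulative counters, which matches what you observed in /tmp/server_state:

global
    # File that HAProxy reads at startup; it is expected to contain the
    # output of "show servers state" dumped just before the reload.
    server-state-file /tmp/server_state

defaults
    # Load each backend's server state from the global file on (re)start.
    load-server-state-from-file global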

Posts: 2

Participants: 2

Read full topic

Haproxy with TIME_WAIT problem

@xia wrote:

Hi, I am using HAProxy 1.5.18 as a load balancer. The architecture is:

client(10.34.132.10) <—> haproxy(10.135.98.168) <—> backend server(10.128.137.22).

When the client issues short HTTP POST requests, a lot of sockets in TIME_WAIT appear on the HAProxy host. If the TIME_WAIT were on the TCP connections between HAProxy and the backend server, it would be easy to understand. But I found that the TIME_WAIT exists on the connections between the client and HAProxy. So I traced the packet stream from client to backend. I am even more confused, because I expected a four-way close for each client<->haproxy and haproxy<->backend connection when it is torn down. But I only see HAProxy sending a FIN to the backend and the client sending a FIN to HAProxy; I do not see the reverse. And I do not know why the TCP connection state between the client and HAProxy becomes TIME_WAIT.

Any comment would be appreciated. Thanks a lot.

Posts: 1

Participants: 1

Read full topic

SSH timeout and sometimes not working

@anog10 wrote:

Hello
I use the configuration below. The web part works perfectly, but when I try to use SSH it sometimes doesn’t work, and when it does work, the session times out after about 1 minute of inactivity. How can I fix this and stop the idle timeout from kicking in? I have changed the SSH port on my proxy server.

global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s
user haproxy
group haproxy
daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    # An alternative list with additional directives can be obtained from
    #  https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
log global
mode http
option httplog
option dontlognull
timeout connect 1h
timeout client 1h
timeout server 1h
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http

frontend main
bind *:80
mode http
option forwardfor
option http-server-close
default_backend app-main

backend app-main
balance roundrobin
server web1 ip:80 check fall 3 rise 2

frontend sshd
bind *:22
default_backend ssh
timeout client 1h
mode tcp

backend ssh
mode tcp
server localhost-bitbucket-ssh ip:22
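
For comparison, a minimal sketch of the SSH pass-through with an explicit idle timeout and TCP keepalives (the bind port and server address are placeholders; binding on port 22 assumes the host's own sshd listens elsewhere, as described above):

frontend sshd
    mode tcp
    bind *:22
    option clitcpka        # TCP keepalives towards the client
    timeout client 1h
    default_backend ssh

backend ssh
    mode tcp
    option srvtcpka        # TCP keepalives towards the server
    timeout tunnel 12h     # idle timeout applied once the connection is established
    server bitbucket-ssh 192.0.2.20:22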

Posts: 1

Participants: 1

Read full topic

Alert emails not being sent

@mysticalunicorn wrote:

I have HAProxy set up locally and I am using an SMTP server to send email alerts, which works fine. I set up the mailers section and then added the config to the backend, which sends alerts when any of the 3 backend servers goes down.

I moved the configuration to our AWS VM and it does not work. I take a backend server down and no alerts are sent. I checked the stats page and the health check fails, which means HAProxy sees the server as down and an alert should be sent. I manually connected to the SMTP server with telnet and sent an email, and it worked fine, so port 25 is open and the VM CAN connect to the SMTP server.

I then installed tcpdump, monitored port 25, and took one of the backend servers down. There was NO data on port 25, which I think means HAProxy is not even trying to connect. Any ideas?
I’m using HAProxy 1.8.
Thanks for any help.

mailers mta
mailer smtp1 e2ksmtp01.e2k.mydomain.com:25

backend test_backend
mode tcp
balance leastconn
email-alert mailers mta
email-alert level notice
email-alert from v.san@mydomain.com
email-alert to testing1@mydomain.com

   server node1 1.1.1.1:443 fall 3 rise 2 maxconn 500
   server node2 1.1.3.168:443 fall 3 rise 2 maxconn 500
   server node3 1.1.77.1:443 fall 3 rise 2 maxconn 500
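
For comparison, a minimal sketch of a mailers setup with health checks enabled on the servers, since email alerts are only triggered by state changes that HAProxy itself detects (the timeout value is an assumption):

mailers mta
    mailer smtp1 e2ksmtp01.e2k.mydomain.com:25
    timeout mail 20s        # allow a slow SMTP dialogue before giving up

backend test_backend
    mode tcp
    balance leastconn
    email-alert mailers mta
    email-alert level notice
    email-alert from v.san@mydomain.com
    email-alert to testing1@mydomain.com
    server node1 1.1.1.1:443 check fall 3 rise 2 maxconn 500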

Posts: 1

Participants: 1

Read full topic

Conditional Persistence / Affinity

@Exocomp wrote:

I have a case where I’d like to enable persistence / affinity when needed:

For example, to keep things simple just using source IP with the following:

stick-table type ip size 1m expire 1h
stick on src
  • Is there a way to set this up so that I can issue one command to enable it and another to disable it?
  • Or perhaps read a setting from disk and make it conditional based on that value?

Posts: 1

Participants: 1

Read full topic

Session persistence during failover to passive node in HA Proxy

@Skinnyguinea wrote:

Hi,
We have configured HAProxy to act as a highly available load balancer for our application (PIM).

This application is used for accessing remote servers via either mstsc or putty from a common interface.

We need to know how to ensure that sessions are not disrupted if one of the nodes in the HAProxy setup goes down.

Can someone let us know how to ensure that sessions remain active if one of the HAProxy nodes is down?

Posts: 1

Participants: 1

Read full topic


Reference ACL from frontend in backend

@Exocomp wrote:

Is it possible to reference, in a backend, an ACL that you created in the frontend?

I have a few backends that rely on the same ACL, so instead of creating it multiple times in each backend I’d like to create one ACL that can be shared among all of them.

Posts: 2

Participants: 2

Read full topic

Unable to Configure Load Balancing Per Request Over Persistent Connection

@ebarlas wrote:

I have a simple goal: I’d like to load balance a couple of servers using the uri hash balance algorithm, and I’d like to support persistent client connections to HAProxy with load balancing applied per request. Unfortunately, I haven’t been able to achieve this behavior. I’ve tried a variety of configurations, notably option http-keep-alive, but nothing is working. The configuration file is below.

The behavior I’m seeing is that the hash is applied to the first request, and the client is then bound to that server for the duration of the persistent client connection. I was able to achieve per-request load balancing with option httpclose, but that prevents persistent client connections. Is there a way to have persistent client and server connections with load balancing on each request?

In summary, I see tunnel behavior and not keep-alive behavior.

I am using HA-Proxy version 1.8.14-52e4d43 2018/09/20.

global
    daemon

frontend http-in
    mode http
    timeout client 60s
    bind *:8080
    default_backend servers

backend servers
    mode http
    timeout connect 5s
    timeout server 60s
    balance uri
    option httpchk GET /health
    http-check expect status 200
    server server1 <ip>:<port> check
    server server2 <ip>:<port> check

Posts: 1

Participants: 1

Read full topic

Clarification needed on sharing a stick table and server selection

@Exocomp wrote:

The documentation at the following page https://www.haproxy.com/documentation/hapee/1-8r1/onepage/#4-stick%20on makes this comment:

It is possible to share a table with other backends by
referencing it using the "table" keyword. If another table is referenced,
the server's ID inside the backends are used. By default, all server IDs
start at 1 in each backend, so the server ordering is enough. But in case of
doubt, it is highly recommended to force server IDs using their "id" setting.

This point "If another table is referenced, the server’s ID inside the backends are used." is vague in that what server id will it be using and from where?

Also this point “By default, all server IDs start at 1 in each backend, so the server ordering is enough.”, I’m not sure what it is referring to by server ordering is enough ?

Also, the following example is given in that doc:

In my testing, when I share a table with another backend and then look at what is in the table, it shows only one record, for example:

# table: one, type: ip, size:1048576, used:1
0x1a7bb04: key=10.0.0.2 use=0 exp=17718 server_id=1

Since there is one record in the table, it is shared with another backend, and each backend has its own servers, what is the logic for selecting a server in each backend?

The doc note (see above) states that it is highly recommended to force server IDs. When I do that, for example:

backend one
	stick on src table two
	server s1 10.0.0.5:123 id 100
	
backend two
	stick-table type ip size 200k expire 30m
	stick on src
	server s1 10.0.0.7:222 id 200

When I view table two it still shows only one record:

# table: two, type: ip, size:1048576, used:1
0x1a7bb04: key=10.0.0.2 use=0 exp=17718 server_id=200

So how would server selection in backend one work, when the only record in the table has server_id = 200, which is not one of the IDs in backend one?
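
For illustration, here is the same two-backend example with the server IDs forced to match, which is what the quoted note recommends; with matching IDs, an entry learned through the shared table maps to the intended server in whichever backend performs the lookup (addresses are from the example above):

backend one
    stick on src table two
    server s1 10.0.0.5:123 id 1

backend two
    stick-table type ip size 200k expire 30m
    stick on src
    server s1 10.0.0.7:222 id 1

If the stored ID does not match any server in the backend doing the lookup, the entry cannot be applied and the configured balance algorithm decides instead.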

Posts: 6

Participants: 2

Read full topic

Problem with MySQL connections to servers connected to HAProxy

@PunkIsDaFunk wrote:

I have successfully (for the most part) set up my server farm to use HAProxy for the 2 servers I have. But I’m running into more problems. I have it successfully redirecting to HTTPS when possible, however the servers are still returning a lot of elements over HTTP (non-SSL). Now I can’t make any connections to a MariaDB server on the same LAN.

I’ve checked my permissions. I’ve pinged all the servers involved and they can see each other. I even went as far as whitelisting the user for the entire LAN subnet, and it still refuses the connection. If I use a server that is not involved in the HAProxy configuration, it connects fine using the exact same user information. Can anyone tell me why this is happening, how to fix it, and also help me verify that my configuration serves assets only over HTTPS?

Here is my configuration:

global
log /dev/log    local0
log /dev/log    local1 notice
#chroot /var/lib/haproxy
stats timeout 5s
user root
group root
daemon

ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL
ssl-default-bind-options no-sslv3

defaults
log     global
mode    http
option  forwardfor
option  http-server-close
option  httplog
option  dontlognull
timeout connect 1000
timeout client  1000
timeout server  1000

errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http

listen admin_stats 127.0.0.1:8080
mode http
stats enable
stats uri /haproxy-stats
stats refresh 10s
stats realm HAProxy\ Statistics
stats auth admin:password

frontend http
    bind 0.0.0.0:80
    option forwardfor
    default_backend webtraffic

frontend ssl-traffic
    mode tcp
    bind 0.0.0.0:443 ssl crt /etc/haproxy/certs/domain.com.pem
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    default_backend webtraffic

backend webtraffic
    server server1 10.5.0.10:80 check
    server server2 10.5.0.11:80 check
    server server3 10.5.0.12:80 check
    server server4 10.5.0.13:80 check

For obvious reasons I’ve changed my domain name.
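
On the mixed-content part, a minimal sketch of terminating TLS in HTTP mode and telling the backends which scheme the client used (the certificate path follows the config above; it assumes the web application honours X-Forwarded-Proto when generating URLs):

frontend ssl-traffic
    mode http
    bind 0.0.0.0:443 ssl crt /etc/haproxy/certs/domain.com.pem
    option forwardfor
    # Let the application know the original request was HTTPS so it can
    # generate https:// asset URLs instead of plain http://.
    http-request set-header X-Forwarded-Proto https
    default_backend webtraffic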

Posts: 1

Participants: 1

Read full topic

One IP, fistful of domains, pack of subdomains and HAProxy in front of it

@scorry wrote:

Good day, friends.

Almost everything is in the title.
How do I configure my little zoo of, say, containers to work correctly behind one proxy? I have a separate certificate for every subdomain of every domain; I do not plan to use plain HTTP, only HTTPS; and I do not plan to terminate SSL on the proxy, so I need SNI.

Where can I find a working example of a config file for my case, or at least a base to start from and ask questions about? I tried some variants from the various examples I found, but none of them worked as expected.
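
As a base to start from, a minimal sketch of SNI-based routing without terminating TLS on the proxy (hostnames and backend addresses are placeholders):

frontend https-in
    mode tcp
    bind *:443
    # Wait briefly for the TLS ClientHello so the SNI can be inspected.
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend bk_app1 if { req_ssl_sni -i app1.example.com }
    use_backend bk_app2 if { req_ssl_sni -i app2.example.org }

backend bk_app1
    mode tcp
    server app1 192.0.2.11:443

backend bk_app2
    mode tcp
    server app2 192.0.2.12:443

Each container then presents its own certificate, since the TLS session is passed through untouched.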

Thank you in advance.

Posts: 1

Participants: 1

Read full topic
