Channel: HAProxy community - Latest topics

How do I set up ACLs to split a server by directory


@jbrahy wrote:

I still have a lot to learn with HAProxy. I know this is done with ACLs, but I'm not sure how to do it properly. Can someone show me the right way to send every URL except one to the default server, and the /other URL to another server? Thanks in advance!

global
    log         127.0.0.1 local2 debug
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
    tune.ssl.default-dh-param 1024

defaults
    mode                    http
    option                  httplog
    log                     global
    option                  dontlognull
    option                  http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend frontend-http
    bind 1.2.3.4:80
    default_backend backend-http

backend backend-http
    option forwardfor
    server web-1 www.website1.com:80 check

frontend frontend-https
    bind 1.2.3.4:443 ssl crt /etc/haproxy/ssl/default.pem crt /etc/haproxy/ssl
    default_backend backend-https

backend backend-https
    option forwardfor
    server web-1 www.website1.com:443 check ssl verify none
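One way to do this with ACLs (an untested sketch; backend-other and www.website2.com are illustrative names, /other is the path from the question):

    frontend frontend-http
        bind 1.2.3.4:80
        # requests whose path starts with /other go to a dedicated backend
        acl is_other path_beg /other
        use_backend backend-other if is_other
        # everything else falls through to the default
        default_backend backend-http

    backend backend-other
        option forwardfor
        server web-2 www.website2.com:80 check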

Posts: 2

Participants: 2



Least-stick-table-entries balance algorithm


@user5396668 wrote:

Hi all,

I am planning a system with client and server microservices, and HAProxy as a load balancer between them.
In my system I need stickiness on the HTTP request path, and I want an equal distribution of that stickiness across the servers in the backend.
For high availability I plan to have a few HAProxy instances configured as peers, with the stick-table synchronized between them.

Is it possible with HAProxy to have a stick-table and a balance algorithm that work like this:
if there is a match in the stick-table, take it; otherwise, select the server with the lowest number of entries in the stick-table?
It is similar to the leastconn algorithm, except that instead of counting open connections it would count the number of already-assigned paths.

Thanks!
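As far as I know there is no built-in "least stick-table entries" algorithm. The closest approximation with stock HAProxy sticks on the path and lets an ordinary algorithm place previously unseen paths (a rough sketch, not the exact behavior requested; the peers name mypeers and server addresses are assumptions):

    backend be_micro
        balance leastconn
        # one entry per distinct path, shared with the peer instances
        stick-table type string len 128 size 100k expire 30m peers mypeers
        stick on path
        server s1 10.0.0.1:8080 check
        server s2 10.0.0.2:8080 check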

Posts: 1

Participants: 1


Enable conditional HTTP logging


@erlo wrote:

Hello community!

I'm looking for a solution to a specific need. I have HAProxy servers serving a huge amount of traffic (peaks of more than 60000 req/s). Currently, HTTP logging is not enabled for normal traffic (option dontlog-normal); the volume is too high to allow logging under good conditions.

I'm looking for a way to evaluate how many clients are SNI-capable. This can be achieved with a custom log format containing "%[ssl_fc_sni]".

Do you know if there is a way to enable HTTP logging only when a criterion matches (such as the presence of the SNI host field)?

Thanks!
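One approach that may work: keep logging enabled but silence the log per request when no SNI is present, using http-request set-log-level (this action already appears in 1.5's action list, so it should be broadly available). A sketch, with illustrative names and paths:

    frontend fe_https
        bind :443 ssl crt /etc/haproxy/ssl/default.pem
        log-format "%ci:%cp [%tr] %ft sni=%[ssl_fc_sni]"
        # drop the log line for requests that did not send SNI
        http-request set-log-level silent unless { ssl_fc_sni -m found }
        default_backend be_app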

Posts: 1

Participants: 1


Traffic routing based on client IP from X-Forwarded-For header


@kfrydrys wrote:

Hi,
I have a problem: I must load-balance traffic between two internal machines.

The requirement is that traffic must be routed based on the client IP, in such a way that a client coming from one IP always connects to the same backend server.

That would be simple, but client traffic arrives through an external proxy service, so the source IP is always the same. The original source IP is only available in the X-Forwarded-For HTTP header.

Is it possible to use the XFF header in HAProxy?

Thanks in advance.
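It should be possible by hashing on the header rather than the source address, since balance hdr(<name>) hashes the value of an arbitrary request header (untested sketch; server names and addresses are illustrative):

    backend be_internal
        mode http
        # a given X-Forwarded-For value always maps to the same server
        balance hdr(X-Forwarded-For)
        server int1 10.0.0.11:80 check
        server int2 10.0.0.12:80 check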

Posts: 1

Participants: 1


How do I change the path for a backend


@jbrahy wrote:

I have a path that I use in an ACL to pick the backend, but I need a different path on the backend server. How do I remove the first three characters of the path before passing the request to the backend? The incoming path is /rd/asdffdsa1213 and the backend server should receive /asdffdsa1213.

backend backend-http
    option forwardfor
    server web-1 site.website1.com:80 check

Is that possible? I saw an example using this.

http-request set-uri %[url,regsub(^/rd,/,)]

Is that the right approach? I tried it but it gave me an error. It may be the version of HAProxy.

[root@pm-prod-haproxy01 b738cbea49eaded323220988d9e184ee]# haproxy -f /etc/haproxy/haproxy.cfg
[ALERT] 163/161449 (10421) : parsing [/etc/haproxy/haproxy.cfg:45]: 'http-request' expects 'allow', 'deny', 'auth', 'redirect', 'tarpit', 'add-header', 'set-header', 'replace-header', 'replace-value', 'set-nice', 'set-tos', 'set-mark', 'set-log-level', 'add-acl', 'del-acl', 'del-map', 'set-map', but got 'set-uri'.
[ALERT] 163/161449 (10421) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg

[root@pm-prod-haproxy01 b738cbea49eaded323220988d9e184ee]# haproxy -v
HA-Proxy version 1.5.18 2016/05/10
Copyright 2000-2016 Willy Tarreau <willy@haproxy.org>

[root@pm-prod-haproxy01 b738cbea49eaded323220988d9e184ee]#
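For reference: http-request set-uri only appeared in HAProxy 1.6 (hence the alert above, whose action list has no set-uri), and the regsub converter likewise postdates 1.5, so this line cannot work on 1.5.18. On 1.5 the usual tool is reqrep, which rewrites the request line (an untested sketch):

    backend backend-http
        option forwardfor
        # strip the leading /rd from the request line before forwarding
        reqrep ^([^\ :]*)\ /rd/(.*)     \1\ /\2
        server web-1 site.website1.com:80 check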

Posts: 4

Participants: 2


Redirecting warnings to /dev/null


@dyadaval wrote:

Hello,

    # Log to systemd's /dev/log compatibility socket
    #log /dev/null local0 warning alert
    #log /dev/null local1 warning alert
    #log /dev/null local1 notice alert
    #log 127.0.0.1 local0 warning
    log /dev/log local0 warning

    defaults
        timeout connect 10000ms
        timeout client 60000ms
        timeout server 60000ms
        log global

    frontend fe_http2
        mode tcp
        option dontlognull
        option dontlog-normal
        #bind 0.0.0.0:8443 ssl ca-file /f0/base/haproxy/ca.pem crt /f0/base/haproxy/server.pem alpn h2,http/1.1 ciphers ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384
        default_backend be_grpc

My frontend has its bind commented out at first; later, when the system reaches a certain condition, the bind line gets uncommented. So the warning only exists until conditions become favourable, which is a short time.
However, when I checked the source, I see that the message comes from Warning(), which prints directly.

Is there a way I can redirect these warnings to /dev/null?

I tried dontlognull and log /dev/null warning, but it didn't help.

Please suggest.
Thanks,
DY
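Since these messages come from Warning() printing directly to the console rather than through the logging subsystem, no log directive can suppress them. Two things that should work instead (assuming a shell wrapper is acceptable): HAProxy's quiet flag, or discarding the process output:

    haproxy -q -f /etc/haproxy/haproxy.cfg                # -q enables quiet mode
    haproxy -f /etc/haproxy/haproxy.cfg >/dev/null 2>&1   # or discard stdout/stderr entirely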

Posts: 1

Participants: 1


How to give the SSL key password in the HAProxy config


@prasa880 wrote:

Hi team, I have configured HAProxy, but after adding the SSL option the service does not start. When I check the configuration with haproxy -f /etc/haproxy/haproxy.cfg -c, it prompts for the password of the private key. Please, can someone tell me how to put the password of the SSL private key in the config itself?
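As far as I know, HAProxy has no directive for supplying a private-key passphrase, so the usual fix is to strip the passphrase from the key and rebuild the PEM that HAProxy loads (file names below are illustrative):

    # write an unencrypted copy of the key (prompts for the passphrase once)
    openssl rsa -in protected.key -out unprotected.key
    # haproxy expects certificate and key concatenated in one PEM
    cat site.crt unprotected.key > /etc/haproxy/ssl/site.pem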

Posts: 1

Participants: 1


Cannot bind socket when reloading HAProxy 1.8.9 on IBM System i


@gianluca_medici wrote:

Hi there, I'm having a hard time figuring out why I cannot reload HAProxy 1.8.9, compiled by myself on AIX 7.2 and running on an iSeries 7.3 (IBM i is supposed to be AIX-compliant through the PASE environment).

After I updated HAProxy from 1.6.2 (which gives me no problems running and reloading) to 1.8.9, I get this message:
Starting frontend_xxx: cannot bind socket [0.0.0.0:80]
I'm soft-reloading like this: -sf $(cat /tmp/haproxy.pid).
Setting hard-stop-after didn't help.

The strange thing is that the very same version of HAProxy on AIX 7.2 soft-reloads without a glitch.
Can somebody point me in the right direction to find a workaround or to understand this behavior?
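One avenue worth trying on platforms where rebinding during the reload is unreliable: HAProxy 1.8 can hand its listening sockets to the new process instead of rebinding, via expose-fd listeners on the stats socket and -x at reload time (a sketch, untested on IBM i; the socket path is illustrative):

    global
        stats socket /var/run/haproxy.sock mode 600 level admin expose-fd listeners

    # reload, inheriting the listening FDs from the old process:
    haproxy -f /etc/haproxy/haproxy.cfg -x /var/run/haproxy.sock -sf $(cat /tmp/haproxy.pid)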

Posts: 1

Participants: 1



ACL hdr(host) seems not to be working; how do I debug it?


@Golem wrote:

With the first config, access doesn't work.
With the second config, access works.

I need to identify the difference in the first case to implement the right ACL.
I tried to log the extracted Host header, and the log says it is the same in both cases.
Any ideas how to dig deeper?

HAProxy version 1.7.5-2.
My config:
    frontend https-in
        bind xxx.xxx.xxx.xxx:443 ssl crt /etc/haproxy/certs/xxx.xxxx.xx
        option logasap
        # log the name of the virtual server
        capture request header Host len 500

        log-format "<%CC/%CS><%HM|%HP|%HV|%r>%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%T$

        acl letsencrypt-acl path_beg /.well-known/acme-challenge/
        acl server1_acl hdr(host) -i server1.mydomain.net
        ....
        default_backend trash_backend

or

        default_backend server1_backend

    backend letsencrypt-backend
        server letsencrypt 127.0.0.1:54321

    backend server1_backend
        redirect scheme https if !{ ssl_fc }
        server server1 xxx.xxx.xxx.xxx:443 ssl verify none

    backend trash_backend
        http-request deny

Version with default_backend trash_backend:
    %hr %hs --> server1.mydomain.net:443|50|} {||||}
Version with default_backend server1_backend:
    %hr %hs --> server1.mydomain.net:443|50|} {nginx||||}
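One detail stands out in the captures above: the Host header arrives as server1.mydomain.net:443, with the port attached, so hdr(host) -i server1.mydomain.net cannot match it exactly. A hedged suggestion is to accept both forms, or to use a domain-style match:

    acl server1_acl hdr(host) -i server1.mydomain.net server1.mydomain.net:443
    # or (domain match, which I believe also tolerates the :443 suffix):
    acl server1_acl hdr_dom(host) -i server1.mydomain.net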

Posts: 1

Participants: 1


Config reload with dynamic service discovery via DNS


@scarey wrote:

Hi,

I'm noticing that the configuration reload isn't seamless when using dynamic service discovery via DNS. My simple test makes requests every second or so; I trigger a reload somewhere in the middle, and there are about 8 seconds during which services are unavailable. Is that the behavior you would expect? Is there some configuration I can use to minimize the downtime? Any help would be appreciated. If I run the same test with a more static configuration, the reload works very well.
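In case it helps to narrow the window, the delay often comes from the new process having to re-resolve everything before servers become usable; tightening the resolvers timers can shorten it (an illustrative sketch built from the names visible in the logs below; the SRV record name is an assumption):

    resolvers localdns
        nameserver dns1 127.0.0.1:53
        resolve_retries 3
        timeout resolve 1s
        timeout retry   1s
        hold valid      10s

    backend backend_secure_test-app6
        server-template test-app6_secure_secure 10 _test-app6._tcp.marathon.mesos resolvers localdns check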

Test output:

scarey@static:~/dev/data$ ./test.sh 
{"time":1529005368803}Thu Jun 14 19:42:48 UTC 2018
{"time":1529005370141}Thu Jun 14 19:42:50 UTC 2018
{"time":1529005371504}Thu Jun 14 19:42:51 UTC 2018
{"time":1529005372839}Thu Jun 14 19:42:52 UTC 2018
{"time":1529005374175}Thu Jun 14 19:42:54 UTC 2018
{"time":1529005375533}Thu Jun 14 19:42:55 UTC 2018
<html><body><h1>500 Internal Server Error</h1>
An internal server error occured.
</body></html>
Thu Jun 14 19:42:56 UTC 2018
<html><body><h1>500 Internal Server Error</h1>
An internal server error occured.
</body></html>
Thu Jun 14 19:42:58 UTC 2018
<html><body><h1>500 Internal Server Error</h1>
An internal server error occured.
</body></html>
Thu Jun 14 19:42:59 UTC 2018
{"time":1529005380854}Thu Jun 14 19:43:00 UTC 2018
<html><body><h1>500 Internal Server Error</h1>
An internal server error occured.
</body></html>
Thu Jun 14 19:43:02 UTC 2018
{"time":1529005383481}Thu Jun 14 19:43:03 UTC 2018
{"time":1529005384835}Thu Jun 14 19:43:04 UTC 2018
{"time":1529005386178}Thu Jun 14 19:43:06 UTC 2018
{"time":1529005387524}Thu Jun 14 19:43:07 UTC 2018
{"time":1529005388881}Thu Jun 14 19:43:08 UTC 2018
{"time":1529005390241}Thu Jun 14 19:43:10 UTC 2018

HAProxy logs:

Jun 14 19:42:57 192.168.64.146 haproxy[9525]:  Proxy haproxy-statistics started.
Jun 14 19:42:57 192.168.64.146 haproxy[9525]:  Proxy http-in started.
Jun 14 19:42:57 192.168.64.146 haproxy[9525]:  Proxy backend_secure_test-app6 started.
Jun 14 19:42:57 192.168.64.146 haproxy[9525]:  Proxy backend_secure_test-log4j2 started.
Jun 14 19:42:57 192.168.64.146 haproxy[11831]:  Stopping proxy haproxy-statistics in 0 ms.
Jun 14 19:42:57 192.168.64.146 haproxy[11831]:  Stopping frontend http-in in 0 ms.
Jun 14 19:42:57 192.168.64.146 haproxy[11831]:  Stopping backend backend_secure_test-app6 in 0 ms.
Jun 14 19:42:57 192.168.64.146 haproxy[11831]:  Stopping backend backend_secure_test-log4j2 in 0 ms.
Jun 14 19:42:57 192.168.64.146 haproxy[11831]:  Proxy haproxy-statistics stopped (FE: 0 conns, BE: 0 conns).
Jun 14 19:42:57 192.168.64.146 haproxy[11831]:  Proxy http-in stopped (FE: 65 conns, BE: 0 conns).
Jun 14 19:42:57 192.168.64.146 haproxy[11831]:  Proxy backend_secure_test-app6 stopped (FE: 0 conns, BE: 0 conns).
Jun 14 19:42:57 192.168.64.146 haproxy[11831]:  Proxy backend_secure_test-log4j2 stopped (FE: 0 conns, BE: 65 conns).
Jun 14 19:42:57 192.168.64.146 haproxy[19038]:  backend_secure_test-app6/test-app6_secure_secure1 changed its FQDN from (null) to sqainfra-docker04.marathon.mesos by 'SRV record'
Jun 14 19:42:57 192.168.64.146 haproxy[19038]:  backend_secure_test-log4j2/test-log4j2_secure_secure1 changed its FQDN from (null) to sqainfra-docker03.marathon.mesos by 'SRV record'
Jun 14 19:42:57 192.168.64.146 haproxy[19038]:  backend_secure_test-log4j2/test-log4j2_secure_secure2 changed its FQDN from (null) to sqainfra-docker04.marathon.mesos by 'SRV record'
Jun 14 19:42:57 192.168.64.146 haproxy[19038]:  backend_secure_test-log4j2/test-log4j2_secure_secure3 changed its FQDN from (null) to sqainfra-docker05.marathon.mesos by 'SRV record'
Jun 14 19:43:00 192.168.64.146 haproxy[19038]:  backend_secure_test-app6/test-app6_secure_secure1 changed its IP from  to 192.168.64.149 by DNS cache.
Jun 14 19:43:00 192.168.64.146 haproxy[19038]:  backend_secure_test-log4j2/test-log4j2_secure_secure2 changed its IP from  to 192.168.64.149 by DNS cache.
Jun 14 19:43:00 192.168.64.146 haproxy[19038]:  backend_secure_test-log4j2/test-log4j2_secure_secure3 changed its IP from  to 192.168.64.163 by DNS cache.
Jun 14 19:43:00 192.168.64.146 haproxy[19038]:  backend_secure_test-log4j2/test-log4j2_secure_secure1 changed its IP from  to 192.168.64.207 by localdns/dns1.
Jun 14 19:43:00 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-app6/test-app6_secure_secure1 succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 1ms, status: 2/2 UP.
Jun 14 19:43:00 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-app6/test-app6_secure_secure2 failed, reason: Socket error, check duration: 0ms, status: 0/3 DOWN.
Jun 14 19:43:00 192.168.64.146 haproxy[19038]:  Server backend_secure_test-app6/test-app6_secure_secure2 is DOWN. 9 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 14 19:43:00 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-app6/test-app6_secure_secure3 failed, reason: Socket error, check duration: 0ms, status: 0/3 DOWN.
Jun 14 19:43:00 192.168.64.146 haproxy[19038]:  Server backend_secure_test-app6/test-app6_secure_secure3 is DOWN. 8 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 14 19:43:01 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-app6/test-app6_secure_secure4 failed, reason: Socket error, check duration: 0ms, status: 0/3 DOWN.
Jun 14 19:43:01 192.168.64.146 haproxy[19038]:  Server backend_secure_test-app6/test-app6_secure_secure4 is DOWN. 7 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 14 19:43:01 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-app6/test-app6_secure_secure5 failed, reason: Socket error, check duration: 0ms, status: 0/3 DOWN.
Jun 14 19:43:01 192.168.64.146 haproxy[19038]:  Server backend_secure_test-app6/test-app6_secure_secure5 is DOWN. 6 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 14 19:43:01 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-app6/test-app6_secure_secure6 failed, reason: Socket error, check duration: 0ms, status: 0/3 DOWN.
Jun 14 19:43:01 192.168.64.146 haproxy[19038]:  Server backend_secure_test-app6/test-app6_secure_secure6 is DOWN. 5 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 14 19:43:01 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-app6/test-app6_secure_secure7 failed, reason: Socket error, check duration: 0ms, status: 0/3 DOWN.
Jun 14 19:43:01 192.168.64.146 haproxy[19038]:  Server backend_secure_test-app6/test-app6_secure_secure7 is DOWN. 4 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 14 19:43:01 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-app6/test-app6_secure_secure8 failed, reason: Socket error, check duration: 0ms, status: 0/3 DOWN.
Jun 14 19:43:01 192.168.64.146 haproxy[19038]:  Server backend_secure_test-app6/test-app6_secure_secure8 is DOWN. 3 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 14 19:43:01 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-app6/test-app6_secure_secure9 failed, reason: Socket error, check duration: 0ms, status: 0/3 DOWN.
Jun 14 19:43:01 192.168.64.146 haproxy[19038]:  Server backend_secure_test-app6/test-app6_secure_secure9 is DOWN. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 14 19:43:01 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-app6/test-app6_secure_secure10 failed, reason: Socket error, check duration: 0ms, status: 0/3 DOWN.
Jun 14 19:43:01 192.168.64.146 haproxy[19038]:  Server backend_secure_test-app6/test-app6_secure_secure10 is DOWN. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.

Jun 14 19:43:02 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-log4j2/test-log4j2_secure_secure1 succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 1ms, status: 2/2 UP.
Jun 14 19:43:02 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-log4j2/test-log4j2_secure_secure2 succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 0ms, status: 2/2 UP.
Jun 14 19:43:02 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-log4j2/test-log4j2_secure_secure3 succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 0ms, status: 2/2 UP.
Jun 14 19:43:02 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-log4j2/test-log4j2_secure_secure4 failed, reason: Socket error, check duration: 0ms, status: 0/3 DOWN.
Jun 14 19:43:02 192.168.64.146 haproxy[19038]:  Server backend_secure_test-log4j2/test-log4j2_secure_secure4 is DOWN. 9 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 14 19:43:02 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-log4j2/test-log4j2_secure_secure5 failed, reason: Socket error, check duration: 0ms, status: 0/3 DOWN.
Jun 14 19:43:02 192.168.64.146 haproxy[19038]:  Server backend_secure_test-log4j2/test-log4j2_secure_secure5 is DOWN. 8 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 14 19:43:03 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-log4j2/test-log4j2_secure_secure6 failed, reason: Socket error, check duration: 0ms, status: 0/3 DOWN.
Jun 14 19:43:03 192.168.64.146 haproxy[19038]:  Server backend_secure_test-log4j2/test-log4j2_secure_secure6 is DOWN. 7 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 14 19:43:03 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-log4j2/test-log4j2_secure_secure7 failed, reason: Socket error, check duration: 0ms, status: 0/3 DOWN.
Jun 14 19:43:03 192.168.64.146 haproxy[19038]:  Server backend_secure_test-log4j2/test-log4j2_secure_secure7 is DOWN. 6 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 14 19:43:03 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-log4j2/test-log4j2_secure_secure8 failed, reason: Socket error, check duration: 0ms, status: 0/3 DOWN.
Jun 14 19:43:03 192.168.64.146 haproxy[19038]:  Server backend_secure_test-log4j2/test-log4j2_secure_secure8 is DOWN. 5 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 14 19:43:03 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-log4j2/test-log4j2_secure_secure9 failed, reason: Socket error, check duration: 0ms, status: 0/3 DOWN.
Jun 14 19:43:03 192.168.64.146 haproxy[19038]:  Server backend_secure_test-log4j2/test-log4j2_secure_secure9 is DOWN. 4 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 14 19:43:03 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_test-log4j2/test-log4j2_secure_secure10 failed, reason: Socket error, check duration: 0ms, status: 0/3 DOWN.
Jun 14 19:43:03 192.168.64.146 haproxy[19038]:  Server backend_secure_test-log4j2/test-log4j2_secure_secure10 is DOWN. 3 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 14 19:43:03 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_scarey-log4j2/scarey-log4j2_secure_secure1 succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 1ms, status: 1/3 DOWN.
Jun 14 19:43:13 192.168.64.146 haproxy[19038]:  Health check for server backend_secure_scarey-log4j2/scarey-log4j2_secure_secure1 succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 1ms, status: 2/3 DOWN.

Thanks,
Steve

Posts: 1

Participants: 1


HTTP/2 breaks http-reuse always


@ced wrote:

I have a setup where the frontend is configured with keep-alive and the backend is configured with http-reuse always. This allows having a lot of client connections to HAProxy with only a few connections to the Apache backend.

It was working perfectly, with very few Apache processes, until I enabled HTTP/2!
I have the feeling that HAProxy no longer reuses HTTP connections the way it did without HTTP/2, and opens a lot more connections to Apache.

Here is the Apache connection graph: https://pasteboard.co/HqwG6nA.png

backend bk
    option http-keep-alive
    option persist
    http-reuse always
    server web1 x.x.x.x:8000

frontend front
    bind x.x.x.x:443 ssl crt all/ alpn h2,http/1.1
    option http-keep-alive
    default_backend bk

Posts: 3

Participants: 2


Dismal HAProxy performance


@webski wrote:

Retrieving assets through an extremely simple configuration takes 10 times longer than when wget is used to retrieve the same asset from the same machine. What can be tuned to alleviate this? Load on the machine is practically zero, and CPU usage is never worth noting.

Posts: 2

Participants: 2


Serving static content in /var/www/html


@br1 wrote:

Hi guys, this is my first post, so I want to say hello to everybody. Here is my problem: I have two web applications (Tomcat 7.0.63) running with Docker in two different containers (using docker-compose). Another container contains an instance of HAProxy.
This is the docker-compose.yml configuration:

    services:
      haproxy:
        build: haproxy
        container_name: haproxy
        volumes:
          - ./haproxy/conf/:/usr/local/etc/haproxy/
          - ./haproxy/ssl/:/etc/ssl/xip.io/
        ports:
          - "80:80"
          - "443:443"
      web1:
        build: web1
        container_name: web1
        ports:
          - "8080:8080"
        depends_on:
          - haproxy
      web2:
        build: web2
        container_name: web2
        ports:
          - "8081:8081"
        depends_on:
          - haproxy
    version: '2'

And this is my haproxy configuration file:

    global
        log 127.0.0.1 local2
        maxconn 4096
        #chroot /usr/share/haproxy
        tune.ssl.cachesize 0
        tune.ssl.lifetime 600
        tune.ssl.default-dh-param 2048
        daemon

    defaults
        mode http
        log global
        option httplog
        option dontlognull
        option http-server-close
        option forwardfor except 127.0.0.0/8
        option redispatch
        retries 3
        timeout http-request 10s
        timeout queue 1m
        timeout connect 10s
        timeout client 600s
        timeout server 600s
        timeout http-keep-alive 10s
        timeout check 10s
        maxconn 4096
        timeout connect 5000
        timeout client 50000
        timeout server 50000
        stats enable

    frontend http-in
        bind *:80 #ssl crt /etc/ssl/xip.io/xip.io.pem
        bind *:443 ssl crt /etc/ssl/xip.io/xip.io.pem
        mode http
        reqadd X-Forwarded-Proto:\ http
        option forwardfor

        # Add CORS headers when Origin header is present
        capture request header origin len 128
        http-response add-header Access-Control-Allow-Origin %[capture.req.hdr(0)] if { capture.req.hdr(0) -m found }
        rspadd Access-Control-Allow-Methods:\ GET,\ HEAD,\ OPTIONS,\ POST,\ PUT if { capture.req.hdr(0) -m found }
        rspadd Access-Control-Allow-Credentials:\ true if { capture.req.hdr(0) -m found }
        rspadd Access-Control-Allow-Headers:\ Origin,\ Accept,\ X-Requested-With,\ Content-Type,\ Access-Control-Request-Method,\ Access-Control-Request-Headers,\ Authorization if { capture.req.hdr(0) -m found }

        use_backend is_web1 if { path_beg /web1 }
        use_backend is_web2 if { path_beg /web2 }

    backend is_web1
        mode http
        balance roundrobin
        option forwardfor
        cookie JSESSIONID prefix nocache
        server web1 172.17.0.1:8080/web1 check cookie web1 inter 2s fastinter 500ms downinter 2s
        http-request set-header X-Forwarded-Port %[dst_port]
        http-request add-header X-Forwarded-Proto https if { ssl_fc }

    backend is_web2
        mode http
        balance roundrobin
        option forwardfor
        cookie JSESSIONID prefix nocache
        server web2 172.17.0.1:8081/web2 check cookie web2 inter 2s fastinter 500ms downinter 2s
        http-request set-header X-Forwarded-Port %[dst_port]
        http-request add-header X-Forwarded-Proto https if { ssl_fc }

This is how I redirect users to the correct web app, by checking whether the path is /web1 or /web2. Everything works fine.

Now I want to serve static content. These static pages live under /var/www/html, but I don't know how to modify my HAProxy configuration accordingly. Can anyone help me?
Thanks
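One thing to keep in mind: HAProxy does not serve files from disk itself, so /var/www/html would normally be exposed by a small web server (nginx, another Tomcat, etc.) that HAProxy routes to (a sketch; the container address, port and /static prefix are assumptions):

    # added to the existing http-in frontend:
    use_backend is_static if { path_beg /static }

    backend is_static
        mode http
        # a lightweight web server container with /var/www/html mounted
        server static1 172.17.0.1:8082 check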

Posts: 2

Participants: 2


How to proceed with maintenance


@moscardo wrote:

Hi,
Since I am going to production soon, I would first like to get an idea of how to proceed with upgrades, for example when rolling out a new RPM (I am running master-worker mode).
Would all connections drop when installing a new package?

Do we have any post explaining best practices for upgrades and maintenance tasks?

Cheers.
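For what it's worth, in master-worker mode a reload is performed by sending SIGUSR2 to the master, and with expose-fd listeners on the stats socket the new workers should inherit the listening sockets, so existing connections drain instead of dropping (a sketch under those assumptions; paths are illustrative):

    global
        master-worker
        stats socket /var/run/haproxy.sock mode 600 level admin expose-fd listeners

    # after installing the new RPM, ask the master to re-execute itself:
    kill -USR2 $(cat /var/run/haproxy.pid)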

Posts: 1

Participants: 1


Debugging backend URLs


@jbrahy wrote:

I've started HAProxy with -d and I'm watching the output to figure out why the frontend is getting a 404, but I'm not seeing the actual URL requested from the backend. On the frontend side I see this:

00000000:frontend-http.clireq[000a:ffffffff]: GET /rd/eac29cced489c9000 HTTP/1.1

but on the backend side it shows

00000000:backend-hitpath.srvrep[000a:adfd]: HTTP/1.1 404 Not Found

without the URL.

In the backend I have this code:

http-request set-uri %[url,regsub(^/rd,/,)]

which works fine on another server, but it's not working as expected on this one, and I want to verify the URL that is being called behind the scenes.
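Two hedged observations: regsub(^/rd,/,) replaces the /rd prefix with /, turning /rd/eac29cced489c9000 into //eac29cced489c9000 with a double slash, so regsub(^/rd/,/,) may be what was intended; and the rewritten URI can be made visible by copying it into a header after the rewrite, where the backend server's access log will show it:

    backend backend-hitpath
        http-request set-uri %[url,regsub(^/rd/,/,)]
        # expose the rewritten URI so it can be inspected server-side
        http-request set-header X-Rewritten-URI %[url]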

Posts: 1

Participants: 1



Some listeners failed to start generic error


@ingshtrom wrote:

I am working on upgrading our HAProxy deployment from 1.6 to 1.8 while also moving it into containers running in Kubernetes (it used to run directly on the host). I've gotten it all working as far as I can tell, but I keep getting this error when reloading/starting HAProxy with an updated config:

[ALERT] 172/125245 (1) : [haproxy.main()] Some protocols failed to start their listeners! Exiting.
[ALERT] 172/125245 (1) : [haproxy.main()] .

I've narrowed it down to my peers and stick-table config. When I disable the stick-table, everything runs as expected, but once it's enabled, that listener fails to start (or at least I assume it's that listener, since it's the only change). My thinking is that the problem is in my peers config, since things fail when I make that change. I'm not sure how to debug this further, though, as I cannot find any logs that explain the issue, either on stdout/stderr or in syslog.

Here is a config that is successful:

global
  log 127.0.0.1:9515 local0 debug
  log 127.0.0.1:9514 local1 debug
  log-send-hostname
  tune.bufsize  65536
  maxconn 8000
  stats socket /var/run/haproxy.sock mode 0600 level admin expose-fd listeners
  stats timeout 2m

defaults
  log global
  option dontlognull
  mode http
  maxconn 8000

  option redispatch
  retries 3
  timeout connect 5000
  timeout client 400000
  timeout server 400000
  timeout tarpit 2s

  errorfile 400 /etc/haproxy/static/errors/400.http
  errorfile 403 /etc/haproxy/static/errors/403.http
  errorfile 408 /etc/haproxy/static/errors/408.http
  errorfile 500 /etc/haproxy/static/errors/500.http
  errorfile 502 /etc/haproxy/static/errors/502.http
  errorfile 503 /etc/haproxy/static/errors/503.http
  errorfile 504 /etc/haproxy/static/errors/504.http

listen stats
  # bind :8000
  bind :35000
  stats uri /
  stats enable
  
frontend http
  description poop1
  bind :35001

  stick-table type ip size 500k expire 5m store http_req_rate(30s)
  tcp-request inspect-delay 2s
  tcp-request content track-sc1 src

  redirect scheme https code 301

Here is a config that fails to start some listeners:

global
  log 127.0.0.1:9515 local0 debug
  log 127.0.0.1:9514 local1 debug
  log-send-hostname
  tune.bufsize  65536
  maxconn 8000
  stats socket /var/run/haproxy.sock mode 0600 level admin expose-fd listeners
  stats timeout 2m

defaults
  log global
  option dontlognull
  mode http
  maxconn 8000

  # if a request to a backend fails, redispatch it to another backend.
  # the number of times to attempt this is controlled by the retries property below
  option redispatch
  # if sending a request to one server fails, try to send it
  # to another at most n times before aborting the request
  retries 3
  timeout connect 5000
  timeout client 400000
  timeout server 400000
  timeout tarpit 2s

  errorfile 400 /etc/haproxy/static/errors/400.http
  errorfile 403 /etc/haproxy/static/errors/403.http
  errorfile 408 /etc/haproxy/static/errors/408.http
  errorfile 500 /etc/haproxy/static/errors/500.http
  errorfile 502 /etc/haproxy/static/errors/502.http
  errorfile 503 /etc/haproxy/static/errors/503.http
  errorfile 504 /etc/haproxy/static/errors/504.http

listen stats
  # bind :8000
  bind :35000
  stats uri /
  stats enable

peers haproxy_peers
  peer <node_name> <node_ip>:34999 # this ip is actually the node this is running on, so it's a cluster of 1

frontend http
  description poop2
  bind :35001
  
  stick-table type ip size 500k expire 5m store http_req_rate(30s) peers haproxy_peers
  tcp-request inspect-delay 2s
  tcp-request content track-sc1 src

  redirect scheme https code 301

How else can I debug this? Any other thoughts? I'm not very fluent in C, so when I look at the source code I don't see anything that looks problematic: it's only trying to start the listeners, and then it fails. ¯\_(ツ)_/¯
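Two things worth checking with peers sections (hedged suggestions): the peers listener actually binds <node_ip>:34999, so that address must be bindable from inside the pod's network namespace, which a node IP often is not; and the local peer's name must match the machine's hostname unless it is given explicitly with -L, which rarely holds inside a container:

    peers haproxy_peers
        peer <node_name> <node_ip>:34999

    # tell haproxy explicitly which peer entry is the local one:
    haproxy -f /etc/haproxy/haproxy.cfg -L <node_name>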

Posts: 1

Participants: 1


How to concatenate parts to a mapping key


@jurgenhaas wrote:

I'm using the map feature to redirect from one path to another, looking up a list in an external file. This is how that works:

http-request redirect code 301 location %[capture.req.uri,map(/etc/haproxy/my.map)] if { capture.req.uri,map(/etc/haproxy/my.map) -m found }

Now I need to enhance this so that the key, which is capture.req.uri in the above example, becomes the domain AND the path. So I wonder how I could concatenate those two parts (hdr(host) and capture.req.uri) into one string to serve as the key for the lookups.
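If I remember correctly, the base sample fetch is exactly this concatenation: the first Host header plus the path. So the lookup could become the following, with map keys of the form example.com/old-path (note that base drops the query string, unlike capture.req.uri, so treat this as a sketch to adapt):

    http-request redirect code 301 location %[base,map(/etc/haproxy/my.map)] if { base,map(/etc/haproxy/my.map) -m found }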

Posts: 1

Participants: 1


CRL reload and long-lived TCP connections


@jfillatre wrote:

Hi,

I have to use HAProxy to enforce TLS encryption of an application flow.

I use the following features:

  • multiple server certificates
  • client certificate verification
  • CRL checking

The application flow implies long-lived TCP connections, which means we have to avoid restarting or reloading the configuration. Indeed, since a reload keeps current connections attached to the original process, it results in a multiplication of HAProxy instances.

This is especially true for the CRL refresh that I have to do periodically. Is there work in progress to add hot reloading of the CRL file?
For example, a UNIX socket command (like the one used to set a new maxconn) could allow new connections to use the new data...

Another approach would be to handle long-lived connections during the reload, but I haven't found relevant resources; only high availability seems to be covered.

Does anyone have similar needs, or perhaps solutions?

Regards

Posts: 1

Participants: 1


Using HAProxy to forward RTMP?


@rayj00 wrote:

I want to try HAProxy to forward incoming RTMP to LXD containers. Almost all tutorials and examples are about load balancing, and I am not trying to do load balancing.

So say I have an Ubuntu server with HAProxy installed.

I also have a couple of containers on the same server. I want to be able to forward the incoming RTMP to a specific container. The incoming RTMP would come from OBS, for instance, using this format: rtmp://“server IP”:1935/Container1, or Container2, etc.

On the server end, HAProxy would route the incoming RTMP to Container1. This is not exactly load balancing; consider each container to be owned by a specific user for the purpose of live streaming.

Any recommendations on how to configure HAProxy to handle this?

Thanks,

Ray
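One caveat before a config sketch: HAProxy has no RTMP parser, so in TCP mode it cannot read the application name (/Container1) out of the RTMP handshake. The simplest reliable scheme is one listening port per container (untested; container addresses are assumptions):

    listen rtmp_container1
        mode tcp
        bind :1935
        server container1 10.0.3.11:1935 check

    listen rtmp_container2
        mode tcp
        bind :1936
        server container2 10.0.3.12:1935 check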

Posts: 2

Participants: 1


Stats socket reporting empty stats


@asgoel13 wrote:

I have a multi-file haproxy configuration that looks something like this:

Global config file:

global
        log /dev/log local0
        log /dev/log local1 notice
        chroot /var/lib/haproxy
        # used for newer reload mechanism
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon
        maxconn 2000

defaults
        log global
        mode http
        option httplog
        option dontlognull
        timeout connect 5000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

Example redis listener file:

listen redis
       bind localhost:6379
       mode tcp
       option tcplog
       server redis001 <internal_redis_host>:6379 check

I am running HAProxy using -f with the path to the folder containing the configuration files.

For some reason, when I query my stats, HAProxy reports 0 bytes in/out for my redis backend, even though I am confident that data is flowing through this listener. Other listeners (for example, my PostgreSQL listener) report fine. Any ideas?
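For reference, the counters can be inspected directly on the socket defined above (assuming socat is available; and, if I recall the CSV layout correctly, bin/bout are the 9th and 10th fields):

    echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d, -f1,2,9,10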

Posts: 1

Participants: 1

