Channel: HAProxy community - Latest topics

Custom http-request header not getting logged


@abhi wrote:

I have a custom header set and it is not getting logged (i.e. I am unable to see it in the HAProxy log).

Here is my configuration

frontend haproxynode
bind *:80
mode http
log global
timeout client 60000
maxconn 50000

acl host_abhi hdr_beg(host) -i -f /Users/abpatti/haproxy/test/testing.lst

http-request deny if !host_abhi

http-request add-header validtest %H if host_abhi
http-request add-header invalidtest %H if !host_abhi 
capture request header validtest len 64
capture request header invalidtest len 64
http-request set-header X-Custom-Header %[url]
capture request header X-Custom-Header len 64
log-format %{+Q}o\ %{-Q}ci\ -\ -\ [%T]\ %r\ %ST\ %B\ %cp\ %ms\ %ft\ %b\ %s\ \%TR\ %Tw\ %Tc\ %Tr\ %Tt\ %tsc\ %ac\ %fc\ %bc\ %sc\ %rc\ %sq\ %bq\ %CC\ %CS\ %hrl\ %hsl\ %sslv\
use_backend backendnodel

Here is what I see in the log

Sep 18 09:44:57 haproxy[95312]: 127.0.0.1 - - [18/Sep/201]: 16:44:57 +0000] “GET / HTTP/1.1” 200 155 61669 163 “haproxynode” “backendnodel” “127.0.0.1” \0 0 1 1 2 ---- 1 1 0 0 0 0 0 “” “” “” “” “” “”
Sep 18 09:44:57 haproxy[95312]: 127.0.0.1 - - [18/Sep/201]: 16:44:57 +0000] “GET /favicon.ico HTTP/1.1” 200 155 61669 166 “haproxynode” “backendnodel” “127.0.0.1” \0 0 0 0 127 ---- 1 1 0 0 0 0 0 “” “” “” “” “” “”
Sep 18 09:44:59 haproxy[95312]: 127.0.0.1 - - [18/Sep/201]: 16:44:59 +0000] “GET / HTTP/1.1” 200 155 61672 843 “haproxynode” “backendnodel” “127.0.0.1” \0 0 0 0 0 ---- 2 2 0 0 0 0 0 “” “” “” “” “” “”
Sep 18 09:44:59 haproxy[95312]: 127.0.0.1 - - [18/Sep/201]: 16:44:59 +0000] “GET /favicon.ico HTTP/1.1” 200 155 61672 843 “haproxynode” “backendnodel” “127.0.0.1” \0 0 0 1 123 ---- 2 2 0 0 0 0 0 “” “” “” “” “” “”
Waiting for data… (interrupt to abort)

The reason I say the ACL is working fine is that, on the backend server, I am printing the custom headers and I am unable to see the values.
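For what it's worth, one pattern that reliably gets request-rule values into the log is to store them in a transaction-scoped variable and expand it from log-format, instead of capturing a header added by the same rule set. A rough sketch, untested, with illustrative names:

```
frontend haproxynode
    bind *:80
    mode http
    acl host_abhi hdr_beg(host) -i -f /Users/abpatti/haproxy/test/testing.lst
    http-request deny if !host_abhi
    # store a marker in a transaction-scoped variable instead of a header
    http-request set-var(txn.validtest) str(valid) if host_abhi
    # ...and expand it in the log line with %[var(...)]
    log-format "%ci - - [%T] %r %ST %B valid=%[var(txn.validtest)]"
```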

Posts: 1

Participants: 1



Termination state missing for haproxy log


@ykiran_k wrote:

Hi,

I am seeing an HAProxy log entry where the status code is returned as -1, but to my knowledge, whenever HAProxy returns -1 there should be a termination state that explains why it returned that unusual status code:

   Sep 18 15:55:13 haproxy-v044-n8k1 haproxy[13422]: 1.2.3.4:63311 [18/Sep/2019:15:55:13.588] http-in~ primary-backend/app4 0/0/-1/-1/43 -1 1442 - - ---- 1717/1716/134/5/0 0/0 {} {} "GET /myapp/path HTTP/1.1"

Posts: 1

Participants: 1


Confused by stats page


@bcookatpcsd wrote:

Looking for some criticism/critique on my setup as it’s me and the walls here… Also trying to make sense of what I’m seeing on the stats page… Hopefully I’m not shooting myself in the foot somehow…

Stats page: https://imgur.com/a/u03CbsT

egrep -v "^$|^#" /etc/haproxy/local-haproxy.cfg
global
maxconn 32768
log 10.20.0.16:514 local0 info
user haproxy
group haproxy
nbthread 2
stats socket /run/haproxy.sock # mode 660 level admin
pidfile /run/haproxy.pid
profiling.tasks auto
tune.maxaccept -1
server-state-file /var/cache/haproxy/state
defaults
maxconn 32768
backlog 16384
mode http
log global
http-reuse safe
option httplog
option dontlognull
option dontlog-normal
option tcp-smart-accept
option tcp-smart-connect
timeout connect 1024s
timeout client 10240s
timeout server 10240s
option http-keep-alive
option redispatch
option clitcpka
option http-no-delay
option logasap
balance static-rr
listen stats
stats enable
bind :9999 tfo
stats show-node
stats uri /stats
stats refresh 30s
frontend proxy
backlog 32768
bind 0.0.0.0:3128 tfo # tfo alpn h2,http/1.1
use_backend proxy_pool
backend proxy_pool
retry-on all-retryable-errors
server proxypy 10.20.0.33:8899 tfo check # send-proxy
server privoxy 10.20.0.33:8118 tfo check # send-proxy
server squid 10.20.0.33:3128 tfo check # send-proxy

running haproxy out of runit with a finish script to dump the stats…

the run file fwiw:

exec
chpst -o 65536
chpst -e env
/usr/bin/haproxy -W -f /etc/haproxy/local-haproxy.cfg

I’m initially confused by the stats for privoxy vs. squid: if the backend is using static-rr, the Session rate Max values look balanced, but Sessions Cur and Sessions Max look very different from each other. What does that mean in my case, where I’m using HAProxy in front of two forward proxies?

The HAProxy machine is Arch Linux with the 4.19.72-1-lts kernel, as is the 10.20.0.33 machine. This machine is virtual; .33 is physical. I also have a tweaked ‘rc.local’ (Arch people seem to hate the ‘old ways’…)

I can share if it might be anything applicable…

Also on the frontend portion of the stats… does Max 60 mean 60 unique users/ips?

Open to questions or criticism…

Thanks in advance.

Posts: 1

Participants: 1


How to disable connection closing on error responses generated by HAProxy itself


@ciprian.craciun wrote:

If HAProxy generates an error response (either by itself or by an HTTP rule), like for example in case of 502 / 503, it also appends the Connection: close header, and terminates the connection (in HTTP/1.1 mode).

However, when HAProxy itself is behind another proxy (for example a CDN such as CloudFlare), closing the connection is counter-productive, as the error is not the CDN’s fault but the client’s.

Thus my question is how to disable (if possible) this connection closing behaviour?

(I’m using HAProxy 1.8.21, and by looking at the documentation I didn’t find any solution to this. Using option http-keep-alive in the frontend doesn’t do anything in this situation.)
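For reference, errorfile contents are sent verbatim, so a custom file controls which headers the client sees; whether HAProxy 1.8 still closes the underlying connection afterwards is a separate question I am not certain can be disabled. A sketch with example paths:

```
# Sketch: a custom 502/503 errorfile can omit the "Connection: close" header,
# since the file is emitted as-is (paths are illustrative; HAProxy may still
# close the transport connection after an internally generated error).
defaults
    errorfile 502 /etc/haproxy/errors/502-keepalive.http
    errorfile 503 /etc/haproxy/errors/503-keepalive.http
```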

Posts: 1

Participants: 1


Is this a bug? "Unable to convert message in HTX ..."


@ingshtrom wrote:

I think it is a bug, but a pretty minor one. I wanted to check here before I create a ticket unnecessarily. To start, I created a reference repository: https://github.com/ingshtrom/haproxy-htx-conversion-issue

You can clone it and run ./test.sh. Essentially what it does is have a single backend which uses the errorfile directive to serve a file. There are two configs and two error files to load. The only difference in the configs is the file path. The difference in the error files is that the bad one includes an extra space prior to the newline on line 6. I will include a bad error file for convenience below:

HTTP/1.1 429 Too Many Requests
Cache-Control: no-cache
Connection: close
Content-Type: text/plain
Retry-After: 60
 
Too Many Requests (HAP429).

Bad Config:

backend 429_tarpit
  errorfile 500 /etc/haproxy/429-bad.http

When I run the test script in the repository, it shows the difference in behavior between the two error files:

You need docker to continue
Running the failure test...
[ALERT] 262/011446 (1) : config: backend '429_tarpit': Unable to convert message in HTX for HTTP return code 500.
[ALERT] 262/011446 (1) : Fatal errors found in configuration.
Failed test finished!
Running the success test...
Configuration file is valid
Good test finished!

I’m really just using the 2.0.5 docker image, but here is the haproxy -vv output if you would like:

docker run --rm --name haproxy -v $(pwd)/haproxy:/usr/local/etc/haproxy:ro -v $(pwd)/haproxy-errors:/etc/haproxy/static/errors:ro haproxy:2.0.5 haproxy -vv
HA-Proxy version 2.0.5 2019/08/16 - https://haproxy.org/
Build options :
  TARGET  = linux-glibc
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-format-truncation -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wno-implicit-fallthrough -Wno-stringop-overflow -Wno-cast-function-type -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_GETADDRINFO=1 USE_OPENSSL=1 USE_LUA=1 USE_ZLIB=1

Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER -PCRE -PCRE_JIT +PCRE2 +PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED -REGPARM -STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H -VSYSCALL +GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -MY_ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL -SYSTEMD -OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=4).
Built with OpenSSL version : OpenSSL 1.1.1c  28 May 2019
Running on OpenSSL version : OpenSSL 1.1.1c  28 May 2019
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with network namespace support.
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE2 version : 10.32 2018-09-10
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with the Prometheus exporter as a service

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE     mux=H2
              h2 : mode=HTTP       side=FE        mux=H2
       <default> : mode=HTX        side=FE|BE     mux=H1
       <default> : mode=TCP|HTTP   side=FE|BE     mux=PASS

Available services :
        prometheus-exporter

Available filters :
        [SPOE] spoe
        [COMP] compression
        [CACHE] cache
        [TRACE] trace

What do you think?
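For what it's worth, a quick shell check can flag the trailing whitespace that trips the HTX conversion (the file name below is illustrative, not from the repository):

```shell
# Create a sample errorfile with a trailing space on the header/body
# separator line, mimicking the failing case (file name is illustrative).
printf 'HTTP/1.1 429 Too Many Requests\nContent-Type: text/plain\n \n\nToo Many Requests\n' > 429-bad.http

# grep exits 0 and prints the offending line number when any line ends in blanks.
if grep -nE '[[:blank:]]+$' 429-bad.http; then
    echo "trailing whitespace found: HAProxy's HTX parser will reject this errorfile"
fi
```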

Posts: 1

Participants: 1


Tons of "ssl_termination/1: SSL handshake failure"


@alex1452 wrote:

I am using HAProxy 1.8.20 with a 2048-bit certificate from Let’s Encrypt. SSL Labs has confirmed that the certificate is OK (full certificate chain). However, I still get tons of “SSL handshake failure” messages in my log. I have tried the suggestions from other threads in this forum, but they did not solve my issue.

Any ideas what I can do to resolve the issue? Below are the error messages (small clip from system log, haproxy -vv and my haproxy.conf). Any help would really be appreciated!

Fri Sep 20 06:12:43 2019 local0.info haproxy[13893]: 2401:7400:c802:925e:190:75bb:1f07:55f2:58170 [20/Sep/2019:06:12:43.877] ssl_termination/1: Connection closed during SSL handshake
Fri Sep 20 06:12:43 2019 local0.info haproxy[13893]: ::ffff:192.168.1.233:50590 [20/Sep/2019:06:12:43.889] ssl_termination/1: SSL handshake failure
Fri Sep 20 06:12:43 2019 local0.info haproxy[13893]: ::ffff:192.168.1.233:50592 [20/Sep/2019:06:12:43.955] ssl_termination/1: SSL handshake failure
Fri Sep 20 06:12:44 2019 local0.info haproxy[13893]: ::ffff:192.168.1.233:50594 [20/Sep/2019:06:12:44.014] ssl_termination/1: SSL handshake failure
Fri Sep 20 06:12:44 2019 local0.info haproxy[13893]: ::ffff:192.168.1.233:50596 [20/Sep/2019:06:12:44.044] ssl_termination/1: SSL handshake failure
Fri Sep 20 06:12:44 2019 local0.info haproxy[13893]: ::ffff:192.168.1.233:50598 [20/Sep/2019:06:12:44.150] ssl_termination/1: SSL handshake failure
Fri Sep 20 06:12:44 2019 local0.info haproxy[13893]: ::ffff:192.168.1.233:50600 [20/Sep/2019:06:12:44.175] ssl_termination/1: SSL handshake failure



haproxy -vv
HA-Proxy version 1.8.20-1 2019/06/27
Copyright 2000-2019 Willy Tarreau <willy@haproxy.org>

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = x86_64-openwrt-linux-musl-gcc
  CFLAGS  = -Os -pipe -fno-caller-saves -fno-plt -fhonour-copts -Wno-error=unused-but-set-variable -Wno-error=unused-result -iremap/mnt/data/share/data/software/openwrt/build_dir/target-x86_64_musl/haproxy-ssl/haproxy-1.8.20:haproxy-1.8.20 -Wformat -Werror=format-security -fpic -fstack-protector -D_FORTIFY_SOURCE=1 -Wl,-z,now -Wl,-z,relro -DBUFSIZE=16384 -DMAXREWRITE=1030 -DSYSTEM_MAXCONN=165530
  OPTIONS = USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1 USE_GETADDRINFO=1 USE_ZLIB=yes USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_TFO=1

Default settings :
  maxconn = 165530, bufsize = 16384, maxrewrite = 1030, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2s  28 May 2019
Running on OpenSSL version : OpenSSL 1.0.2s  28 May 2019
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.5
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.41 2017-07-05
Running on PCRE version : 8.41 2017-07-05
PCRE library supports JIT : no (libpcre build without JIT?)
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
        [SPOE] spoe
        [COMP] compression
        [TRACE] trace

haproxy.conf
global
	# Log events to a remote syslog server at given address using the
	# specified facility and verbosity level. Multiple log options 
	# are allowed.
	#log 10.0.0.1 daemon info
	log /dev/log local0 debug

	# Specifiy the maximum number of allowed connections.
	maxconn 20480

	# Raise the ulimit for the maximum allowed number of open socket
	# descriptors per process. This is usually at least twice the
	# number of allowed connections (maxconn * 2 + nb_servers + 1) .
	ulimit-n 65535

	# Drop privileges (setuid, setgid), default is "root" on OpenWrt.
	uid 0
	gid 0

	# Perform chroot into the specified directory.
	#chroot /var/run/haproxy/

	# Daemonize on startup
	daemon

	nosplice
	# Enable debugging
	#debug

	# Spawn given number of processes and distribute load among them,
	# used for multi-core environments or to circumvent per-process
	# limits like number of open file descriptors. Default is 1.
	nbproc 2
	nbproc 1
	nbthread 4
	cpu-map auto:1/1-4 0-3

	ssl-default-bind-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
	ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
	tune.ssl.default-dh-param 2048

    # Default parameters
    defaults
    	# Default timeouts
    	timeout connect 3m
    	timeout client 60m # timeout in ms
    	timeout server 60m # timeout in s?
    	timeout http-keep-alive 60m # timeout in s

    	log global
    	mode http
    	option httplog
    	maxconn 3000

    frontend ssl_termination
    	bind :::443 v4v6 ssl crt /mnt/container/datadir/certificate/haproxy/ alpn h2,http/1.1
    	mode http
    	option http-server-close

    	option forwardfor
    	http-request set-header X-Forwarded-Proto https if { ssl_fc }
    	
    	# Serve an internal statistics page on /stats:
    	#stats enable
    	#stats uri /stats

    	# Enable HTTP basic auth for the statistics:
    	#stats realm HA_Stats
    	#stats auth username:password

    	default_backend bk_kopano

    	acl host_app1 hdr(host) -i one.xxx.xxx 
    	acl host_app2 hdr(host) -i two.xxx.xxx

    	use_backend bk_app1 if host_app1
    	use_backend bk_app2 if host_app2

    	default_backend bk_app2

    backend bk_app1
    	#redirect scheme https if !{ ssl_fc }
    	http-response del-header X-Varnish
    	http-response del-header X-Varnish-Cache
    	http-response del-header X-Varnish-Server
    	http-response del-header X-Cache

    	server app1 192.168.1.200:6081 check

    backend bk_app2
    	#redirect scheme https if !{ ssl_fc }
    	server app2 192.168.1.220:80 check

Posts: 1

Participants: 1


HAProxy 2.0.5 often fails to quickly update SRV records


@eedwards-sk wrote:

I’m attempting to use HAProxy Resolvers along with SRV Records and server-template to allow services on dynamic ports to register with HAProxy.

I’m using AWS Service Discovery (with Route53, TTL: 10s) and ECS.

It works successfully, given enough time, and any services in the DNS record eventually become available backends.


If I have 2 containers running for a service, with 4 defined using server-template, then the first 2 will be “green” and the second two will be “red”.

During an HA deployment, where the 2 containers are replaced 1 by 1, HAProxy fails to register the updated records in time to prevent an outage.

So e.g. during a deployment, you might have an SRV record with 2 results:

_foo_.my.service:

  - A._foo.my.service
  - B._foo.my.service

as the first container (A) is stopped, the SRV record only returns 1 result:

_foo_.my.service:

  - B._foo.my.service

at this point, I would expect HAProxy to remove the server from the server list, and it would appear “red” similar to other servers that were missing when the service started

However, instead, the server ends up marked as “MAINT” (orange), due to “resolution”, and will sit “stuck” for up to 5+ minutes sometimes, failing to acquire the new IP information.

Meanwhile, the SRV record is updated again as the services are replaced/updated:

_foo_.my.service:

  - B._foo.my.service
  - C._foo.my.service

then again as B is removed:

_foo_.my.service:

  - C._foo.my.service

and finally D is added:

_foo_.my.service:

  - C._foo.my.service
  - D._foo.my.service

This whole time, performing a dig SRV _foo_.my.service @{DNS_IP} on the haproxy host IMMEDIATELY resolves the correct service IPs and Ports as each of the above deployment steps happens. So the issue isn’t with upstream DNS being up-to-date.

This makes the SRV system basically useless to me currently, as even with a rolling deployment with HA services, I end up with an outage.

I have 2 HAProxy servers and the behavior is not identical between them, either (even though they’re identically configured).

Whether one of the server entries stays in “MAINT” for long seems to vary between them.

Eventually, it ends up resolving – but having to wait 5+ minutes and having the services go completely unavailable (even though they’re up, dns is updated, and they’re ready to receive traffic) is not adequate for production usage.


here’s a sanitized and trimmed config excerpt:

global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets

        spread-checks     5

defaults
        log           global
        mode          http
        option        httplog
        option        dontlognull
        timeout       connect 5000
        timeout       client  50000
        timeout       server  50000
        errorfile     400 /etc/haproxy/errors/400.http
        errorfile     403 /etc/haproxy/errors/403.http
        errorfile     408 /etc/haproxy/errors/408.http
        errorfile     500 /etc/haproxy/errors/500.http
        errorfile     502 /etc/haproxy/errors/502.http
        errorfile     503 /etc/haproxy/errors/503.http
        errorfile     504 /etc/haproxy/errors/504.http

        option        httpclose
        monitor-uri   /elb-check

        maxconn       60000
        rate-limit    sessions 100
        backlog       60000

resolvers aws-sd
        accepted_payload_size   8192
        hold valid              5s # keep valid answer for up to 5s
        nameserver aws-sd1      169.254.169.253:53

listen stats
        bind              0.0.0.0:9000
        mode              http
        balance
        stats             enable
        stats             uri /stats
        stats             realm HAProxy\ Statistics

frontend HTTP_IN
        bind              0.0.0.0:80
        capture           request header User-Agent len 200
        capture           request header Host len 54
        capture           request header Origin len 54
        capture           request header X-Forwarded-For len 35
        capture           request header X-Forwarded-Proto len 5
        capture           response header status len 3
        option            http-server-close
        option            forwardfor except #sanitized#
        option            forwardfor except #sanitized#

        # environments
        acl               dev        hdr_beg(host)  #sanitized#. #sanitized#.

        # web-services routes
        acl               locations         path_beg /locations

        # dev backend
        use_backend       DEV_HOME if dev !locations
        use_backend       DEV_LOCATIONS if dev locations

backend DEV_HOME
        balance roundrobin
        option httpchk GET /healthcheck
        http-check expect status 200
        default-server inter 10s downinter 2s fastinter 2s rise 5 fall 2
        server-template web 4 _http._tcp.web-service-home-dev-web.my.service resolvers aws-sd check init-addr none resolve-opts allow-dup-ip resolve-prefer ipv4

backend DEV_LOCATIONS
        balance roundrobin
        option httpchk GET /locations/healthcheck
        http-check expect status 200
        default-server inter 10s downinter 2s fastinter 2s rise 5 fall 2
        server-template web 4 _http._tcp.web-service-locations-dev-web.my.service resolvers aws-sd check init-addr none resolve-opts allow-dup-ip resolve-prefer ipv4
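For comparison, the resolvers section can be tuned to re-resolve more aggressively and to control how long a disappeared record is held; a sketch (directive names from the HAProxy documentation, values illustrative):

```
resolvers aws-sd
        accepted_payload_size   8192
        nameserver aws-sd1      169.254.169.253:53
        resolve_retries         3
        timeout resolve         2s    # how often a new resolution is triggered
        timeout retry           1s
        hold valid              5s
        hold obsolete           10s   # how long a vanished record keeps its address
```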

Posts: 3

Participants: 1


Troubleshooting cL termination cause


@Tim wrote:

Hi Lukas,

I have a similar problem: high Ta and low Tr with small byte counts, on a normal HTTP connection (not tunnel mode).

I wonder whether this means we have a network issue between the backend and HAProxy, or between HAProxy and the clients?

From what I understand, Tr does not include the time spent transferring data from the backend to the client.

Thanks in advance

Posts: 6

Participants: 2



Disable HTTP/2 in v2.0


@ptrd wrote:

Hi, we were using v1.6 for a long time, but we had to enable X-Request-Start headers, which is fully working in v1.9.

But is there any way to disable HTTP/2 in the latest version? It breaks our headers (they get lowercased).
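As a point of reference, HTTP/2 is only negotiated when the bind line advertises it via ALPN, so restricting ALPN to http/1.1 (or dropping the alpn keyword entirely) keeps clients on HTTP/1.1. A sketch with an example certificate path (note that lowercased header names are mandated by the HTTP/2 specification itself):

```
frontend fe_main
    # advertise only HTTP/1.1, so h2 is never selected
    bind :443 ssl crt /etc/haproxy/site.pem alpn http/1.1
```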

Posts: 1

Participants: 1


Set a cookie from a file string match


@chrisotherwise wrote:

Hello,

The following rules in my haproxy config search for a url fragment against lines in a text file and use the result to choose a back-end.

acl aws_path_us urlp_sub -i -f /etc/haproxy/aws_clients_us
acl aws_path_us path_sub -i -f /etc/haproxy/aws_clients_us
acl aws_referer_us req.hdr(Referer) -i -m sub -f /etc/haproxy/aws_clients_us
acl aws_cookie_us req.hdr(Cookie) -i -m sub -f /etc/haproxy/aws_clients_us

use_backend wms_cloud_us if aws_path_us or aws_referer_us or aws_cookie_us or aws_param_us

I would like to set a cookie (or a header) with the results of the match. Example:

url is: https://www.foo.com/abc123/default.html

textfile contains:

prq345
abc123
fmv592

This will match on “abc123”. I would like to set a cookie with value “abc123”. As an added bonus, I would like to set the cookie text forced to lower case regardless of the case of the URL or the text in the file.

Is either or both of these requirements possible?

Many thanks.
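One possible sketch (untested; the map file path and variable name are illustrative): HAProxy's map_sub converter returns the value associated with the first substring match, and the lower converter handles the case requirement, provided the ACL file is rewritten as a map file with "key value" lines such as "abc123 abc123":

```
# /etc/haproxy/aws_clients_us.map contains lines like:  abc123 abc123
http-request set-var(txn.aws_client) path,lower,map_sub(/etc/haproxy/aws_clients_us.map)
http-response set-header Set-Cookie aws_client=%[var(txn.aws_client)] if { var(txn.aws_client) -m found }
```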

Posts: 1

Participants: 1


Haproxy Configuration with squid


@agoutamreddy wrote:

Hi All,

I am new to HAProxy and am trying to integrate HAProxy with Squid.
HAProxy version: latest
Squid version: latest

Currently I am able to access the internet via Squid by pointing my browser at the Squid IP. Now I am trying to place HAProxy in front of Squid and expecting to still be able to access the internet:

USER (browser pointing to haproxy IP) ----> squid ----> Internet

Is this achievable?

Below are configuration of my SQUID (192.168.10.31) and HAPROXY(192.168.10.2)

SQUID.CONF

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7 # RFC 4193 local private network range
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
acl haproxy src 192.168.10.2 127.0.0.1

acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost
http_access allow haproxy
http_access deny all
http_port 3128
visible_hostname 192.168.10.31

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320

HAPROXY.CONF

global
log 127.0.0.1 local0
log 127.0.0.1 local1 debug
chroot /appl/haproxy206/
stats socket /appl/haproxy206/run/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s
user root
group root

    # Default SSL material locations
	#ca-base /etc/ssl/certs
	#crt-base /etc/ssl/certs

    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
#errorfile 504 /etc/haproxy/errors/504.http

listen haproxy-monitoring
bind *:1000
mode http
stats enable
stats hide-version
stats realm Haproxy\ Statistics
stats uri /
stats auth xxx:xxxx

#--------------------------
frontend squid_front
bind *:80
mode tcp
option tcplog
use_backend squid1
default_backend squid2

backend squid1
balance source
mode tcp
server squid1 192.168.10.31:3128 check port 3128 send-proxy inter 2000 rise 2 fall 3

backend squid2
balance source
mode tcp
server squid2 192.168.10.32:3128 check port 3128 send-proxy inter 2000 rise 2 fall 3
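One note on the send-proxy option in the backends above: it makes HAProxy prepend a PROXY protocol header, which Squid rejects unless it is configured to expect it. A sketch for squid.conf (directive names from the Squid documentation; verify against your Squid version):

```
# Accept the PROXY protocol header from the HAProxy host on the listening port
http_port 3128 require-proxy-header
proxy_protocol_access allow haproxy
```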

Regards,
Gowtham

Posts: 2

Participants: 1


HAProxy 2.0.5 accepting strict http2 request when bind line does not have proto h2


@tt100 wrote:

I tested with HAProxy 2.0.5 with following basic configuration using curl
curl -v --http2-prior-knowledge http://IP

global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
user haproxy
group haproxy
daemon
#maxconn 65536

defaults
log global
mode http
http-reuse never
no log
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http

frontend test
bind :80
mode http
maxconn 65536
default_backend nginx

backend nginx
mode http
balance leastconn
server <IP> <IP>:81 weight 255

When tested with HAProxy 1.9.8, I get the following error:
http2 error: Remote peer returned unexpected data while we expected SETTINGS frame. Perhaps, peer does not support HTTP/2 properly.

Is this the expected behaviour?

Posts: 1

Participants: 1


Haproxy backend default virtualhost

Is there a minimum character count for path_beg patterns?


@eyepulp wrote:

Howdy;
I’m doing some testing and running into a weird limit: path_beg (and maybe other path variants) seems to require a minimum of 4 characters total in the pattern before it will match. In the example below, be-four, be-five, and be-six can all be reliably curled, but any variation with 3 characters or fewer never matches.

frontend http-in
    bind *:80
    use_backend be-three if { path_beg /a/ }
    use_backend be-four if { path_beg /ab/ }
    use_backend be-five if { path_beg /abc/ }
    use_backend be-six if { path_beg /abcd/ }

Is this a documented/known behavior of path_beg? I’m using the haproxy:1.9.10 Docker instance.
Thanks!

Posts: 1

Participants: 1


Haproxy routing to invalid backend under the load


@ykiran_k wrote:

Hi,

During load testing we are observing that when the number of TCP connections is high, HAProxy sends traffic to the wrong backend, but the logs give me no proof of this. In our setup, we run an nginx server on the same node to serve static files/responses. In the HAProxy config, one backend rule routes to the localhost nginx server and other backend rules route to the proper app backend. Under load, instead of going to the app backend, a few requests are incorrectly routed to nginx, resulting in 404 responses. Has anyone else noticed this issue?

We are running haproxy, version info here…

HA-Proxy version 1.9.4 2019/02/06 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1 USE_LIBCRYPT=1 USE_ZLIB=1 USE_DL=1 USE_OPENSSL=1 USE_LUA=1 USE_SYSTEMD=1 USE_PCRE=1 USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2k-fips  26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-fips  26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE
              h2 : mode=HTTP       side=FE
       <default> : mode=HTX        side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
	[SPOE] spoe
	[COMP] compression
	[CACHE] cache
	[TRACE] trace

Here is a config snippet:

global
  daemon
  log 127.0.0.1:5432 local0 info
  log 127.0.0.1:5433 local0 emerg notice
  log /dev/log local0 info
  log /dev/log local0 emerg notice
  maxconn 200000
  stats socket /var/run/haproxy/socket mode 600 level admin
  stats socket /var/run/haproxy/stats mode 640 group telegraf level operator
  stats timeout 2m
  server-state-file global
  server-state-base /var/state/haproxy/
  tune.bufsize 32768

defaults
   load-server-state-from-file global
   mode http
   log global
   retries 3
   backlog 10000
   timeout http-request 10s
   timeout queue 1m
   timeout connect 10s
   timeout client 135s
   timeout server 135s
   timeout tunnel 3600s
   timeout http-keep-alive 10s
   timeout check 10s
   timeout tarpit 60s
   option httplog
   option dontlognull
   option http-server-close
   option http-ignore-probes
   option redispatch
   maxconn 120000

frontend http-in
  bind *:80

  acl url_mypath path_reg /path
  use_backend app-backend if url_mypath
  default_backend nginx-local

backend nginx-local
  server nginxlocal 127.0.0.1:81 check inter 1m

backend app-backend
 server myapp myapp-lb:80 check inter 1m

Posts: 1

Participants: 1

Read full topic


HAProxy (haproxy-1.8.8-1.el7.x86_64) seamless reload without traffic impact


@fchiorascu wrote:

Good morning.

I’m looking for an HAProxy configuration that supports a soft reload without traffic interruption, or another way of doing this smoothly.

HAproxy:
Version -> haproxy-1.8.8-1.el7.x86_64 (free).
Configuration -> https://pastebin.com/7AceYvyq

I found these two approaches on the internet:
1) This one is for HAProxy Enterprise Edition, I guess: https://www.haproxy.com/blog/truly-seamless-reloads-with-haproxy-no-more-hacks/
2) Reload in a smooth manner: haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -x /var/run/haproxy/admin.sock -sf $(cat /var/run/haproxy.pid)

Any ideas/ recommendation?
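For what it’s worth, a hedged sketch of the usual HAProxy 1.8+ setup: the `-x` reload in point 2 is only seamless if the stats socket passes its listening file descriptors to the new process, which requires `expose-fd listeners` on that socket. The socket path below is taken from the command in the post; the rest is illustrative.

```haproxy
global
    # "expose-fd listeners" lets the new process inherit the already-bound
    # sockets over the stats socket instead of re-binding them, which is
    # what avoids dropped connections during the reload (HAProxy 1.8+).
    stats socket /var/run/haproxy/admin.sock mode 600 level admin expose-fd listeners
```

The reload command then picks the FDs up via `-x /var/run/haproxy/admin.sock`, as in point 2.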

Posts: 3

Participants: 2

Read full topic

Http-request set-path not working


@void_in wrote:

I am using HAProxy version 2.0.6. The relevant frontend and backend are below:

frontend test
bind 127.0.0.1:1234
mode http
option httplog
option http-buffer-request
acl receipt req.body -m reg data
use_backend receipt_back if receipt
default_backend prod

backend receipt_back
http-request set-path /app2/receipt.jsp
mode http
balance roundrobin
option forwardfor
server SERVER-1 192.168.1.1:443 ssl verify none check

Everything works except the http-request set-path /app2/receipt.jsp line. The default URL is /app1/receipt.jsp, and the logs show that although the backend selected is receipt_back, the URL is still /app1/receipt.jsp.
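One thing worth noting: the %r field of `option httplog` records the request line as it was received from the client, before any rewrite, so seeing /app1/receipt.jsp in the log does not by itself prove that set-path failed. A hedged sketch of how to check the path after the rewrite (untested; the variable name `txn.final_path` is made up for illustration):

```haproxy
backend receipt_back
    mode http
    http-request set-path /app2/receipt.jsp
    # Record the path *after* the rewrite; if this still shows
    # /app1/receipt.jsp in the log, set-path really did not apply.
    http-request set-var(txn.final_path) path

frontend test
    # %[...] sample expressions can be embedded in a custom log-format:
    log-format "%ci [%tr] %{+Q}r -> rewritten=%[var(txn.final_path)] %ST"
```

Comparing the %r field with the logged variable should show whether the rewrite happens on the wire or only the log is misleading.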

Output of haproxy -vv:
HA-Proxy version 2.0.6 2019/09/13 - https://haproxy.org/
Build options :
TARGET = linux-generic
CPU = generic
CC = gcc
CFLAGS = -O2 -g -fno-strict-aliasing -DTCP_USER_TIMEOUT=18
OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_THREAD=1 USE_OPENSSL=1 USE_ZLIB=1 USE_CPU_AFFINITY=1 USE_SYSTEMD=1

Feature list : -EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE -NETFILTER -PCRE -PCRE_JIT +PCRE2 +PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED -REGPARM -STATIC_PCRE -STATIC_PCRE2 -TPROXY -LINUX_TPROXY -LINUX_SPLICE -LIBCRYPT -CRYPT_H -VSYSCALL -GETADDRINFO +OPENSSL -LUA -FUTEX -ACCEPT4 -MY_ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY -TFO -NS -DL -RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER -PRCTL -THREAD_DUMP -EVPORTS

Default settings :
bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=64).
Built with OpenSSL version : LibreSSL 2.9.2
Running on OpenSSL version : LibreSSL 2.9.2
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT IP_FREEBIND
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE2 version : 10.32 2018-09-10
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes

Available polling systems :
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 2 (2 usable), will use poll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE     mux=H2
              h2 : mode=HTTP       side=FE        mux=H2
       <default> : mode=HTX        side=FE|BE     mux=H1
       <default> : mode=TCP|HTTP   side=FE|BE     mux=PASS

Available services : none

Available filters :
[SPOE] spoe
[COMP] compression
[CACHE] cache
[TRACE] trace

Thank you very much.

Posts: 1

Participants: 1

Read full topic

[Solved] Rewrite a complete URL instead of just path or uri


@theonlydoo wrote:

I put my example in here: https://dooby.fr/links/?IxO7cg

http-request redirect code 301 location https://my.url.com%[path,regsub(^/my_old_path,,)] if { path_beg /my_old_path }

Here, the purpose is to redirect a hit on https://url.com/my_old_path/foo/bar/baz to https://my.url.com/foo/bar/baz.
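One caveat, plus a hedged variant (untested): the `path` fetch does not include the query string, so https://url.com/my_old_path/foo?x=1 would redirect to https://my.url.com/foo without ?x=1. If the query string should survive the redirect, something along these lines could be tried:

```haproxy
# capture.req.uri returns path + query string; the regsub strips the
# old prefix exactly as in the original rule.
http-request redirect code 301 location https://my.url.com%[capture.req.uri,regsub(^/my_old_path,,)] if { path_beg /my_old_path }
```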

Posts: 1

Participants: 1

Read full topic

302 redirect with keep-alive


@ajohnson1200 wrote:

hey everyone!

I’m trying to do something that I think should be simple, but for whatever reason it won’t work. The end result I want: an HTTP request for:

http://example.com/foo/bar.jpg

should result in a 302 to

http://acme.com/foo/bar.jpg

but I want the original HTTP request to maintain its connection to example.com. My problem is that I see this:

HTTP/1.1 302 Found
Cache-Control: no-cache
Content-length: 0
Location: http://acme.com/foo/bar.jpg
Connection: close

when I want to see this:

HTTP/1.1 302 Found
Cache-Control: no-cache
Content-length: 0
Location: http://acme.com/foo/bar.jpg

and then to see this in curl:

  • Connection #0 to host localhost left intact

I’ve tried every combination of adding “option http-keep-alive” to the defaults, frontend and backend sections. The directive I’m using, inside a backend config, looks like this:

http-request redirect location http://acme.com/foo/bar.jpg

Is there any way to configure haproxy to not close the original connection to example.com after doing a redirect to acme.com?
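A hedged sketch of the redirect (the backend name is made up). Whether the connection stays open also depends on the HAProxy version: to my knowledge, legacy (pre-HTX) HTTP processing emits `Connection: close` on locally generated redirects, while 2.x in HTX mode can honor keep-alive, so on older versions this may not be fixable by configuration alone.

```haproxy
backend redirect_example
    mode http
    option http-keep-alive
    # The keyword is "redirect"; code 302 matches the desired response.
    http-request redirect code 302 location http://acme.com/foo/bar.jpg
```

As a quick check, giving two URLs to a single `curl -v` invocation reuses the connection when the server allows it, so a “Re-using existing connection” line in the verbose output would confirm keep-alive.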

Posts: 5

Participants: 2

Read full topic

Trying to use stick-table with tcp content


@pat wrote:

Hello

I use HAProxy (2.0.7) to balance incoming TCP connections on port 13 across the same server on ports 13, 14 & 15.
I would like to use the content of the first bytes sent as the key in the stick table,
but the stick table doesn’t add entries and stays empty.

Here is how I built the config.

This is my first working version:

frontend port13
bind *:13
default_backend myback

backend myback
mode tcp
balance leastconn
server srv1 <server_ip>:13

That config works. I then added a stick-table keyed on the source IP to the backend:
backend myback

stick-table type ip size 1m expire 1h
stick on src

This config works too; I can see entries being added with socat.
But the IP of my peers changes often, so I want to use the peer ID, which is in the first packet sent.
So I changed it like this:
stick-table type binary len 25 size 1m expire 1h
stick on req.payload(1,25)

Now the peers can connect, but no entries are stored in the table; it remains empty.
I added this in the frontend:
tcp-request inspect-delay 2s
tcp-request content accept if WAIT_END

but it does not change anything.
I tried launching in debug mode, but I see nothing of interest.

Any ideas?
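Not a confirmed fix, but a hedged sketch of what could be tried: make sure the frontend is explicitly in TCP mode (otherwise a `mode http` inherited from defaults may apply) and that the initial payload is buffered before the server connection is established. Also note that req.payload(1,25) starts at byte offset 1, skipping the first byte; if the peer ID begins at the very first byte, req.payload(0,25) may be what is intended.

```haproxy
frontend port13
    mode tcp
    bind *:13
    # Buffer up to 2s of initial data so req.payload() has something
    # to inspect before a server is picked.
    tcp-request inspect-delay 2s
    tcp-request content accept if WAIT_END
    default_backend myback

backend myback
    mode tcp
    balance leastconn
    stick-table type binary len 25 size 1m expire 1h
    # Bytes 1..25 of the client's first payload as the stick key.
    stick on req.payload(1,25)
    server srv1 <server_ip>:13
```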

Posts: 1

Participants: 1

Read full topic
