Channel: HAProxy community - Latest topics

Question about URL rewrites - HAProxy 1.8

@omegagx wrote:

I have a newbie HTTP URL rewrite question. When someone goes to http://oms.mydomain.com or to http://oms (with mydomain.com added automatically as a suffix), the request actually needs to be served by http://omsproduction.mydomain.com:5500/OMSURL, but the browser's address bar should show http://oms.mydomain.com:5500/OMSURL or http://oms:5500/OMSURL, respectively. Is that something HAProxy can do, and if so, what would the syntax be?

Thanks a lot.
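
For what it's worth, a minimal sketch of one possible approach, assuming HAProxy itself listens on port 5500 of oms/oms.mydomain.com (the backend name below is hypothetical): redirect plain port-80 requests to the address the browser should display, then rewrite the Host header before forwarding to omsproduction.

frontend oms_http
    bind :80
    mode http
    # send the browser to the URL it should display (same host, port 5500, /OMSURL path)
    redirect location http://%[req.hdr(host)]:5500/OMSURL code 302

frontend oms_5500
    bind :5500
    mode http
    default_backend oms_production

backend oms_production
    mode http
    # rewrite the Host header so the real server sees its own name
    http-request set-header Host omsproduction.mydomain.com
    server oms1 omsproduction.mydomain.com:5500 check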

Posts: 1

Participants: 1

Read full topic


Routing traffic over VPN

@kwevoel wrote:

Hi everyone, can anyone help me with this scenario please? I'm very new to HAProxy.

My client has HAProxy running in HTTP mode. They have just set up a VPN to a third party, with only the HAProxy host allowed as a remote host. We need to route HTTP traffic from IPs on our network, sitting behind the HAProxy, over this VPN tunnel to a host on the third party's network. Can someone point me in the right direction?

From what I gather, it will require using both HTTP and TCP modes, since the ACL will be based on source IP.
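
A minimal sketch of one way to wire this up, assuming the third-party host is reachable from the HAProxy machine through the tunnel (the subnets, addresses and backend names below are hypothetical): match the internal source IPs in an ACL and send those requests to a backend pointing at the remote host.

frontend web_in
    bind :80
    mode http
    # hypothetical internal subnet whose traffic should go over the VPN
    acl from_internal src 192.168.10.0/24
    use_backend vpn_partner if from_internal
    default_backend local_web

backend vpn_partner
    mode http
    # hypothetical host on the third party's network, reached via the tunnel
    server partner1 10.200.0.15:80 check

backend local_web
    mode http
    server web1 192.168.10.50:80 check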

Posts: 1

Participants: 1

Read full topic

Help me with ACL

@lucas_yanez wrote:

This is what I already have:

acl host_bacon hdr(host) -i re*****.l*e.
acl host_milkshake hdr(host) -i u.y***.***
use_backend bacon_cluster if host_bacon

I want only the "reports" folder on host_bacon to go to another server.
How can I do that?
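
A minimal sketch of one way to do this, assuming a hypothetical reports_cluster backend: combine the existing host ACL with a path_beg ACL, and put that use_backend rule before the more general one in the same frontend or listen section, since rules are evaluated in order.

acl host_bacon hdr(host) -i re*****.l*e.
acl is_reports path_beg -i /reports

use_backend reports_cluster if host_bacon is_reports
use_backend bacon_cluster if host_bacon

backend reports_cluster
    server reports1 192.0.2.10:80 check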

Posts: 1

Participants: 1

Read full topic

Prevent automatic failback on active/passive backend not working

@yostie wrote:

I am setting up HAProxy with one frontend VIP and two backend web servers. I want the backend to be active/passive: all requests go to server #1 unless it is down, in which case they go to server #2. When server #1 comes back up, traffic should stay on server #2 until server #2 fails.

I followed the guide below, using stick tables to implement this, and it was working, but now it seems to have stopped and I don't know why. When I fail a server, traffic correctly moves to the backup, but when the failed server comes back online, traffic is sent to the newly recovered server instead of staying on the backup.

I am running HAProxy 1.8.17. Here is a sanitized copy of the haproxy.cfg. Any ideas?

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    tune.ssl.default-dh-param 2048

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats mode 600 level admin
    stats timeout 2m

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# Load Balancer Stick-Table Sync
#---------------------------------------------------------------------

peers lb_peers
    peer peer1 10.255.0.4:9969
    peer peer2 10.255.0.5:9969

#---------------------------------------------------------------------
# Stats interface
#---------------------------------------------------------------------

listen  stats
        bind            10.255.0.3:8080
        mode            http
        log             global

        maxconn 10

        timeout client      100s
        timeout server      100s
        timeout connect     100s
        timeout queue       100s

        stats enable
        stats hide-version
        stats refresh 30s
        stats show-node
        stats auth <REMOVED>
        stats uri /haproxy?stats

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------

frontend  solarwinds_http_fe

    mode http
    bind 10.255.0.3:80
    http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
    default_backend solarwinds_be

frontend  solarwinds_https_fe

    mode http
    bind 10.255.0.3:443 ssl crt /etc/ssl/solarwinds/solarwinds.pem
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    default_backend solarwinds_be

#---------------------------------------------------------------------
# Solarwinds backend
#---------------------------------------------------------------------

backend solarwinds_be
    stick-table type ip size 1 nopurge peers lb_peers
    stick on dst
    redirect scheme https if !{ ssl_fc }
    option httpchk HEAD /Orion/Login.aspx HTTP/1.1\r\nHost:\ mysite.domain.net
    server bru-monweb01 10.255.0.6:80 check fall 3 fastinter 5s downinter 5s rise 6
    server bru-monweb02 10.255.0.7:80 check fall 3 fastinter 5s downinter 5s rise 6 backup

Posts: 1

Participants: 1

Read full topic

HAProxy and FIPS compliance

@alitahir wrote:

Experts,

What needs to be done on the HAProxy side to make it FIPS compliant? Are there any specific configuration settings to enable FIPS in HAProxy?

Thanks

Posts: 1

Participants: 1

Read full topic

SSL certificate management

@pattie wrote:

I want to use HAProxy as SSL termination for a content management system, but it needs to be dynamic, since users can add new domains and upload their site's certificates themselves through their site management page. I could write a Python or bash script that copies the certs to the HAProxy host and triggers a seamless reload, but I was wondering if something already exists. Thanks.
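
For reference, a minimal sketch of the building blocks such a script would typically rely on (the paths and backend are hypothetical): pointing crt at a directory makes HAProxy load every PEM bundle found there at startup, and a reload with -sf picks up newly copied files without dropping established connections.

frontend https_in
    # load all certificate bundles (cert + key in one .pem) from the directory
    bind :443 ssl crt /etc/haproxy/certs/
    mode http
    default_backend cms

backend cms
    mode http
    server cms1 10.0.0.20:80 check

# after copying a new bundle into /etc/haproxy/certs/, reload seamlessly:
#   haproxy -f /etc/haproxy/haproxy.cfg -sf $(cat /var/run/haproxy.pid)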

Posts: 1

Participants: 1

Read full topic

Some h2spec test cases for HTTP/2 failed, are these HAProxy bugs?

@hedong0411 wrote:

I am a follower of HAProxy. I tested the HTTP/2 function of haproxy-1.8.17 with the h2spec tool, but some test cases failed. I wonder whether these are HAProxy bugs.
See the tool here: https://github.com/summerwind/h2spec

The failed cases are as follows:

gaohd@host:~/.golang/gopath/src/github.com/summerwind/h2spec$ ./h2spec http2 -h www.axddos.com -p 443 -t -k

Failures:

Generic tests for HTTP/2 server
  3. Frame Definitions
    3.10. CONTINUATION
      × 1: Sends a CONTINUATION frame
        -> The endpoint MUST accept CONTINUATION frame.
           Expected: HEADERS Frame (stream_id:1)
             Actual: Connection closed
      × 2: Sends multiple CONTINUATION frames
        -> The endpoint MUST accept multiple CONTINUATION frames.
           Expected: HEADERS Frame (stream_id:1)
             Actual: Connection closed
  4. HTTP Message Exchanges
      × 4: Sends a POST request with trailers
        -> The endpoint MUST respond to the request.
           Expected: HEADERS Frame (stream_id:1)
             Actual: Connection closed

Hypertext Transfer Protocol Version 2 (HTTP/2)
  4. HTTP Frames
    4.2. Frame Size
      × 3: Sends a large size HEADERS frame that exceeds the SETTINGS_MAX_FRAME_SIZE
        -> The endpoint MUST respond with a connection error of type FRAME_SIZE_ERROR.
           Expected: GOAWAY Frame (Error Code: FRAME_SIZE_ERROR)
                     Connection closed
             Actual: DATA Frame (length:624, flags:0x01, stream_id:1)
  5. Streams and Multiplexing
    5.1. Stream States
      × 13: closed: Sends a CONTINUATION frame
        -> The endpoint MUST treat this as a connection error of type STREAM_CLOSED.
           Expected: GOAWAY Frame (Error Code: STREAM_CLOSED)
                     GOAWAY Frame (Error Code: PROTOCOL_ERROR)
                     Connection closed
             Actual: RST_STREAM Frame (length:4, flags:0x00, stream_id:1)
  6. Frame Definitions
    6.10. CONTINUATION
      × 1: Sends multiple CONTINUATION frames preceded by a HEADERS frame
        -> The endpoint must accept the frame.
           Expected: HEADERS Frame (stream_id:1)
             Actual: Connection closed
      × 4: Sends a CONTINUATION frame preceded by a HEADERS frame with END_HEADERS flag
        -> The endpoint MUST respond with a connection error of type PROTOCOL_ERROR.
           Expected: GOAWAY Frame (Error Code: PROTOCOL_ERROR)
                     Connection closed
             Actual: RST_STREAM Frame (length:4, flags:0x00, stream_id:1)
  8. HTTP Message Exchanges
    8.1. HTTP Request/Response Exchange
      8.1.2. HTTP Header Fields
        8.1.2.6. Malformed Requests and Responses
          × 1: Sends a HEADERS frame with the "content-length" header field which does not equal the DATA frame payload length
            -> The endpoint MUST treat this as a stream error of type PROTOCOL_ERROR.
               Expected: GOAWAY Frame (Error Code: PROTOCOL_ERROR)
                         RST_STREAM Frame (Error Code: PROTOCOL_ERROR)
                         Connection closed
                 Actual: DATA Frame (length:182, flags:0x01, stream_id:1)
          × 2: Sends a HEADERS frame with the "content-length" header field which does not equal the sum of the multiple DATA frames payload length
            -> The endpoint MUST treat this as a stream error of type PROTOCOL_ERROR.
               Expected: GOAWAY Frame (Error Code: PROTOCOL_ERROR)
                         RST_STREAM Frame (Error Code: PROTOCOL_ERROR)
                         Connection closed
                 Actual: DATA Frame (length:182, flags:0x01, stream_id:1)

HPACK: Header Compression for HTTP/2
  6. Binary Format
    6.3. Dynamic Table Size Update
      × 1: Sends a dynamic table size update larger than the value of SETTINGS_HEADER_TABLE_SIZE
        -> The endpoint MUST treat this as a decoding error.
           Expected: GOAWAY Frame (Error Code: COMPRESSION_ERROR)
                     Connection closed
             Actual: DATA Frame (length:624, flags:0x01, stream_id:1)

Finished in 18.9586 seconds

145 tests, 135 passed, 0 skipped, 10 failed

gaohd@host: ~/open_source/haproxy-1.8.17$ ./haproxy -vv
HA-Proxy version 1.8.17 2019/01/08
Copyright 2000-2019 Willy Tarreau willy@haproxy.org

Build options :
TARGET = linux2628
CPU = generic
CC = gcc
CFLAGS = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-format-truncation -Wno-null-dereference -Wno-unused-label
OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1 USE_TFO=1 USE_NS=1

Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.0g 2 Nov 2017
Running on OpenSSL version : OpenSSL 1.1.0g 2 Nov 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.39 2016-06-14
Running on PCRE version : 8.39 2016-06-14
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[SPOE] spoe
[COMP] compression
[TRACE] trace

The haproxy config is as follows:

global

ulimit-n 900
maxconn 400000
maxconnrate 0
nbproc 1
tune.http.cookielen 2048
tune.http.maxhdr 101

tune.bufsize 24576
log global
maxconn 30000
backlog 8192
option clitcpka
option redispatch

frontend http
bind ipv4@0.0.0.0:80
mode http
log global
log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %{+Q}r
rate-limit sessions 500000
option httplog
option forwardfor
capture request header Host len 256
capture request header User-Agent len 256
capture request header Referer len 256
capture request header Cookie len 256
capture request header X-Forwarded-For len 256
default_backend http_backend

frontend https
bind 0.0.0.0:443 ssl crt /usr/local/l7/resource/l7/crt/sina.pem alpn h2,http/1.1
mode http
log global
log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %{+Q}r
rate-limit sessions 500000
option httplog
option forwardfor
capture request header Host len 256
capture request header User-Agent len 256
capture request header Referer len 256
capture request header Cookie len 256
capture request header X-Forwarded-For len 256
default_backend https_backend

backend http_backend
log global
mode http
balance roundrobin
option http-server-close
server myserver 10.6.130.69:80

backend https_backend
log global
mode http
balance roundrobin
option http-server-close
server MyHttps 10.6.130.69:80 check inter 5000

Posts: 1

Participants: 1

Read full topic

Client certificate revocation (CRL), long life connections, etc

@starves wrote:

I am trying to understand the CRL capabilities in HAProxy. Specifically, we are running HAProxy in a container and are refreshing the CRL data source outside of the container several times a day. We support around 30 different certificate issuers, all of whom re-issue their CRLs on various schedules, and with varied frequency.

Some specific questions I have:

  1. Has anyone solved the hot-reloading issue - that is, is there a "reload"-type command that will gracefully reload the CRL files? (See the sketch after this list.)
  2. If a restart of HAProxy is required, is there a preferred approach (for example, a long-lived master HAProxy process that does no CRL verification and offloads to n child processes that can come and go)?
  3. Is there a limit to the size of a single CRL file? Apache supports using an entire directory of CRL files (via SSLCARevocationPath), which matters for us. Several of the CRLs we handle (issued by the US Government) are many MB in size, and a combined CRL file often reaches 300+ MB. nginx has been ruled out for us because of this exact issue (it can't handle large CRL files and doesn't support the directory approach).
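
Regarding question 1, a minimal sketch of how the CRL is normally wired in and why a reload is the usual answer (paths are hypothetical): the crl-file referenced on the bind line is read when the process starts or is reloaded, so the common pattern is to replace the file on disk and trigger a seamless reload rather than an in-process refresh.

frontend https_in
    # the CRL is loaded at startup/reload time
    bind :443 ssl crt /etc/haproxy/site.pem ca-file /etc/haproxy/clients-ca.pem verify required crl-file /etc/haproxy/combined-crl.pem
    mode http
    default_backend app

backend app
    mode http
    server app1 10.0.0.30:8080 check

# after refreshing the CRL file, reload without dropping existing connections:
#   haproxy -f /etc/haproxy/haproxy.cfg -sf $(cat /var/run/haproxy.pid)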

Posts: 1

Participants: 1

Read full topic


Nbsrv acl port monitoring

@nostalgia wrote:

Hi,
I have two servers, and each server listens on two ports. If one port on a server goes down, I want the other port on that server to be treated as down as well, with all requests sent to the other server.
Thanks
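
A minimal sketch of one way to express this with nbsrv, assuming each server/port pair is declared as its own backend (all names and addresses below are hypothetical): if either of server 1's per-port backends has no healthy server left, traffic for this port is sent to server 2; a symmetric frontend would handle the second port.

frontend app_port1
    bind :8001
    mode tcp
    # server 1 is considered down if either of its per-port backends has no healthy server
    acl srv1_bad nbsrv(be_srv1_port1) lt 1
    acl srv1_bad nbsrv(be_srv1_port2) lt 1
    use_backend be_srv2_port1 if srv1_bad
    default_backend be_srv1_port1

backend be_srv1_port1
    mode tcp
    server s1 10.0.0.1:8001 check

backend be_srv1_port2
    mode tcp
    server s1 10.0.0.1:8002 check

backend be_srv2_port1
    mode tcp
    server s2 10.0.0.2:8001 check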

Posts: 1

Participants: 1

Read full topic

Leastconn algorithm for short lived connections

@Skinnyguinea wrote:

Hi all,

We have configured HAProxy to forward connections to two application servers.

We are currently using the roundrobin algorithm for the balance parameter in the backend section.

We would like to know whether we can use the leastconn algorithm instead.

The connections are of short duration, and we are aware that leastconn is generally recommended for long-running sessions.

Can we simply replace roundrobin with leastconn in our config file, and what kind of behaviour should we expect if we do?
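
For reference, switching algorithms is a one-line change in the backend; a minimal sketch with hypothetical server names is below. leastconn dispatches each new connection to the server with the fewest established connections at that moment, so with very short-lived connections its behaviour tends to look close to round-robin.

backend app_servers
    mode http
    balance leastconn
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check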

Posts: 1

Participants: 1

Read full topic

[noob alert] Basic configuration going wrong

@arnaudrigole wrote:

Hi everyone,

I'm trying to use HAProxy instead of apache2 to set up a reverse proxy in my company.
I followed and compared several pieces of documentation to build my own haproxy.cfg.

HAProxy is running on its own server and has to forward HTTPS requests to another web server. As you'll see in the configuration below, I (think I) redirect every HTTP request to HTTPS.

The problem is that I get a "BAD REQUEST" error when trying to access my test website. The certificates seem to work, because I can see them in my web browser and there is no error about them in the log. In /var/log/haproxy.log I can only see that the request seems to be correctly forwarded to the backend web server:

Jan 25 16:48:14 haproxyserver haproxy[5570]: 37.169.147.6:37646 [25/Jan/2019:16:48:14.100] localhost-443~ redirect-website1/webserver1:443 75/0/131 619 -- 1/1/0/0/0 0/0

What am I doing wrong? :cry:

For information, this setup worked correctly as a reverse proxy under Apache, but I can't use it anymore since I have to set up another redirection for another domain and apache2 can't handle more than one public certificate (or maybe I'm doing something wrong there too, but that's not the question here; anyway, HAProxy seems to be a more convenient and powerful tool :wink: ).

Find the configuration file below.

Thanks in advance for your precious help!
Arnaud

Here is the complete haproxy.cfg (commented!):

global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    tcp
    option  tcplog
    option  dontlognull
    timeout connect 15s
    timeout client  15s
    timeout server  15s
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend rvsproxyserver-80
    bind *:80
    mode tcp
    redirect scheme https code 301 if !{ ssl_fc }   # redirect everything to https

frontend rvsproxyserver-443
    bind *:443 ssl crt /var/www/certs/haproxy       # concatenated .pem certs location for websites
    option tcplog
    mode tcp

    acl tls req.ssl_hello_type 1

    tcp-request inspect-delay 5s
    tcp-request content accept if tls

    acl is_website1 hdr(host) website1   # simple acl1
    acl is_website2 hdr(host) website2   # simple acl2

    use_backend redirect-website1 if is_website1   # declare backend1
    use_backend redirect-website2 if is_website2   # declare backend2

backend redirect-website1
    mode tcp
    option ssl-hello-chk
    server webserver1 10.10.10.10:443 check   # name and IP of my webserver1

backend redirect-website2
    mode tcp
    option ssl-hello-chk
    server webserver2 10.11.11.11:443 check   # name and IP of my webserver2
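
One direction that might be worth checking (a minimal sketch under that assumption, not a verified fix): since the 443 frontend terminates TLS itself (bind ... ssl crt ...), the traffic inside HAProxy is plain HTTP at that point, so mode http with hdr(host) ACLs is the natural fit, whereas req.ssl_hello_type and option ssl-hello-chk are meant for setups where TLS is passed through unterminated. The hostnames below are placeholders.

frontend rvsproxyserver-443
    bind *:443 ssl crt /var/www/certs/haproxy
    mode http
    option httplog

    acl is_website1 hdr(host) -i website1.example.com
    acl is_website2 hdr(host) -i website2.example.com

    use_backend redirect-website1 if is_website1
    use_backend redirect-website2 if is_website2

backend redirect-website1
    mode http
    # re-encrypt towards the backend; drop "ssl verify none" if it speaks plain HTTP
    server webserver1 10.10.10.10:443 ssl verify none check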

Posts: 2

Participants: 2

Read full topic

Haproxy 1.9.2 info logs in stdout

@ykiran_k wrote:

I am using the new HAProxy 1.9.2 with this line in the global section:
log stdout format raw local0

With that line I am able to see only the notice logs. I wanted to know if there is a way to get all the info logs as well.

What I mean by notice logs are lines like these…

Proxy stats started.
Proxy frontend-http-in started.
Proxy my-local started.
Proxy app-backend started.

I want to see the raw HAProxy info logs (the access log lines) for my requests. Can someone please advise?
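
A minimal sketch of what usually needs to be in place for the per-request access logs (which are emitted at the info level) to show up, assuming nothing else in the config overrides logging: each proxy needs a log target, typically inherited via log global, plus an HTTP log format; the global log line on its own only carries the process-level messages.

global
    log stdout format raw local0 info

defaults
    mode http
    log global          # frontends/backends inherit the global log target
    option httplog      # emit one HTTP access-log line (info level) per request

frontend frontend-http-in
    bind :80
    default_backend app-backend

backend app-backend
    server app1 127.0.0.1:8080 check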

Posts: 6

Participants: 2

Read full topic

How can a frontend utilize all servers from multiple backend without repeating code

@deepika wrote:

Is it possible on HAProxy 1.8 to combine multiple backends (as in frontend fe-b below) so that the servers from both backends can be used by that frontend, without repeating the server lines in another backend?

frontend fe-a
	...
	mode tcp
	use_backend be-test-1

frontend fe-b
	...
	mode tcp
	use_backend >>>### USE SERVERS FROM BOTH be-test-1 AND be-test-2 ###<<<

backend be-test-1
	...
	mode tcp
	server srv-1 111.111.111.1:4567 check
	server srv-2 111.111.111.2:4567 check

backend be-test-2
	...
	mode tcp
	server srv-1 222.222.222.1:4567 check
	server srv-2 222.222.222.2:4567 check

Posts: 1

Participants: 1

Read full topic

How to use capture.req.uri?

@trococ wrote:

Hi

I would like to add a header to the response when the request path begins with a given prefix.

Here is my config:

global
  debug
frontend www
  bind *:8000
  mode http
  http-response set-header TEST1 YES1 if { capture.req.uri -m beg /test/ }
  default_backend be

backend be
  mode http
  acl is_test capture.req.uri -m beg /test/
  http-response set-header TEST2 YES2 if is_test
  server s1 127.0.0.1:8080

But it does not work. Please help

I’m using v1.9.2.

Thanks

Posts: 1

Participants: 1

Read full topic

Multiple backend redirections

@rootfs wrote:

Hi
I have a problem with backend redirection.

Here are my configs:

frontend
    acl somesite-acl path_sub /service/v2/amount/balance path_sub /service/v2/amount/fund path_sub /service/pg path_sub /service/testing
    use_backend be_somesite if somesite-acl

backend be_somesite
    http-request set-header Host somesite.com
    reqirep ^([^\ :]*)\ /(.*) \1\ /service/v2/amount/\2
    reqirep ^([^\ :]*)\ /(.*) \1\ /service/pg/\2
    reqirep ^([^\ :]*)\ /testing \1\ /service/tested/\2
    server site_node somesite.com:443 cookie S2 cookie S2 check ssl verify none

Below is what I want to achieve:

example.com/service/v2/amount/balance -> somesite.com/service/v2/amount/balance
example.com/service/v2/amount/fund    -> somesite.com/service/v2/amount/fund
example.com/service/pg                -> somesite.com/service/pg
example.com/service/testing           -> somesite.com/service/tested

Can someone help? I am new to HAProxy and have been working on this for a few days, but I haven't found a solution yet.
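
A minimal sketch of an alternative that avoids the regex rewrites, assuming HAProxy 1.6 or newer (of the four paths, only /service/testing actually needs rewriting; the others map 1:1):

backend be_somesite
    http-request set-header Host somesite.com
    # only this path changes; everything else is forwarded untouched
    http-request set-path /service/tested if { path /service/testing }
    server site_node somesite.com:443 ssl verify none check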

Posts: 1

Participants: 1

Read full topic


How to set timeout for external-check

@ktchow wrote:

Hi, I have set up HAProxy on DB servers with an external check that executes a shell script running a query as the health check. I would like to know how to configure the timeout for the external check. I have tried "timeout check 10s" but it did not seem to work.

Posts: 1

Participants: 1

Read full topic

SSL Passthrough tcp mode failed during SSL Handshake

@mludy wrote:

Hello all,
I have been fighting with this problem for some time now but am unable to figure it out.
I tested HAProxy SSL passthrough with a simple configuration using a listen section.
Here is a working sample:

listen my_listener
    bind *:443
    mode tcp
    option tcplog
    balance leastconn
    option ssl-hello-chk
    server app lb-test.xyz:443 check 

Now I would like to use SNI so I can route SSL traffic to multiple backends. The first step was to move this configuration into frontend and backend sections:

frontend https
    bind *:443
    mode tcp
    option tcplog
    default_backend app

backend app
  mode tcp
  option tcplog
  balance roundrobin
  option ssl-hello-chk
  server app_backend lb-test.xxx:443 check

Unfortunately this configuration does not work. When I test it I get:

curl https://yyy.com -v
* Rebuilt URL to: https://yyyy.com/
*   Trying 18.207.74.42...
* TCP_NODELAY set
* Connected to yyy.com (xxx) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS alert, Server hello (2):
* error:14004438:SSL routines:CONNECT_CR_SRVR_HELLO:tlsv1 alert internal error
* stopped the pause stream!
* Closing connection 0
curl: (35) error:14004438:SSL routines:CONNECT_CR_SRVR_HELLO:tlsv1 alert internal error

So it looks like communication is not possible because the server answers the Client Hello with a TLS alert.

Communication to the backend works when I try it directly, without HAProxy:

curl -vvvv https://xxxxx -v
* Rebuilt URL to: https://xxxxx/
*   Trying 55.*.*.*...
* TCP_NODELAY set
* Connected to xxxxxxx (yyyyy) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  ..........
* Server certificate:
*............

haproxy version: 1.6.3 2015/12/25

Any ideas appreciated,
Michal
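
For the SNI routing goal, a minimal sketch of a common passthrough pattern (the hostnames and backend names are placeholders): inspect the Client Hello in the frontend and pick a backend based on req_ssl_sni, keeping everything in mode tcp.

frontend https
    bind *:443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend app_a if { req_ssl_sni -i a.example.com }
    use_backend app_b if { req_ssl_sni -i b.example.com }
    default_backend app_a

backend app_a
    mode tcp
    option ssl-hello-chk
    server app_a1 lb-test.xxx:443 check

backend app_b
    mode tcp
    option ssl-hello-chk
    server app_b1 lb-test2.example.com:443 check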

Posts: 1

Participants: 1

Read full topic

Can http_auth_group() be executed only when an acl is true

@kuma wrote:

Here is my relevant configuration to authenticate/authorize service requests -

global

userlist myusers
    group u_svc1 users a,b
    group u_svc2 users c
    user a password hashedpwda
    user b password hashedpwdb
    user c password hashedpwdc

listen http-in
    mode http
    acl svc1flag hdr_sub(service) "svc1"
    acl svc1auth http_auth_group(myusers) u_svc1 if svc1flag
    use_backend …

    acl svc2flag hdr_sub(service) "svc2"
    acl svc2auth http_auth_group(myusers) u_svc2 if svc2flag
    use_backend …

When I start HAProxy, I get the following errors:

invalid group if
invalid group svc1flag
invalid group if
invalid group svc2flag

If I remove the "if svc1flag" / "if svc2flag" conditions from the http_auth_group lines, as below, everything works fine:

acl svc1auth http_auth_group(myusers) u_svc1
acl svc2auth http_auth_group(myusers) u_svc2

But I want to keep the "if svc1flag" condition, otherwise the http_auth_group() ACL will be evaluated for every service configuration until the matching service is found.

Any way to address it?

Thanks
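
For context, an acl line does not take an if condition of its own (the extra words are parsed as more group names, hence "invalid group if"). A minimal sketch of one way to keep the evaluation conditional is to put both checks into the rules that use them; the backend names and bind address are placeholders.

listen http-in
    bind :80
    mode http
    acl svc1flag hdr_sub(service) "svc1"
    acl svc2flag hdr_sub(service) "svc2"

    # ask for credentials only when the matching service header is present
    http-request auth realm services if svc1flag !{ http_auth_group(myusers) u_svc1 }
    http-request auth realm services if svc2flag !{ http_auth_group(myusers) u_svc2 }

    use_backend be_svc1 if svc1flag { http_auth_group(myusers) u_svc1 }
    use_backend be_svc2 if svc2flag { http_auth_group(myusers) u_svc2 }

backend be_svc1
    server s1 10.0.0.41:8080 check

backend be_svc2
    server s2 10.0.0.42:8080 check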

Posts: 1

Participants: 1

Read full topic

Combination of path_beg and hdr not working

@Linja wrote:

Hi,

I am trying to send requests for api.mydomain.com/some-service to another backend that serves the actual some-service.
Since I'm hosting many different domains in HAProxy, I need to make sure that requests going to some-service have api.mydomain.com in the Host header and nothing else.

Here is a sample of my configuration:

frontend server
    bind *:443 ssl crt /etc/haproxy/ssl/cent.pem
    acl API hdr(Host) -i api.mydomain.com
    acl SERVICE path_beg -i /some-service 
    use_backend back_service if SERVICE API
    use_backend back_api if API 

backend back_service
  option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    server service 172.16.1.62:80 check maxconn 1000

backend back_api
  option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    server api 172.16.1.60:80 check maxconn 1000

The result with this config is that all requests go to back_api.

Posts: 1

Participants: 1

Read full topic

Strange connection issues with tcp (redis) under load

@the_glu wrote:

Hello,

I have a very strange connection issue, trying to proxy redis servers.

First, the relevant configuration extract:

defaults
    mode http
    log     global
    option  httplog
    option  dontlognull
    option  dontlog-normal
    option  http-ignore-probes
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    maxconn 25000

frontend redis
    bind *:6362
    mode tcp
    use_backend redis
    timeout client 3h
    option srvtcpka

backend redis
    mode tcp
    timeout server 3h
    option clitcpka
    
    server cache01 cache01:6381 check

Under heavy load, connections are randomly dropped, but limits are not hit when I check haproxy stats (~50 open connections).

(For extra detail: it's a Celery 'client' instance, using Redis as a result backend, that fails to submit new tasks.)

HAProxy logs show this:

Jan 28 11:40:30 localhost haproxy[17014]: 127.0.0.1:37852 [28/Jan/2019:11:08:59.155] redis redis/cache01 1/0/1891613 2421222 SD 133/98/97/97/0 0/0
Jan 28 11:40:41 localhost haproxy[17014]: 127.0.0.1:43558 [28/Jan/2019:11:40:30.805] redis redis/cache01 1/0/10482 2605056 CD 133/98/97/97/0 0/0

Extra remark: the two disconnects always seem to happen together.

Extra detail: the client application receives a socket.timeout (Python).

There are no issues with a direct connection: this only happens with HAProxy in between (but we need it ^^).

The issue happens on different systems / Linux kernels: we can reproduce it locally on a single "basic" Arch Linux machine or in Debian stable containers on a "powerful" server.

Having no idea which side was responsible (Celery <-> HAProxy or HAProxy <-> Redis), we started to dump network packets:

For the first connection (37852, closed at 11:40:30), we have the following timeline:

11:08:59 celery -> ha: connection opened; ha -> redis opened a few milliseconds later
11:09:00 ha -> celery: at TCP level, the window seems to be full (on the Celery side)
11:09:00 celery -> ha: Celery starts to send TCP packets with a window of zero
11:09:10 After a few keep-alives from ha to celery, Celery starts to send data again, but its window is still set to 0. No data traffic from ha to celery appears in the full dump after this point.

11:11:05 redis -> ha: at TCP level, the window seems to be full (on the HAProxy side)
11:11:05 ha -> redis: HAProxy starts to send TCP packets with a window of zero
11:11:11 After a few keep-alives, HAProxy starts to send data again, but its window is still set to 0. No data traffic from redis to celery is seen again.

11:21:17 redis -> ha: at TCP level, the window seems to be full (on the Redis side)
11:21:17 Redis starts to send TCP packets with a window of zero as well
11:22:35 Redis closes the connection

11:36:45 ha -> redis: at TCP level, the window seems to be full (on the HAProxy side). At this point no more traffic is exchanged
11:38:50 Last TCP ACK
11:40:30 Celery closes the connection

Remark 1: there is data "lost" in HAProxy's buffers: the traffic redis -> ha -> celery is no longer sent after 11:09:00, and redis -> ha only stops at 11:11:05 (so there are ~2 minutes of traffic sitting in HAProxy's buffers).

Remark 2: why didn't HAProxy close the front connection when the back connection was lost? I'm 99% sure I am looking at the right two connections in the tcpdump (correct timing, and the same data while everything was working).

For the second connection, the issue is quite similar:

Timeline:

11:40:30 Connection opened
11:40:33 Last message from ha to celery (window full), but no pause in the data from celery to ha
11:40:41 Celery closes the connection
11:40:41 HAProxy closes the connection to redis

This second case makes more sense (no pause in the data flow; once the front is closed, the back is closed immediately).

Some screenshots of the tcpdumps, in case I missed something (images not included here):

Case 1: SYNs; Celery window full; backend HAProxy window full; Redis window full and close; frontend HAProxy window full and close.

Case 2: SYNs; Celery window full; Celery close; Redis close.

I don't understand why this happens only with HAProxy in between. There is load on the system, but it seems unlikely that the Celery client cannot find time to drain its receive buffer for ~30 minutes, especially when it does find time to send data.

This could be a client issue (redis-py is used by Celery), but I was wondering whether it could be an HAProxy configuration issue, and/or whether someone has any hints for trying to fix it.

Thanks for your help!

Posts: 1

Participants: 1

Read full topic
