Channel: HAProxy community - Latest topics

HAProxy WordPress abuse protection


@Hoggins wrote:

Hello !

I have already posted this on the mailing list, but maybe it's easier to discuss my issue here. Here's the thing:

Hi folks !

I'm trying to follow the instructions found here:


on how to protect some WordPress sites against blind and repetitive
login attempts, but I can't seem to make it effective.

Would you mind commenting on my config to help me figure out why it's not working?
It's only the protection part that is not working: the rest of the service is fine, and always has been. It's just that I recently added the recipe found on the HAProxy blog, and I get no result.

My peers definition :

peers mypeers
    peer machine1.fqdn haproxy-1:1024
    peer machine2.fqdn haproxy-2:1024
    peer machine3.fqdn haproxy-3:1024

My frontend definition :

frontend wordpress-frontend
    bind X.X.X.X:80
    bind Y.Y.Y.Y:80
    mode http

    tcp-request inspect-delay 10s
    tcp-request content accept if HTTP # <==== despite what was written on the blog post, HAProxy refused to let "tcp-request" without "content" keyword. I guess this is expected.
    acl wp_login                 path_beg     -i /wp-login.php
    acl flagged_as_abuser        sc1_get_gpc0 gt 0
    stick-table type binary len 20 size 500 store gpc0 peers mypeers
    tcp-request content track-sc1  base32+src if METH_POST wp_login
    use_backend bk_login_abusers if flagged_as_abuser

    default_backend wordpress-http-backend

My backend definition :

backend wordpress-http-backend
    tcp-request inspect-delay 10s
    tcp-request content accept if HTTP
    # brute force protection
    acl wp_login                path_beg -i /wp-login.php
    stick-table type binary len 20 size 500 store http_req_rate(20s) peers mypeers
    tcp-request content track-sc2  base32+src if METH_POST wp_login
    stick store-request base32+src            if METH_POST wp_login
    acl bruteforce_detection  sc2_http_req_rate gt 2   # <=====  adjusted to "2" to see if I got anything better. No luck.
    acl flag_bruteforce       sc1_inc_gpc0      gt 0
    http-request deny if bruteforce_detection flag_bruteforce

    balance roundrobin
    option http-server-close
    option forwardfor
    option httpchk GET /haproxy-check.php HTTP/1.1\r\nHost:\ backend.address.fqdn
    http-check expect string ALLOK
    http-check disable-on-404
    server web-1 web-1:80 check slowstart 5000ms
    server web-2 web-2:80 check slowstart 5000ms
    server web-3 web-3:80 check slowstart 5000ms
    server sorry-1 webfront-1:50000 check backup
    server sorry-2 webfront-2:50000 check backup

My sandbox backend :

backend bk_login_abusers
    mode http
    log global
    option httplog
    timeout tarpit 1s
    http-request tarpit
    errorfile 500 /etc/haproxy/errorfiles/wp_fake_login.http
    errorfile 503 /etc/haproxy/errorfiles/wp_fake_login.http

Finally, the contents of the stick tables :

# table: wordpress-frontend, type: binary, size:500, used:0

# table: wordpress-http-backend, type: binary, size:500, used:6
0xe71b10: key=410B34765BD290FF000000000000000000000000 use=0 exp=0 server_id=2 http_req_rate(20000)=0
0xe4abd0: key=410B3476B2891330000000000000000000000000 use=0 exp=0 server_id=1 http_req_rate(20000)=0
0xdb7e50: key=B77846E1A2F88FA1000000000000000000000000 use=0 exp=0 server_id=2 http_req_rate(20000)=0
0xe4fd00: key=BA68E942D972C985000000000000000000000000 use=0 exp=0 server_id=1 http_req_rate(20000)=0
0xdb7f00: key=C45BC90B5BC80C04000000000000000000000000 use=0 exp=0 server_id=1 http_req_rate(20000)=0
0xdb7da0: key=F58B492055EE71FC000000000000000000000000 use=0 exp=0 server_id=1 http_req_rate(20000)=0

I tested with cURL, querying the /wp-login.php address a lot with POST requests (more than twice per 20s, for sure), but the counters don't rise.

I'm open to new ideas on how to solve my case, and to any mistakes you can spot in my config.
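
For reference, here is a minimal sketch of the same kind of protection expressed with http-request rules only (assuming HAProxy 1.6+; the table expiry, the threshold and the ACL names are illustrative, not taken from the post):

frontend wordpress-frontend
    bind X.X.X.X:80
    bind Y.Y.Y.Y:80
    mode http

    # one table keyed on base32+src, storing both the abuse flag (gpc0)
    # and the rate of tracked requests over 20s
    stick-table type binary len 20 size 500 expire 5m store gpc0,http_req_rate(20s) peers mypeers

    acl wp_login          path_beg -i /wp-login.php
    acl login_rate_abuse  sc0_http_req_rate gt 2
    acl mark_as_abuser    sc0_inc_gpc0 gt 0
    acl flagged_as_abuser sc0_get_gpc0 gt 0

    # count only POSTs to the login page, then deny once the rate is exceeded
    http-request track-sc0 base32+src if METH_POST wp_login
    http-request deny if METH_POST wp_login login_rate_abuse mark_as_abuser
    use_backend bk_login_abusers if flagged_as_abuser

    default_backend wordpress-http-backend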

Thanks !

Hoggins!



Stick-table doesn't create an entry when using a DNS name, but an IP works


@killefer762 wrote:

I am using HAProxy to load balance some Remote Desktop servers, and the stick-table works except when I am using the DNS name for the farm. If I use the IP address of the load balancer it works fine. If I use the actual domain name of the load balancer it works fine. If I use the DNS name that refers to the farm, the connection works but no entry appears in the stick table.

10.75.1.60 is the virtual IP address for the load balancers (this works)
10.75.1.61 is load balancer 1 (this works)
farm2.domain.com does not work (resolves to 10.75.1.60)
lb1.corp.domain.com (10.75.1.61) works

I am stumped!
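
For comparison, RDP persistence is usually keyed on the RDP cookie; a typical shape for such a backend looks roughly like this (purely a sketch: the addresses, timings and the assumption that clients send the mstshash cookie are illustrative, not taken from the post):

backend rdp_farm
    mode tcp
    balance leastconn
    # wait for the RDP cookie before picking a server
    tcp-request inspect-delay 5s
    tcp-request content accept if RDP_COOKIE
    # persist each user on the same server based on the cookie contents
    stick-table type string len 32 size 10k expire 8h
    stick on rdp_cookie(mstshash)
    server rds1 10.75.1.71:3389 check
    server rds2 10.75.1.72:3389 check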


How to get the original client IP as remoteAddress in place of the HAProxy server IP


@naresh wrote:

Hi all,
I'm using HAProxy as a reverse proxy in front of a Socket.IO cluster with sticky sessions for scaling. In Socket.IO I need the remoteAddress, so how can I get the original client IP in remoteAddress instead of the HAProxy IP, without replacing HAProxy? Please help me solve this through the HAProxy configuration. System environment (in AWS): HA-Proxy version 1.6.5 2016/05/10, CentOS Linux release 7.1.1503, kernel version 3.10.0-229.14.1.el7.x86_64.
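
For reference, the two usual ways to carry the original client IP through HAProxy are the X-Forwarded-For header and the PROXY protocol; a minimal sketch (the backend name and address are illustrative, and the application side still has to read the header or speak the PROXY protocol):

backend socketio_backend
    mode http
    # add an X-Forwarded-For header carrying the original client IP
    option forwardfor
    server node1 10.0.0.11:3000 check
    # alternatively, if the application supports the PROXY protocol,
    # the client address can be passed at the connection level:
    # server node1 10.0.0.11:3000 check send-proxy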


Backend encryption and reusing SSL sessions


@stevelynn wrote:

I have an application where I'm using HAProxy (1.5.14) to route traffic to different backends. HAProxy terminates the incoming SSL and then re-encrypts to the backend. There seems to be a large number of key exchanges, which is limiting the performance. In a short test (2 min), I sent 50k requests through HAProxy. I observed over 1k key exchanges. Looking further into the packets, it seems that HAProxy is closing the connections to the backends [RST, ACK]. When HAProxy attempts to reuse the session ID after this, the backend service replies with a new session ID. It appears that closing the connection invalidates the session ID on the backend. I believe that I have configured HAProxy correctly with option http-keep-alive.

When does HAProxy close a connection to the backend?

Is there a way to increase the SSL session reuse in the backend?
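
For context, a minimal sketch of the re-encrypting, keep-alive backend shape being described (assuming 1.5.x syntax; the names and addresses are illustrative and this is not the actual configuration):

defaults
    mode http
    option http-keep-alive        # reuse idle backend connections between requests
    timeout http-keep-alive 10s

backend bk_app_ssl
    # re-encrypt towards the backend; 'verify none' skips certificate
    # checking and is used here only to keep the sketch short
    server app1 10.0.0.21:443 ssl verify none check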


Configure HAProxy for round robin load balancing


@3rw1Nr1S0s wrote:

I installed HAProxy and Apache on the same server, then cloned the server for load balancing. My goal is to have Apache handle both HTTP and HTTPS requests (SSL passthrough) and to have all HTTP requests redirected to HTTPS automatically. I chose round robin as the load-balancing algorithm. I configured HAProxy to bind ports 80 and 443 on the virtual IP (10.10.0.1) and have Apache listen on ports 8080 and 4443 for the "passed through" requests.


After much googling, I have the following haproxy.cfg setup on both servers:
-------------------------------------------------------------------------------------------------------------
<snip>


###-Redirect http requests to https-######
frontend http_to_https_redirect
bind 10.10.0.1:80
mode http
redirect scheme https if !{ ssl_fc }


frontend https_frontend
bind 10.10.0.1:443
mode tcp
option tcplog
default_backend https_backend


backend https_backend
mode tcp
balance roundrobin
option ssl-hello-chk
server ws1 10.10.0.2:4443 check
server ws2 10.10.0.3:4443 check


<snip>
-------------------------------------------------------------------------------------------------------------


I used curl and wget to test the above configuration and noted that all connections were directed only to ws1, although roundrobin was specified as the algorithm. I would appreciate it if someone could share insight on how to make roundrobin work.


Also, is it better (performance-wise) if the HTTP-to-HTTPS redirection is done at the Apache level rather than in HAProxy? If so, how do I configure HAProxy so that both HTTP and HTTPS connections get forwarded to Apache? Thanks.
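
If the redirect were moved into Apache, HAProxy would only need to pass port 80 through as well; a minimal sketch along the lines of the existing 443 section (the addresses are the ones from the post, everything else is illustrative):

# plain TCP pass-through of port 80 to Apache
frontend http_frontend
bind 10.10.0.1:80
mode tcp
option tcplog
default_backend http_backend

backend http_backend
mode tcp
balance roundrobin
server ws1 10.10.0.2:8080 check
server ws2 10.10.0.3:8080 check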


Kristy


SNI works sometimes, sometimes is broken


@gosi wrote:

We have the following setup:

Client -> haproxy -> multiple servers running nginx (different certs)

This works in some cases and not in others. I was tracing via tcpdump and sometimes the client doesn't send the server name in the ClientHello packet. Testing with Chrome and Firefox.

The config is derived from this example:
(http://blog.haproxy.com/2012/04/13/enhanced-ssl-load-balancing-with-server-name-indication-sni-tls-extension/)

# Adjust the timeout to your needs
defaults
timeout client 30s
timeout server 30s
timeout connect 5s

# Single VIP
frontend ft_ssl_vip
bind 10.0.0.10:443
mode tcp

tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }

default_backend bk_ssl_default

# Using SNI to take routing decision
backend bk_ssl_default
mode tcp

acl application_1 req_ssl_sni -i application1.domain.com
acl application_2 req_ssl_sni -i application2.domain.com

use-server server1 if application_1
use-server server2 if application_2
use-server server3 if !application_1 !application_2

option ssl-hello-chk
server server1 10.0.0.11:443 check
server server2 10.0.0.12:443 check
server server3 10.0.0.13:443 check

Maybe the example is outdated?

haproxy -v
HA-Proxy version 1.6.6 2016/06/26
Copyright 2000-2016 Willy Tarreau willy@haproxy.org


Rate-limiting per backend


@adkhare wrote:

Hi,

I would like to have per-backend rate limiting that can be configured with different limits per second, minute, day, etc.

Can someone please help me with this?
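
For reference, rate limits of this kind are usually built from stick-tables and http-request rules rather than a single option, and the same rules can be declared in a backend section to make the limit apply per backend; a minimal per-source-IP sketch (assuming HAProxy 1.6+; the threshold, table size and names are illustrative):

frontend ft_web
    bind *:80
    # track the request rate of each client IP over a 60s window
    stick-table type ip size 100k expire 10m store http_req_rate(60s)
    http-request track-sc0 src
    # e.g. reject clients sending more than 600 requests per minute
    http-request deny if { sc0_http_req_rate gt 600 }
    default_backend bk_app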

Regards,
Amit Khare


Rewrite URLs in responses that contain absolute URLs


@Mowijo wrote:

Hi all.

I have Jenkins running on Tomcat with HAProxy as a reverse proxy in front of it. That works kind of OK, and my haproxy.conf is listed at the end of my post, but I have a problem with rewriting paths in the response.

When I GET ht.tp://192.168.1.16/jenkins I get the correct HTML returned, but all links are absolute to the host's root, and thus my web browser cannot fetch them, as it tries e.g. ht.tp://192.168.1.16/static/style.css instead of ht.tp://192.168.1.16/jenkins/static/style.css.

If it were only a matter of rewriting domains, I would be home safe, but because of that pesky start of the path I am stuck.
I have tried to change the Server and/or Host headers to ht.tp://192.168.1.16/jenkins, as in the configuration file below.

Can what I want be done?
How? Any pointers and hints are appreciated. (I tried setting the JENKINS_URL in the Jenkins control panel, which didn't work. Also, a generic solution would be convenient.)

Thanks,
Morten

I am running HA-Proxy version 1.6.6 2016/06/26

My haproxy.conf:

global
    maxconn 4096

defaults
    log  global
    maxconn  2000
    mode  http
    option  redispatch
    option  forwardfor
    option  http-server-close
    retries  3
    timeout  http-request 10s
    timeout  queue 1m
    timeout  connect 10s
    timeout  client 1m
    timeout  server 1m
    timeout  check 10s

frontend http-in
    bind *:80
    
    reqadd  X-Forwarded-Proto:\ http

    acl is_MyWebApp path_beg /MyWebApp
    use_backend MyWebApp if is_MyWebApp
    
        
        
backend MyWebApp
    option httpclose
    option forwardfor
    
    http-response  set-header Location ht.tp://192.168.1.16/MyWebApp
    http-response  set-header Server ht.tp://192.168.1.16/MyWebApp
    http-response  set-header Host ht.tp://192.168.1.16/MyWebApp    
    
    reqrep ^([^\ ]*\ /)MyWebApp[/]?(.*)     \1\2
    server s2 192.168.1.16:8080  maxconn 32



LUA and UDP sockets


@pdixon wrote:

Hello,

Just wondering if anyone here has managed to send UDP packets from within a LUA script. What I'm trying to do is have lua send a counter to graphite via a UDP datagram in order to collect metrics for particular requests.

I'm successfully using lua to select a back end based on properties of the request body (don't ask) and the icing on the cake would be to ping graphite with a counter via a UDP datagram for the ops team.

I've looked at the HAProxy lua socket object and this appears to be tcp only... or failing that, I'm just not reading the docs properly.

Regards,
Paul


SSL Termination + Transparent + Failover + Many sessions?


@JPedroT wrote:

Researching load balancing for our setup and wondering if our requirement list is feasible to do with haproxy.

  1. SSL Termination on the load balancer
  2. Transparent IPv4/IPv6, i.e. the client IP shows up on the servers (see the sketch after this list)
  3. HAProxy in a high availability setup, ie failover between 2 or more.
  4. Lots of sessions (assuming we have the proper CPU/RAM resources in place); I found this link https://www.linangran.com/?p=547, and we are trying to build for the 200k concurrent range, 50/50 HTTPS/HTTP.
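
For requirement 2, the usual building block on the HAProxy side is transparent binding of the server-facing connection, which additionally needs TPROXY support in the kernel and matching iptables/ip rule routing; a minimal sketch (the backend name and address are illustrative):

backend bk_app
    mode http
    # connect to the servers using the client's source address
    # (requires an HAProxy build with TPROXY support plus kernel/iptables setup)
    source 0.0.0.0 usesrc clientip
    server app1 10.0.0.21:80 check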

This will run in a virtual vmware environment, but I assume it's not much different from Amazon etc.

From reading up on this, I think the challenge is the combination of all 4 requirements. I feel quite certain that 1 & 2 are okay, but it gets tricky when you add 3 & 4.

Is this possible to do with the free haproxy or must we look at the commercial version or maybe another solution entirely?

All feedback is greatly appreciated.

Pedro


LUA 5.3 doesn't compile with haproxy 1.6.7


@azsfg wrote:

Hi All,

I'm on CentOS 5.11 x86_64. I'm able to compile haproxy 1.6.7 (without Lua) and to compile Lua 5.3 (and install both). But when attempting to compile haproxy 1.6.7 with Lua, I get the following compile error:

make USE_PCRE=1 USE_OPENSSL=1 USE_LUA=1 USE_ZLIB=1 DEBUG="" ARCH=x86_64 TARGET=linux26 ADDLIB=-lz SPEC_CFLAGS="-Wdeclaration-after-statement" LUA_LIB_NAME=lua CC=gcc44
gcc44 -Iinclude -Iebtree -Wall -m64 -march=x86-64 -O2 -g -Wdeclaration-after-statement       -DTPROXY -DCONFIG_HAP_CRYPT -DUSE_ZLIB  -DENABLE_POLL -DENABLE_EPOLL -DNETFILTER -DUSE_GETSOCKNAME -DUSE_OPENSSL  -DUSE_SYSCALL_FUTEX -DUSE_LUA  -DUSE_PCRE -I/usr/include  -DCONFIG_HAPROXY_VERSION=\"1.6.7\" -DCONFIG_HAPROXY_DATE=\"2016/07/13\" -c -o src/uri_auth.o src/uri_auth.c
In file included from /usr/include/lua.h:16,
                 from include/types/hlua.h:6,
                 from include/types/stream.h:36,
                 from include/proto/log.h:33,
                 from src/uri_auth.c:20:
/usr/include/luaconf.h:572:2: error: #error "Compiler does not support 'long long'. Use option '-DLUA_32BITS'   or '-DLUA_C89_NUMBERS' (see file 'luaconf.h' for details)"
In file included from include/types/hlua.h:6,
                 from include/types/stream.h:36,
                 from include/proto/log.h:33,
                 from src/uri_auth.c:20:
/usr/include/lua.h:93: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘lua_Integer’
/usr/include/lua.h:96: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘lua_Unsigned’
/usr/include/lua.h:182: error: ‘lua_Integer’ declared as function returning a function
/usr/include/lua.h:226: error: expected declaration specifiers or ‘...’ before ‘lua_Integer’
/usr/include/lua.h:244: error: expected declaration specifiers or ‘...’ before ‘lua_Integer’
/usr/include/lua.h:246: error: expected declaration specifiers or ‘...’ before ‘lua_Integer’
/usr/include/lua.h:261: error: expected declaration specifiers or ‘...’ before ‘lua_Integer’
/usr/include/lua.h:263: error: expected declaration specifiers or ‘...’ before ‘lua_Integer’
In file included from include/types/hlua.h:7,
                 from include/types/stream.h:36,
                 from include/proto/log.h:33,
                 from src/uri_auth.c:20:
/usr/include/lauxlib.h:46: error: ‘lua_Integer’ declared as function returning a function
/usr/include/lauxlib.h:48: error: expected declaration specifiers or ‘...’ before ‘lua_Integer’
/usr/include/lauxlib.h:48: error: ‘lua_Integer’ declared as function returning a function
/usr/include/lauxlib.h:86: error: ‘lua_Integer’ declared as function returning a function
make: *** [src/uri_auth.o] Error 1

Any ideas what this could be? Googling wasn't helpful for this case. Just FYI, I'm only testing with gcc44 above; the exact same issue happens with gcc (4.1).

If I do add -DLUA_32BITS it does compile, but for some reason ld tries to link in 32-bit shared libs.

/usr/bin/ld: skipping incompatible /usr/lib/libcrypt.so when searching for -lcrypt
/usr/bin/ld: skipping incompatible /usr/lib/libcrypt.a when searching for -lcrypt
/usr/bin/ld: skipping incompatible /usr/lib/libz.so when searching for -lz
/usr/bin/ld: skipping incompatible /usr/lib/libz.a when searching for -lz

Additionally, I really need 64bit numbers, not 32...

-Alex


Chroot resulting in 503


@bwmetcalf wrote:

I'm attempting to chroot our HAProxy setup, which runs as root, but when doing so I only get 503s when hitting our frontend. Nothing shows up in the logs to indicate what might be wrong. The relevant parts of my config look like:

global
  tune.ssl.default-dh-param 2048
  tune.ssl.cachesize 50000
  pidfile /var/run/haproxy/haproxy.pid
  chroot /var/haproxy
  log 127.0.0.1 local1
frontend ft_foo_ssl
  mode http
  bind *:80
  bind *:443 ssl crt /etc/haproxy/my.pem ciphers AES256-SHA:AES128-SHA:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!RC4:!MD5:!PSK no-sslv3 no-tls-tickets
  http-request set-log-level silent
  maxconn 100000
  redirect scheme https code 301 if !{ ssl_fc }
  default_backend bk_foo_ssl
backend bk_foo_ssl
  mode http
  server ft_foo_sock unix@/var/run/haproxy/ft_foo.sock send-proxy-v2
frontend ft_foo
  mode http
  bind unix@/var/run/haproxy/ft_foo.sock accept-proxy
  maxconn 100000
  use_backend %[req.hdr(host),lower,map_dom(/etc/haproxy/foo.map)]

If I remove the chroot, everything works fine.
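
One interaction that may be worth checking in a layout like this (offered as an assumption to verify, not a confirmed diagnosis): the unix socket is bound at startup, before the chroot takes effect, while the server line connects to it at runtime from inside /var/haproxy, so the bind path and the connect path have to point at the same file on disk. A sketch of one way to line them up:

global
  chroot /var/haproxy

backend bk_foo_ssl
  mode http
  # connect() happens after the chroot, so this path is resolved inside
  # /var/haproxy, i.e. /var/haproxy/run/ft_foo.sock on disk
  server ft_foo_sock unix@/run/ft_foo.sock send-proxy-v2

frontend ft_foo
  mode http
  # bind() happens at startup, before the chroot, so this is the real path
  bind unix@/var/haproxy/run/ft_foo.sock accept-proxy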


Redirect URI: how to define an exact match, no pattern


@c.monty wrote:

Hello!
I have set up a mail server that serves 3 web services:
mail.mydomain.de
webmail.mydomain.de
sogo.mydomain.de

In order to ensure that requests end up at the same host, I have defined this rule:
acl mail hdr(host) -i mail.mydomain.de hdr(host) -i webmail.mydomain.de hdr(host) -i sogo.mydomain.de
use_backend mail_server if mail

This works as expected.

In addition I want to redirect the following URLs:
hxtp://mydomain.de/mail -> mail.mydomain.de
hxtp://mydomain.de/webmail -> webmail.mydomain.de
hxtp://mydomain.de/sogo -> sogo.mydomain.de

I started with a ACL like this:
acl redir_webmail path_beg /webmail
http-request redirect location https://webmail.%[hdr(host)] code 301 if redir_webmail

The issue here is that when I enter
hxtp://mail.mydomain.de/webmail
this will be redirected to
hxtp://webmail.mail.mydomain.de
and this is incorrect.

Can you please advise?
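
One way to keep this redirect from firing on the service hostnames themselves would be to restrict it to the bare domain; a minimal sketch reusing the hostnames from above (the ACL names are illustrative):

acl bare_domain    hdr(host) -i mydomain.de
acl redir_webmail  path_beg /webmail
http-request redirect location https://webmail.mydomain.de code 301 if bare_domain redir_webmail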

THX


HAProxy fails to start with "invalid address" when using IPv6 Resolvable Name in the Backend


@jseparovic wrote:

Hi,

I have an issue with Haproxy failing to start when using an IPv6 resolvable DNS Name:

[root@util-1 ~]# haproxy -f /etc/haproxy/haproxy.cfg -d
[ALERT] 208/190746 (13694) : parsing [/etc/haproxy/haproxy.cfg:98] : 'server backend1' : invalid address: 'backend-1.test.com' in 'backend-1.test.com'

[ALERT] 208/190746 (13694) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
[ALERT] 208/190746 (13694) : Fatal errors found in configuration.

The name resolves fine:

[root@util-1 ~]# dig backend-1.test.com AAAA

; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> backend-1.test.com AAAA
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34045
;; flags: qr aa rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;backend-1.test.com.	IN	AAAA

;; ANSWER SECTION:
backend-1.test.com. 0	IN	AAAA	2001:1234::100

;; Query time: 0 msec
;; SERVER: 2001:1234::1#53(2001:1234::1)
;; WHEN: Wed Jul 27 19:08:50 EDT 2016
;; MSG SIZE  rcvd: 73

There is no issue pinging using the name:

[root@util-1 ~]# ping6 backend-1.test.com
PING backend-1.test.com(backend-1.test.com) 56 data bytes
64 bytes from backend-1.test.com: icmp_seq=1 ttl=64 time=0.208 ms
64 bytes from backend-1.test.com: icmp_seq=2 ttl=64 time=0.225 ms
64 bytes from backend-1.test.com: icmp_seq=3 ttl=64 time=0.218 ms
64 bytes from backend-1.test.com: icmp_seq=4 ttl=64 time=0.318 ms
^C
--- backend-1.test.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.208/0.242/0.318/0.045 ms

I'm currently using version 1.5.2:

[root@util-1 ~]# haproxy -v
HA-Proxy version 1.5.2 2014/07/12
Copyright 2000-2014 Willy Tarreau <w@1wt.eu>

I have also tried this using haproxy version 1.6.7 with the same result.

[root@util-1 ~]# ./haproxy -f /etc/haproxy/haproxy.cfg
[ALERT] 208/192951 (13972) : parsing [/etc/haproxy/haproxy.cfg:98] : 'server backend1' : invalid address: 'backend-1.test.com' in 'backend-1.test.com'

[ALERT] 208/192951 (13972) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
[WARNING] 208/192951 (13972) : Setting tune.ssl.default-dh-param to 1024 by default, if your workload permits it you should set it to at least 2048. Please set a value >= 1024 to make this warning disappear.
[ALERT] 208/192951 (13972) : Fatal errors found in configuration.
[root@util-1 ~]# ./haproxy -v
HA-Proxy version 1.6.7 2016/07/13
Copyright 2000-2016 Willy Tarreau <willy@haproxy.org>

I have noticed that other systems have the same issue with IPv6 names and had to change the underlying resolver call to one that supports both IPv6 and IPv4. Not sure if this is a similar issue here.

BTW, when I use the IPv6 Address, everything works fine.


Knowing administrative state (MAINT/DRAIN/...) backend-side


@mbronk wrote:

Hi,
Is it possible for the backend to know that connections to it are being drained so that it could start gracefully closing persistent connections?

The use case is as follows:

  • the LB is in TCP mode (loadbalancing SSL connections that have to be end2end)
  • the application has an auxiliary HTTP port specifically for LB-side health-checks
  • want to gracefully remove one backend server from load-balancing (not affecting any client connections)
  • backend app uses persistent connections (http keep-alive)

The problem:

  • Even though LB enters DRAIN mode, persistent connections may never terminate while there is traffic on them.

It looks like one of the potential solutions would be to tell the backend application to start gracefully dropping persistent connections (essentially, start replying 'Connection: close').

Question being:
How can the backend find out that it should end persistent connections?
http-check send-state seems like a good fit, but the only information the backend receives is the operational state (UP/DOWN/NOLB).
Is it possible to get the administrative state (MAINT/DRAIN) in the health-check request header as well?

Thanks in advance



Service Unavailable: 503


@prash.devops wrote:

Hi,

Here is our HAProxy setup; HAProxy is just acting as a proxy server (no load balancing).

Example configuration:

frontend http-in-ssl
bind *:80
bind *:443 ssl crt /etc/haproxy/ssl
log 127.0.0.1 local1 notice
option httplog
reqadd X-Forwarded-Proto:\ https
reqadd X-Forwarded-Proto:\ http

    acl alpha hdr(host) -i alpha.example.com
    acl beta hdr(host) -i beta.example.com
    redirect scheme https if !{ ssl_fc } alpha
    redirect scheme https if !{ ssl_fc } beta
    use_backend alpha-backend if { ssl_fc_sni alpha.example.com }
    use_backend beta-backend if { ssl_fc_sni beta.example.com }

backend alpha-backend
server alpha-server-server 192.168.1.100:8080 check inter 2000 maxconn 500 rise 2 fall 3
backend beta-backend
server beta-server-server 192.168.1.200:8090 check inter 2000 maxconn 500 rise 2 fall 3

This is working fine if anyone hits HAProxy from an external IP. Even curl works fine when I execute it from my computer.

But when I try to curl alpha.example.com from beta.example.com it does not work. Here is the output I am getting:

root@test-betahost:~# curl -vvv -L http://beta.example.com
* Rebuilt URL to: http://beta.example.com/
* Hostname was NOT found in DNS cache
* Trying 192.168.1.10...
* Connected to beta.example.com (192.168.1.10) port 80 (#0)

GET / HTTP/1.1
User-Agent: curl/7.35.0
Host: beta.example.com
Accept: */*

< HTTP/1.1 302 Found
< Cache-Control: no-cache
< Content-length: 0
< Location: https://beta.example.com/
< Connection: close
<
* Closing connection 0
* Issue another request to this URL: 'https://beta.example.com/'
* Hostname was NOT found in DNS cache
* Trying 159.67.198.2...
* Connected to proxy.ebiz.example.com (132.67.190.200) port 80 (#1)
* Establish HTTP proxy tunnel to beta.example.com:443
CONNECT beta.example.com:443 HTTP/1.1
Host: beta.example.com:443
User-Agent: curl/7.35.0
Proxy-Connection: Keep-Alive

< HTTP/1.1 503 Service Unavailable
< Cache-Control: no-cache
< Pragma: no-cache
< Content-Type: text/html; charset=utf-8
< Proxy-Connection: close
< Connection: close
< Content-Length: 1019
<
* Received HTTP code 503 from proxy after CONNECT
* Connection #1 to host proxy.ebiz.example.com left intact
curl: (56) Received HTTP code 503 from proxy after CONNECT

Could someone please help me with this?


Redirecting to new URL


@prash.devops wrote:

Hi,

Here is my setup.

frontend http-in-ssl
bind *:80
bind *:443 ssl crt /etc/haproxy/ssl
log 127.0.0.1 local1 notice
option httplog
reqadd X-Forwarded-Proto:\ https
reqadd X-Forwarded-Proto:\ http

    acl alpha hdr(host) -i alpha.example.com
    acl beta hdr(host) -i beta.example.com
    redirect scheme https if !{ ssl_fc } alpha
    redirect scheme https if !{ ssl_fc } beta
    use_backend alpha-backend if { ssl_fc_sni alpha.example.com }
    use_backend beta-backend if { ssl_fc_sni beta.example.com }

backend alpha-backend
server alpha-server-server 192.168.1.100:8080 check inter 2000 maxconn 500 rise 2 fall 3
backend beta-backend
server beta-server-server 192.168.1.200:8090 check inter 2000 maxconn 500 rise 2 fall 3

What I would like to achieve: if anyone hits alpha.example.com/rest/api, it should be redirected to 192.168.1.150:8085, and all requests should be converted to HTTPS.
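
Assuming "redirected" here means that those requests should be served by 192.168.1.150:8085, one way to express that is an extra ACL and backend on top of the existing frontend; a minimal sketch (the ACL and backend names are illustrative):

frontend http-in-ssl
    acl alpha hdr(host) -i alpha.example.com
    acl rest_api path_beg /rest/api
    redirect scheme https if !{ ssl_fc } alpha
    # use_backend rules are evaluated in order, so place this before the generic alpha-backend rule
    use_backend alpha-api-backend if { ssl_fc } alpha rest_api

backend alpha-api-backend
server alpha-api-server 192.168.1.150:8085 check inter 2000 maxconn 500 rise 2 fall 3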

Thanks


Full URL redirection: domain + path + query, how do I do it?


@JohnJonson wrote:

I use HAProxy 1.5.14 with CentOS 7.

For a while I have been redirecting stuff using a map file like this:

domain1.com https://somedomain.net/path
domain2.com/path https://somedomain.net/path2?option=10
..and so on..

Then in the frontend I do:

http-request redirect location %[base,map_beg(/etc/haproxy/redirections.map)] code 302 if { base,map_beg(/etc/haproxy/redirections.map) -m found }

And this has been working like a charm; however, I'm now facing an issue that I do not know how to solve: I have to redirect a few URLs that contain parameters (domain + path + query), e.g.:

domain3.com/path?optiona=1&optionb=2 https://yetanotherdomain.net/something?option=10
domain4.oreg/home?lol=1 https://yetanotherdomain.net/something?option=15

I tried using:

http-request redirect location %[url,map_beg(/etc/haproxy/redirections.map)] code 302 if { url,map_beg(/etc/haproxy/redirections.map) -m found }

But it didn't work.

I have searched on Google up and down right and left and after a week I haven't been able to find a solution for this issue.

Does any kind soul out there know a way to achieve this?

Thanks in advance.


Stats configuration always opens a UDP port with a 0.0.0.0 binding


@damo_pulsant wrote:

Hi all,

I am building a new HAProxy setup.

When I configure the stats to use either a socket or TCP, starting the server always opens a UDP port, listening on a random port with a 0.0.0.0 binding.

How can I prevent this from being created?


How to fetch SSL subjectAltName (SAN) extension data in HAProxy?


@deepagarhaproxy wrote:

Hello,

I am looking to fetch the OpenSSL subjectAltName (SAN) extension data in HAProxy, but I am not able to find any API in HAProxy to do so.

Could someone help with how to fetch the SSL subjectAltName (SAN) extension data in HAProxy?

The HAProxy sample fetches seem to exist only for the SSL DN and SNI, not for the subjectAltName (SAN) extension data.
Is there any workaround? Please suggest.

Thanks,
Deepak

