Channel: HAProxy community - Latest topics

Convert Small Nginx Config to HAProxy


@maxbridgewater wrote:

Please, I need some help converting this NGINX config to HAProxy. Essentially, the config extracts an IP address, a port, and a relative path from the request URL and dynamically forwards the request to the extracted server. One thing that might not be obvious from this config is that it also handles WebSockets, which explains some of the unusual settings you'll see (e.g. the Upgrade header).

Any help would be much appreciated.

server {
    listen 80;

    location ~ ^/forward/([^/]+)/([^/]+)/(.*)$ {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://$1:$2/$3;
    }
}
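
For reference, a rough HAProxy equivalent is sketched below. This is a minimal sketch, not a tested config: it assumes HAProxy 1.7 or newer (for http-request set-dst / set-dst-port) and that the string values extracted from the path cast cleanly to an address and a port; the names fe_http, be_forward and the placeholder server are illustrative.

    frontend fe_http
        bind :80
        mode http
        option forwardfor
        acl is_forward path_reg ^/forward/[^/]+/[^/]+/
        use_backend be_forward if is_forward

    backend be_forward
        mode http
        # extract <ip> and <port> from /forward/<ip>/<port>/<path>
        http-request set-var(txn.dst_ip)   path,field(3,/)
        http-request set-var(txn.dst_port) path,field(4,/)
        # strip the /forward/<ip>/<port>/ prefix so only /<path> goes upstream
        http-request set-path %[path,regsub(^/forward/[^/]+/[^/]+/,/)]
        # connect to the address and port taken from the URL
        http-request set-dst var(txn.dst_ip)
        http-request set-dst-port var(txn.dst_port)
        server dynamic 0.0.0.0:0

The WebSocket-specific headers from the nginx config need no HAProxy counterpart: when HAProxy sees the 101 Switching Protocols response it tunnels the connection, so Upgrade/Connection pass through untouched.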

Posts: 2

Participants: 2



Healthcheck with xml post in body


@glenn wrote:

Hello, I have a health check that requires a POST of XML in the body. I can run the curl command below and it works, but the question is: how do I code this into the haproxy.cfg file?

curl -v mywebserver/wfc/XmlService -H "Content-Type: application/xml" -d "<?xml version='1.0'?><Kronos_WFC version='1.0'><Request Object='System' Action='Logon' Username='myuser' Password='mypass'/><Request Action='RetrieveResponseTime' retrieveFor='pingWithDB'><HeartBeat/></Request><Request Object='System' Action='Logoff'/></Kronos_WFC>"
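
One way to express this, sketched with illustrative backend and server names and the XML taken from the curl command above: on HAProxy 2.2 and newer, the http-check send directive can carry the method, headers and body directly. On the 1.x versions current when this was asked, the usual workaround is instead to append the extra headers and the body to option httpchk after the version string, escaping every space with a backslash and writing CRLFs as \r\n.

    backend wfc
        option httpchk
        http-check send meth POST uri /wfc/XmlService ver HTTP/1.1 hdr Host mywebserver hdr Content-Type application/xml body "<?xml version='1.0'?><Kronos_WFC version='1.0'><Request Object='System' Action='Logon' Username='myuser' Password='mypass'/><Request Action='RetrieveResponseTime' retrieveFor='pingWithDB'><HeartBeat/></Request><Request Object='System' Action='Logoff'/></Kronos_WFC>"
        http-check expect status 200
        server web1 mywebserver:80 check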

Posts: 2

Participants: 1


Systemctl script for RHEL 7.x


@kotarusv wrote:

Hi

I'm having two issues with the systemd unit below on RHEL 7.x:

  1. I frequently see the error below in systemctl status for the haproxy-80 and haproxy-443 instances:

systemd Failed to parse kill mode, ignoring: mixed
systemd Failed to parse kill mode, ignoring: mixed
systemd Failed to parse kill mode, ignoring: mixed
systemd Failed to parse kill mode, ignoring: mixed

  2. More importantly, after a few days HAProxy stops responding and health checks fail. I am pretty sure the backend is working fine. There is no pattern, but it has been happening very frequently, and restarting HAProxy resolves the issue immediately.

I am using a self-compiled version of HAProxy; the compile flags are below:

HA-Proxy version 1.6.8 2016/08/14
Copyright 2000-2016 Willy Tarreau willy@haproxy.org

Build options :
TARGET = linux2628
CPU = native
CC = gcc
CFLAGS = -O2 -march=native -g -fno-strict-aliasing -Wdeclaration-after-statement
OPTIONS = USE_LIBCRYPT=1 USE_ZLIB=yes USE_OPENSSL=1 USE_STATIC_PCRE=1

Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.2h 3 May 2016
Running on OpenSSL version : OpenSSL 1.0.2h 3 May 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.32 2012-11-30
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built without Lua support
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.

My systemctl startup script

cat /etc/systemd/system/haproxy-80.service
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target

[Service]
Environment="CONFIG=/opt/haproxy/haproxy-80/conf/haproxy.conf" "PIDFILE=/run/haproxy-80.pid"
ExecStartPre=/opt/haproxy/haproxy-80/sbin/haproxy-80 -f $CONFIG -c -q
ExecStart=/opt/haproxy/haproxy-80/sbin/haproxy-80-systemd-wrapper -f $CONFIG -p $PIDFILE
ExecReload=/opt/haproxy/haproxy-80/sbin/haproxy-80 -f $CONFIG -c -q
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=haproxy-80
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target
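
(A note on the first error: KillMode=mixed was only added in systemd 209, and early RHEL 7 releases ship systemd 208, which is why it logs "Failed to parse kill mode, ignoring: mixed". systemd then falls back to its default, so the message is cosmetic; to silence it, drop the line or use a value that 208 understands, e.g.:

    [Service]
    KillMode=control-group

This does not explain the hangs, which look like a separate problem.)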

I am running the 80 and 443 instances separately; the binaries are shared, but they use different configuration files.

Can you confirm the above systemd unit is valid?

Srinivas Kotaru

Posts: 1

Participants: 1


Routing Issues and General Questions


@novaomega wrote:

OK, so let me start off by saying that I could be completely wrong about everything I am about to say, and I am open to any criticism and feedback. I stumbled upon HAProxy because I needed a load balancer for part of my undergrad engineering thesis. I am by no means a networking expert, nor do I pretend to be one, but from what I've read HAProxy might be able to solve another problem I will have. If HAProxy is not the tool for this job, please point me in the right direction, either to build one and open-source it or to one that already exists.

First, my plan may seem big and even a little complicated, and it probably is, but the seven men and women I have been working with so far have been doing a very good job of progressing this project.

My goal is to create a software stack that uses VirtualGL and TurboVNC as the base, allowing a series of VNC sessions to be opened on servers. The advantage is that a load of users can run 3D-accelerated programs on one server while all sharing a single high-performance GPU. For security, and for other parts of my thesis that aren't relevant to this forum, I want to put all of these "servers" behind a DHCP server acting as a "master" node: a request to connect a user goes to the DHCP "master" node, which then finds whichever of its children has the most free resources and assigns the client to connect there. Basically, finding the freest server to load a session on. That is all fine and dandy (I think), but do I need to worry about the packets being transmitted if I step in the middle? Say I am connecting using TurboVNC and I connect to the DHCP server even though it does NOT have a VNC server on board; how would I go about linking the connection from server A, found under the DHCP server, to client N? Also, what happens if servers A and B both open a connection on display :1? Typically, to connect to a server with the viewer you enter the IP address and the display number; this display number is generated by the server when you tell it to spawn a new session for a new user. If a machine at 10.10.0.102 opened displays 1 and 2, and a machine at 10.10.0.103 opened displays 1 and 2, and a user asked the DHCP server to connect to the server on display :1, how would the DHCP server/HAProxy/some tool that makes my life easier automatically know which IP address to send that user to? I suppose you could say "if a user from this public IP address comes in, connect them to this session", but what if two people connect from the same public IP address? Also, once the client is connected to the server, do I need to maintain and watch the connection? Would it be better to send a web cookie along with any information to the server to identify who I am and where I am going, and if I did that, where would I start researching to find this out?

OK, so I hope I didn't steamroll you with questions, and I also hope that you are still reading. :smile: I want to have a series of these small cluster systems, each with a DHCP/master server and child nodes that host the VNC sessions. I then want another DHCP/master server that sits above all of these clusters, finds the cluster with the least work on it, asks that cluster's master to find the system with the least work, and then connects a client to that machine. Basically, I want to find the best possible experience for the connecting client. I do not know if HAProxy can do this, and I do not know how to program IP lookup tables; I am a computer science and computer engineering double-major undergrad student.

Anything will be helpful

Thank you for your time and patience

Scott

Posts: 1

Participants: 1


Multiple RDP endpoints using the same source port number


@carlpywell wrote:

Hi, I'm wondering if the following is possible.

We have two different RDP servers that perform different functions, so they are classed as different backends, but both listen on the same port, 3389. Is it possible to split these with an ACL at the frontend, using something like the destination computer name, and direct them to the relevant backend?

For example, the two RDP endpoints are RDP1.example.com and RDP2.example.com, both using the default port 3389:

frontend rdp
bind :3389
acl is-rdp1 if the endpoint is rdp1.example.com
acl is-rdp2 if the endpoint is rdp2.example.com
use backend rdp1 if is-rdp1
use backend rdp2 if is-rdp2

backend rdp1
server rdp-1 rdp1.example.com

backend rdp2
server rdp-2 rdp2.example.com

I need the ACL to identify the RDP server address.
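
Plain RDP does not send the requested DNS name in a form HAProxy can ACL on, so one workaround, sketched with placeholder addresses and assuming each name can be given its own IP on the load balancer, is to resolve rdp1 and rdp2 to different frontend IPs and route on the destination address:

    frontend rdp
        mode tcp
        bind 192.0.2.11:3389   # rdp1.example.com resolves here
        bind 192.0.2.12:3389   # rdp2.example.com resolves here
        use_backend rdp1 if { dst 192.0.2.11 }
        use_backend rdp2 if { dst 192.0.2.12 }

    backend rdp1
        mode tcp
        server rdp-1 rdp1.example.com:3389

    backend rdp2
        mode tcp
        server rdp-2 rdp2.example.com:3389

If the clients connect over TLS and happen to send SNI, matching on req.ssl_sni would be another option, but that depends entirely on the client configuration.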

Thanks in advance

Posts: 1

Participants: 1


OCSP stapling with Let's Encrypt certificates


@StephenG wrote:

I'm trying to get OCSP stapling working with HAProxy and certificates from letsencrypt.org but running into a problem.

I've got HAProxy set up with SSL termination for a number of domains (on a single IP) and it all works fine, except for the OCSP stapling part.

The bind line in my haproxy.cfg frontend is:

bind 208.101.xx.xx:443 ssl crt /etc/haproxy/certs/

I have a script to get the OCSP response which is something like this:

openssl ocsp -no_nonce -respout /etc/haproxy/certs/mydomain.com.pem.ocsp -issuer /etc/letsencrypt/live/mydomain.com/chain.pem -verify_other /etc/letsencrypt/live/mydomain.com/chain.pem -cert /etc/letsencrypt/live/mydomain.com/cert.pem -url http://ocsp.int-x3.letsencrypt.org/ -header "HOST" "ocsp.int-x3.letsencrypt.org"

This seems to be fine; it responds with:

Response verify OK
/etc/letsencrypt/live/mydomain.com/cert.pem: good
This Update: Oct 18 01:00:00 2016 GMT
Next Update: Oct 25 01:00:00 2016 GMT

So the .ocsp file is in my HAProxy cert directory. I also copy over the issuer file from the Let's Encrypt certificates:

cp /etc/letsencrypt/live/mydomain.com/chain.pem /etc/haproxy/certs/mydomain.com.pem.issuer

Now I have both the .issuer and .ocsp files in my HAProxy cert directory. (The .ocsp file is in a binary format)

-rw------- 1 user user 5221 Oct 17 20:41 mydomain.com.pem
-rw------- 1 user user 1647 Oct 17 19:45 mydomain.com.pem.issuer
-rw------- 1 user user 527 Oct 19 09:44 mydomain.com.pem.ocsp

I notify HAProxy about it:

echo "set ssl ocsp-response $(/usr/bin/base64 -w 10000 /etc/haproxy/certs/mydomain.com.pem.ocsp)" | /usr/bin/socat stdio /var/run/haproxy.admin

It seems to work, I get the reply:

OCSP Response updated!

Now it should be stapling the OCSP response?

I try testing it using:

openssl s_client -connect mydomain.com:443 -tlsextdebug -status -servername mydomain.com

and get:

..
OCSP response: no response sent
depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
verify return:1
depth=0 CN = mydomain.com
verify return:1
..

Oops... not sure what is going on here, or if it's something peculiar to the way Let's Encrypt certs are set up.

I was using HAProxy 1.6.3 but moved to 1.7 dev4 to try again, without any more success. (CentOS 7, OpenSSL 1:1.0.1e-51.el7_2.7)

Thanks for any pointers...
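
Not a diagnosis, but one way to narrow it down is to script the steps above and add a reload, since HAProxy also reads a .pem.ocsp file sitting next to the certificate when it (re)starts; if stapling works after a reload but not after the socket update alone, the problem is on the runtime-update side. A sketch using the paths from the post (the reload command is an assumption about the init system):

    #!/bin/sh
    CERTDIR=/etc/haproxy/certs
    LIVE=/etc/letsencrypt/live/mydomain.com

    openssl ocsp -no_nonce -respout $CERTDIR/mydomain.com.pem.ocsp \
        -issuer $LIVE/chain.pem -verify_other $LIVE/chain.pem \
        -cert $LIVE/cert.pem -url http://ocsp.int-x3.letsencrypt.org/ \
        -header "HOST" "ocsp.int-x3.letsencrypt.org"

    # runtime update over the admin socket
    echo "set ssl ocsp-response $(base64 -w 10000 $CERTDIR/mydomain.com.pem.ocsp)" \
        | socat stdio /var/run/haproxy.admin

    # fallback: reload so the .ocsp file is also picked up at startup
    systemctl reload haproxy    # or however HAProxy is reloaded on this host

    # re-test
    echo | openssl s_client -connect mydomain.com:443 -status -servername mydomain.com 2>/dev/null | grep "OCSP"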

Posts: 1

Participants: 1


Closed - double post

Lua script doesn't print/log host or path in TCP mode


@xannax wrote:

Hi,

The problem is that in TCP mode the Lua script doesn't log/print the host and path; only the IP gets logged/printed. If I change the mode to http it prints everything, but I need TCP mode. Can someone help me fix this?

This is what my HAProxy configuration file looks like:

frontend RTMP
    bind 0.0.0.0:1935
    mode tcp
    use_backend %[lua.choose_backend("d")]

backend a
    balance roundrobin
    mode tcp
    server origin_1 192.168.1.19:1935 check fall 3 rise 2

backend b
    balance roundrobin
    mode tcp
    server origin_1 192.168.1.20:1935 check fall 3 rise 2

And this is my Lua script:

function my_hash(txn, salt)
    local str = ""
    str = "HOST: " .. txn.sf:req_fhdr("host")
    str = str .. " PATH: "
    str = str .. txn.sf:path()
    str = str .. " SRC: "
    str = str .. txn.sf:src()

    core.log(core.info, str)
    return 'a'
end

core.register_fetches("choose_backend", my_hash)
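
This is expected in mode tcp: req.fhdr and path are HTTP sample fetches, so they return nothing when the stream is not parsed as HTTP, while src is an L4 fetch and keeps working. A hedged sketch of a version that at least fails gracefully is below; the req_payload peek is purely illustrative (for RTMP the app/stream name lives inside the connect command, after the handshake, so extracting it takes real parsing, not a fixed-offset read).

    -- sketch only: guard HTTP-only fetches against nil in mode tcp
    function my_hash(txn, salt)
        local host = txn.sf:req_fhdr("host") or "n/a"
        local path = txn.sf:path() or "n/a"
        local src  = txn.sf:src() or "unknown"

        -- raw, unparsed first bytes of the client payload (illustration only)
        local head = txn.sf:req_payload(0, 64) or ""

        core.log(core.info, string.format("HOST: %s PATH: %s SRC: %s payload_bytes=%d",
                                          host, path, src, #head))
        return "a"
    end

    core.register_fetches("choose_backend", my_hash)

Two related points: in mode tcp a content-based use_backend only sees payload data if the frontend also has tcp-request inspect-delay (with a matching tcp-request content rule), and if the traffic really is HTTP/WebSocket rather than RTMP, switching the proxy to mode http is the simpler fix.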

Posts: 1

Participants: 1



HAProxy load balance RTMP (TCP) BASED on URL


@xannax wrote:

I want HAProxy to redirect users to backends based on the URL. The mode is TCP, because they will connect to RTMP servers. Please show me how.

Posts: 1

Participants: 1


Is it possible to use lua scripting to customize how the HAProxy balancing is performed?


@cgonzalez wrote:

Hi guys,

I have a monitoring solution that gathers the health of my different nodes, and what I would like to do is use this health data in the HAProxy balancing process. But I did not see any easy way to do it in the HAProxy configuration.

Have any of you tried to do something similar? Any ideas/advice on where to start?

I started looking into Lua scripting and stick tables, but I'm not totally sure they would allow me to influence the balancing process. Does anyone know if that could work? Any advice?
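
For what it's worth, two built-in mechanisms may already cover this without Lua, sketched below with illustrative addresses and ports: agent-check, where a small agent on each node replies with a weight percentage (or up/down/drain) that HAProxy applies on its own, and the runtime API, where the monitoring system pushes weights over the stats socket.

    backend app
        balance roundrobin
        # each node runs a tiny agent on :9777 answering e.g. "75%\n" based on its health
        server node1 10.0.0.11:8080 check weight 100 agent-check agent-port 9777 agent-inter 5s
        server node2 10.0.0.12:8080 check weight 100 agent-check agent-port 9777 agent-inter 5s

    # alternatively, push weights from outside (needs "stats socket ... level admin"):
    #   echo "set server app/node1 weight 30" | socat stdio /var/run/haproxy.sock

Setting a weight to 0 effectively drains a node: it stops receiving new sessions while existing ones finish.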

Any help would be welcomed.
Thanks in advance,
Celso

Posts: 1

Participants: 1


High CPU usage when exists CLOSE_WAIT connections


@robsonpeixoto wrote:

I'm using https://github.com/mesosphere/marathon-lb to configure HAProxy. Because it rewrites the configuration whenever our microservice instances change, it creates a lot of haproxy processes. For example:

# pgrep haproxy -a
690 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 70155
3108 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 72735
7540 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 77012
8297 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 77058
9651 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 78452
10690 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 79475
15639 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 84966
15760 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 85082
16574 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 85637
16923 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 86235
17022 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 86278
17672 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 86375
18060 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 87011
18620 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 87398
19470 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 87809
20350 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 88653
52146 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 20253
52339 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 20744
53367 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 20934
53468 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 21957
53710 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 22058
54324 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 22295
54967 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 23482
55476 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 23537
55796 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 24162
55987 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 24199
56180 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 24546
56246 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 24582
56519 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 24879
57201 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 25317
57546 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 25568
57774 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 26080
60398 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 30723
60783 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 31118
89062 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 56479
89509 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 57252
89784 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 57531
90949 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 58144
91675 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 59389
93436 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 59422
93677 haproxy -p /tmp/haproxy.pid -f /marathon-lb/haproxy.cfg -D -sf 61172

An haproxy process will exit once all of its connections are finished. But sometimes a process is left with only connections in CLOSE_WAIT, and it then consumes 100% of the CPU.

# lsof -i | awk '{if($1 == "haproxy") print $2 " " $10}' | sort -u
10690 (ESTABLISHED)
15760 (ESTABLISHED)
16574 (ESTABLISHED)
16923 (ESTABLISHED)
17022 (ESTABLISHED)
17672 (ESTABLISHED)
18060 (ESTABLISHED)
18620 (ESTABLISHED)
19470 (ESTABLISHED)
20350 (CLOSE_WAIT)
20350 (ESTABLISHED)
20350 (LISTEN)
3108 (ESTABLISHED)
52146 (ESTABLISHED)
52339 (ESTABLISHED)
53367 (ESTABLISHED)
53468 (ESTABLISHED)
53710 (ESTABLISHED)
54324 (ESTABLISHED)
54967 (ESTABLISHED)
55476 (ESTABLISHED)
55796 (ESTABLISHED)
55987 (ESTABLISHED)
56180 (ESTABLISHED)
56246 (ESTABLISHED)
56519 (ESTABLISHED)
57201 (ESTABLISHED)
57546 (ESTABLISHED)
57774 (ESTABLISHED)
60398 (CLOSE_WAIT)
60783 (CLOSE_WAIT)
690 (ESTABLISHED)
7540 (ESTABLISHED)
8297 (ESTABLISHED)
89062 (ESTABLISHED)
89509 (ESTABLISHED)
89784 (ESTABLISHED)
90949 (ESTABLISHED)
91675 (ESTABLISHED)
93436 (ESTABLISHED)
93677 (ESTABLISHED)

Here, processes 60398 and 60783 only have connections in CLOSE_WAIT:

# lsof -i | awk '{if($2 == "60398") print $2 " " $9 " " $10}'
60398 mesos-lb-3.mydomain:35819->leia-5.mydomain:31302 (CLOSE_WAIT)
# lsof -i | awk '{if($2 == "60783") print $2 " " $9 " " $10}'
60783 mesos-lb-3.mydomain:37419->leia-8.mydomain:31682 (CLOSE_WAIT)

The strace of both shows the same result:

poll(0x7fe774e2e010, 0, 0)              = 0 (Timeout)
poll(0x7fe774e2e010, 0, 0)              = 0 (Timeout)
poll(0x7fe774e2e010, 0, 0)              = 0 (Timeout)
poll(0x7fe774e2e010, 0, 0)              = 0 (Timeout)
poll(0x7fe774e2e010, 0, 0)              = 0 (Timeout)
poll(0x7fe774e2e010, 0, 0)              = 0 (Timeout)

tcpdump -vv port 35819 didn't show anything, and the same goes for port 37419.

About the environment:

# haproxy -vv
HA-Proxy version 1.6.9 2016/08/30
Copyright 2000-2016 Willy Tarreau <willy@haproxy.org>

Build options :
  TARGET  = custom
  CPU     = x86_64
  CC      = gcc
  CFLAGS  = -g -fno-strict-aliasing -Wdeclaration-after-statement
  OPTIONS = USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1 USE_LIBCRYPT=1 USE_ZLIB=1 USE_POLL=default USE_DL=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.2j  26 Sep 2016
Running on OpenSSL version : OpenSSL 1.0.2j  26 Sep 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.39 2016-06-14
PCRE library supports JIT : yes
Built with Lua version : Lua 5.3.3
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 2 (2 usable), will use poll.

It's running using docker and the image was based on debian:stretch image.
The docker host info is:

# lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 14.04.5 LTS
Release:	14.04
Codename:	trusty

# uname -a
Linux mesos-lb-4.mydomain 3.13.0-98-generic #145-Ubuntu SMP Sat Oct 8 20:13:07 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

# docker version
Client:
 Version:      1.10.3
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   20f81dd
 Built:        Thu Mar 10 15:54:52 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.10.3
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   20f81dd
 Built:        Thu Mar 10 15:54:52 2016
 OS/Arch:      linux/amd64

Any idea what's happening?
Thanks for the attention
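
One mitigation worth trying, offered as a sketch rather than an explanation of why the old worker spins: bound the lifetime of pre-reload processes and make sure idle sessions time out, so a process left with only CLOSE_WAIT sockets eventually goes away.

    global
        # terminate leftover workers some time after a soft reload (-sf);
        # check that your build supports this keyword before relying on it
        hard-stop-after 30m

    defaults
        # finite timeouts so half-closed sessions cannot pin an old process forever
        timeout client 1m
        timeout server 1m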

Posts: 1

Participants: 1


Postfix LOAD-BALANCING (outgoing e-mails)


@samuel wrote:

Hi Haproxy community,
please, I'm looking for a way to load-balance Postfix outgoing e-mails. Is that possible?

Best regards
samuel.
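
At the TCP level this is possible; a minimal sketch with placeholder relay addresses: run a mode tcp listener in front of the outbound relays and point Postfix at it.

    listen smtp_out
        bind 127.0.0.1:10025
        mode tcp
        balance roundrobin
        option smtpchk HELO lb.example.local
        server relay1 10.0.0.21:25 check
        server relay2 10.0.0.22:25 check

In Postfix, relayhost = [127.0.0.1]:10025 then sends outgoing mail through the balancer; keep in mind the receiving side sees the relays' own IPs, so SPF and PTR records must cover every relay.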

Posts: 1

Participants: 1


How to password protect multiple sub domain but single login


@No-More wrote:

Hello,

I'm new to haproxy,

I have managed to set up password-protected access (for a development platform), but my application makes AJAX calls to other subdomains of the same domain (for example client.test.com -> server.test.com).
Those calls are rejected because of the auth rule above. How could I manage to log in on one subdomain and have it apply to all the other subdomains? Or is it possible not to ask for a password when the request comes from a browser app loaded from another subdomain?
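
For reference, the usual shape of this setup is sketched below; the names are illustrative, whether a browser re-sends Basic-Auth credentials to a second subdomain depends on the client, and the OPTIONS exemption is there because browsers never attach credentials to CORS preflight requests.

    userlist dev_users
        user dev insecure-password s3cret

    frontend dev_in
        bind :80
        acl preflight method OPTIONS
        acl auth_ok   http_auth(dev_users)
        http-request auth realm Development if !preflight !auth_ok
        default_backend app

A single login that covers every subdomain is really a cookie-for-.test.com problem (a small auth service or SSO gateway), which plain HAProxy config does not provide by itself.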

I'm not sure my question is clear; please ask if you need more information.

Thanks a lot.

Posts: 1

Participants: 1


Haproxy using systemctl


@kotarusv wrote:

Hi

Is anyone using a compiled version of HAProxy on RHEL 7.x or derivatives? We have been running a compiled version successfully for the last few years on RHEL 6.7. Recently we migrated to RHEL 7.x and compiled with the same options on a RHEL 7.x build system. I am seeing intermittent outages every few days. I'm not seeing any relevant messages in the logs (/var/log/messages or the haproxy notice log), so I'm not sure why it goes down every few days, and we have to issue a reload to make it work again.

The only noticeable difference is the systemd wrapper and the unit file: I am using the haproxy-systemd-wrapper generated by the compiled version and the unit file from the HAProxy contrib directory.

unit file:

cat etc/systemd/system/haproxy-80.service

[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target

[Service]
Environment="CONFIG=/opt/haproxy/haproxy-80/conf/haproxy.conf" "PIDFILE=/run/haproxy-80.pid"
ExecStartPre=/opt/haproxy/haproxy-80/sbin/haproxy-80 -f $CONFIG -c -q
ExecStart=/opt/haproxy/haproxy-80/sbin/haproxy-80-systemd-wrapper -f $CONFIG -p $PIDFILE
ExecReload=/opt/haproxy/haproxy-80/sbin/haproxy-80 -f $CONFIG -c -q
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=haproxy-80
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target

Syslog file

$ cat etc/rsyslog.d/haproxy-80.conf
if ($programname == 'haproxy-80' and $syslogseverity-text == 'info') then -/opt/haproxy/logs/haproxy-80/haproxy-80-info.log
& ~
if ($programname == 'haproxy-80' and $syslogseverity-text == 'notice') then -/opt/haproxy/logs/haproxy-80/haproxy-80-notice.log
& ~

haproxy config file:

$ cat opt/haproxy/haproxy-80/conf/haproxy.conf
global
daemon
user www
group www
pidfile /var/run/haproxy-80.pid
log /dev/log local0 info
log /dev/log local1 notice
maxconn 50000
tune.bufsize 128000
tune.maxrewrite 1024
stats socket /var/run/haproxy-80.stat mode 600 level admin
stats timeout 2m
nbproc 1
tune.comp.maxlevel 5

defaults
log global
mode http
option forwardfor except 127.0.0.0/8
option http-server-close
option dontlognull
option httplog clf
option tcp-smart-accept
option tcp-smart-connect
option splice-auto
no option logasap
balance roundrobin
retries 3
timeout client 1m
timeout http-keep-alive 5s
timeout server 1m
timeout queue 1m
timeout connect 10s
timeout check 10s
timeout http-request 10s
default-server maxconn 1024 inter 5s rise 5 fall 5

   errorfile 503 /opt/haproxy/errors/503.html
   errorfile 400 /opt/haproxy/errors/400.html
   errorfile 403 /opt/haproxy/errors/403.html
   errorfile 500 /opt/haproxy/errors/500.html
   errorfile 502 /opt/haproxy/errors/502.html
   errorfile 504 /opt/haproxy//errors/504.html
   errorfile 408 /dev/null

frontend _IN
mode http
bind *:80
monitor-uri /proxy.html
log-format [%pid]\ [%Ts.%ms]\ %ac/%fc/%bc/%bq/%sc/%sq/%rc\ %Tq/%Tw/%Tc/%Tr/%Tt\ %tsc\ %ci:%cp\ %fi:%fp\ %si:%sp\ %ft\ %{+Q}r\ %ST\ %b:%s\ "%CC"\ "%hr"\ "%CS"\ "%hs"\ ireq_size=%U\ resp_size=%B
unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
unique-id-header X-Unique-ID

# Blacklist: deny access to some IPs before anything else is checked
tcp-request content reject if { src -f /opt/haproxy/security/rules/blacklist.lst }

# Whitelist: allow IPs to bypass the filters
http-request allow if { src -f /opt/haproxy/security/rules/whitelist.lst }

    reqadd                X-Forwarded-Proto:\ http
    reqadd                X-Forwarded-Port:\ 80

    rspdel                ^Server:.*
    rspdel                ^X-Powered-By:.*
    rspdel                ^X-Runtime:.*

capture request header X-Forwarded-For len 500
capture request header Host len 500
capture request header Accept-Encoding len 32
capture response header Location len 64
capture response header Content-Type len 64
capture request header Referrer len 64
capture request header Content-Length len 10
capture request header User-Agent len 64
capture request header X-Unique-ID len 64

default_backend sso_server

Compile flags:

./haproxy-80 -vv
HA-Proxy version 1.6.9 2016/08/30
Copyright 2000-2016 Willy Tarreau willy@haproxy.org

Build options :
TARGET = linux2628
CPU = native
CC = gcc
CFLAGS = -O2 -march=native -g -fno-strict-aliasing -Wdeclaration-after-statement
OPTIONS = USE_LIBCRYPT=1 USE_ZLIB=yes USE_OPENSSL=1 USE_STATIC_PCRE=1

Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.2j 26 Sep 2016
Running on OpenSSL version : OpenSSL 1.0.2j 26 Sep 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.32 2012-11-30
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built without Lua support
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.

PS output:

www 1451 18205 0 Oct18 ? 00:00:00 /opt/haproxy/haproxy-80/sbin/haproxy-80 -f /opt/haproxy/haproxy-80/conf/haproxy.conf -p /run/haproxy-80.pid -Ds -sf 18209
www 1452 1451 0 Oct18 ? 00:53:31 /opt/haproxy/haproxy-80/sbin/haproxy-80 -f /opt/haproxy/haproxy-80/conf/haproxy.conf -p /run/haproxy-80.pid -Ds -sf 18209
root 18205 1 0 Oct11 ? 00:00:00 /opt/haproxy/haproxy-80/sbin/haproxy-80-systemd-wrapper -f /opt/haproxy/haproxy-80/conf/haproxy.conf -p /run/haproxy-80.pid
root 18219 1 0 Oct11 ? 00:00:00 /opt/haproxy/haproxy-443/sbin/haproxy-443-systemd-wrapper -f /opt/haproxy/haproxy-443/conf/haproxy.conf -p /run/haproxy-443.pid
wwwssl 28231 18219 0 Oct28 ? 00:00:00 /opt/haproxy/haproxy-443/sbin/haproxy-443 -f /opt/haproxy/haproxy-443/conf/haproxy.conf -p /run/haproxy-443.pid -Ds -sf 26752
wwwssl 28232 28231 0 Oct28 ? 00:09:01 /opt/haproxy/haproxy-443/sbin/haproxy-443 -f /opt/haproxy/haproxy-443/conf/haproxy.conf -p /run/haproxy-443.pid -Ds -sf 26752

Did anyone have a similar experience? Or is anyone running HAProxy successfully on the RHEL platform with systemd-based service management?

Any help is highly appreciated. I'm unable to get to the bottom of this issue because the logs don't say why it is failing every few days.
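
Since the logs stay quiet, one thing that might narrow it down (assuming socat is available; the socket path is the one from your global section): the next time an instance stops responding, capture its state from the admin socket before reloading.

    echo "show info"   | socat stdio /var/run/haproxy-80.stat
    echo "show stat"   | socat stdio /var/run/haproxy-80.stat
    echo "show sess"   | socat stdio /var/run/haproxy-80.stat
    echo "show errors" | socat stdio /var/run/haproxy-80.stat

CurrConns and Maxconn in show info, and the per-frontend session counters in show stat, usually tell you whether the process is out of connections or file descriptors, or simply not receiving traffic any more.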

Srinivas Kotaru

Posts: 1

Participants: 1


Add/subtract servers from backend


@haproxyguy wrote:

Hey guys,

I'm working with the AWS platform and started looking into HAProxy when I realised that Amazon's load balancer won't handle persistent connections which are not HTTP-based.

HAProxy seemed like a great fit. However, I'm having a bit of an issue working out how I'm going to automatically scale up/down. For example, during a busy period I may wish to add a few extra servers to my backend. It seems the only way I can do this in an automated fashion is to rewrite the haproxy.cfg file and then soft-reload the load balancer.

Is there a more elegant way to do this? It seems a bit hacky using shell scripts to re-write configs.
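
Rewriting the config and soft-reloading (-sf) is indeed the standard answer, but depending on the HAProxy version there is a runtime alternative, sketched below with placeholder names: pre-provision disabled server slots and fill them over the stats socket as instances come and go.

    backend app
        balance roundrobin
        server app1   10.0.0.10:8080 check
        # spare slots, disabled until an instance is assigned to them
        server spare1 0.0.0.0:8080 check disabled
        server spare2 0.0.0.0:8080 check disabled

    # scale up: point a spare at the new instance and enable it
    #   echo "set server app/spare1 addr 10.0.0.42" | socat stdio /var/run/haproxy.sock
    #   echo "set server app/spare1 state ready"    | socat stdio /var/run/haproxy.sock
    # scale down:
    #   echo "set server app/spare1 state maint"    | socat stdio /var/run/haproxy.sock

Newer releases also offer server-template together with DNS resolvers, which fills a backend from a DNS name automatically.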

Any help would be greatly appreciated.

Thank you,

Matt

Posts: 1

Participants: 1



HAProxy Redirect with Lync 2013


@alan.lowe wrote:

Hey guys, first off, love HAProxy. With that said, I am having a little issue getting it to work properly with Lync 2013. We have made SEVERAL attempts and different alterations to the config, and we still can't seem to get the Lync server to accept the connection after the proxy passes the traffic. Has anyone worked with Lync 2013 and HAProxy at all? If so, please help.

Posts: 1

Participants: 1


How do I get HAProxy to propagate chunked encoding?


@alvitar wrote:

We're trying to use HAProxy in an application where the client captures voice clips and submits them as part of an HTTP POST. We have Transfer-Encoding set to chunked because the voice clips can be quite large, and frankly because the customer requires it. That is how it arrives at HAProxy, but on the backend the request is no longer chunked. Can anyone tell me what I'm doing wrong? Is there a setting to propagate Transfer-Encoding?

Posts: 1

Participants: 1


HAProxy not passing through user certificate


@philipcowgill wrote:

I'm using HAProxy to load balance between four servers, which require a user to present a certificate in order to log in. When a user hits the webpage they are asked to present their certificate, but the certificate never gets through to the server to authenticate with. I've tested logging in without HAProxy in front and everything works correctly. I have pasted my configuration setup below.

global
uid 0
gid 0
log /dev/log local0
log /dev/log local1 notice
maxconn 45000
daemon

defaults
log global
mode http
option tcplog
option dontlognull
timeout server 86400000
timeout connect 86400000
timeout client 86400000
timeout queue 1000s

frontend 443_oam
bind 192.168.0.208:443
mode tcp
option tcplog
default_backend oam_443

backend oam_443
mode tcp
option ssl-hello-chk
balance roundrobin
server oam01 :443 check
server oam02 :443 check
server oam03 :443 check
server oam04 :443 check

Posts: 1

Participants: 1


Proxied IIS site delay issues


@webber wrote:

I have set up a single HAProxy server to load balance hundreds of sites running on two IIS servers. It works, but after a while some client IPs have long delays connecting to sites. If I clear ARP on both of the IIS servers with "arp -d *", the issue goes away for a while.

I have set ArpCacheSize to 8000 (hex) in the registry on both servers, and that has helped, but there is still an issue. Is there a garbage-collection timeout that also needs to be reduced in the registry? I have also tuned the Linux kernel on the HAProxy server, raising the ARP-related settings:

net.ipv4.route.gc_thresh = 524288
net.ipv4.neigh.default.gc_thresh1 = 8192
net.ipv4.neigh.default.gc_thresh2 = 16384
net.ipv4.neigh.default.gc_thresh3 = 32768

Any ideas on a permanent fix besides flushing ARP on the Windows 2012 backend servers?

Thanks.

Posts: 1

Participants: 1


How can I add a globally unique http header


@Jerry wrote:

Hi,
I am on a campaign to add application tracing to all our complex apps. I want to have a unique id across all incoming requests across all our data centers. All the incoming requests first hit a haproxy, so this is the best place to do this.

Here's what I am thinking of...

  • Each haproxy has a 16 bit haproxy-id
  • There is a file that has the next block of numbers (48 bits) that this haproxy should use. When haproxy starts up, it reads the file and sets that in a pair of integers (low 32 bits and high 16 bits.)
  • Each request gets an X-tracer-tag: header with the haproxy-id, high bytes, and low bytes in hex, concatenated, and the number gets incremented (the implementation bumps first).
  • If the value was ffffffff, roll to 0 and increment the high bytes. (I doubt we will handle >48 bits of queries in the life of this software.)
  • When the low bytes get close to the top of the block, a log entry is generated with the next block to use.
  • A separate process watches the log file and updates the input file

(Note: while this is technically not thread-safe, it is fine, because the worst case is that we would skip a 4G number block when incrementing the high bytes. That is not worth any mutex hassle.)

I looked at Lua and it could do most of what I want. I have a few questions:
How would I have a global variable that is available to a routine that every connection calls?
How do I set the haproxy id so that lua can access it?
How do I read the file with the current 6 bytes to seed the ID on start?
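
A minimal per-process sketch touching all three questions is below. It assumes the script is loaded with lua-load and that nbproc is 1 (each process would otherwise keep its own counter); the file path, the 16-bit id and the checkpoint interval are illustrative, and the counter is deliberately not made atomic, as discussed above.

    -- /etc/haproxy/tracer.lua (illustrative path)
    local haproxy_id = 0x00A1          -- 16-bit instance id; could be read from a file or env var
    local hi, lo = 0, 0                -- 48-bit block counter: high 16 bits / low 32 bits

    -- read the starting block once, when the script is loaded
    local f = io.open("/etc/haproxy/tracer.seed", "r")
    if f then
        hi = tonumber(f:read("*l")) or 0
        lo = tonumber(f:read("*l")) or 0
        f:close()
    end

    local function tracer_tag(txn)
        lo = lo + 1
        if lo > 0xffffffff then
            lo = 0
            hi = hi + 1
        end
        -- emit a checkpoint so an external process can persist the next block
        if lo % 0x100000 == 0 then
            core.log(core.info, string.format("tracer checkpoint %04x%08x", hi, lo))
        end
        return string.format("%04x%04x%08x", haproxy_id, hi, lo)
    end

    core.register_fetches("tracer_tag", tracer_tag)

And in haproxy.cfg:

    global
        lua-load /etc/haproxy/tracer.lua

    frontend fe
        http-request set-header X-tracer-tag %[lua.tracer_tag]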

thanks in advance,
jerry

Posts: 1

Participants: 1

