
HAProxy stops working after a period of time ('212' HTTP error code)

@elizabethk wrote:

I’m having this strange issue where HAProxy randomly stops working for a specific URL/IP and I cannot figure out the reason. A restart of the service will resolve the issue but I need to determine a permanent fix for this rather than a workaround cronjob that restarts the service periodically.

I am proxying a request from a server to an API

When HAProxy is working my logs will look like below (URL replaced with 'site' and IP changed for confidentiality):

/var/log/haproxy/access.log-20200117.gz:
haproxy[26037]: 10.10.50.50.:42386 [16/Jan/2020:13:39:14.934] site-uat_site.com site-uat_com/site-uat_com 0/153/272 1104 – 1/1/0/1/0 0/0

and then at some random time, possibly after a request hasn’t been sent to this URL for a while I get the below in my logs:

/var/log/haproxy/access.log-20200118.gz:
haproxy[26037]:10.10.50.10:60657 [17/Jan/2020:12:49:26.065] site-uat_site.com site-uat_com/site-uat_com 0/-1/120008 212 sC 0/0/0/0/3 0/0

Note the 212 error code and sC termination code. I’ve tried doing some research into this:
s: the server-side time-out expired first.
C: waiting for CONNECTION to establish on the server. The server might at most have noticed a connection attempt.

I check the log on the backend of the proxy and I don’t see any traffic reaching it.

Any help or ideas would be greatly appreciated, I’m even a little stuck on how to start troubleshooting this.

Config:

frontend site-uat_site_com
           bind            10.10.50.50:6443
           mode            tcp
           log             global
           option          tcplog
           option          dontlognull
           timeout client  90s
           use_backend site-uat_com

backend site-uat_com
            mode            http
            timeout connect 30s
            timeout server  30s
            balance         roundrobin
            http-request set-header Host nonprod-site.com
            server          site-uat_com nonprod-site.com:443 ssl verify none
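For reference only, and purely an assumption about the cause: because the backend server is addressed by hostname, HAProxy resolves nonprod-site.com once at startup unless a resolvers section is configured, so a changed DNS record would keep failing until a restart. A sketch of runtime resolution plus an active health check (the nameserver address is hypothetical):

    resolvers internal-dns
        nameserver dns1 10.10.0.2:53      # hypothetical internal resolver
        hold valid 10s

    backend site-uat_com
        # rest of the backend unchanged; only the server line gains runtime
        # DNS resolution and an active health check
        server site-uat_com nonprod-site.com:443 ssl verify none check resolvers internal-dns init-addr last,libc,none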

Posts: 1

Participants: 1

Read full topic


How to healthcheck redis behind stunnel?

@asp wrote:

Hi,

I am new to HAProxy and wanted to give it a try on CentOS 7.6.
I want to use HAProxy to do the failover of a Redis instance, which sits behind stunnel to enable TLS on Redis. On the producer side I am using Filebeat and Metricbeat, which send data TLS-encrypted to Redis.
Stunnel and Redis are located on the same box.

So my chain looks like this:
producer -> (TLS) -> haproxy (TLS passthrough) -> (TLS) -> stunnel -> redis.

Without encryption, both the health check and connecting from outside to Redis via redis-cli through HAProxy work fine.

That is the config for that:
defaults REDIS
    mode tcp
    timeout connect 3s
    timeout server 6s
    timeout client 6s

frontend ft_redis
    bind 0.0.0.0:16380 name redis
    default_backend bk_redis

backend bk_redis
    option tcp-check
    tcp-check connect
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server server1 localhost:7000 check inter 1s 
    server server2 localhost:7001 check inter 1s 

Now I want to do the same, but via accessing the stunnel ports (where I need to connect via TLS).
I changed the two server lines to:

server server1 localhost:7100 check-ssl inter 1s ssl verify none
server server2 localhost:7101 check-ssl inter 1s ssl verify none

But it looks like something is wrong: the health check never comes back, and the backend lines in the stats page are gray without any status.
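For reference, a hedged sketch of what the checked server lines might look like, under the assumption that the missing piece is the check keyword itself (check-ssl only tells an already-enabled health check to use TLS; it does not enable checking), while normal traffic stays untouched as TLS passthrough:

    # sketch only: the existing tcp-check rules above stay as they are
    server server1 localhost:7100 check check-ssl verify none inter 1s
    server server2 localhost:7101 check check-ssl verify none inter 1s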

Do you have any ideas how to solve the issue?

Thanks a lot, Andreas

Posts: 1

Participants: 1

Read full topic


HAProxy CPU Usage when all backend servers are in MAINT state with check directive

@bokseniu wrote:

Hi folks!

I’ve noticed weird HAProxy 2.0 (2.0.12) behaviour when it is started with all backend servers disabled (MAINT state). The weirdness is that, when the servers are configured with the check directive, HAProxy starts to consume additional CPU cycles (compared to the same configuration with the no-check directive). It becomes more visible when HAProxy is started with a high number of threads.

For example, on a 2-CPU VM with 100 backend servers in MAINT state, HAProxy started with 62 threads consumes around 20% CPU (with no load). With the same config but with no-check, CPU usage is around 1%.

Can this be fixed?

Example config:

global
   nbthread 62
   log stdout local0 warning
   stats socket /var/run/haproxy.sock mode 666 level admin
   stats timeout 2m

defaults
    log global
    mode http
    option http-use-htx
    timeout connect 5s
    timeout client 1h
    timeout server 1h

listen grpc
    bind 127.0.0.1:50051 proto h2
    option httplog
    balance roundrobin
    default-server proto h2 check inter 1s fall 1 rise 1 disabled
    #default-server proto h2 no-check inter 1s fall 1 rise 1 disabled
    server-template srv 1-100 10.0.0.1:50051

Posts: 1

Participants: 1

Read full topic


External-check with chroot not working

@asp wrote:

Hi,

I am trying to get an external-check running together with chroot. For simplicity, to show the issue I will just call /bin/true as the check.

I tried the following:

without chroot:

backend bk_redis
    option external-check
    external-check command /bin/true
    server elastic01.example.com_7000 localhost:7000 check inter 1s
    server elastic02.example.com_7001 localhost:7001 check inter 1s

That one works: both servers are reported as available, as expected.

Now my changes with chroot:
first, I copied /bin/true to /var/opt/rh/rh-haproxy18/lib/haproxy/true

then I changed the config:

global
    chroot      /var/opt/rh/rh-haproxy18/lib/haproxy

backend bk_redis
    option external-check
    external-check command /true
    server elastic01.example.com_7000 localhost:7000 check inter 1s
    server elastic02.example.com_7001 localhost:7001 check inter 1s

But here I get only this error:

[WARNING] 022/114401 (2119) : Server bk_redis/elastic01.internal.dtpublic.de_7000 is DOWN, reason: External check error, code: 255, check duration: 14ms. 1 active and 0
[ALERT] 022/114402 (2127) : Failed to exec process for external health check: No such file or directory. Aborting.
[ALERT] 022/114403 (2128) : Failed to exec process for external health check: No such file or directory. Aborting.
[ALERT] 022/114403 (2131) : Failed to exec process for external health check: No such file or directory. Aborting.

I am not so familiar with chroot. Can you point out where my mistake is?
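For context, a sketch of the kind of setup sometimes used here, under the assumption that the root cause is that /bin/true is dynamically linked and its loader/libraries are missing inside the chroot (execve then fails with "No such file or directory"); a statically linked check binary, or copying the required libraries into the chroot, is the usual workaround:

    global
        chroot /var/opt/rh/rh-haproxy18/lib/haproxy

    backend bk_redis
        option external-check
        # the path is resolved inside the chroot, so /true means
        # /var/opt/rh/rh-haproxy18/lib/haproxy/true on the real filesystem;
        # the binary and, if it is dynamically linked, its loader and libraries
        # must all exist under the chroot (hence the static-binary suggestion)
        external-check command /true
        server elastic01.example.com_7000 localhost:7000 check inter 1s
        server elastic02.example.com_7001 localhost:7001 check inter 1s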

Thanks, Andreas

Posts: 1

Participants: 1

Read full topic


Balance IP OK, balance dns name KO

@papali wrote:

Good morning. I installed HAProxy to balance 2 web servers, and I created 2 text files on these web servers to see which server is being used. From my PC, if I go to the address "https://firsttest.domain.com/number_server.txt", every refresh balances to a different web server. All OK, even the DNS configuration.

The problem is that if I instead point to "https://firsttest.dominio.com", i.e. to the homepage of the site, I get:

firsttest.domain.com has redirected you too many times.
Try clearing your cookies.
ERR_TOO_MANY_REDIRECTS

With the IP address instead, it works.

What could be the problem?
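Purely as an illustration of one common cause (an assumption, not a diagnosis): redirect loops like this often happen when HAProxy terminates TLS and the backend, seeing plain HTTP, keeps redirecting to HTTPS. A minimal sketch of the usual hint header, assuming TLS termination on HAProxy and a backend that trusts it (all names are hypothetical):

    frontend fe_https
        bind *:443 ssl crt /etc/haproxy/certs/firsttest.pem
        mode http
        # tell the backend the original request was already HTTPS,
        # so it does not answer with yet another redirect to HTTPS
        http-request set-header X-Forwarded-Proto https
        default_backend web_servers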

Thanks.

Posts: 1

Participants: 1

Read full topic


Authentication using IDP


Cannot run external healthcheck as systemd service with selinux enabled

@asp wrote:

Hi,

I have some strange issue.
If I run HAProxy as a systemd service (version 1.8 from the CentOS SCL repository) I have issues running external health checks.

I get following error:

[ALERT] 022/162700 (23160) : Failed to exec process for external health check: Permission denied. Aborting.

When running the same from commandline it works as expected.
chroot is currently deactivated.

After a lot of trying I found out that the issue is triggered by SELinux. When I disable SELinux with setenforce 0 the error vanishes. If I re-enable it, the error is there again.

I already executed the command setsebool -P haproxy_connect_any=1.

Thanks for any help
Andreas

Posts: 1

Participants: 1

Read full topic


Protect internal PHP by proxy

@rafaelweb wrote:

[Automatically translated]
Hello.

I am participating in a new project and I chose HAProxy as the solution, but I am new to the subject and I need your help.
I need to allow external access to an internal application, and for that I am using HAProxy in a DMZ, controlling access via a firewall. In addition, I needed to add authentication to the application (PHP), and this is where the problems started.
I am trying to block a user’s access directly in HAProxy if they have not yet authenticated in the system. I thought about using a custom header, but I can’t access that header in HAProxy.

Is this the ideal way for what I need to do?
My code looks like this:

[PHP]
...
if (!empty($_SESSION['uTokenAuth'])) {
	header("uTokenAuth:".$_SESSION['uTokenAuth']);
}
...

[HAPROXY]
...
frontend http_in
	bind *:80
	#bind *:443 ssl crt /etc/ssl/certs/mysite.pem
	http-request capture req.hdr(uTokenAuth) len 16
	log-format %ci\:%cp\ frontend=%ft\ backend_ip=%bi\ backend_pool=%b\ server_name=%s\ AuthHeader=%[capture.req.hdr(0)]\ %hr\ %hrl\ %hs\ %hsl\ http_log="[%tr] %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"
	#http-request redirect scheme https unless { ssl_fc }
	default_backend WEB
	
backend WEB
	option forwardfor
	option http-server-close
	balance roundrobin
	server WEBSERVER ip:80 no-ssl check port 80
	acl withtoken req.hdr(uTokenAuth) -m found
	acl loginpage path_beg /login
	http-request set-header X-Client-ip %[src]
	http-request set-header X-Forwarded-Proto http
	http-request set-header X-Frame-Options DENY
	http-request redirect location /login if !withtoken !loginpage
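Purely as an illustrative sketch of an alternative pattern (not a statement of how the application should work): since req.hdr() in HAProxy only sees headers the client sends, a cookie set by the PHP application after login can be gated on in the frontend instead; all names below are hypothetical:

	frontend http_in
		bind *:80
		# hypothetical cookie name, set by the PHP application after a successful login
		acl has_auth_cookie req.cook(uTokenAuth) -m found
		acl loginpage       path_beg /login
		http-request redirect location /login if !has_auth_cookie !loginpage
		default_backend WEB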

Thanks

Posts: 1

Participants: 1

Read full topic


Reading hostname list from file in ACL not working


Preserve server states during failover

@maartend wrote:

Hi,

In an active-passive setup with keepalived (floating IP) and two HAProxy servers (both running), what options are there to preserve the server states during a failover from one HAProxy to the other?

Locally it’s possible to maintain the server state across restarts with "server-state-file".
Is there a (peering-like) mechanism available so that both HAProxy instances stay in sync on the server states configured via the stats page?
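For reference, the local mechanism mentioned above looks roughly like this (the file path is an assumption); the open question remains how to share that state, or something equivalent, between the two nodes:

    global
        # file read at startup when 'load-server-state-from-file global' is set;
        # it is typically refreshed before a reload by saving the output of
        # 'show servers state' from the stats socket
        server-state-file /var/lib/haproxy/server-states

    defaults
        load-server-state-from-file global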

Please let me know what thoughts you have…

Maarten

Posts: 2

Participants: 2

Read full topic


Stickiness on server-created sessions

@Hein wrote:

I’m trying to load-balance a SOAP interface which requires sticky sessions, in a manner that is transparent to the caller. The first call to the interface does not have a session ID in it, and thus needs to be load-balanced normally. The server will create a session and return the session ID in the response. Subsequent calls to the interface will contain the session ID, and thus need to be routed to the original server. The session IDs are globally unique, so they are ideally suited to be used in a stick table, and I’ve been working on getting that done.

I’m encountering several problems, and I am reluctant to create a topic for each, so let’s see if I can combine them:
Conditional entry in stick table
I have a lua script that parses the incoming request for the session ID. However, there might be none, in which case the script returns nil (though I could easily return something else that is not similar to the session IDs). So the 'stick on' must be conditional, because right now, nil gets stored in the stick table and every request without a session ID goes to that server! That’s not what we need.
I now have:

http-request set-var(txn.sessionid) lua.parseElement
stick on var(txn.sessionid) if var(txn.sessionid),strcmp(nil) neq 0

but that gives an error because there is no such ACL. This one also doesn’t work:

http-request set-var(txn.sessionid) lua.parseElement
acl no_req_session var(txn.sessionid) -m str nil
stick on var(txn.sessionid) if !no_req_session

Clearly I am doing something wrong… but what?
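For the first problem, a sketch of one form that is sometimes used, under the assumption that the Lua script leaves txn.sessionid unset (rather than storing the string "nil") when no session ID is present; an anonymous ACL then guards the stick rule:

    http-request set-var(txn.sessionid) lua.parseElement
    # only learn/match stickiness when the variable actually holds a value
    stick on var(txn.sessionid) if { var(txn.sessionid) -m found }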

Parsing response
In my Lua script, I have the following lines:

local payload_len = txn.sf:res_len()
local payload = tostring(txn.sf:res_payload(0,tonumber(payload_len)))
core.Alert("Request length is " .. tostring(payload_len))
core.Alert("LUA Starting parse on payload " .. payload)

which should give me the payload of the response in the variable 'payload'. This is what I see in the log:

[alert] 023/071300 (1968) : Request length is 435
[alert] 023/071300 (1968) : LUA Starting parse on payload .?

Clearly, I’m not successful in converting the whole payload to a string. What am I doing wrong?

Any help with these is greatly appreciated!

Posts: 1

Participants: 1

Read full topic


Subdirectory frontend to root backend

@thetimetolearn wrote:

Hello

I’m looking for some help with figuring out the best way to handle routing URLs with subdirectories to applications hosted on the root of a webserver.

Say I have the URL domainx.com/path1; I want users to be sent to an application on server web2 that runs in the root of Tomcat.

If I use the config below, it will send users to that Tomcat server but will direct them to 192.168.0.51:8080/path1, not 192.168.0.51:8080. If I move the application to 192.168.0.51:8080/path1 this will work. But what if the application can’t be renamed or relocated? What is the best way to accomplish this?

Thanks for any help in advance!

 	acl url_domainx.com hdr(host) -i domainx.com
	acl url_domainx.com-path1 path_beg -i /path1
	
	use_backend domainx.com if url_domainx.com-path1 url_domainx.com
	use_backend domainx.com if url_domainx.com
 
backend domainx.com
        server web1 192.168.0.50:8080 check
		
backend domainx.com-path1
        server web2 192.168.0.51:8080 check
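One pattern sometimes used for this kind of mapping (a sketch, not a definitive answer) is to strip the prefix before the request reaches the backend; note that the application may still emit redirects and links without the prefix, which would need separate handling:

backend domainx.com-path1
        # remove the leading /path1 so the root-deployed app sees / instead
        http-request set-path %[path,regsub(^/path1/?,/)]
        server web2 192.168.0.51:8080 check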

Posts: 7

Participants: 2

Read full topic


Slow URLs using Multi-backends

@Amine97 wrote:

Hello,
Could you please help me?
I’m using:
Two HAProxy 1.8 (open source) VMs on Debian 9 (sharing the load between them with CRMsh/Pacemaker/Corosync), each with 3 vCPU and 12 GB RAM.
When I set just one URL in the frontend ACL and one backend server, it works fine.
But when setting many ACLs in one frontend for different URLs that go to different web servers/ports on different backends, the URLs open very, very slowly and sometimes I get a "503 gateway error".
I tried playing with timeout connect/timeout server/timeout client but it didn’t solve the issue. I also tried putting each app in a separate frontend; the same issue persists.
All web servers are Microsoft IIS. Here is my config:

global
log /dev/log local0
log /dev/log local1 notice
log 127.0.0.1:514 local0 warning
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/run/haproxy.stat

defaults
mode http
log global
option httplog
option dontlognull
option forwardfor except 127.0.0.0/8
option redispatch
maxconn 80000

listen stats
bind *:9999
mode http
log global
maxconn 40000
stats enable
stats hide-version
stats refresh 30s
stats show-node
stats uri /

#----
#Front
#----

frontend front_INTE_QA
log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"
http-response set-header X-Frame-Options SAMEORIGIN
http-response set-header X-Content-Type-Options nosniff
http-response set-header X-Proxy VMHAP1
mode http
bind *:80
bind *:443 ssl crt /etc/haproxy/certificates/ #no-sslv3
redirect scheme https code 301 if !{ ssl_fc } # redirect 80 -> 443 (for owa)
log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"
maxconn 40000
timeout connect 1800s

acl acl_app1 hdr(host) -i app1.domaine1.com
acl acl_app2 hdr(host) -i app2.domaine2.com
acl acl_app3 hdr(host) -i app3.domaine3.com
acl acl_app4 hdr(host) -i app4.domaine4.com

use_backend back_server1 if acl_acl_app1
use_backend back_server2 if acl_acl_app2
use_backend back_server3 if acl_acl_app3
use_backend back_server4 if acl_acl_app4

#----
#Backend
#----

backend back_server1
mode http
balance roundrobin
server srv_myserver1 10.10.11.5:8089

backend back_server2
mode http
balance roundrobin
server srv_myserver1 10.10.11.4:8088

backend back_server3
mode http
balance roundrobin
server srv_myserver1 10.10.11.15:8045

backend back_server4
mode http
balance roundrobin
server srv_myserver1 10.10.11.29:8050
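For reference only (not a diagnosis), a defaults/backend shape often used as a baseline when chasing slowness and intermittent 503s; the timeout values are illustrative:

defaults
    mode http
    log global
    option httplog
    option redispatch
    timeout connect 5s        # illustrative values, not recommendations
    timeout client  30s
    timeout server  30s

backend back_server1
    mode http
    balance roundrobin
    # 'check' lets HAProxy take an unresponsive IIS node out of rotation
    server srv_myserver1 10.10.11.5:8089 check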

Posts: 1

Participants: 1

Read full topic


SSL end-to-end with URL rewrite

@berik wrote:

Hello,

Can I accomplish this task using haproxy?

There are two serverfarms sharepoint2013 and sharepoint2019.
I want the load balancer to transparently replace requests to the specific URL sp.mycompany.com/forms/form1 with sp19.mycompany.com/forms/form1, and to send these requests to the other server farm (sharepoint2019). But by default, if a request goes to sp.mycompany.com/ or any other URI on this domain, the load balancer must send it to the sharepoint2013 server farm.

DNS record sp.mycompany.com points to sharepoint2013 serverfarm

DNS record sp19.mycompany.com points to sharepoint2019 serverfarm
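A sketch of one way this is commonly expressed in HAProxy, assuming TLS is terminated on the load balancer so it can inspect the Host header and path; the certificate path, backend names and farm addresses below are placeholders:

    frontend fe_sp
        bind *:443 ssl crt /etc/haproxy/certs/sp.pem    # hypothetical certificate
        acl host_sp hdr(host) -i sp.mycompany.com
        acl form1   path_beg  -i /forms/form1
        use_backend sharepoint2019 if host_sp form1
        default_backend sharepoint2013

    backend sharepoint2019
        # present the host name the 2019 farm expects; the user still sees sp.mycompany.com
        http-request set-header Host sp19.mycompany.com
        server sp19 192.0.2.19:443 ssl verify none      # placeholder farm address

    backend sharepoint2013
        server sp13 192.0.2.13:443 ssl verify none      # placeholder farm address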

Thank you!

Posts: 1

Participants: 1

Read full topic


Change last-modified for sitemap.xml

@Giant wrote:

Hi

We would like to update the Last-Modified header for sitemap.xml to the time it is requested over the web. Is it possible to do this with HAProxy or not?
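If the goal is simply to stamp the response with the time of the request, something along these lines should be possible (a sketch; it assumes overwriting whatever Last-Modified the backend sends is acceptable, and the frontend/backend names are hypothetical):

    frontend web
        bind *:80
        # remember the request path so it can be tested during response processing
        http-request  set-var(txn.path) path
        # stamp /sitemap.xml responses with the current time in HTTP date format
        http-response set-header Last-Modified %[date(0),http_date] if { var(txn.path) -m str /sitemap.xml }
        default_backend web_servers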

Posts: 1

Participants: 1

Read full topic


HAProxy backend server respond with different URI than the acl path_beg

@abbas.ali wrote:

I have configured HAProxy as below:

    acl has_env_appslogs path_beg -i /test
    use_backend backend_env_pool_appslogs if has_env_appslogs

    backend backend_env_pool_appslogs
        server env_APP_Logs_2 <hostname>:9999 check

When I browse https://<haproxy_host>/test it brings me to the backend server’s login page.
After entering user & pass on the login page, it generates an HTTP POST request to https://<haproxy_host>/login, which HAProxy can’t handle. It says:
503 Service Unavailable
No server is available to handle this request.

How can I handle this in HAProxy (like in the middle of the backend server communication)?

Similar request : Rewrite urls for response with absolute urls
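Purely as a sketch of one direction (it assumes that /login, and any other paths the application uses, are unique to this application and can safely be routed to the same backend):

    acl has_env_appslogs path_beg -i /test
    # assumption: /login is only used by this application
    acl env_appslogs_login path_beg -i /login
    use_backend backend_env_pool_appslogs if has_env_appslogs or env_appslogs_login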

Posts: 1

Participants: 1

Read full topic


HAProxy with nginx backend over http2, httpchk healthcheck

@therb1 wrote:

I want to use HAProxy in front of nginx.
I want to send HTTP/2 to nginx and I want to use HTTP layer-7 health checks.
I found the post "Haproxy 2.0.5 httpchk http/2 backend"; in that post I learned that HAProxy can’t health-check a backend server over HTTP/2. Is this information still correct, and what is the HAProxy view on this topic?

Posts: 1

Participants: 1

Read full topic


HAProxy Config Help

@Thaweeoo wrote:

I am new to the forum, and I have been reading all the posts that I can find on HAProxy configs, but they don’t seem to be addressing my specific need.

Here is my situation.

I am trying to set up Nextcloud on LXC/LXD containers, and I need to be able to run more than one instance for me and the cohorts that I work with, so they each have their own private cloud.

One of the major components of that is being able to route the traffic to the correct instance of Nextcloud, as well as being able to run the internal nextcloud.cc command to obtain a Let’s Encrypt certificate for each instance.

I would be grateful if someone would review my config and help me get it to where I need it. HAProxy is a bit confusing to figure out. Thanks a bunch! :slight_smile:

frontend localhost80

bind *:80
mode http
redirect scheme https code 301 if !{ ssl_fc }

frontend localhost443

bind *:443
option tcplog
mode tcp

acl tls req.ssl_hello_type 1

tcp-request inspect-delay 5s
tcp-request content accept if tls

acl is_inhomecc hdr_end(host) -i cloud.mydomain.net
acl inhomecc_path path_beg /cloud/

acl is_brd hdr_end(host) -i brd.mydomain.net
acl brd_path path_beg /bernard/

acl letsencrypt-acl path_beg /.well-known/acme-challenge/

use_backend inhomecc_cloud if is_inhomecc !inhomecc_path
use_backend brd_cloud if is_brd !brd_path
use_backend letsencrypt-backend if letsencrypt-acl

acl is_inhomecc req.ssl_sni -i cloud.mydomain.net    # tried testing
acl is_brd req.ssl_sni -i brd.mydomain.net           # tried testing

backend inhomecc_cloud

mode tcp
option ssl-hello-chk
server 192.168.1.155:443 check

backend brd_cloud

mode tcp
option ssl-hello-chk
server brd 192.168.1.154:443 check
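For what it’s worth, a sketch of how SNI-based routing in tcp mode is usually written, building on the req.ssl_sni ACLs already hinted at above (hdr_end(host) cannot match in a mode tcp frontend because the HTTP Host header is never parsed there):

    frontend localhost443
        bind *:443
        mode tcp
        option tcplog
        tcp-request inspect-delay 5s
        tcp-request content accept if { req.ssl_hello_type 1 }
        # route on the TLS SNI name, without decrypting
        use_backend inhomecc_cloud if { req.ssl_sni -i cloud.mydomain.net }
        use_backend brd_cloud      if { req.ssl_sni -i brd.mydomain.net }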

As a side note: do I need to have the container encrypted (serving TLS) before it will connect, due to the :443 check call? Because I get connection refused a lot and sometimes timed out.

Posts: 1

Participants: 1

Read full topic


502 Bad Gateway - PH-- - Haproxy 1.8

@genseb1 wrote:

Hi,

I’m trying to use HAProxy 1.8 on the Debian 10 (buster) distribution with an Apache backend.

When I try to access a page on which we set a cookie, I’m encountering a 502 Bad Gateway.

Looking at the log, I can see that it’s a PH-- error, so the server’s response seems to be the cause, but I don’t have any other clue.

Activating debug mode didn’t help me.

I’ve tried tcp mode and it works, but I can’t use it because my routing is based on an HTTP header.

I tried Apache with proxy mode activated and it works fine.

I’ve also tried the options to accept bad requests and responses, and to set "no option checkcache" (even though "option checkcache" is not set anywhere), but it also didn’t work.

Any ideas of the issue?

Posts: 1

Participants: 1

Read full topic


Haproxy in openshift for redirection

@ulicle wrote:

Hello,
we are using RedHat openshift 3.11 and want to configure a custom router as described in https://docs.openshift.com/container-platform/3.11/install_config/router/customized_haproxy_router.html#install-config-router-customized-haproxy
The custom router shall redirect requests from port 9092 in the following manner:
namespace.apps.openshift_domain:9092/service -->
service-namespace.apps.openshift_domain:8080

The route for the target url exists and can be accessed without error through the default router in this manner:
service-namespace.apps.openshift_domain:80 -->
service-namespace.apps.openshift_domain:8080

To achieve this custom router behavior, we made changes in the existing haproxy-config.template by replacing the bind port 80 part with port 9092 and added an extra redirect in the backend openshift_default section:

http-request redirect code 301 drop-query location http://service-namespace.apps.openshift_domain:8080

Changes are visible in the running custom router pod in the haproxy.config file, and oc logs for the custom router has the following output:

  • Checking http://localhost:9092 …
  • Health check ok : 0 retry attempt(s).
    I0129 16:04:28.963383 1 router.go:481] Router reloaded:
    [WARNING] 028/160428 (164) : parsing [/var/lib/haproxy/conf/haproxy.config:169] : backend 'be_tcp:default:docker-registry' : 'option tcplog' directive is ignored in backends.

Unfortunately curl reports a connection refused error when we try to access namespace.apps.openshift_domain:9092/service.
Even though the log points out port 9092 (similar to the default router for port 80), the redirection does not work at all.
Any ideas, how this can be resolved?

Posts: 1

Participants: 1

Read full topic
