Channel: HAProxy community - Latest topics
Viewing all 4731 articles

Is "verify none" working for an expired certificate?


@moscardo wrote:

Hi,
I have a short question (I tried it and my assumptions seem to be correct, but I just want to double-check): can I let a certificate expire on the backend, use “verify none”, and keep a valid certificate on the frontend without running into any issues?
So far I am moving machines that have a valid certificate behind HAProxy, so on the date a certificate expires, I want to make sure that as long as a valid certificate is exposed on the frontend, the backends with an expired certificate will be fine.
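For reference, a minimal sketch of the setup being described (addresses and paths are placeholders, not the poster's actual config): with “verify none” on the server line, HAProxy skips all verification of the backend certificate, including its expiry date, so clients only ever see the frontend certificate.

frontend fe_app
    # valid certificate presented to clients
    bind :443 ssl crt /etc/haproxy/valid-frontend.pem
    default_backend be_app

backend be_app
    # "verify none" disables certificate verification entirely,
    # so an expired certificate on the backend is still accepted
    server app1 10.0.0.10:443 ssl verify none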

Thanks.

Posts: 1

Participants: 1



Cloudflare: haproxy is using the wrong IP for HTTP requests after the first


@supermathie wrote:

We are using haproxy 1.8.17 in a two-stage setup:

  • multiple “interceptor” servers (in http mode) that accept the initial connection and send it (using send-proxy-v2-ssl) to multiple:
  • “routers” that know about the various backends to which we need to route the requests

Everything is working great, except when CloudFlare is involved. In the frontend on “router” we’re using:

http-request set-src hdr(x-forwarded-for) if is_cloudflare_src

and we find:

  • the connections are properly coming from various Cloudflare IPs within their published ranges (188.114.110.101, 172.68.133.229, 162.158.255.95) – this is OK

  • between “interceptor” and “router”, a single connection is fronted by a PROXY header with the actual outside layer 3/4 connection information – this is OK

  • inside that stream are multiple requests – this is OK :

    POST /message-bus/c009f71cb32a656f70b46a6db8c6ad42/poll?dlp=t HTTP/1.1
    Host: community.customer.com
    X-Forwarded-For: 192.0.2.18
    CF-Connecting-IP: 192.0.2.18
    
    POST /message-bus/97361f6ea08fc8c7a1eda4d34435907b/poll?dlp=t HTTP/1.1
    Host: community.customer.com
    X-Forwarded-For: 198.51.100.37
    CF-Connecting-IP: 198.51.100.37
    
    GET /admin/users/195.json?_=1556350420519 HTTP/1.1
    Host: community.customer.com
    X-Forwarded-For: 203.0.113.39
    CF-Connecting-IP: 203.0.113.39
    
  • what’s being reported by “router” is NOT OK as it has the source IP as the X-Forwarded-For IP from the first request:

192.0.2.18:30658 [30/Apr/2019:18:54:41.486] app community/server03 0/0/1/61/62 200 1904 - - ---- 718/718/29/12/0 0/0 "GET /admin/users/195.json?_=1556350420519 HTTP/1.1"

I think that with our setup haproxy should be inspecting the X-Forwarded-For of each request, but it appears to be using the first IP seen for every single subsequent request.

Where’s the problem? Is this haproxy’s error or do we need to tell it to do something different?


This is a total guess, but I wonder if http-request set-src overwrites the connection source instead of the request source, so that the next request on that connection no longer matches is_cloudflare_src.
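For context, a sketch of the two pieces in play on the “router” side; the ACL definition is an assumption here, since the original post only shows the set-src line:

    # assumed definition: match connections arriving from Cloudflare's published ranges
    acl is_cloudflare_src src -f /etc/haproxy/cloudflare-ips.lst

    # rewrite the source address from X-Forwarded-For; if this rewrites the
    # connection-level source, the ACL above may no longer match on the second
    # and later requests of the same keep-alive connection
    http-request set-src hdr(x-forwarded-for) if is_cloudflare_src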

Posts: 1

Participants: 1


How to use TCP & HTTP/2 both inside frontend


@cleadspi wrote:

I’m trying to serve DNS-over-TLS and HTTPS through HAProxy 1.9.7 on Debian 9.
These settings work for DoH, but DoT is not working. I want to share port 443 between both services.
Without HTTP/2, everything works fine ~~

frontend doh-in
    no log
    bind :::443 v4v6  ssl crt /etc/haproxy/dot-jp.blahdns.pem alpn h2,http/1.1
    http-response set-header Strict-Transport-Security "max-age=31536000"
    tcp-request inspect-delay 3s
    tcp-request content accept if HTTP
    tcp-request content accept if { req.ssl_hello_type 1 }
    use_backend dot-server if { req.ssl_hello_type 1 }
    use_backend doh-server if { ssl_fc_alpn -i h2 }
    use_backend doh-server if HTTP

    acl dns_url path /dns-query

    use_backend doh-server if dns_url
    default_backend dot-server

backend dot-server
    mode tcp
    server dns 127.0.0.1:50

backend doh-server
    http-response del-header server
    http-response del-header x-powered-by
    server doh-proxy 127.0.0.1:3000
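One detail worth noting: with ssl on the bind line, HAProxy terminates TLS itself, so req.ssl_hello_type (which inspects a raw TLS ClientHello) can never match. A hedged sketch of an ALPN-based alternative, assuming the DoT clients negotiate the IANA-registered “dot” ALPN token:

frontend tls-in
    mode tcp
    bind :::443 v4v6 ssl crt /etc/haproxy/dot-jp.blahdns.pem alpn dot,h2,http/1.1
    # after HAProxy's own TLS termination, route on the negotiated ALPN value
    use_backend dot-server if { ssl_fc_alpn -i dot }
    default_backend doh-server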

Posts: 1

Participants: 1


Forward http request with client certificate to outside server with mutual ssl validation


@xpoveda wrote:

Hi everybody,

I’m looking for how to send a client certificate to an outside server with SSL validation, but I can’t find it in the documentation and there are no examples on the internet; usually only examples with openssl and the s_client command appear.

There are many examples of how to create a server with client-certificate validation, but I want to create a client with HAProxy to consume such a server.

I would like HAProxy to listen on some port, for example 81, and send the request to https://outsideserver.com:443 adding the client certificate.

Any example to do this?
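As far as I know, HAProxy can present a client certificate on the server side with the crt keyword on the server line; a minimal sketch (paths and hostname are placeholders, not a tested config):

frontend local-in
    mode http
    bind :81
    default_backend outside

backend outside
    mode http
    # "ssl" re-encrypts toward the server, "crt" presents our client
    # certificate, and "verify required" + "ca-file" validates the
    # server's certificate against the given CA bundle
    server outsideserver outsideserver.com:443 ssl crt /etc/haproxy/client.pem verify required ca-file /etc/ssl/certs/ca-certificates.crt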

Thanks so much in advance.

Xavier.

Posts: 1

Participants: 1


Stick table backend inconsistency using DNS resolvers on upgrades


@tjungblut wrote:

Hello community,

We’re using HAProxy in Kubernetes as a sticky load balancer in front of a deployment of five total pods (real HAProxy, not the ingress controller version of it). Here’s our config:

global
	daemon
	maxconn 10000
	stats socket /usr/local/etc/haproxy/admin.sock mode 600 level admin
	log /dev/log local0
	
defaults
	mode http
	timeout connect 5000ms
	timeout client 30000ms
	timeout server 30000ms

resolvers kubernetes
	nameserver skydns kube-dns.kube-system:53
	resolve_retries 10
	timeout retry 2s
	hold valid 5s

frontend http-in
	bind *:80
	log /dev/log local0
	option httplog
	default_backend servers

backend servers
	balance roundrobin
	stick-table type string size 100m
	option httpchk GET /health
	http-check expect status 200
	option tcp-check
	stick on path, word(3,/)
	server-template pod 5 pod.namespace.svc.cluster.local:8080 check resolvers kubernetes inter 500

As you can see we’re leveraging the server-templates and k8s dns resolvers to create the backend servers dynamically.

We’ve been pretty happy with the stick-table approach so far, but we have some issues when doing a rolling upgrade of the backend servers. Kubernetes will start to terminate a pod, start up a couple of new ones, and wait until the old ones are completely wound down.

Now we’ve observed some inconsistency in the stick table during this rolling-upgrade scenario: the HAProxy instance would happily forward all requests to what it thinks is pod3, but in the request log of the backend we would see them ending up on three different backend servers.

Here’s the HAProxy request log for that period:

[29/Apr/2019:10:39:46.149] http-in servers/pod3 0/0/0/12812/12813 200 1031 - - ---- 303/303/235/3/0 0/0 "GET /v1/document/sticky_1556013348827_0fmbgvj50nmc/stepssince/983?_no_ie_cache=1556527174123 HTT
[29/Apr/2019:10:39:47.194] http-in servers/pod3 0/0/0/30/30 200 338 - - ---- 300/300/234/3/0 0/0 "GET /v1/document/sticky_1556013348827_0fmbgvj50nmc/stepssince/1006?_no_ie_cache=1556527187165 HTTP/1.1"
[29/Apr/2019:10:39:47.197] http-in servers/pod3 0/0/0/31/31 200 309 - - ---- 300/300/233/2/0 0/0 "POST /v1/document/sticky_1556013348827_0fmbgvj50nmc/steps HTTP/1.1"
[29/Apr/2019:10:39:47.277] http-in servers/pod3 0/0/0/34/34 200 338 - - ---- 300/300/233/2/0 0/0 "GET /v1/document/sticky_1556013348827_0fmbgvj50nmc/stepssince/1006?_no_ie_cache=1556527187248 HTTP/1.1"

HAProxy still thinks it is sending everything to pod3.

Here’s what we receive on the backend, where we can see requests for the same sticky_id path fragment to different pods at nearly the same time (request log -> pod identifier).

2019-04-29T10:39:46.962Z 'REQUEST-OK [GET] [/v1/document/sticky_1556013348827_0fmbgvj50nmc/stepssince/983]' -> pod-799496f69-dbp8s
2019-04-29T10:39:47.223Z 'REQUEST-OK [GET] [/v1/document/sticky_1556013348827_0fmbgvj50nmc/stepssince/1006]' -> pod-58fdfcc477-zjcrq
2019-04-29T10:39:47.227Z 'REQUEST-OK [POST] [/v1/document/sticky_1556013348827_0fmbgvj50nmc/steps]' -> pod-799496f69-dbp8s
2019-04-29T10:39:47.306Z 'REQUEST-START [GET] [/v1/document/sticky_1556013348827_0fmbgvj50nmc/stepssince/1006]'  -> pod-58fdfcc477-zjcrq

Here’s the proxy logs for that period that shows how the DNS resolver switches the IPs:

April 29th 2019, 10:39:19.000	[WARNING] 118/083919 (1) : Server servers/pod1 is going DOWN for maintenance (No IP for server ). 4 active and 0 backup servers left. 11 sessions active, 0 requeued, 0 remaining in queue.
April 29th 2019, 10:39:34.000	[WARNING] 118/083934 (1) : Server servers/pod1 ('pod.namespace.svc.cluster.local') is UP/READY (resolves again).
April 29th 2019, 10:39:34.000	[WARNING] 118/083934 (1) : Server servers/pod1 administratively READY thanks to valid DNS answer.
April 29th 2019, 10:39:34.000	[WARNING] 118/083934 (1) : Server servers/pod5 is going DOWN for maintenance (No IP for server ). 4 active and 0 backup servers left. 3 sessions active, 0 requeued, 0 remaining in queue.
April 29th 2019, 10:39:34.000	[WARNING] 118/083934 (1) : servers/pod1 changed its IP from 172.20.5.223 to 172.20.4.207 by kubernetes/skydns.
April 29th 2019, 10:39:34.000	[WARNING] 118/083934 (1) : servers/pod2 changed its IP from 172.20.4.248 to 172.20.4.245 by DNS cache.
April 29th 2019, 10:39:39.000	[WARNING] 118/083939 (1) : servers/pod3 changed its IP from 172.20.5.50 to 172.20.5.232 by DNS cache.
April 29th 2019, 10:39:49.000	[WARNING] 118/083949 (1) : Server servers/pod5 ('pod.namespace.svc.cluster.local') is UP/READY (resolves again).
April 29th 2019, 10:39:49.000	[WARNING] 118/083949 (1) : Server servers/pod5 administratively READY thanks to valid DNS answer.
April 29th 2019, 10:39:49.000	[WARNING] 118/083949 (1) : servers/pod4 changed its IP from 172.20.5.176 to 172.20.4.111 by DNS cache.
April 29th 2019, 10:39:49.000	[WARNING] 118/083949 (1) : servers/pod5 changed its IP from 172.20.5.98 to 172.20.5.164 by DNS cache.

HAProxy runs as a single pod that did not restart; nothing else happened that would wipe the stick table.

We suspect that this might be due to connections being pooled and thus held open while the server underneath is actually changing its IP via the DNS resolver. Does HAProxy have draining support for such a scenario when using server-template? What else could cause this behaviour?
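One resolver knob that may be relevant here (a sketch, not a confirmed fix): “hold obsolete” controls how long HAProxy keeps using a record after it disappears from DNS answers, which affects how abruptly server slots are remapped during the rolling upgrade:

resolvers kubernetes
	nameserver skydns kube-dns.kube-system:53
	resolve_retries 10
	timeout retry 2s
	hold valid 5s
	# assumption: keep vanished records for a grace period instead of
	# flipping server slots to new IPs immediately
	hold obsolete 30s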

I understand that this is probably also fairly specific to Kubernetes, but any helpful pointers on what’s going on here are appreciated.

Thanks a ton,
Thomas

Posts: 1

Participants: 1


Upgrading HAProxy 1.4 to 1.5+ for SSL termination?


@dflorez wrote:

Hello, I’ve inherited an out-of-date HAProxy server and, while learning about HAProxy for a project, I have found that v1.4 does not support SSL termination. We have many sites configured through HAProxy with SSL passthrough, and I would like to upgrade without breaking anything, or at least have a way to roll back, but my experience is limited. The setup is as follows.

2 external HAP 1.4.22 / Ubuntu 12.04.2 vm servers (public facing)
2 internal HAP 1.4.22 / Ubuntu 12.04.2 vm servers
Keepalived is configured to provide redundancy between the 2 ext servers, same for the 2 int servers.
A cron job is set up on INT2 to copy config changes from INT1 and EXT2 from EXT1, every 5 minutes.

Any help or ideas are greatly appreciated

Thanks,
Dave

Posts: 1

Participants: 1


How to retain Connection: keep-alive header in HAProxy 1.5


@leon wrote:

Hi all,

I have an application that sends an HTTP request to a machine behind an HAProxy VIP, but the response HAProxy sends back appears to be missing some headers.
The sending application expects to see the “Connection: keep-alive” header in the response, but via HAProxy this header is stripped out.

I’ve been trying to write something that will add the header back in to the response if the back-end (receiving) server responds with a “Connection: keep-alive” header but it’s not working:

acl foo hdr(Connection) -m found
rspadd Connection:\ keep-alive if foo
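One hedged observation: hdr() fetches request headers, so that ACL never sees the server’s response. An untested sketch using the response-side fetch instead (whether the header is still visible at rule-evaluation time after HAProxy’s own connection handling is something to verify on the NSX build):

    # res.hdr() inspects the response from the back-end server
    acl upstream_ka res.hdr(Connection) -m found
    rspadd Connection:\ keep-alive if upstream_ka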

Any ideas on how I can get this working?

Also, I cannot upgrade HAProxy at the moment as it’s actually in-built into an NSX edge…

Thanks!
Leon

Posts: 1

Participants: 1


HAProxy SSL-termination with re-encryption on selfsigned cert?


@n1ete wrote:

After 10 hours of debugging I am lost, and I hope someone can clarify this for me. I have probably read the right answer several times, or was close to “it works”…

My setup is simple:
I have two webservers with self-signed certs, and they are running fine internally:

appserver1+nginx+selfsignedcert
app1.localdomain


appserver2+nginx+selfsignedcert
app2.localdomain

To make them reachable over the WAN I wanted to use HAProxy (v1.8) with Let’s Encrypt certificates for public web access.

haproxy(1.8)+certbot+letsencryptcerts
ha.localdomain

The following domain certs were created via Let’s Encrypt and stored in the ssl folder:
app1.public.net
app2.public.net

And from then on it was never working, even once:
I can’t reach a site through the HAProxy address; I run into either 503 or connection-refused errors. I started to strip and change a lot in my HAProxy config file. This is how it looks now:

frontend main
  bind :80
  bind :443 ssl crt /path/to/ssl.pem
  redirect scheme https if !{ ssl_fc }
  acl isapp1 hdr(host) -i app1.public.net
  acl isapp2 hdr(host) -i app2.public.net
  use_backend bapp1 if isapp1
  use_backend bapp2 if isapp2

backend bapp1
  description my shiny app server is so shiny
  server app1 XXX.XXX.XXX.XXX:433 check-ssl ssl verify none
  option http-no-delay
  http-request set-header X-Forwarded-Port %[dst_port]
  http-request add-header X-Forwarded-Proto https if { ssl_fc }

backend bapp2
  description a second shiny tower somewhere between bits and bytes
  server app2 XXX.XXX.XXX.XXX:433 check-ssl ssl verify none
  http-request set-header X-Forwarded-Port %[dst_port]
  http-request add-header X-Forwarded-Proto https if { ssl_fc }

listen haproxy-stats
  bind :8080
  mode http
  option forwardfor
  option httpclose
  stats enable
  stats show-legends
  stats refresh 10s
  stats uri /stats
  stats realm Haproxy\ Statistics
  stats auth someuser:somepassphrase
  stats admin if TRUE
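On the “why terminate SSL twice” question: instead of verify none, the backend’s self-signed certificate can be pinned explicitly, which keeps the internal hop both encrypted and authenticated. A sketch under the assumption that the self-signed certs are copied to the proxy (paths are placeholders; note also that the server lines above use port 433, which may be a typo for 443):

backend bapp1
  # pin the backend's self-signed certificate instead of skipping
  # verification; "verifyhost" must match the CN/SAN in that cert
  server app1 XXX.XXX.XXX.XXX:443 check-ssl ssl verify required ca-file /etc/haproxy/ca/app1-selfsigned.crt verifyhost app1.localdomain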

I would love to get some help here on what I am missing; I don’t want to spend another night on it :smiley:

There is also more to consider: why do I terminate SSL twice?
I usually encrypt most of my internal traffic with SSL, but does this re-encryption bring any drawbacks besides the performance impact of SSL?

Is there any guidance on how SSL termination should be done on nginx or HAProxy?
Anything to consider security-wise?

I love the idea of managing just a single machine with all certs on it, but since I deploy SSL certs with Ansible I don’t care that much.

Or is it maybe better to generate the Let’s Encrypt certs on the backends and let HAProxy run in TCP reverse-proxy mode?

A lot of questions for a first post, and a happy hello to all the proxy nerds :wink:

thanks in advance
n1ete

Posts: 1

Participants: 1



Backend observe layer7


@negroha wrote:

HAPROXY 1.9

Good afternoon, I am using the following configuration in my group of backend servers

server server01 10.0.0.1:80 check on-error mark-down observe layer7 error-limit 5

This tells it to do a layer-7 check and mark the server down after 5 errors.
My question is the following: in MS ARR, which I am moving away from, I could do this but additionally make the down condition depend on the errors occurring within a certain window of time. The services on my backend usually return error 500, since they use exceptions to show errors, so I would only want to remove a server from the backend farm if a certain number of errors occurs within a certain period of time.

And the other question is: how does the server come back online? Does it require human intervention?
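For reference, a sketch of how recovery is normally governed; as far as I know, no human intervention is needed when a health check is configured, because a server marked down by observe layer7 is brought back once “rise” consecutive health checks succeed:

# "observe layer7" marks the server down after 5 errors seen on live
# traffic; the regular health check (inter/fall/rise) then brings it
# back up automatically once "rise" consecutive checks pass
server server01 10.0.0.1:80 check inter 2s fall 3 rise 2 on-error mark-down observe layer7 error-limit 5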

Thank you so much.

regards

Posts: 1

Participants: 1


Tune.h2.initial-window-size


@Thoufiq wrote:

I have found the “tune.h2.initial-window-size” directive for faster uploads/connections. Can the same directive be used in TCP mode, and will it work? Or is there another option to increase the TCP window size? Please help…

Posts: 1

Participants: 1


Azure Kubernetes Ingress Controller with HAproxy (Working Example)


@huhproxy wrote:

We’re investigating haproxy as a viable alternative to nginx as a kubernetes ingress controller on AKS (Azure Kubernetes Service).

The only example I can find on the net for such a setup is this one:
https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/ .

(while there are many for NGINX). It’s a year out of date, so almost certainly broken in some ways.

Trying to get it working on Azure kubernetes cluster, I have some questions:

a) What is the host “foo.com” referred to in the configuration? Is that a DNS entry I’m supposed to have ready and associated with a pod in the Kubernetes cluster already? There is no guide in the document on how to do that.

b) Are there any other in-depth, tested, working examples available specifically for getting HAProxy working as an internal and external ingress controller on AKS?

Many thanks in advance,
H.R.HoA

Posts: 1

Participants: 1


How to figure out how many requests are being refused?


@ecgouvea wrote:

Is there a way to figure out how many requests are being refused due to high traffic, when the number of requests issued to a single HAProxy instance exceeds its limits?

Posts: 1

Participants: 1


How to make reloads safe in case of invalid config


@happy wrote:

  • I use the default systemd reload commands:
    ExecReload=/usr/sbin/haproxy -f $CONFIG -c -q
    ExecReload=/bin/kill -USR2 $MAINPID

  • Haproxy’s management manual says:

First, the service script (/etc/init.d/haproxy or equivalent) will verify that
the configuration file parses correctly using “haproxy -c”. After that it will
try to start haproxy with this configuration file, using “-st” or “-sf”.

However, if I try to reload an invalid config, all the old processes die immediately.

Of course it is easy to solve the problem by using “haproxy -c -f /etc/haproxy/haproxy.cfg && service haproxy reload”, but that is not the correct way, since it validates the config twice.

Is there any better solution for this case?
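One possible shape for this (a sketch, not tested against every systemd version): chain the validation and the signal in a single ExecReload, so the kill is only reached when the config parses:

    [Service]
    # the signal is sent only if the configuration check succeeds
    ExecReload=/bin/sh -c "/usr/sbin/haproxy -f $CONFIG -c -q && /bin/kill -USR2 $MAINPID"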

Posts: 1

Participants: 1


Roundrobin hiccups


@crazyfox wrote:

Hi @ll,

I’m just testing HAProxy; we are ditching an old LB.

It’s a pretty straightforward config, but there are issues when I hit the VIP:
I seem to be hitting the same server over and over, although when I test with curl I do get round-robin working as expected. I know that round-robin is not the most efficient, but it’s the preferred method for our situation.

Any ideas ?

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode tcp
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend https
    bind *:443
    mode tcp
    option tcplog
    default_backend app

backend app
    mode tcp
    option tcplog
    balance roundrobin
    server webapp1 ip:443 check
    server webapp2 ip:443 check

Posts: 1

Participants: 1


Consistent hash and server-template via dns resolver


@tjungblut wrote:

Hello community,

We’re using HAProxy in Kubernetes as a consistent hashing load balancer in front of a deployment of five total pods (real HAProxy, not the ingress controller version of it). Here’s our config:

global
	daemon
	maxconn 10000
	stats socket /usr/local/etc/haproxy/admin.sock mode 600 level admin
	log /dev/log local0
	
defaults
	mode http
	timeout connect 5000ms
	timeout client 30000ms
	timeout server 30000ms

resolvers kubernetes
	nameserver skydns kube-dns.kube-system:53
	resolve_retries 10
	timeout retry 2s
	hold valid 5s

frontend http-in
	bind *:80
	log /dev/log local0
	option httplog
	default_backend servers

backend servers
	balance uri depth 3
	hash-type consistent
	option httpchk GET /health
	http-check expect status 200
	option tcp-check
	server-template pod 5 pod.namespace.svc.cluster.local:8080 check resolvers kubernetes inter 500

As you can see we’re leveraging the server-templates and k8s dns resolvers to create the backend servers dynamically.

Now since we’re using consistent hashing we assume that we can just scale the haproxy frontend to multiple pods and have the same load balancing across the backend servers. However, when we try that out, we observe that the routing is consistent on one haproxy, but not on the others. So it seems that the order of the backend servers is different on each of the instances of haproxy.

My main question here is: is there a way to guarantee the ordering of the server-template slots via the DNS resolver in this case? A simple sort on the resulting IPs would probably be good enough for us, since we don’t control the DNS server and its replies.

Thanks,
Thomas

Posts: 1

Participants: 1



Redispatch on timeout queue


@mindeswx wrote:

Hello, what happens if we are using stick tables and a backend reaches its limit?
I figured out that when a backend is at its limit, new connections are queued and “timeout queue” applies.

But is the stick-table entry cleared for that connection after the timeout? If not, is there a way to do so?

Thank you!

Posts: 1

Participants: 1


Haproxy Stats Understand


@vijay.sachdeva wrote:

Hi Folks,

I want to understand the HAProxy stats page, which has information about sessions: Current, Max, and Total. I have multiple backends running, and each shows different session numbers, so I want to know what exactly “total sessions” means here.

Posts: 1

Participants: 1



Haproxy template / select backend/server using header


@raghudasari wrote:

I am trying to target a backend server using a request header (x-upstream) that contains the target server IP, for example x-upstream: 10.100.20.13:24001.

I was able to achieve this with the following configuration; however, as you can see, we need a backend block for each server.
Is there a better way to avoid the repetition and make this better?

frontend localnodes
    bind *:3005
    mode http
    option httplog
    log /dev/log local0 debug
    acl is_backend_set hdr_len(x-upstream) gt 0
    http-request set-var(req.backend) hdr(x-upstream) if is_backend_set
    http-request set-path /swbizlogic/service.svc
    http-request set-header Content-Type application/test
    use_backend backend-%[var(req.backend)] if is_backend_set
    default_backend nodes

backend nodes
    mode http
    timeout http-request 90s # set to low value for slowloris
    timeout connect 10s
    timeout server 90s
    balance roundrobin
    server svr1 10.100.20.13:24001 weight 100
    server svr2 10.100.20.13:24002 weight 100
    server svr3 10.100.20.13:24003 weight 100

backend backend-10.100.20.13:24001
    mode http
    timeout connect 10s
    timeout server 90s
    http-response add-header x-upstream %b
    http-request set-header Content-Type application/test
    http-request set-path /swbizlogic/service.svc
    server svr 10.100.20.13:24001 weight 100

backend backend-10.100.20.13:24002
    mode http
    timeout connect 10s
    timeout server 90s
    http-request set-header Content-Type application/test
    http-response add-header x-upstream %b
    http-request set-path /swbizlogic/service.svc
    server svr 10.100.20.13:24002 weight 100

backend backend-10.100.20.13:24003
    mode http
    timeout connect 10s
    timeout server 90s
    http-request set-header Content-Type application/test
    http-response add-header x-upstream %b
    http-request set-path /swbizlogic/service.svc
    server svr 10.100.20.13:24003 weight 100
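A hedged sketch of one way to avoid the per-server backend blocks, using http-request set-dst (available in recent HAProxy versions) to pick the destination dynamically; this assumes x-upstream can only be set by trusted clients, since it controls where HAProxy connects:

frontend localnodes
    bind *:3005
    mode http
    acl is_backend_set hdr_len(x-upstream) gt 0
    http-request set-path /swbizlogic/service.svc
    http-request set-header Content-Type application/test
    use_backend dynamic if is_backend_set
    default_backend nodes

backend dynamic
    mode http
    timeout connect 10s
    timeout server 90s
    http-response add-header x-upstream %b
    # split "ip:port" from the header; the 0.0.0.0:0 placeholder below
    # is overridden by set-dst / set-dst-port
    http-request set-dst req.hdr(x-upstream),field(1,:)
    http-request set-dst-port req.hdr(x-upstream),field(2,:)
    server dyn 0.0.0.0:0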

Posts: 1

Participants: 1


How proto h2 works for http2? (haproxy 1.9.7)


@steve wrote:

frontend fe_main
    mode http
    option http-use-htx
    bind *:443 ssl crt /etc/hapee-1.8/certs/www.example.com.pem alpn h2,http/1.1
    use_backend be_main


backend be_main
    mode http
    option http-use-htx
    server server1 1.2.3.4:8080 proto h2

or

backend be_main
    mode http
    option http-use-htx
    server server1 1.2.3.4:8080

What is the difference between the two backends?
When I call backend be_main using HTTP/2, I get an HTTP/2 200-OK in both cases.
(I have used SSL offload at the frontend as well.)
In both cases the server does not use TLS (it is a multiplexed HTTP/1.1 and h2c server).
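As far as I understand it (worth verifying against the documentation): with proto h2, HAProxy speaks clear-text HTTP/2 (h2c with prior knowledge) to the server, while without it the server side defaults to HTTP/1.1 even when the client negotiated h2. Since the server multiplexes both protocols, either backend returns 200:

backend be_main
    mode http
    option http-use-htx
    # "proto h2" forces clear-text HTTP/2 (h2c, prior knowledge) toward
    # the server; omitting it makes HAProxy speak HTTP/1.1 instead
    server server1 1.2.3.4:8080 proto h2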

Posts: 1

Participants: 1



