Channel: HAProxy community - Latest topics
Viewing all 4731 articles

How to effectively manage long redirect lists


@jurgenhaas wrote:

We are migrating a range of webservers into a new farm which will have HAProxy at the front door to route all the traffic inside the farm properly. We already know how to do that, as it is similar to six other farms that we built that way.

This time we’ll get a huge list of redirects that these guys have managed in Apache config files so far. We would like to handle all redirects across hundreds of domains as early as possible and are therefore looking into the best ways to handle them.

We know how to do that with ACLs in the haproxy.cfg file for each redirect individually. However, that may end up in a huge file, which may not be the most efficient way of doing this. Are there better ways of doing this, e.g. some sort of lookup table in YAML or JSON format?

Or any other suggestions?
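For what it’s worth, one common approach (a sketch here, with hypothetical file names) is to keep the redirect pairs out of haproxy.cfg entirely and use a map file, which HAProxy loads natively and which can be updated over the runtime socket without touching the main config:

```
# /etc/haproxy/redirects.map -- one "key destination" pair per line, e.g.:
#   old.example.com/about   https://new.example.com/about

frontend fe_http
    bind *:80
    # "base" is the concatenation of the Host header and the request path
    http-request redirect location %[base,map(/etc/haproxy/redirects.map)] code 301 if { base,map(/etc/haproxy/redirects.map) -m found }
```

A single map file scales to many thousands of entries and keeps the redirect data separate from the routing logic.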

Posts: 7

Participants: 2



Parameters: retries, timeout connect


@amir wrote:

Hi,
I have a backend HTTPS server.
I configured “retries” to 6 and “timeout connect” to 15000ms.
I wanted to check that HAProxy behaves according to the values of the “retries” and “timeout connect” parameters, but I didn’t see any difference in the log before and after configuring them.
I do see that HAProxy checks the backend server according to the server’s “check ssl” configuration.
I use “check ssl” every 10s for the backend server.

  1. Does the “check ssl” configuration override “retries”?
  2. How can I verify that HAProxy’s behavior follows the configured “retries” and “timeout connect” values?
  3. When using “check ssl”, can I skip the “retries” configuration?
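For context, health checks and traffic retries are independent mechanisms in HAProxy: “check ssl” only governs health probing, while “retries” and “timeout connect” apply to connection attempts made for real traffic, so neither overrides the other. A minimal sketch combining the values described above (server name and address are placeholders):

```
defaults
    retries 6
    timeout connect 15s

backend be_https
    option redispatch   # after retries are exhausted, optionally try another server
    server s1 192.0.2.10:443 ssl verify none check check-ssl inter 10s
```

The effect of “retries”/“timeout connect” only shows in the logs when a server actually fails to accept a connection, which is why a healthy setup looks identical before and after the change.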

Thanks
Amir yosef

Posts: 1

Participants: 1


Specified Backend Logging


@Jinbakrdi wrote:

Hi all!

I am working with HAProxy and have recently implemented http logging. I am now looking to see if it is possible to implement logging (at all levels: INFO, WARN, etc) only for a specific/defined backend. I have restricted logging only to the frontend serving this specified backend but due to the amount of other backends configured (several hundred), this has had limited effectiveness at reducing the amount of logging and the logs I need are getting ‘lost’ amongst the rest.

Is it possible to filter logs by their destination/source backend via HAProxy configuration or do I need to look into filtering them outside of HAProxy?
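One possibility (a sketch only, with hypothetical ACL and backend names) is to keep the frontend’s log directive but demote everything that is not destined for the backend of interest, using http-request set-log-level:

```
frontend fe_main
    log global
    acl to_special_backend hdr(host) -i app.example.com   # hypothetical routing ACL
    use_backend be_special if to_special_backend
    # silence normal-severity logging for everything else
    http-request set-log-level silent unless to_special_backend
```

This filters at the HAProxy level rather than in the syslog pipeline, so the unwanted log lines are never emitted at all.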

Thanks!

Posts: 2

Participants: 2


Fix HTTP2 Cookies


@adrianw wrote:

HAProxy 1.8 currently does not concatenate headers, so when multiple Cookie headers are sent to an HTTP/1.1 backend it can break. By default Chrome and Firefox will split the cookie header for any H2 request; Edge seems to keep it combined.

More in depth discussion can be found here: H2 + Cookie header => splitted header

The Lua script below concatenates multiple cookie headers into one before the request is sent to the HTTP/1.1 backend.

fix-http2-cookies.lua

    core.register_action("fix-http2-cookies", { "http-req" }, function(transaction)
        local hdr = transaction.http:req_get_headers()
        -- HAProxy's Lua header arrays are indexed from 0, hence the explicit
        -- 0..#hdr["cookie"] range passed to table.concat
        transaction.http:req_set_header("cookie", table.concat(hdr["cookie"], "; ", 0, #hdr["cookie"]))
    end)

To call the Lua action:

global
    lua-load /etc/haproxy/fix-http2-cookies.lua

frontend HTTPS-IN
    http-request lua.fix-http2-cookies if { req.fhdr_cnt(cookie) gt 1 }

Posts: 1

Participants: 1


"fe_conn gt 20" with stick-table


@kiwifruktish wrote:

I wanted to ask whether a stick-table still applies if a user ends up on a “backup backend” because an ACL limit was reached at a specific time.

Let’s say a user gets connected to the prod_backup backend because the ACL has counted greater than 20 connections. As you can see, there is a stick-table that lasts 30 minutes. If the user comes back after 15 minutes and the ACL is now below 10 connections, will the user still be sent to the stuck-to server in the prod_backup backend, or does the ACL no longer apply even though there is a stick-table on the backup backend?

example of config:

frontend frontend_service 127.0.0.1:80
    balance roundrobin
    acl max_conn_reached fe_conn gt 20
    default_backend prod
    use_backend prod_backup if max_conn_reached

backend prod
    balance roundrobin
    server production01 127.0.0.1:8001 check maxconn 10
    server production02 127.0.0.1:8002 check maxconn 10
    stick-table type ip size 500k expire 30m
    stick on src

backend prod_backup
    balance roundrobin
    server production01 127.0.0.1:8001 check maxconn 10
    server production02 127.0.0.1:8002 check maxconn 10
    stick-table type ip size 500k expire 30m
    stick on src

Posts: 1

Participants: 1


Bypass maintenance page


@max wrote:

Hello there!

I’m looking for a way to bypass the maintenance page on a backend (when all servers are in maintenance mode).

The solution we found uses force-persist, but it doesn’t work without use-server.
And that last option doesn’t let us use/test load balancing while in maintenance.

The documentation doesn’t mention that force-persist needs to be used with use-server or any other option.
But it does say that it works in frontend, backend and listen sections which, I assume, means force-persist supports other options.

Any idea how to achieve this with force-persist or something else?
What is the exhaustive list of options which can be combined with force-persist?

Our bypass conditions are requests coming from certain IPs or carrying a cookie set to true.
(The bypass can be disabled by clients using an authorized IP by setting the cookie to false.)

Here is a testing configuration:

defaults
    mode http
    option httpclose
    option redispatch
    option abortonclose

frontend http
    bind *:80
    bind *:443 ssl no-sslv3 crt /etc/pki/tls/certs/app.example.org.pem
    use_backend app

backend app
    server app-01 app-01:80 maxconn 50 check slowstart 30s
    server app-02 app-02:80 maxconn 50 check slowstart 30s
    option httpchk GET /healthcheck HTTP/1.1\r\nHost:\ app.example.org
    errorfile 503 /data/shared/maintenance/http/503.http
    force-persist if { always_true }

When all the servers are disabled, this configuration doesn’t bypass the maintenance page.
We still get the 503.
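For reference, the bypass conditions described above (trusted IPs or a cookie set to true) can be expressed directly as ACLs on the force-persist rule. This is only a sketch with hypothetical names, and note that force-persist only relaxes the “server is down” rule for requests that already have a persistence target (cookie or stick entry), which may explain why it did nothing on its own here:

```
backend app
    acl from_trusted_ip src 203.0.113.0/24              # hypothetical admin range
    acl has_bypass_cookie req.cook(bypass) -m str true  # hypothetical cookie name
    force-persist if from_trusted_ip or has_bypass_cookie
```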

Posts: 3

Participants: 2


Setting up HAProxy


@BananaLolzerz wrote:

I am new to HAProxy and I am trying to get HAProxy set up in a network like this:

It would be HAProxy pointing to multiple servers, each running its own containers exposed on different ports. The purpose of this setup would be to allow the containers to operate via port 80 but have their own unique URLs.

I have looked on Google but I couldn’t find any config examples that show how to identify a requested URL and decide where HAProxy should point it.

Does anyone know of a good tutorial?
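As a starting point, Host-header routing usually looks like the sketch below (domain names, addresses and container ports are placeholders):

```
frontend fe_http
    bind *:80
    acl is_app1 hdr(host) -i app1.example.com
    acl is_app2 hdr(host) -i app2.example.com
    use_backend be_app1 if is_app1
    use_backend be_app2 if is_app2

backend be_app1
    server c1 10.0.0.11:8081 check   # container published on port 8081

backend be_app2
    server c2 10.0.0.11:8082 check   # container published on port 8082
```

Each container keeps its own published port while clients only ever see port 80 and the domain name.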

Posts: 2

Participants: 2


Modifying rdp-cookie OR persistent stick-tables?


@match wrote:

Hi,

I’m currently playing around with rdp-cookie load-balancing, and as far as I can tell, there are 2 ways to do it:

  1. balance rdp-cookie

This works great - except for the problem that different clients format the cookie in different ways - it’d be nice to have a way to (almost always) extract the username from the rest of the data. Is there a way to modify the cookie before it’s used in balancing?

  2. stick on rdp_cookie(mstshash),regsub(.*\\\,,g),regsub(@.*,,g) (with balance leastconn)

So the above lets me modify the cookie and remove Windows domains from in front of and behind the username - which is great! However, tables aren’t persistent across unexpected outages/reboots (and while I’m aware of peering, that doesn’t help for a single-proxy setup or a whole-pool outage), so if someone has an RDP session open on a backend node and I have to restart the proxy service, balance leastconn won’t send them back to their established session.

I’ve discovered I can read and set the table as follows:
echo "show table rdp" |socat stdio /var/run/haproxy.stat >/etc/haproxy/stick-table
Then use the key and server_id on restore to re-establish the table:
echo "set table rdp key bob data.server_id 1" | socat stdio /var/run/haproxy.stat

However, doing this manually is a bit racy, and it requires a regular backup-and-restore mechanism alongside HAProxy.
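To reduce the manual juggling, the dump-to-restore conversion can be scripted. Below is a sketch: the dump line format (fields like key=… and server_id=…) is assumed from 1.x “show table” output, the table name rdp matches the commands above, and sample lines are embedded here purely for illustration.

```shell
# Convert "show table rdp" dump lines into "set table" CLI commands.
# In practice the input would come from:
#   echo "show table rdp" | socat stdio /var/run/haproxy.stat
restore_cmds=$(awk '/key=/ {
  key = ""; sid = ""
  for (i = 1; i <= NF; i++) {
    if (index($i, "key=") == 1)       key = substr($i, 5)
    if (index($i, "server_id=") == 1) sid = substr($i, 11)
  }
  if (key != "" && sid != "") print "set table rdp key " key " data.server_id " sid
}' <<'EOF'
0x5591a0: key=10.0.0.1 use=0 exp=1701234 server_id=2
0x5591b8: key=10.0.0.2 use=1 exp=1701299 server_id=1
EOF
)
echo "$restore_cmds"
```

The generated commands can then be piped back into the stats socket with socat, exactly as in the single-key restore example above.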

Any ideas on how to solve the problem of always wanting to balance on the RDP cookie, but also wanting a way to modify the cookie?

Posts: 1

Participants: 1



400 redirect https to http force haproxy to stay on https


@Liquideshark wrote:

Hi,

I have a simple configuration with two servers behind the backend. I also have HTTPS configured. The problem is that on my authentication page, after validation, HTTPS gets rewritten to HTTP.

How can I stop HAProxy from permitting that?

Greetings
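A common pattern for this (a sketch only; whether it helps depends on why the application rewrites to http) is to force clients back to HTTPS at the edge and tell the application the original scheme:

```
frontend fe_web
    bind *:80
    bind *:443 ssl crt /etc/haproxy/site.pem   # hypothetical certificate path
    # bounce any plain-http request back to https
    http-request redirect scheme https unless { ssl_fc }
    # let the application know the client connection was TLS
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    default_backend be_app
```

If the application honors X-Forwarded-Proto, it should stop generating http:// URLs after login; the redirect then catches anything that slips through.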

Posts: 1

Participants: 1


HAProxy 1.8 Officially Released :confetti_ball: :fireworks:


@sam wrote:

HAProxy version 1.8 is now officially released!

see: https://www.mail-archive.com/haproxy@formilux.org/msg28004.html

  • high level overview of the new features contributed to 1.8
    (warning, the list is huge) :

  • JSON stats (Simon Horman) : the stats socket’s “show stat” and “show info”
    output can now be emitted in a structured JSON format which is more
    convenient than CSV for some modern data processing frameworks.

  • server templates (Frédéric Lécaille) : servers can be pre-provisioned
    in backends using a simple directive (“server-template”). It is then
    possible to configure them at runtime over the CLI or DNS, making it
    trivial to add/remove servers at run time without restarting. As a side
    effect of implementing this, all “server” keywords are now supported on
    the “default-server” line and it’s possible to disable any of them using
    "no-". All settings changed at runtime are present in the state
    file so that upon reload no information is lost.
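    As a quick illustration of the directive (a sketch; the backend name, DNS
    details and resolver are hypothetical):

    ```
    resolvers mydns
        nameserver dns1 192.0.2.53:53

    backend be_dynamic
        # pre-provision 10 server slots, filled/updated at runtime via CLI or DNS
        server-template srv 10 myservice.example.com:80 check resolvers mydns init-addr none
    ```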

  • dynamic cookies (Olivier Houchard) : a dynamic cookie can be generated
    on the fly based on the transport address of a newly added server. This
    is important to be able to use server templates in stateful environments.

  • per-certificate “bind” configuration (Emmanuel Hocdet) : all the SSL
    specific settings of the “bind” line may now be set per-certificate in
    the crtlist file. A common example involves requiring a client cert for
    certain domains only and not for others, all of them running on the same
    address:port.

  • pipelined and asynchronous SPOE (Christopher Faulet) : it’s an important
    improvement to the Stream Processing Offload Engine that allows requests
    to be streamed over existing connections without having to wait for a
    previous response. It significantly increases the message rate and reduces
    the need for parallel connections. Two example WAFs were introduced as
    contributions to make use of this improvement (mod_security and
    mod_defender).

  • seamless reloads (Olivier Houchard) : in order to work around some issues
    faced on Linux causing a few RST to be emitted for incoming connections
    during a reload operation despite SO_REUSEPORT being used, it is now
    possible for the new haproxy process to connect to the previous one and
    to retrieve existing listening sockets so that they are never closed. Now
    no connection breakage will be observed during a reload operation anymore.

  • PCRE2 support (David Carlier) : this new version of PCRE seems to be
    making its way into some distros, so we are now compatible with it.

  • hard-stop-after (Cyril Bonté) : this new global setting forces old
    processes to quit after a delay consecutive to a soft reload operation.
    This is mostly used to avoid an accumulation of old processes in some
    environments where idle connections are kept with large timeouts.

  • support for OpenSSL asynchronous crypto engines (Grant Zhang) : this
    allows haproxy to defer the expensive crypto operations to external
    hardware engines. Not only can it significantly improve the performance,
    but it can also reduce the latency impact of slow crypto operations on
    all other operations since haproxy switches to other tasks while the
    engine is busy. This was successfully tested with Intel’s QAT and with
    a home-made software engine. This requires OpenSSL 1.1.x.

  • replacement of the systemd-wrapper with a new master-worker model
    (William Lallemand) : this new model allows a master process to stay in
    the foreground on top of the multiple worker processes. This process
    knows the list of worker processes, can watch them to detect failures,
    can broadcast some signals it receives, and has access to the file
    system to reload if needed (yes, it even supports seamless upgrades to
    newer versions since it reloads using an execve() call). While initially
    designed as a replacement for the systemd-wrapper, it also proves useful
    in other environments and during development.

  • DNS autonomous resolver (Baptiste Assmann) : the DNS resolution used to
    be triggered by health checks. While easy and convenient, it was a bit
    limited and didn’t allow managing servers via DNS, but only detecting
    address changes. With this change the DNS resolvers are now totally
    autonomous and can distribute the addresses they’ve received to multiple
    servers at once, and if multiple A records are present in a response, the
    advertised addresses will be optimally distributed to all the servers
    relying on the same record.

  • DNS SRV records (Olivier Houchard) : in order to go a bit further with
    DNS resolution, SRV records were implemented. The address, port and weight
    attributes will be applied to servers. New servers are automatically added
    provided there are enough available templates, and servers which disappear
    are automatically removed from the farm. By combining server templates and
    SRV records, it is now trivial to perform service discovery.

  • configurable severity output on the CLI : external tools connecting to
    haproxy’s CLI had to know a lot of details about the output of certain
    actions since these messages were initially aimed at humans, and it was
    not envisioned that the socket would become a runtime API. This change
    offers an option to emit the severity level on each action’s output so
    that external APIs can classify the output between success, information,
    warnings, errors etc.

  • TLS 1.3 with support for Early-Data (AKA 0-RTT) on both sides (Olivier
    Houchard) : TLS 1.3 introduces the notion of “Early-Data”, which are
    data emitted during the handshake. This feature reduces the TLS handshake
    time by one round trip. When compiled with a TLS-1.3 compatible TLS
    library (OpenSSL 1.1.1-dev for now), haproxy can receive such requests,
    process them safely, and even respond before the handshake completes.
    Furthermore, when the client opts for this, it is also possible to pass
    the request to the server following the same principle. This way it is
    technically possible to fully process a client request in a single round
    trip.

  • multi-thread support (Christopher Faulet, Emeric Brun) : no more need
    to choose between having multiple independent processes performing
    their own checks or cascading two layers of processes to scale SSL.
    With multi-threading we get the best of both : a unified process state
    and multi-core scalability. Even though this first implementation focuses
    on stability over performance, it still scales fairly well, being almost
    linear on asymmetric crypto, which is where there’s the most demand.
    This feature is enabled by default on platforms where it could be tested,
    i.e. Linux >= 2.6.28, Solaris, FreeBSD, OpenBSD >= 5.7. It is considered
    EXPERIMENTAL, which means that if you face a problem with it, you may
    be asked to disable it for the time it takes to solve the problem. It is
    also possible that certain fixes to come will have some side effects.

  • HTTP/2 (Willy Tarreau) : HTTP/2 is automatically detected and processed
    in HTTP frontends negotiating the “h2” protocol name based on the ALPN
    or NPN TLS extensions. At the moment the HTTP/2 frames are converted to
    HTTP/1.1 requests before processing, so they will always appear as 1.1
    in the logs (and in server logs). No HTTP/2 is supported for now on the
    backend, though this is scheduled for the next steps. HTTP/2 support is
    still considered EXPERIMENTAL, so just like for multi-threading, in case
    of problem you may end up having to disable it for the time it takes to
    solve the issue.

  • small objects cache (William Lallemand) : we’ve been talking about this
    so-called “favicon cache” for many years now, so I’m pretty sure it will
    be welcome. To give a bit of context, we’ve often been criticized for
    not caching trivial responses from the servers, especially some slow
    application servers occasionally returning a small object (favicon.ico,
    main.css etc). While the obvious response is that installing a cache
    there is the best idea, it is sometimes perceived as overkill for just
    a few files. So what we’ve done here was to fill exactly that hole :
    have a safe, maintenance-free, small objects cache. In practice, if
    there is any doubt about a response’s cacheability, it will not cache.
    Same if the response contains a Vary header or is larger than a buffer.
    However this can bring huge benefits for situations where there’s no
    argument against trivial caching. The intent is to keep it as simple and
    fast as possible so that it can always be faster than retrieving the same
    object from the next layer (possibly a full-featured cache). Note that I
    purposely asked William not to implement the purge on the CLI so that
    it remains maintenance-free and we don’t see it abused where it should
    not be installed.

Posts: 1

Participants: 1


Force Sticky session on HaProxy


@DevendraJoshi wrote:

Hi,

Has anybody configured a forced sticky session in HAProxy (with JSESSIONID)?

I am using the following:
Centos 6.4
Apache 2.2
Jboss 4.0
ModJK
and HaProxy 1.6.12

I also tried the following settings, but without success.

cookie serverid insert indirect nocache
stick-table type string len 36 size 1m expire 8h
stick on cookie(JSESSIONID)

I want to configure this at the HAProxy level.
If one of my web servers goes down, the client should not get disconnected; it should wait for the web server to come back up.
If the server comes back up within that time, the client starts communicating again.
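For what it’s worth, JSESSIONID stickiness is more commonly done with cookie-prefix mode than with a stick-table; the settings quoted above mix both approaches. A sketch (server names and addresses are placeholders):

```
backend be_app
    balance roundrobin
    # prefix the application's own JSESSIONID instead of inserting a new cookie
    cookie JSESSIONID prefix nocache
    server web1 10.0.0.1:8080 check cookie web1
    server web2 10.0.0.2:8080 check cookie web2
```

Whether a client waits for a down server or gets moved elsewhere is a separate knob: “option redispatch” moves persisted clients to another server when theirs is down, while omitting it keeps them pinned.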

Posts: 1

Participants: 1


crossdomain in HAProxy


@maksim_si wrote:

Hi! Can I serve a client request from a local directory on the HAProxy server? For example:
if a user requests crossdomain.xml, I want to serve it from the directory
/opt/crossdomain/ (a folder for static files). This file would be on the server running
HAProxy.

In NGINX this takes the form:

location / {
root /opt/nginx/www/; # folder for storing files
index crossdomain.xml;
}
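HAProxy itself does not serve files from disk, but on 1.8-era versions a small Lua service can. This is only a sketch; the file path is taken from the question, while the script path and service name are hypothetical:

```lua
-- /etc/haproxy/serve-crossdomain.lua
core.register_service("serve-crossdomain", "http", function(applet)
    -- read the file on each request; fine for a tiny, rarely-changing file
    local f = io.open("/opt/crossdomain/crossdomain.xml", "r")
    local body = f and f:read("*a") or ""
    if f then f:close() end
    applet:set_status(f and 200 or 404)
    applet:add_header("content-type", "text/xml")
    applet:add_header("content-length", string.len(body))
    applet:start_response()
    applet:send(body)
end)
```

It would be loaded with lua-load in the global section and invoked with something like `http-request use-service lua.serve-crossdomain if { path /crossdomain.xml }` in the frontend.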

Thanks!

Posts: 1

Participants: 1


Custom load balancing


@Sunil wrote:

Hi,

I am using HAProxy for load balancing my HTTP requests. I would like to know if there is any way to customize the selection of the backend server based on the responses returned by each server. I have a servlet which can return the number of clients connected to it. I would like to use this information and route the request to the backend server with the lowest number.

My HAProxy configuration looks like:

listen http_front xx.xx.xx.xx:8080
  mode http
  option httpchk GET /servlet/IHealthCheckServlet
  server app1 xx.xx.xx.xx:8080 check port 8080
  server app2 xx.xx.xx.xx:8080 check port 8080
  server app3 xx.xx.xx.xx:8080 check port 8080

Can you help me in this regard?
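HAProxy cannot parse a servlet response to pick a server, but a similar effect is often achieved either with balance leastconn or with agent checks, where each server reports its own load on a side port. A sketch (addresses and the agent port are hypothetical, and the agent itself must be implemented on each server):

```
listen http_front
    bind *:8080
    mode http
    balance leastconn
    option httpchk GET /servlet/IHealthCheckServlet
    # agent-check: each server runs a small TCP agent replying e.g. "50%\n"
    # to scale its effective weight based on its current connection count
    server app1 192.0.2.11:8080 check agent-check agent-port 9777 agent-inter 5s
    server app2 192.0.2.12:8080 check agent-check agent-port 9777 agent-inter 5s
    server app3 192.0.2.13:8080 check agent-check agent-port 9777 agent-inter 5s
```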

Posts: 1

Participants: 1


HAProxy Container Setup


@BananaLolzerz wrote:

So I have read the documentation, but honestly, without seeing an actual example of a config I am really having a hard time wrapping my head around this.

I have the following setup:

I have 1x physical baremetal machine that has Ubuntu running with HAProxy installed as an application directly on the operating system and then Docker that is running multiple containers.

Before anyone freaks out, this is just running as a temporary/example and is NOT production.

But I am not sure how to write the HAProxy config file. I currently have it as:

https://hastebin.com/edibifetif.cfg

I have the following domains that are going to be pointing to my public IP:

Can someone please assist me by showing how the config file should be written? I am looking at this tutorial:

But I am not sure if the containers would be configured properly like this:

https://hastebin.com/danuparoqi.cfg

Can someone please help out?

Posts: 1

Participants: 1


Getting TLS certificates with Letsencrypt and HAProxy


@cirowrc wrote:

Hey,

last week I wanted to test the new HTTP/2 functionality in 1.8-rc3 and ended up writing down the steps I took to get TLS working with HAProxy, starting from a bare Ubuntu 17.04 instance.

I’ve just been consuming from the HAProxy community and never giving back, so here’s something that I hope might help someone:

Please let me know if you find anything wrong or want some help with anything related.

Best,

Posts: 1

Participants: 1



AB testing cookie infinite duration/maxlife


@azr wrote:

Howdy !

I have some A/B tests behind some HAProxies (in Kubernetes).

I’d like to store SERVERID in the session so that someone who does not clear his/her cookies always gets redirected to the same version of the website (i.e. give it a duration of 365 days).

The doc says The cookie will only be a session cookie and will not be stored on the client's disk..
But there is also a maxlife option that I think will have HAProxy store that cookie, right?

But apparently a cookie can only have a maximum maxlife of 24h.

Would there be a way to have the client store a long-lived cookie? Or to hack around it?

I can also do pull requests if need be :slight_smile:

Thanks !

PS: for the moment I might set up a service that will set a long-lived SERVERID when it sees one.

Edit:
I tried this approach: cookie SERVERID insert maxlife 24h, and it only gives me a session cookie, so my assumptions were wrong.
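Indeed, the “cookie … insert” directive always emits a session cookie; maxlife/maxidle only embed dates inside the cookie value so that HAProxy can expire it server-side. One workaround sketch (untested; the regex and the Max-Age value are assumptions) is to rewrite the Set-Cookie response header so the browser persists it:

```
backend be_ab
    cookie SERVERID insert indirect nocache
    # append Max-Age so the browser keeps the cookie for ~365 days
    http-response replace-header Set-Cookie ^(SERVERID=.*)$ \1;\ Max-Age=31536000
```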

Posts: 1

Participants: 1


Route by domain name not working


@asche wrote:

hello,

routing by domain name is not working in my setup.
When I connect from a client with the command

mysql -htest-db2.example.com -uxxx -pxxx

I get connected to server db1 (172.17.2.111) every time.
I think the ACL is ignored and default_backend is used.

My OS is Ubuntu 16.04 with HAProxy 1.6.3.

Please, can anybody help me?

my config:

frontend mysql
    bind 0.0.0.0:3306
    mode tcp
    acl usedb1 hdr(host) -i test-db1.example.com
    acl usedb2 hdr(host) -i test-db2.example.com
    use_backend mysql-db1 if usedb1
    use_backend mysql-db2 if usedb2
    default_backend mysql-db1

backend mysql-db1
    mode tcp
    server db1 172.17.2.111:3306 check
    server db2 172.17.2.112:3306 check backup

backend mysql-db2
    mode tcp
    server db1 172.17.2.111:3306 check backup
    server db2 172.17.2.112:3306 check
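One thing worth noting: in mode tcp there is no HTTP Host header to inspect, so hdr(host) never matches and the default_backend is always used, which is consistent with the behavior described. A common workaround (a sketch; the second port is arbitrary) is one bind per database:

```
frontend fe_db1
    bind 0.0.0.0:3306
    mode tcp
    default_backend mysql-db1

frontend fe_db2
    bind 0.0.0.0:3307
    mode tcp
    default_backend mysql-db2
```

The client then selects the database by port (or by a dedicated IP per frontend) rather than by hostname.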

Posts: 1

Participants: 1


Squid through HAProxy


@ludal wrote:

Hi everyone,

I would like to use HAProxy in front of Squid, via a specific domain, while still keeping HAProxy for SSL offloading and “basic” load balancing.
So both my proxy and my websites have to be reachable through TCP port 80.

To summarize, I would like it to work like this:
mydomain.tld -> default_backend -> nginx
proxy.mydomain.tld -> acl -> squid

I’ve done a first setup, and it works quite well so far, but I’m facing a strange problem: all the redirected links use my default backend instead of the squid backend (Google links, for example).

Here is my current configuration:

frontend XXX
        bind A.B.C.D:80
        bind A.B.C.D:443 ssl crt /etc/haproxy/ssl

        http-request set-header X-Forwarded-Proto https if  { ssl_fc }

        acl is_letsencrypt_check path_beg /.well-known/acme-challenge
        use_backend letsencrypt if is_letsencrypt_check

        acl is_proxy req.hdr(Proxy-Connection) -m found
        acl is_proxy req.hdr(Proxy-Authorization) -m found
        use_backend squid if is_proxy

        default_backend nginx

backend nginx
        server nginx-local 127.0.0.1:8080

backend squid
        cookie SERVERID insert indirect nocache
        option httpclose
        option forwardfor header X-Client

        server squid-local 127.0.0.1:3128

backend letsencrypt
        http-request set-header Host localhost
        server nginx-local 127.0.0.1:80

I’m using HAProxy 1.5.8 and squid 3.4.8. Squid is configured in transparent mode.

I guess the main issue here is identifying the incoming requests that are meant to be processed by Squid, and so far I have failed to catch them all…
Can anyone help me?

Thanks !

Posts: 1

Participants: 1


HAProxy - NGinx - Wordpress (SSL)


@soume86 wrote:

Hi,

I am replacing httpd with Nginx on my platform (httpd, Nginx and WordPress), but I have hit a blocking problem.

My architecture is as follows:

INTERNET --------https------> HAPROXY (SSL) -------> http ------> NGinx -------> Wordpress.

I have installed / configured haproxy and nginx. Both work and my site is in HTTPS. SSL is managed by HAProxy and Nginx does not do SSL.

Now I have downloaded and unzipped WordPress. To continue the installation, I went to the homepage from a web browser, and the problems began.

On the WordPress installation homepage, CSS and JavaScript files are not loaded, whereas the same architecture works with httpd instead of Nginx.

any idea?

My configurations:

#####HAProxy

frontend https-in
    bind X.X.X.X:443 ssl crt /etc/pki/certs
    mode http
    option httplog

    acl my_site hdr(host) -i mon.site.fr
    use_backend wp if my_site

    rspadd Strict-Transport-Security:\ max-age=15768000

backend wp
    mode http
    option http-server-close
    option forwardfor
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    server wp_1 X.X.X.X:8080

#####NGinx

server {
listen *:8080;
server_name mon.site.fr;

 root /var/www/html/site1;

 access_log /var/log/nginx/site1.access.log;
 error_log /var/log/nginx/site1.error.log;

 location / {

   index index.php index.html;
   try_files $uri $uri/ /index.php?$args;
 }

 if (!-e $request_filename) {
   rewrite ^.*$ /index.php last;
 }

 location = /favicon.ico {
   log_not_found off;
   access_log off;
 }

 location = /robots.txt {
   log_not_found off;
   access_log off;
   allow all;
 }

 location ~ /\. {
   deny all;
 }

 location ~* /(?:uploads|files)/.*\.php$ {
   deny all;
 }

 location ~* \.(html|css|js|png|jpg|jpeg|gif|ico|svg|eot|woff|ttf)$ {
   expires max;
   log_not_found off;
 }

 location ~ \.php$ {

   try_files $uri =404;
   fastcgi_pass unix:/var/run/php70-fpm.mon.site.fr.sock;
   fastcgi_index index.php;
   fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
   fastcgi_param SCRIPT_NAME $fastcgi_script_name;
   fastcgi_buffer_size 128k;
   fastcgi_buffers 256 16k;
   fastcgi_busy_buffers_size 256k;
   fastcgi_temp_file_write_size 256k;

   include fastcgi_params;
 }

}

http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';

access_log  /var/log/nginx/access.log  main;

sendfile            on;
tcp_nopush          on;
tcp_nodelay         on;
keepalive_timeout   65;
types_hash_max_size 2048;

include             /etc/nginx/mime.types;
default_type        application/octet-stream;

include /etc/nginx/sites-enabled/*.conf;

# Real IP
set_real_ip_from X.X.X.X;
real_ip_header X-Forwarded-For;

gzip on;
gzip_disable "msie6";

gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;

}

Thanx

Posts: 1

Participants: 1


V1.8 balance source only 1 backend server gets traffic


@elderone1 wrote:

Hi,
it seems like HAProxy v1.8 does not distribute load to more than one backend server with “balance source” and the default map-based hash-type.
The consistent hash-type works OK, however…
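For readers hitting the same thing, the workaround mentioned is simply (backend name is a placeholder):

```
backend be_app
    balance source
    hash-type consistent   # map-based (the default) reportedly sent all traffic to one server here
```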

Posts: 1

Participants: 1



