Channel: HAProxy community - Latest topics

%[] in use-server


@idiot wrote:

:wave:

I’m trying to make servers “dynamic” by using a “templated” backend, with the server picked per request from a map:

backend be_template
  server-template websrv 1-100 localhost:6666 check disabled
  use-server %[req.hdr(host),lower,map_dom(/usr/local/etc/haproxy/domain2server.map,websrv0)] if { req.hdr(host),lower,map_dom(/usr/local/etc/haproxy/domain2server.map) -m found }

However, the above doesn’t work; HAProxy fails to start with the following error:

haproxy_1  | [ALERT] 048/160628 (1) : config : backend 'be_template' : unable to find server '%[req.hdr(host),lower,map_dom(/usr/local/etc/haproxy/domain2server.map,websrv0)]' referenced in a 'use-server' rule.
haproxy_1  | [ALERT] 048/160628 (1) : Fatal errors found in configuration.

Is the %[] syntax allowed in use-server, the way it is in use_backend?


The HAProxy version I’m using is 1.9.

Full config:

global
  log /dev/log local0
  log /dev/log local1 notice
  chroot /var/lib/haproxy
  # TODO
  stats socket ipv4@127.0.0.1:9999 level admin
  # stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
  stats timeout 30s
  daemon

defaults
  log	global
  mode http
  option httplog
  option dontlognull
  option forwardfor
  option http-keep-alive
  http-reuse safe
  timeout connect 5000
  timeout client 50000
  timeout server 50000

frontend http
  bind :::80 v4v6
  use_backend be_template

backend be_template
  server-template websrv 1-100 localhost:6666 check disabled
  use-server %[req.hdr(host),lower,map_dom(/usr/local/etc/haproxy/domain2server.map)] if { req.hdr(host),lower,map_dom(/usr/local/etc/haproxy/domain2server.map) -m found }
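In case use-server turns out not to accept log-format expressions in 1.9, one workaround sketch is to map the Host header to a backend name instead, since use_backend does accept the %[] syntax. This changes the architecture (one backend per domain instead of one server per domain), and the map file name and default backend below are assumptions:

```
frontend http
  bind :::80 v4v6
  use_backend %[req.hdr(host),lower,map_dom(/usr/local/etc/haproxy/domain2backend.map,be_template)]
```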

Posts: 2

Participants: 1



Requests/connections per second limited with keepalive at the client


@Joel28 wrote:

I’m doing some performance benchmarking with HAProxy to see how many connections per second I can get. At present I’m using apib to generate the client connections, and I have HAProxy in front of an nginx server.

I can get hundreds of thousands of connections per second (e.g. 200k+) when I connect to the nginx server directly, or to HAProxy with keepalive on (set at apib runtime). But as soon as I turn off keepalive on the client connections, my requests/connections per second drop to something like 30k, and I can’t figure out why.

Can anyone suggest a reason, or a path to improving the connections per second? I know there are OS settings (like TCP reuse) that can affect the maximum connection rate, but since I can get a high number of connections per second in the other cases, I imagine my limitation is somehow with HAProxy; I just don’t know how to find out.
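A few client/OS-side knobs commonly cap non-keepalive benchmarks, sketched below as a hedged Linux sysctl fragment (values are illustrative): without keepalive, every request consumes a fresh ephemeral source port that then lingers in TIME_WAIT, and that churn rather than HAProxy itself often lands throughput in the tens of thousands of connections per second.

```
# /etc/sysctl.d/99-bench.conf (illustrative values)
net.ipv4.ip_local_port_range = 1024 65535   # widen the ephemeral port pool
net.ipv4.tcp_tw_reuse = 1                   # allow reusing TIME_WAIT sockets for outgoing connections
net.ipv4.tcp_fin_timeout = 15               # shorten the FIN_WAIT_2 hold time
```

Worth applying on both the apib machine and the HAProxy box, since both sides open short-lived connections in this test.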

Posts: 1

Participants: 1


Segfault in 1.9.4 Debian


@hans0r wrote:

Hey Guys,

last version that worked: 1.9.3-1~bpo9+1
first version that does not work: 1.9.4-1~bpo9+1

Maybe a bug… or a configuration “error” that happened to work in 1.9.3. The service is started in the new master-worker mode with support for the master CLI. Attached you can find the errors and config.

haproxy in debug-mode exited with this error:
Current worker #1 (17207) exited with code 139 (Segmentation fault)

Version:
HA-Proxy version 1.9.4-1~bpo9+1 2019/02/07 - https://haproxy.org/
Build options :
TARGET = linux2628
CPU = generic
CC = gcc
CFLAGS = -O2 -g -O2 -fdebug-prefix-map=/build/haproxy-1.9.4=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_SYSTEMD=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_NS=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.0f  25 May 2017
Running on OpenSSL version : OpenSSL 1.1.0j  20 Nov 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.3
Built with network namespace support.
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE2 version : 10.22 2016-07-29
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE
              h2 : mode=HTTP       side=FE
       <default> : mode=HTX        side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
        [SPOE] spoe
        [COMP] compression
        [CACHE] cache
        [TRACE] trace

Frontend:

frontend https
        bind *:443 ssl crt /etc/ssl/private/cert.pem crt /etc/ssl/private/cert2.pem crt #alpn h2,http/1.1
        bind *:35301 ssl crt /etc/ssl/private/cert.pem
        bind *:9000 ssl crt /etc/ssl/private/cert.pem

#       declare capture request len 48
#       declare capture request len 256
#       declare capture request len 400000

        capture request header User-Agent len 128
        capture request header Host len 48
        declare capture request len 1024
#       option accept-invalid-http-request

        tcp-request content track-sc0 src

        # capture for POST header logging
#       declare capture request len 4000000
#       http-request capture req.hdr(Host) id 0
#       http-request capture req.hdr(User-Agent) id 1
        http-request capture req.body id 2

        # Block POST abusers of the shop for a longer time; the definition is in the stick-table on backend POST_BLOCKER_SHOP_http_BE
        # with this config, all requests from IPs in the stick-table with a GPC0 counter greater than 0 are blocked!
        http-request deny deny_status 429 if { src_get_gpc0(abuser_http_BE) gt 0 }

        #http-request redirect location %[req.hdr(host),lower,map(/etc/haproxy/redirects.map)] code 302 if { req.hdr(host),lower,map(/etc/haproxy/redirects.map) -m found }

        # declare all my.rr in a whitelist so that nothing is blocked from there
        acl is_myrr src -f /etc/haproxy/myrr.lst

        #tcp-request content track-sc0 src

        # bot detection and deny bots
        acl is_robot hdr_reg(User-Agent) -i -f /etc/haproxy/bots.lst
        http-request deny if is_robot

        stick-table type ip size 500k expire 5m store gpc0,conn_cur,conn_rate(3s),http_req_rate(7s),http_err_rate(10s),gpc0_rate(10s)
        http-request tarpit if { src_conn_cur ge 200 } !is_myrr
#        http-request tarpit if { src_conn_rate ge 2500 } !is_myrr
#        http-request tarpit if { sc0_http_err_rate() gt 40 } !is_myrr
        http-request tarpit if { sc0_http_req_rate() gt 10000 } !is_myrr

        http-request redirect code 301 location https://www.%[hdr(host)]%[capture.req.uri] if !{ hdr_beg(host) -i www } { ssl_fc_sni -i domain.com }

        reqadd X-Forwarded-Proto:\https
        rspdel Server:
        rspadd Server:\ xyz\ webserver

        acl is_35301 dst_port 35301
        acl is_9000 dst_port 9000

        acl is_static path_end -i .jpg .gif .png .css .ico .ttf .woff .woff2 .htm .html #.js
        acl is_static_data path_end -i .jpg .gif .png .ico .ttf .woff .woff2
        acl is_default_bot_login path_beg -i /wp-login.php /typo3

        # Shop POST limiter for too many POSTs.
#       acl is_content_present req.hdr_val(content-lenght) ge 1
        acl is_shop path_dir shop
        acl is_rr ssl_fc_sni -i domain.com -i www.domain.com -i web.domain.com
        acl is_POST_abuser src_gpc0_rate gt 10
        use_backend abuser_http_BE if is_POST_abuser METH_POST is_rr is_shop !is_myrr or is_default_bot_login !is_myrr

        use_backend events_BE_static if { ssl_fc_sni -i events.domain.com } is_static !is_35301
        use_backend events_BE if { ssl_fc_sni -i events.domain.com } !is_35301

Backend that is used when this error appears:

backend events_BE
        mode http
        timeout server 300s
        timeout connect 2s
        timeout http-keep-alive 1s
        option http-server-close
        filter compression

        timeout check 2s

        balance roundrobin

        compression algo gzip
        compression type text/html text/plain text/xml text/json text/css text/javascript application/javascript

        cookie ROUTEID insert indirect nocache

        no option redispatch
#       option httpchk GET / HTTP/1.1\r\nHost:\ events.domain.com
        server s1-104-8080 10.10.10.104:8080 check weight 100 maxconn 75 cookie s1events inter 1s fall 3
        server s2-104-8081 10.10.10.104:8081 check weight 100 maxconn 75 cookie s2events
        server s3-102-8081 10.10.10.102:8081 check maxconn 75 backup inter 1s fall 3

backend events_BE_static
        mode http
        timeout server 300s
        timeout connect 2s
        timeout http-keep-alive 1s
        option http-server-close

        timeout check 2s

        http-request del-header Cache-Control
        http-request del-header Pragma
#       http-request del-header Cookie

        balance roundrobin

        cookie ROUTEID insert indirect nocache

        http-request cache-use events
        http-response cache-store events

        server s1-104-8080 10.10.10.104:8080 check weight 100 maxconn 75 cookie s1events inter 1s fall 3
        server s2-104-8081 10.10.10.104:8081 check weight 100 maxconn 75 cookie s2events inter 1s fall 3
        server s3-102-8081 10.10.10.102:8081 check maxconn 75 backup inter 1s fall 3
		
cache events
        total-max-size 100
        max-age 3600

Can someone help me?

Thanks

Posts: 2

Participants: 1


Can someone clarify `map_beg` a bit more


@ykiran_k wrote:

I am using a map with path and app name entries, like this sample:

 /abcd/efgh/ App1
 /abcd/1234/ App2
 /abcd/ App3

My expectation using map_beg for the map lookup on the path was that /abcd/ would be the fallback for all other paths starting with /abcd/, but it’s not behaving the way path_beg does. Is there another way, or am I using it incorrectly? Please add some suggestions; I am new to using maps.

Let me know if you need more information.
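For reference, a sketch of how I’d expect this to be wired up, assuming the entries above live in /etc/haproxy/apps.map and that backends named App1/App2/App3 exist: map_beg matches a map key that is a prefix of the sample, and for non-exact match types the first matching entry in file order wins, so the /abcd/ fallback should sit last in the file.

```
frontend fe
    bind :80
    # fall back to App3 if no map key is a prefix of the path
    use_backend %[path,map_beg(/etc/haproxy/apps.map,App3)]
```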

Posts: 1

Participants: 1


Haproxy stats page - user lockout


@simonuk1 wrote:

Does anyone know if there is a way to give the users defined for access to the stats page a lockout period,
so that if someone tries to access the site with the wrong password, it locks them out for a set period?
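Not built into the stats keywords as far as I know, but a rough sketch of a lockout using a stick-table on the stats frontend, assuming failed logins show up as 4xx (401) responses counted by http_err_rate; the port, credentials, and thresholds are placeholders:

```
frontend stats_fe
    bind :8404
    stick-table type ip size 10k expire 10m store http_err_rate(1m)
    http-request track-sc0 src
    # more than 5 error responses (e.g. 401s) per minute from one IP: deny
    http-request deny deny_status 403 if { sc0_http_err_rate gt 5 }
    stats enable
    stats uri /
    stats auth admin:changeme
```

The entry expires on its own, so the lockout lifts once the source stays quiet for the expire period.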

Posts: 1

Participants: 1


Configuration File Questions for Multiple Instances


@joeg wrote:

So I run multiple instances of haproxy on the same box for various applications; I simply create another service file with the new parameters. This allows me to work on one application without affecting the others. So when I load haproxy, I have -f $BASECONFIG -f $APPCONFIG … where
BASECONFIG=haproxy-common.conf
APPCONFIG=haproxy-webapp1.conf

I was hoping that I could put my ACLs in haproxy-common.conf, but it appears that they can only be defined in frontend or backend stanzas. It also appears that you can’t duplicate listen, frontend, or backend stanzas with the same name; I was hoping to have the common settings for stats defined in haproxy-common.conf, with only the app-unique parts in haproxy-webapp1.conf.

Is there a better way to do this keeping the haproxy instances separated for each application? Is there someplace I can define ACLs once in the haproxy-common.conf file and reference them in the other configuration files?

Also, is there a way to have HAProxy use one IP for health-check polling and another for actual client connections? That would make it easier to filter out the probes when capturing packets.
Anyhow, I thought I’d ask, since I am new to haproxy.
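On the ACL question: named ACLs are scoped to the frontend/backend/listen section that declares them, so they can’t live in a shared file on their own. What can be shared across -f files is a defaults section, which the sections in later files inherit. A sketch of the split, using the file names from above (contents are illustrative):

```
# haproxy-common.conf
defaults
    mode http
    option httplog
    timeout connect 5s
    timeout client  50s
    timeout server  50s

# haproxy-webapp1.conf (loaded after the common file)
frontend webapp1
    bind :8080
    acl is_static path_end .css .js   # ACLs must be (re)declared in each section that uses them
    use_backend be_static if is_static
    default_backend be_webapp1

backend be_static
    server s1 127.0.0.1:8081

backend be_webapp1
    server s1 127.0.0.1:8082
```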

Thanks.

Posts: 1

Participants: 1


HAProxy - Can't get more than 48,000 queries per second


@Bob wrote:

We are trying to set up load-balanced servers so we can spread the load over many servers as we grow our app. Our software is written in Go, listening as a web server.

We created a simple server in Go using PING/PONG to see how many requests we can handle per server at max. Yes, we understand that when you add in database access and all of the other processing code, that single server will not reach the same number of transactions per second. This is merely test code to ensure consistent speed on each box to exclude any outside latency sources.

We have a 2U physical box with 64 GB RAM and (2) Xeon 4110 8-core/16-thread processors. Our Internet connection is 1 Gb fiber, and all servers are connected internally on a virtual LAN, so latency shouldn’t be an issue.

We have (3) Go servers set up, each with 4 GB RAM and 4 CPUs in a VM using CentOS 7.

We can achieve 47,000 queries per second on each box. We would assume that if we had HAProxy in front of the boxes we should be able to hit approx 140,000 qps over the 3 servers combined.

We are using the TCP method of routing in HAProxy as it appears to be much faster than the HTTP method. We do not need to route based on URL so TCP seems to be better at the moment.

The 47,000 qps was tested using loader.io with 2000 clients spanning to 4000 clients over a 1 minute period. Each server processes the same amount of qps on average.

We set up HAProxy to connect to one server only, to see what the speed was with it in the middle. We saw about 46,000 qps when hitting the one server.

We added 2 more servers for a total of 3 behind HAProxy and there was no increase in qps, however there was load on all 3 servers spread out as indicated from watching htop on all machines. Total of 46,000 qps was all it would reach.

The HAProxy server was setup with 8 CPU and 16 GB ram and was NOT maxing out on CPU when watching htop.

There is 1 external IP address coming into HAProxy and the backends are linked via internal 10.x.x.x IP addresses, 1 per box. Each back end server also has an external IP address we used to test the speed of each server individually to ensure they all worked at 47,000~ qps.

We did increase loader.io to run 2000 thru 8000 clients per second aimed at the HAProxy server to throw more load at it, however it didn’t increase the qps at all.

It appears we have enough power to process the simple ping/pong requests in CPU, RAM and Internet traffic.

Is there a max limit that HAProxy can process per external IP based on port exhaustion? We did increase the ports on the server to 1024 - 65000.

Using watch ss -s doesn’t show any more than 10,000 ports used at maximum, and there are very few sockets in any wait state, as we have set the server to reuse TCP connections as well as reduced the FIN timeout.

We are simply trying to have a front end web proxy able to handle a lot of traffic and pass it off to the back end servers to process.

Our Go code currently runs at about 10,000 qps per VM, so if we wanted to achieve the 140,000 qps in the above example, we would need 14 VMs to handle it. The goal is to have the capability for a lot more than 140,000 and simply add more servers on the back end to handle the increase in load.
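Not from the thread, but a sketch of the HAProxy knobs that usually matter in this situation, assuming HAProxy 1.8+ built with thread support; the addresses and numbers are placeholders. With a single proxy source IP, the proxy-to-backend side can also run short of ephemeral ports under heavy churn, which extra per-server source addresses can relieve:

```
global
    nbthread 8          # spread connection handling over the 8 vCPUs
    maxconn 200000      # raise the global connection ceiling

backend go_servers
    balance roundrobin
    # hypothetical extra source IPs widen the ~64k-per-tuple ephemeral-port budget
    server app1 10.0.0.1:8080 source 10.0.1.1
    server app2 10.0.0.2:8080 source 10.0.1.2
    server app3 10.0.0.3:8080 source 10.0.1.3
```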

Posts: 1

Participants: 1


HAProxy Rate Limiting


@Santos wrote:

Hi All,

I want to use HAProxy to do rate limiting on the front-end URLs.

I tried the suggestions mentioned here, but I see that with these, once it blocks, it blocks forever. So this is a permanent DDoS-style ban, not rate limiting.

My requirement is that the customer can access the URL /XYZ000/ at 1000 RPS and /ABC000/ at 2000 RPS; if more than the prescribed number of requests per second is received, the extra requests should be rejected. How do we achieve that?

I tried the following, but once it blocks, it blocks forever, which is a permanent ban. Please help!
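One way to get a sliding window rather than a permanent block, sketched for HAProxy 1.8+ (the frontend name, ports, and expire value are assumptions): the stick-table entry expires and http_req_rate decays on its own, so the deny lifts as soon as the rate drops back under the limit. Permanent blocks usually come from recipes that flag gpc0 and never clear it.

```
frontend fe_api
    bind :80
    # track request rate per first path segment (XYZ000, ABC000, ...)
    stick-table type string len 32 size 100k expire 10s store http_req_rate(1s)
    http-request track-sc0 path,word(1,/)

    acl xyz path_beg /XYZ000/
    acl abc path_beg /ABC000/
    http-request deny deny_status 429 if xyz { sc0_http_req_rate gt 1000 }
    http-request deny deny_status 429 if abc { sc0_http_req_rate gt 2000 }

    default_backend be_app
```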

Posts: 2

Participants: 1



Serious Performance Issue with ACL and URL Rewriting


@ijm667 wrote:

We have been running a version 1.6 HAProxy-based load balancing system for the past 12 months. We were processing around 40K requests per day for the first 6 months; then traffic started to increase, to 80K, 160K, 300K requests per day. We have been handling around a million requests a day for the past 3 months without any trouble on the HAProxy side of things.

We updated the back end and changed the base path of the URL from /whatever/etc to /v2/whatever/etc, and decided to use HAProxy to do the URL rewrite on the fly, allowing existing configurations to continue working with the new URL. This is when the trouble started. We were processing backend requests in about 400ms, but since the URL rewrite was added, performance has dropped to 1 request per 4 seconds at best. If we deactivate the URL rewriting, performance returns to 400ms, so it is clear that the ACL/URL rewriting is the cause.

My questions are very simple: does anyone know anything about this? Would an upgrade to 1.7 or 1.8 change things?
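Not from the thread, but if the rewrite is currently done with a regex rule such as reqrep, one hedged alternative is http-request set-path, which builds the new path from a format string instead of running a regex on every request; worth checking that it exists in your 1.6 build, and /whatever/ below stands in for the real base path:

```
acl legacy_api path_beg /whatever/
http-request set-path /v2%[path] if legacy_api
```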

Posts: 7

Participants: 2


Installation and configuration


@natali wrote:

I am new to HAProxy and I couldn’t find official documentation on how to install it.
Can someone please help me?

Posts: 2

Participants: 2


Need to see client IP in HAProxy logs


@Wade_Beyers wrote:

Hello,

I have in my config
option http-server-close
option forwardfor
http-request set-header X-Client-IP %[src]

The logs are showing the IP address of the front-facing load balancer, not the forwarded client IP. How can I make the true client IP show up in the connection logs?
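A sketch, assuming the front load balancer fills in X-Forwarded-For: with option httplog, captured request headers are appended to the log line between braces, so capturing the header makes the original client IP visible in the logs (the frontend name and length are placeholders):

```
frontend fe_web
    bind :80
    capture request header X-Forwarded-For len 40
    default_backend be_web
```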

Posts: 1

Participants: 1


Stats are not updated


@surve.swapnil wrote:

Hi All,

I am seeing an issue in our HAProxy where the backend total-HTTP-responses stats are not getting updated for a few backends. It’s random behavior. We have HAProxy running in pods, and the backends are periodically refreshed using confd. We use the Prometheus HAProxy exporter to retrieve the stats.

Is there some sort of sampling in the backend HTTP response totals metric? Is there any special case in which it will not update the stats?

Thanks,
Swapnil

Posts: 1

Participants: 1


Can't access HAProxy externally from the Internet


@elte156 wrote:

I’m having an issue (timeout) trying to access the load balancer from outside my LAN (externally, from the Internet).
I can access it just fine from within my local network.

Some of the isolation tests I’ve tried:

  • My network firewall has both 80 and 443 ports open
    • Port forward them to HAProxy server and I can locally access https://example.com
    • Port forward them to HAProxy server and I get TIMEOUT outside of the network trying to access https://example.com
    • Port forward directly to example http server and I can locally access https://example.com
    • Port forward directly to example http server and I can access https://example.com outside of the network
    • This proves that traffic from the outside can hit a machine on the local network
  • Ran wget http://localhost:80 on the HAProxy server and received OK 200
    • Ran same command (with LB internal IP) from another server on same private network and got OK 200
    • Ran same command (with LB external domain http://example.com) from another server on same private network and got OK 200
    • This proves that LB is listening on 80
  • Ran wget --no-check-certificate https://localhost:443 on the HAProxy server and received OK 200
    • Ran same command (with LB internal IP) from another server on same private network and got OK 200
    • Ran same command (with LB external domain https://example.com) from another server on same private network and got OK 200
    • This proves that LB is listening on port 443

This leads me to believe there is something wrong with my version of HAProxy or my config.
Why would everything work fine internally but then time out when a request comes from outside the network?

$ haproxy -vv

HA-Proxy version 1.9.4-1ppa1~xenial 2019/02/07 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_SYSTEMD=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_NS=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
Running on OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.1
Built with network namespace support.
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE2 version : 10.21 2016-01-12
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE
              h2 : mode=HTTP       side=FE
       <default> : mode=HTX        side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
        [SPOE] spoe
        [COMP] compression
        [CACHE] cache
        [TRACE] trace

Current Config

global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
        ssl-default-bind-options no-sslv3

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

listen haproxy-monitoring
        bind *:1000
        mode http
        stats enable
        stats hide-version
        stats realm Haproxy\ Statistics
        stats uri /
        stats auth XXX:XXX

frontend http_frontend
        bind *:80
        http-request redirect scheme https code 301 if !{ ssl_fc }

frontend https_frontend
        bind *:443 ssl crt /etc/ssl/private/
        mode http
        option httplog
        default_backend host_lamp

backend host_lamp
        mode http
        balance roundrobin
        server lamp1 X.X.X.X:80

Failing while trying to access from another ISP (My 4G cell hotspot)

* Preparing request to https://example.com/
* Using libcurl/7.57.0-DEV OpenSSL/1.0.2o zlib/1.2.11 libssh2/1.7.0_DEV
* Disable timeout
* Enable automatic URL encoding
* Enable SSL validation
* Enable cookie sending with jar of 6 cookies
* Connection 75 seems to be dead!
* Closing connection 75
* TLSv1.2 (OUT), TLS alert, Client hello (1):
*   Trying X.X.X.X...
* TCP_NODELAY set
* connect to X.X.X.X port 443 failed: Timed out
* Failed to connect to example.com port 443: Timed out
* Closing connection 76

Success while trying to access from the same network

* Preparing request to https://example.com/
* Using libcurl/7.57.0-DEV OpenSSL/1.0.2o zlib/1.2.11 libssh2/1.7.0_DEV
* Disable timeout
* Enable automatic URL encoding
* Enable SSL validation
* Enable cookie sending with jar of 6 cookies
* Hostname in DNS cache was stale, zapped
*   Trying X.X.X.X...
* TCP_NODELAY set
* Connected to example.com (X.X.X.X) port 443 (#77)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: C:\Users\XXX\AppData\Local\Temp\insomnia_6.3.2\2017-09-20.pem
*   CApath: none
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: CN=example.com
*  start date: Jan 31 23:00:29 2019 GMT
*  expire date: May  1 23:00:29 2019 GMT
*  subjectAltName: host "example.com" matched cert's "example.com"
*  issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
*  SSL certificate verify ok.
> GET / HTTP/1.1
> Host: example.com
> User-Agent: insomnia/6.3.2
> Accept: */*
< HTTP/1.1 200 OK
< Date: Wed, 27 Feb 2019 03:55:10 GMT
< Server: Apache
< X-Powered-By: PHP/7.1.25
< X-Frame-Options: SAMEORIGIN
< X-Mod-Pagespeed: 1.13.35.2-0
< Vary: Accept-Encoding
< Cache-Control: max-age=0, no-cache, s-maxage=10
< Content-Length: 70
< Content-Type: text/html; charset=UTF-8

* Received 70 B chunk
* Connection #77 to host example.com left intact

Posts: 1

Participants: 1


Netscaler CIP magic number in tcp-request


@larsbek wrote:

I am struggling with setting up HAProxy with Netscaler CIP support.
The Netscaler is configured with CIP and magic number 1234.
As I would like to require CIP only on requests arriving through the Netscaler LB, I tried implementing an ACL and a tcp-request connection rule; unfortunately it does not work (HAProxy closes the connection with a FIN immediately on receiving the SSL Client Hello).
See the specific configuration snippet below.
Configuring HAProxy to require the Netscaler CIP on all requests (using the bind setting “accept-netscaler-cip 1234”) works fine, but is not usable, as all requests arriving from clients other than the Netscaler are rejected.
I seem to be missing an option to add the magic number (1234) when using the tcp-request connection rule; I am not sure if this is the problem.
Has anyone tried this, or can anyone help? I would appreciate it!

HAProxy version 1.8.14-52e4d43

HAProxy accepts requests from Netscaler and works with this configuration:

frontend public_ssl
        bind :443 accept-netscaler-cip 1234
        tcp-request inspect-delay 5s
        tcp-request content accept if { req_ssl_hello_type 1 }

HAProxy rejects requests from Netscaler with this configuration (NS source address 1.1.1.1):

frontend public_ssl
        bind :443
        tcp-request inspect-delay 5s
        acl netscaler_with_cip src 1.1.1.1
        tcp-request connection expect-netscaler-cip layer4 if netscaler_with_cip
        tcp-request content accept if { req_ssl_hello_type 1 }

Brgds, Lars

Posts: 1

Participants: 1


Rewrite url from backend


HAProxy / Keepalived not forwarding traffic


@dean132 wrote:

Hi All,

I am really new to HAProxy and Keepalived, and it’s been a steep learning curve this week, but I think I am almost at my end goal; my suspicion is that I am missing a really simple step. I would really appreciate any help someone could give me.

The issue I am having is that the load balancer does not appear to forward traffic to the backends when using the VIP. I can ping the VIP, so I know it’s up.

I was able to run curl http://192.168.5.42 and get the desired response from the MASTER node itself. However, in my attempt to resolve the issue, I then added an HAProxy.xml file to the /etc/firewalld/services folder with the settings below, and now I cannot run that command:

<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>HAProxy</short>
  <description>HAProxy load-balancer</description>
</service>

In my lab I have the following setup

2 x RHEL 7.6 servers with HAProxy and Keepalived installed

2 x Windows 2016 IIS servers with default IIS page with bindings on 80

RHEL-A Keepalived setup:

! Configuration File for keepalived

global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
}

vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.5.42 dev ens33 label ens33:100
}
}
virtual_server 192.168.5.42 80 {
delay_loop 10
protocol TCP
lb_algo rr

# Use direct routing

lb_kind DR
persistence_timeout 7200

real_server 192.168.5.50 80 {
    weight 1
    TCP_CHECK {
      connect_timeout 5
      connect_port 80
    }
}

real_server 192.168.5.51 80 {
    weight 1
    TCP_CHECK {
      connect_timeout 5
      connect_port 80
    }
}

}
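One thing worth flagging in the config above, as a hedged observation: the keepalived virtual_server block drives LVS/IPVS balancing for 192.168.5.42:80, which competes with HAProxy's own listener on that same VIP and port. If HAProxy is meant to do the balancing, a VRRP-only keepalived config like this sketch keeps just the failover part:

```
! VRRP-only sketch: HAProxy balances, keepalived only moves the VIP
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.5.42 dev ens33 label ens33:100
    }
}
```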

RHEL-A HAProxy Setup

#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the ‘-r’ option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2

chroot      /var/lib/haproxy
pidfile     /var/run/haproxy.pid
maxconn     4000
user        haproxy
group       haproxy
daemon

# turn on stats unix socket
stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxies to the backends
#---------------------------------------------------------------------
frontend main
    bind 192.168.5.42:80
    default_backend app

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
    balance roundrobin
    server static 127.0.0.1:4331 check

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
    balance roundrobin
    server app1 192.168.5.50
    server app2 192.168.5.51

RHEL-B Keepalived setup

! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.5.42 dev ens33 label ens33:100
    }
}

virtual_server 192.168.5.42 80 {
    delay_loop 10
    protocol TCP
    lb_algo rr

    # Use direct routing
    lb_kind DR
    persistence_timeout 7200

    real_server 192.168.5.50 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            connect_port 80
        }
    }

    real_server 192.168.5.51 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            connect_port 80
        }
    }
}

RHEL-B HAProxy Setup

#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2

chroot      /var/lib/haproxy
pidfile     /var/run/haproxy.pid
maxconn     4000
user        haproxy
group       haproxy
daemon

# turn on stats unix socket
stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxies to the backends
#---------------------------------------------------------------------
frontend main
    bind 192.168.5.42:80
    default_backend app

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
    balance roundrobin
    server static 127.0.0.1:4331 check

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
    balance roundrobin
    server app1 192.168.5.50
    server app2 192.168.5.51
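A side note on the `app` backend above: the `server` lines carry no `check` keyword, so HAProxy itself never health-checks app1/app2 and will keep dispatching requests to a web server that is down. If HAProxy should also react to backend failures, a sketch (assuming the web servers listen on port 80):

```
backend app
    balance roundrobin
    server app1 192.168.5.50:80 check
    server app2 192.168.5.51:80 check
```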

Posts: 1

Participants: 1

Read full topic

SSL Offloading not working


@vijay.sachdeva wrote:

Hi Team,

I am facing an issue with SSL offloading for one of my client’s websites. What happens is, SSL offload works fine to some extent, but after it reaches another folder within the same domain, the browser starts blocking the content as insecure data.

frontend tms_http
    bind *:80
    bind *:443 ssl crt /etc/ssl/ssl/xyz.pem no-sslv3
    redirect scheme https if !{ ssl_fc }
    mode http
    option forwardfor
    reqadd X-Forwarded-Proto:\ https
    acl tms1_birturl hdr(host) -i tms01-birt.xyz.com.np
    use_backend tms_1_birt18080 if tms1_birturl

backend tms_1_birt18080
    timeout client 100000s
    timeout server 100000s
    server 172.23.1.101 172.23.1.101:18080 check observe layer7
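A common cause of this kind of mixed-content warning is the application generating absolute `http://` URLs, because behind the offloading proxy it only ever sees plain HTTP. A sketch of a more current way to signal the original scheme (on recent HAProxy versions, `http-request set-header` replaces the deprecated `reqadd`; the application still has to be configured to honor the header):

```
frontend tms_http
    bind *:80
    bind *:443 ssl crt /etc/ssl/ssl/xyz.pem no-sslv3
    redirect scheme https if !{ ssl_fc }
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
```

Note that the original `reqadd` line adds the header unconditionally, so even plain-HTTP requests claim to be HTTPS; the `if { ssl_fc }` condition avoids that.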

Thanks

Posts: 1

Participants: 1

Read full topic

Rewrite only part of response header


@sysbes wrote:

Hi all!

I have response header:

Location: https://app1.prod.domain.com/devops2/login/userStart?pageUUID=54190a4f-1e89-43b4-ba53-e581e5fc6bf4&userId=&uuidValue=8cad8415-d1ae-4396-8e1a-ac1620a7755a&loginAction=true

I need to rewrite it so that the result is:

Location: https://app1.stage.domain.com/devops2/login/userStart?pageUUID=54190a4f-1e89-43b4-ba53-e581e5fc6bf4&userId=&uuidValue=8cad8415-d1ae-4396-8e1a-ac1620a7755a&loginAction=true
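If the goal is just to swap `prod` for `stage` in the Location header, a regex-based rewrite should do it. A sketch, assuming a reasonably recent HAProxy where `http-response replace-header` is available; the backend name is only a placeholder:

```
backend bk_app
    http-response replace-header Location ^https://app1\.prod\.(.*)$ https://app1.stage.\1
```

The second argument is a regex matched against the full header value and the third is the replacement, so the path and query string are carried over unchanged via `\1`.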

Posts: 1

Participants: 1

Read full topic

Server goes UP without tcp-check if it resolves again


@leen15 wrote:

Hi to all, I have a problem with a haproxy instance (1.9.4) in front of a redis cluster (3 nodes), all inside k8s.

I configured haproxy for a tcp-check like this:

backend bk_redis
  option tcp-check
  tcp-check send AUTH\ RedisTest\r\n
  tcp-check expect string +OK
  tcp-check send PING\r\n
  tcp-check expect string +PONG
  tcp-check send info\ replication\r\n
  tcp-check expect string role:master
  tcp-check send QUIT\r\n
  tcp-check expect string +OK
  default-server  check resolvers kubedns inter 1s downinter 1s fastinter 1s fall 1 rise 30 maxconn 330 no-agent-check on-error mark-down
  server redis-0 redis-ha-server-0.redis-ha.redis-ha.svc.cluster.local:6379
  server redis-1 redis-ha-server-1.redis-ha.redis-ha.svc.cluster.local:6379
  server redis-2 redis-ha-server-2.redis-ha.redis-ha.svc.cluster.local:6379

When the master node goes down it works fine: a replica is promoted to master and haproxy redirects the traffic to it.
The problem is when the old master comes back with a new IP, because haproxy doesn’t check for the master role again but instead immediately marks the old node as UP.

this is the log:

[NOTICE] 058/125637 (1) : New worker #1 (6) forked
[WARNING] 058/125637 (6) : Health check for server bk_redis/redis-0 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 0ms, status: 1/1 UP.
[WARNING] 058/125639 (6) : Health check for server bk_redis/redis-1 failed, reason: Layer7 timeout, info: " at step 6 of tcp-check (expect string 'role:master')", check duration: 1001ms, status: 0/30 DOWN.
[WARNING] 058/125639 (6) : Server bk_redis/redis-1 is DOWN. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 058/125639 (6) : Health check for server bk_redis/redis-2 failed, reason: Layer7 timeout, info: " at step 6 of tcp-check (expect string 'role:master')", check duration: 1001ms, status: 0/30 DOWN.
[WARNING] 058/125639 (6) : Server bk_redis/redis-2 is DOWN. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 058/125657 (6) : Health check for server bk_redis/redis-0 failed, reason: Layer4 timeout, info: " at step 1 of tcp-check (send)", check duration: 1001ms, status: 0/30 DOWN.
[WARNING] 058/125657 (6) : Server bk_redis/redis-0 is DOWN. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 058/125657 (6) : backend 'bk_redis' has no server available!
[WARNING] 058/125706 (6) : Health check for server bk_redis/redis-2 failed, reason: Layer7 invalid response, info: "TCPCHK did not match content 'role:master' at step 6", check duration: 532ms, status: 0/30 DOWN.
[WARNING] 058/125706 (6) : Health check for server bk_redis/redis-1 failed, reason: Layer7 invalid response, info: "TCPCHK did not match content 'role:master' at step 6", check duration: 835ms, status: 0/30 DOWN.
[WARNING] 058/125707 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 1/30 DOWN.
[WARNING] 058/125708 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 2/30 DOWN.
[WARNING] 058/125708 (6) : Health check for server bk_redis/redis-1 failed, reason: Layer7 timeout, info: " at step 6 of tcp-check (expect string 'role:master')", check duration: 1001ms, status: 0/30 DOWN.
[WARNING] 058/125709 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 4ms, status: 3/30 DOWN.
[WARNING] 058/125710 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 4ms, status: 4/30 DOWN.
[WARNING] 058/125711 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 5/30 DOWN.
[WARNING] 058/125712 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 6/30 DOWN.
[WARNING] 058/125713 (6) : Server bk_redis/redis-0 was DOWN and now enters maintenance (DNS NX status).
[WARNING] 058/125713 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 7/30 DOWN.
[WARNING] 058/125714 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 8/30 DOWN.
[WARNING] 058/125715 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 9/30 DOWN.
[WARNING] 058/125716 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 4ms, status: 10/30 DOWN.
[WARNING] 058/125717 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 11/30 DOWN.
[WARNING] 058/125718 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 4ms, status: 12/30 DOWN.
[WARNING] 058/125719 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 13/30 DOWN.
[WARNING] 058/125720 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 14/30 DOWN.
[WARNING] 058/125721 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 15/30 DOWN.
[WARNING] 058/125722 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 16/30 DOWN.
[WARNING] 058/125723 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 17/30 DOWN.
[WARNING] 058/125724 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 4ms, status: 18/30 DOWN.
[WARNING] 058/125725 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 19/30 DOWN.
[WARNING] 058/125726 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 20/30 DOWN.
[WARNING] 058/125727 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 21/30 DOWN.
[WARNING] 058/125728 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 22/30 DOWN.
[WARNING] 058/125729 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 23/30 DOWN.
[WARNING] 058/125730 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 4ms, status: 24/30 DOWN.
[WARNING] 058/125731 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 4ms, status: 25/30 DOWN.
[WARNING] 058/125732 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 26/30 DOWN.
[WARNING] 058/125733 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 27/30 DOWN.
[WARNING] 058/125734 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 28/30 DOWN.
[WARNING] 058/125735 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 29/30 DOWN.
[WARNING] 058/125736 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 1ms, status: 1/1 UP.
[WARNING] 058/125736 (6) : Server bk_redis/redis-2 is UP. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
[WARNING] 058/125945 (6) : bk_redis/redis-0 changed its IP from 10.42.4.85 to 10.42.4.87 by kubedns/namesrv1.
[WARNING] 058/125945 (6) : Server bk_redis/redis-0 ('redis-ha-server-0.redis-ha.redis-ha.svc.cluster.local') is UP/READY (resolves again).
[WARNING] 058/125945 (6) : Server bk_redis/redis-0 administratively READY thanks to valid DNS answer.
[WARNING] 058/125947 (6) : Health check for server bk_redis/redis-0 failed, reason: Layer7 timeout, info: " at step 6 of tcp-check (expect string 'role:master')", check duration: 1000ms, status: 0/30 DOWN.
[WARNING] 058/125947 (6) : Server bk_redis/redis-0 is DOWN. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.

As the last lines show, when bk_redis/redis-0 gets a new IP (BUT IT WAS DOWN), it immediately goes UP without running the tcp-check (which starts a second later and, of course, fails).

How can I avoid this?
Is there a way to force the server to wait for the tcp-check to pass before going UP when its IP resolves again?
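For anyone who wants to reproduce the role check outside HAProxy while debugging the failover, the same AUTH / INFO replication exchange the tcp-check performs can be sketched in a few lines of Python (hypothetical helper, not part of HAProxy; host, port, and password are the ones from the config above):

```python
import re
import socket

def parse_role(info_reply: str) -> str:
    """Extract the 'role:' field from a Redis 'INFO replication' reply."""
    match = re.search(r"^role:(\w+)", info_reply, re.MULTILINE)
    return match.group(1) if match else "unknown"

def is_master(host: str, port: int = 6379, password: str = "RedisTest",
              timeout: float = 1.0) -> bool:
    """Mirror the haproxy tcp-check: AUTH, then INFO replication, then QUIT."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(f"AUTH {password}\r\n".encode())
        if not s.recv(1024).startswith(b"+OK"):
            return False
        s.sendall(b"INFO replication\r\n")
        reply = s.recv(65536).decode(errors="replace")
        s.sendall(b"QUIT\r\n")
        return parse_role(reply) == "master"
```

Polling `is_master()` against each pod right after a failover shows when the returning node actually reports `role:master` again, which helps separate the DNS/UP-state window from a genuine Redis state change.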

Posts: 2

Participants: 2

Read full topic

Inconsistent ACL resolution resulting in incorrect backend selection


@ScottA wrote:

I have a single frontend that I would like to route to two different backends based on the requested URL.

Example Frontend Config

frontend example_https
    bind 192.168.1.5:443 ssl crt /etc/ssl/private/certname.pem alpn h2,http/1.1
    acl is_jim path -i /jim.asmx
    option forwardfor
    option http-server-close
    use_backend jim_webservers_https if is_jim
    default_backend bob_webservers_https

With the config above, if I request www.example.com/jim.asmx, my requests are only sometimes successful. Looking at haproxy.log, I can see that for the failed requests the wrong (default) backend is being selected.

I have tried various approaches, e.g. using path/path_beg/path_end, but always encounter inconsistent behaviour. It tends to either work or not work within a particular browser session until I reopen or change the browser. That said, even this pattern is not consistent, all the while haproxy.log shows the same URI being requested yet routed to different backends.

This would suggest my ACL is not being matched consistently.
Any thoughts on what to try?
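One way to narrow this down is to log the exact path value HAProxy evaluates for each request, so the log shows both the captured path and the backend chosen. A sketch (the `http-request capture` line is the only addition; captured samples appear between braces in the httplog output):

```
frontend example_https
    bind 192.168.1.5:443 ssl crt /etc/ssl/private/certname.pem alpn h2,http/1.1
    acl is_jim path -i /jim.asmx
    http-request capture path len 64
    option forwardfor
    option http-server-close
    use_backend jim_webservers_https if is_jim
    default_backend bob_webservers_https
```

If the captured path differs between working and failing requests (trailing characters, case, extra segments), that points at the client or an intermediary rather than at the ACL itself.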

Posts: 1

Participants: 1

Read full topic
