Haproxy implements seven-layer load balancing

Table of Contents

Haproxy overview

Haproxy algorithms

Haproxy implements seven-layer load balancing

①Deploy nginx-server test page

②Deploy the load balancers (active/standby)

③Deploy keepalived for high availability

④Add health check on haproxy

⑤Test

Haproxy Overview

HAProxy is mainly used for layer-7 load balancing, but it can also do layer-4 load balancing.
Apache can also do layer-7 load balancing, but it is cumbersome to set up and rarely used for this in practice.
Load balancing maps onto the OSI model as follows:
Layer-7 load balancing: distributes requests using the application-layer HTTP protocol (URL, headers, and so on).
Layer-4 load balancing: distributes connections using the TCP protocol and port number.
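
In HAProxy the two approaches correspond to the mode directive. The sketch below is for illustration only and is not part of the deployment that follows; the frontend/backend names, the 8080 and 3306 ports, and the database address are made up:

frontend web_l7
    mode http                      # layer 7: HAProxy parses the HTTP request itself
    bind *:8080
    default_backend web_servers

backend web_servers
    mode http
    server web1 192.168.134.163:80 check

frontend mysql_l4
    mode tcp                       # layer 4: HAProxy only forwards the TCP stream by IP and port
    bind *:3306
    default_backend db_servers

backend db_servers
    mode tcp
    server db1 192.168.134.170:3306 check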

Haproxy algorithms:

1.roundrobin
Weighted round-robin. It is the most balanced and fair algorithm when the servers' processing times stay evenly distributed. The algorithm is dynamic, meaning that server weights can be adjusted at runtime; by design, however, it is limited to 4095 active servers per backend.
2.static-rr
Weighted round-robin like roundrobin, but static: adjusting a server's weight at runtime has no effect. In exchange, it has no design limit on the number of backend servers.
3.leastconn
New connection requests are dispatched to the backend server with the fewest current connections.
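
The algorithm is selected with the balance directive inside a backend. A minimal sketch (the backend name is a placeholder; the addresses are the nginx backends used later in this article):

backend web_servers
    balance roundrobin             # or: balance static-rr / balance leastconn
    server web1 192.168.134.163:80 weight 1 check
    server web2 192.168.134.164:80 weight 2 check   # receives roughly twice as many requests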

Haproxy implements seven-layer load balancing

keepalived + haproxy

192.168.134.165 master

192.168.134.166 slave

192.168.134.163 nginx-server

192.168.134.164 nginx-server

192.168.134.160 VIP (Virtual IP)

①Deploy nginx-server test page

nginx is deployed on both backend servers, each with its own test page, so the balancing result is easy to verify.

[root@server03 ~]# yum -y install nginx
[root@server03 ~]# systemctl start nginx
[root@server03 ~]# echo "webserver01..." > /usr/share/nginx/html/index.html

[root@server04 ~]# yum -y install nginx
[root@server04 ~]# systemctl start nginx
[root@server04 ~]# echo "webserver02..." > /usr/share/nginx/html/index.html
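
Before configuring HAProxy, it is worth confirming from the load balancer that both test pages respond; for example:

[root@server01 ~]# curl http://192.168.134.163
webserver01...
[root@server01 ~]# curl http://192.168.134.164
webserver02...
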
②Deploy the load balancers (active/standby)
[root@server01 ~]# yum -y install haproxy
[root@server01 ~]# vim /etc/haproxy/haproxy.cfg
global
    log 127.0.0.1 local2 info
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    nbproc 1
defaults
    mode http
    log global
    retries 3
    option redispatch
    maxconn 4000
    timeout connect 5000
    timeout client 50000
    timeout server 50000
listen stats
    bind *:81
    stats enable
    stats uri /haproxy
    stats auth admin:123
frontend web
    mode http
    bind *:80
    option httplog
    acl html url_reg -i \.html$
    use_backend httpservers if html
    default_backend httpservers
backend httpservers
    balance roundrobin
    server http1 192.168.134.163:80 maxconn 2000 weight 1 check inter 1s rise 2 fall 2
    server http2 192.168.134.164:80 maxconn 2000 weight 1 check inter 1s rise 2 fall 2

[root@server01 ~]# systemctl start haproxy
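
Since the backend is balanced with roundrobin and equal weights, repeated requests to the master should alternate between the two test pages. A quick check (this loop is just one way to do it):

[root@server01 ~]# for i in $(seq 4); do curl -s http://192.168.134.165; done
webserver01...
webserver02...
webserver01...
webserver02...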

Then open the HAProxy stats page in a browser: http://192.168.134.165:81/haproxy (port and URI come from the listen stats section above), logging in with the stats auth credentials. The master and slave each expose their own stats page.

Explanation of the main fields on the stats page
Queue
Cur: current number of queued requests
Max: maximum number of queued requests observed
Limit: queue limit

Errors
Req: request errors
Conn: connection errors

Server list:
Status: UP (the backend is alive) or DOWN (the backend is unreachable)
LastChk: result and elapsed time of the most recent health check on the backend server
Wght: the server's weight

③Deploy keepalived for high availability

Note: the master and slave use different priorities but the same virtual router id (virtual_router_id); in addition, the slave is configured with nopreempt (it does not preempt resources).

master:

[root@server01 ~]# yum -y install keepalived
[root@server01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id director1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.134.160/24
    }
}

[root@server01 ~]# systemctl start keepalived

slave:

[root@localhost ~]# yum -y install keepalived
[root@localhost ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id director2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    nopreempt
    virtual_router_id 80
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.134.160/24
    }
}
[root@localhost ~]# systemctl start keepalived

Check which machine currently holds the VIP.
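
A minimal check, assuming the interface name ens33 from the configuration above:

[root@server01 ~]# ip addr show ens33 | grep inet
[root@localhost ~]# ip addr show ens33 | grep inet

The VIP 192.168.134.160 should appear as a secondary address on the master's ens33 and should not be present on the slave.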

④Add haproxy health check

Do this on both load balancers: keepalived periodically runs an external script, and the script stops keepalived on the local machine when haproxy has failed, so that the VIP fails over to the other node.

[root@server01 ~]# vim /etc/keepalived/check.sh
#!/bin/bash
  /usr/bin/curl -I http://localhost &>/dev/null
if [ $? -ne 0 ]; then
# /etc/init.d/keepalived stop
        systemctl stop keepalived
fi
[root@server01 ~]# chmod a+x /etc/keepalived/check.sh
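
One way to sanity-check the script by hand before relying on it (this stops keepalived when haproxy is down, so start both services again afterwards):

[root@server01 ~]# systemctl stop haproxy
[root@server01 ~]# /etc/keepalived/check.sh
[root@server01 ~]# systemctl is-active keepalived    # expected: inactive
[root@server01 ~]# systemctl start haproxy keepalived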

Add a vrrp_script check_haproxy block to the keepalived configuration and reference it from track_script inside the vrrp_instance.

! Configuration File for keepalived

global_defs {
   router_id director1
}
vrrp_script check_haproxy {
   script "/etc/keepalived/check.sh"
   interval 5
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.134.160/24
    }
    track_script {
        check_haproxy
    }
}

Restart keepalived

[root@server01 ~]# systemctl restart keepalived
⑤Test

Stop the master's haproxy service; the check script then stops the master's keepalived as well, and the VIP moves over to the slave.

  • Stop the master's haproxy service and view the VIP (commands sketched below)

  • Check the slave's IP addresses: the VIP has moved there

  • Check in the web interface that the service still responds through the VIP
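
These steps might be carried out roughly as follows (on the master, on the slave, then from any host that can reach the VIP):

On the master:
[root@server01 ~]# systemctl stop haproxy
[root@server01 ~]# ip addr show ens33 | grep 192.168.134.160    # should print nothing: the VIP is gone

On the slave:
[root@localhost ~]# ip addr show ens33 | grep 192.168.134.160   # the VIP should now be listed here

From any client:
[root@server03 ~]# curl http://192.168.134.160                  # should return one of the two test pages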

Refreshing the page through the VIP still alternates between the two test pages: the first refresh returns one backend's page and the second refresh returns the other, confirming that load balancing keeps working after failover.