Nginx operations (load balancing, dynamic/static separation, session persistence, anti-hotlinking)

1. Control nginx service through nginx command

nginx -c /path/nginx.conf        # start nginx with a specific configuration file
nginx -s reload                  # reload the configuration so changes take effect
nginx -s reopen                  # reopen the log files
nginx -s stop                    # stop nginx quickly
nginx -s quit                    # stop nginx gracefully (finish serving current requests)
nginx -t                         # test whether the current configuration file is correct
nginx -t -c /path/to/nginx.conf  # test a specific configuration file

2. Monitor the working status of nginx through the stub_status module (/etc/nginx/conf.d/default.conf)

# Add the following inside the server block:
location /nginx-status {
    stub_status on;
    access_log /var/log/nginx/nginxstatus.log;  # location of the access log for this page
    auth_basic "nginx-status";                  # enable basic authentication (realm string shown to clients)
    auth_basic_user_file /etc/nginx/htpasswd;   # password file used for authentication
}
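After a reload, requesting /nginx-status (with the configured credentials) returns a plain-text page in the standard stub_status format. The numbers below are placeholders, not real output:

```
Active connections: 2
server accepts handled requests
 16 16 31
Reading: 0 Writing: 1 Waiting: 1
```

Active connections is the number of currently open client connections; accepts, handled, and requests are cumulative counters since startup; Reading/Writing/Waiting break the active connections down by state.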

3. Create the authentication password file and add user mm; the password is encrypted with MD5

yum -y install httpd-tools

# htpasswd is a command-line tool from the Apache httpd project, used to generate password files for HTTP basic authentication

htpasswd -c -m /etc/nginx/htpasswd mm

4. Use limit_rate to limit the rate at which nginx sends response data to a client

location / {
    root /var/www/nginx/;
    index index.html index.htm;
    limit_rate 2k;  # limit each connection to 2 KB/s
}
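limit_rate pairs naturally with limit_rate_after, which lets the first part of a response go out at full speed before throttling kicks in (useful for media that should start playing immediately). A sketch; the /downloads/ location is hypothetical:

```nginx
location /downloads/ {
    root /var/www/nginx/;
    limit_rate_after 500k;  # the first 500 KB are sent at full speed
    limit_rate 50k;         # after that, each connection is limited to 50 KB/s
}
```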

5. Check the error log

[root@localhost ~]# cat /var/log/nginx/error.log
2023/10/27 04:00:57 [error] 6702#0: *14 "/root/html/index.html" is forbidden (13: Permission denied), client: 10.219.24.1, server: web.1000cc.com, request: "GET / HTTP/1.1", host: "web.1000cc.com"

6. Nginx load balancing

When nginx proxies two web servers, the default load-balancing algorithm is round-robin. If web1 goes down, nginx still sends the request to web1 first; after the connection fails or times out, it passes the request on to web2, so service continues.

The following configuration defines the pool of two web machines. Note that server directives inside an upstream block take host:port only, without the http:// scheme:

upstream testapp {
    server 10.0.105.199:8081;
    server 10.0.105.202:8081;
}
server {
    listen 80;
    server_name localhost;
    location / {
        proxy_pass http://testapp;  # requests are forwarded to the servers defined in testapp
    }
}

1. Hot standby: append backup after the IP:port of the second server to mark it as a hot standby.

2. Round-robin: nginx defaults to round-robin, with every server's weight defaulting to 1, so requests alternate between the servers: abababab…

3. Weighted round-robin: append weight to each server line, e.g.
   server 172.17.14.2:8080 weight=1;
   server 172.17.14.3:8080 weight=2;

4. ip_hash: nginx sends requests from the same client IP to the same backend server.

5. Nginx load-balancing status parameters

  • down: the server temporarily does not participate in load balancing.

  • backup: reserved backup machine. It only receives requests when all other non-backup machines fail or are busy, so it carries the least load.

  • max_fails: the number of allowed failed requests, default 1. When the maximum is exceeded, the error defined by proxy_next_upstream is returned.

  • fail_timeout: the time, in seconds, the server is suspended after max_fails failures; used together with max_fails.

 upstream myweb {
      server 172.17.14.2:8080 weight=2 max_fails=2 fail_timeout=2;
      server 172.17.14.3:8080 weight=1 max_fails=2 fail_timeout=1;
    }
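max_fails and fail_timeout interact with proxy_next_upstream, which decides which failures make nginx retry the request on the next server in the pool. A hedged sketch of a proxy server block using the myweb pool above:

```nginx
server {
    listen 80;
    server_name localhost;
    location / {
        proxy_pass http://myweb;
        # retry on the next upstream server when the connection fails,
        # times out, or the backend answers with a 5xx error
        proxy_next_upstream error timeout http_500 http_502 http_503;
        proxy_connect_timeout 2s;  # give up quickly on an unreachable backend
    }
}
```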

7. Nginx session persistence

ip_hash (source-IP hash algorithm)

upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com down;
}

ip_hash hashes the client's source address so that requests from the same client always go to the same backend server, unless that server is unavailable.

Drawbacks: when a backend server goes down, the sessions mapped to it are lost; clients behind the same NAT gateway (e.g. a shared LAN) are all forwarded to one backend, which can cause load imbalance; and it does not work well behind a CDN or another front-end proxy, because nginx then sees the proxy's IP instead of the real client IP.
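One way to keep session persistence without these source-IP drawbacks is to hash on something other than the client address. The generic hash directive (available since nginx 1.7.2) can key on a session cookie instead; a sketch, assuming the application issues a JSESSIONID cookie:

```nginx
upstream backend {
    # consistent (ketama) hashing keeps most client-to-server mappings
    # stable when a server is added or removed
    hash $cookie_jsessionid consistent;
    server backend1.example.com;
    server backend2.example.com;
}
```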

8. Nginx dynamic/static separation

Use regular-expression location matching to route requests to different servers: dynamic requests (.php, .jsp) go to the application pool, static files go to the static pool.

(The dynamic backend needs PHP; install PHP there first.)

upstream static {
    server 10.0.105.196:80 weight=1 max_fails=1 fail_timeout=60s;
}
upstream php {
    server 10.0.105.200:80 weight=1 max_fails=1 fail_timeout=60s;
}
server {
    listen 80;
    server_name localhost;
    # dynamic resources
    location ~ \.(php|jsp)$ {
        proxy_pass http://php;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    # static resources
    location ~ .*\.(html|gif|jpg|png|bmp|swf|css|js)$ {
        proxy_pass http://static;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

9. Nginx anti-hotlinking

Given two websites A and B, if website A embeds images hosted on website B, that is hotlinking. Anti-hotlinking prevents A from using B's images.

none: allows requests with no Referer header to access the resource; direct/local access works, but pages on other sites cannot embed it.

blocked: allows requests whose Referer exists but has been stripped by a firewall or proxy so that it does not start with http:// or https://.

server_names: only requests whose Referer matches the listed domain names/IPs (a whitelist) may access the resource.

Experiment: Prepare two machines,

server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;

        valid_referers none blocked *.cc.com 10.0.105.202;
        if ($invalid_referer) {
            return 502;
        }
    }
    location ~ .*\.(gif|jpg|png|jpeg)$ {
        root /usr/share/nginx/html;

        valid_referers qf.com 10.0.105.202;
        if ($invalid_referer) {
            return 403;
        }
    }
}

On the second machine (the client), configure nginx to serve a test page whose body simply contains the text qf.com:

[root@nginx-server nginx]# vim index.html
qf.com
Test without a Referer header:
[root@nginx-server nginx]# curl -I "http://10.0.105.202/test1.png"
HTTP/1.1 200 OK
Server: nginx/1.16.0
Date: Thu, 27 Jun 2019 16:21:13 GMT
Content-Type: image/png
Content-Length: 235283
Last-Modified: Thu, 27 Jun 2019 11:27:11 GMT
Connection: keep-alive
ETag: "5d14a80f-39713"
Accept-Ranges: bytes

Test with an illegal Referer:
[root@nginx-server nginx]# curl -e http://www.baidu.com -I "http://10.0.105.202/test.jpg"
HTTP/1.1 403 Forbidden
Server: nginx/1.16.0
Date: Thu, 27 Jun 2019 16:22:32 GMT
Content-Type: text/html
Content-Length: 153
Connection: keep-alive

Test with a legal Referer:
[root@nginx-server nginx]# curl -e http://10.0.105.202 -I "http://10.0.105.202/test.jpg"
HTTP/1.1 200 OK
Server: nginx/1.16.0
Date: Thu, 27 Jun 2019 16:23:21 GMT
Content-Type: image/jpeg
Content-Length: 27961
Last-Modified: Thu, 27 Jun 2019 12:28:51 GMT
Connection: keep-alive
ETag: "5d14b683-6d39"
Accept-Ranges: bytes