Nginx with load balancing and dynamic and static separation: the perfect combination for building high-performance web applications

Table of Contents

Preface

1. Introduction to Nginx

1. What is Nginx?

2. Characteristics of Nginx

3. Where is Nginx used?

4. How to use Nginx

5. Advantages and disadvantages of Nginx

6. Nginx application scenarios

2. Load balancing and dynamic and static separation

1. Load balancing

2. Dynamic and static separation

3. Configuring Nginx load balancing to serve back-end data for a front-end/back-end separated project

1. Nginx installation

2. Tomcat load balancing

3. Deploying the back-end project behind the load balancer

4. Package the front-end project and deploy it to the Nginx server on Linux

1. Front-end project packaging

2. Dynamic and static separation and front-end project deployment

3. Add the mapping between IP and domain name


Preface

In today’s Internet era, high-concurrency access has become one of the major challenges faced by web applications. To ensure system stability and a good user experience, effective measures are needed to meet this challenge. This article introduces how to use Nginx with load balancing and dynamic and static separation to build high-performance web applications.

1. Introduction to Nginx

1. What is Nginx?

Nginx is an open source lightweight web server and reverse proxy server. It was created by Russian developer Igor Sysoev and first released in 2004. Nginx adopts an event-driven, asynchronous non-blocking IO model and is known for its high performance. It can handle tasks such as static files, reverse proxy, load balancing, static and dynamic separation, and is suitable for building high-concurrency and high-reliability web applications.

Detailed description

  1. Server Role:

    • Web server: Nginx can be used as a static file server to efficiently handle the delivery of static resources (such as HTML, CSS, JavaScript, images, etc.). It uses an efficient file reading and writing mechanism and can respond to client requests quickly.
    • Reverse proxy server: Nginx can forward client requests to the back-end server for processing, hiding the real server information and improving the security and reliability of the system. It also supports load balancing algorithms to distribute requests to multiple backend servers to share server load and improve system performance and availability.
  2. Concurrency processing capability:

    • Asynchronous non-blocking IO model: Nginx uses an asynchronous non-blocking event-driven model, which can handle a large number of concurrent connections in a single process without creating a new thread or process for each connection. This saves system resources and significantly improves the system’s concurrent processing capabilities.
    • Multi-process/thread working mode: Nginx supports multi-process/thread working mode. Each process/thread runs independently without affecting each other. This design allows Nginx to take full advantage of multi-core processors and maintain stability and performance in the face of high concurrent requests.
  3. Advanced features and modular design:

    • Load balancing: Nginx provides multiple load balancing algorithms, such as round robin, IP hash, and least connections, which distribute requests across multiple backend servers to achieve load balancing.
    • Separation of dynamic and static requests: Nginx can process dynamic requests and static requests separately, directly return static files, or forward dynamic requests to back-end applications for processing, improving system performance and response speed.
    • Caching: Nginx supports caching of static files and dynamic content, reducing the load on the back-end server and providing faster access speeds.
    • Security: Nginx has some security protection functions, such as DDoS and DoS attack protection, SSL/TLS encryption, access control lists, etc., to protect servers and applications from malicious attacks.
    • Scalability: Nginx’s modular design allows users to select the required modules according to their needs, and supports the expansion of third-party plug-ins to meet the needs of different scenarios.
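As an illustration of the caching feature mentioned above, a minimal proxy-cache setup might look like the following sketch (the zone name, cache path, and upstream address are assumptions, not values from this article):

```nginx
http {
    # Define an on-disk cache zone named "mycache" (illustrative name and path)
    proxy_cache_path /var/cache/nginx keys_zone=mycache:10m max_size=1g;

    server {
        listen 80;
        location / {
            proxy_cache mycache;             # use the cache zone defined above
            proxy_cache_valid 200 302 10m;   # cache successful responses for 10 minutes
            proxy_pass http://127.0.0.1:8080;
        }
    }
}
```

With this in place, repeated requests for the same resource can be answered from the cache instead of hitting the backend each time.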

2. Characteristics of Nginx

  • High performance: Nginx adopts an asynchronous non-blocking IO model, which can handle a large number of concurrent connections and has excellent performance. It can effectively handle high concurrent requests and provide fast response speed.
  • Stable and reliable: Nginx’s multi-process/thread working method allows it to make full use of multi-core processors and maintain stability in the face of high concurrent requests. It also has excellent fault tolerance, even if one process/thread has a problem, other processes/threads can still work normally.
  • Lightweight: Nginx has a relatively small amount of code and takes up less system resources. Its memory consumption is relatively low, making it suitable for use in resource-constrained environments.
  • Scalability: Nginx has a modular architecture, you can select the required modules according to your needs, and supports the expansion of third-party plug-ins. This makes Nginx very flexible and scalable.

3. Where is Nginx used?

Nginx can run on various operating systems, including Linux, Unix, Windows, etc. It can serve as an independent web server to directly handle client requests, or it can serve as a reverse proxy server to forward requests to the back-end application server. Nginx is widely used in the Internet field, especially suitable for web applications and distributed systems that need to handle a large number of concurrent requests.

4. How to use Nginx

Nginx is set up and managed through configuration files. Users can edit configuration files using a simple text editor to define the server’s behavior and rules. Nginx’s configuration syntax is concise and clear, easy to understand and maintain. By reloading the configuration file, Nginx configuration can be dynamically modified without restarting the server.
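As a minimal illustration of this configuration style, a basic server block might look like the sketch below (the domain name and file path are placeholders, not values from this article):

```nginx
server {
    listen      80;                    # port to listen on
    server_name example.com;           # hypothetical domain
    location / {
        root  /usr/share/nginx/html;   # directory containing static files
        index index.html;
    }
}
```

After editing, `nginx -t` checks the configuration syntax and `nginx -s reload` applies the change without restarting the server.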

5. Advantages and disadvantages of Nginx

Advantages:

  • High performance: Using an asynchronous non-blocking IO model, it can handle a large number of concurrent connections and has excellent performance.
  • Lightweight: consumes less system resources and is suitable for running in resource-constrained environments.
  • Highly reliable: multi-process/thread working mode and strong fault tolerance, maintaining system stability.
  • Scalability: Supports modular architecture and third-party plug-ins, with good flexibility and scalability.
  • Simple configuration: The configuration syntax is concise and clear, easy to understand and maintain.

Disadvantages:

  • Dynamic content processing capabilities are relatively weak: Compared with some web servers specifically designed to handle dynamic content, Nginx’s dynamic content processing capabilities may be slightly weaker.
  • Learning curve: For beginners, Nginx may require a certain learning cost, especially for complex configuration requirements.

6. Nginx application scenarios

  • High-concurrency web applications: Nginx’s high performance and concurrent processing capabilities make it an ideal choice for handling highly concurrent web applications, such as e-commerce websites, social networks, online media, etc.
  • Reverse proxy and load balancing: Nginx can be used as a reverse proxy server to forward requests to multiple back-end servers to achieve load balancing and improve system availability.
  • Static file service: Nginx can transmit static files quickly and efficiently, and is suitable for scenarios where static resources such as images, audio, and videos are distributed.
  • Security protection and encryption: Nginx has some security protection functions, such as DDoS and DoS attack protection, SSL/TLS encryption, etc., which can protect servers and applications from malicious attacks.
  • CDN acceleration: Nginx can be used as the core component of the content delivery network (CDN) to provide faster access speeds and better user experience by caching static files and dynamic content.

2. Load balancing and dynamic and static separation

1. Load balancing

Load balancing refers to distributing client requests across multiple servers so that the load is shared. Normally, the load balancer selects an available server based on a certain algorithm (such as round robin, weighting, or IP hash) and forwards the client request to that server. The main purpose of load balancing is to improve the availability, stability, and throughput of the system.

Its typical goals are to:

  • build a cluster of multiple servers that together provide a web application’s services to the outside world;
  • ensure the stability of the application itself;
  • reduce the pressure on each individual server, respond to user requests efficiently, and improve the user experience.

Implementation steps

  1. The client sends a request to the load balancer. The request can be an HTTP, TCP, or UDP request.

  2. The load balancer selects a server. After receiving the request, the load balancer selects an available server based on a certain algorithm (such as round robin, weighted round robin, or IP hash) and forwards the request to it.

  3. The server processes the request and returns the response to the load balancer.

  4. The load balancer returns the response to the client.

Common load balancing algorithms include:

  1. Round robin. The round robin algorithm is one of the simplest load balancing algorithms: requests are distributed to each server in turn. It is suitable when all servers have the same performance.

  2. Weighted round robin. Weighted round robin adds weight control on top of round robin. Servers can be weighted according to differences in performance to achieve a more balanced distribution of load.

  3. IP hash. The IP hash algorithm computes a hash of the client’s IP address and sends the request to the server corresponding to that hash. It is suitable for web applications that need to maintain session state.

  4. Least connections. The least connections algorithm selects the server with the fewest current connections, i.e. the smallest load. It is suitable for long-connection scenarios.

  5. Shortest response time. The shortest response time algorithm selects the server with the shortest processing time. It is suitable for web applications that require fast responses.
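In Nginx, these strategies correspond to directives in an upstream block. A sketch (server addresses and weights are illustrative):

```nginx
upstream backend {
    # Round robin is the default; adding weights turns it into weighted round robin.
    server 127.0.0.1:8080 weight=1;
    server 127.0.0.1:8082 weight=2;

    # Alternatives, enabled one at a time:
    # ip_hash;      # IP hash: requests from the same client IP go to the same server
    # least_conn;   # least connections
}
```

A response-time-based strategy (the least_time directive) is only available in the commercial NGINX Plus.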

2. Dynamic and static separation

Static and dynamic separation is an application architecture design pattern that separates dynamically generated content from static resources and processes and stores them separately on different servers. Its main purpose is to improve the performance and scalability of the website.

Dynamically generated content usually refers to pages generated on the fly by a web server or application server, such as ASP, JSP, PHP, etc. Static resources usually refer to files that do not need to be generated dynamically by the server, such as HTML, CSS, JavaScript, images, videos, etc.

Implementation

  1. Through a reverse proxy server. A reverse proxy server can distinguish dynamic requests from static requests based on the request’s URL path and forward them to different servers. For example, it can forward all static resource requests (such as .jpg, .png, .js, .css, etc.) to a dedicated static file server, while dynamic requests (such as .jsp, .php, .asp, etc.) are forwarded to the application server.

  2. Through a CDN (Content Delivery Network). A CDN is a static resource acceleration service based on a distributed network. It caches static resources on node servers around the world, allowing users to fetch resources from the node closest to them, which improves access speed and reliability. A CDN is usually used together with a reverse proxy server to achieve more efficient dynamic and static separation.

  3. Directly on the web server. Some web servers can implement dynamic and static separation directly through configuration, such as Apache’s mod_rewrite module and Nginx’s location directive. In this way, all static resource requests (such as .jpg, .png, .js, .css, etc.) are routed to the static file directory, while dynamic requests (such as .jsp, .php, .asp, etc.) are forwarded to the application server.
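A sketch of the third approach in Nginx, separating requests by file extension (the root directory and backend address are assumptions):

```nginx
# Static resources are served directly from disk
location ~* \.(jpg|png|gif|js|css)$ {
    root    /data/static;
    expires 7d;                        # allow browsers to cache static files
}
# Dynamic requests are proxied to the application server
location ~* \.(jsp|php|asp)$ {
    proxy_pass http://127.0.0.1:8080;
}
```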

Advantages of dynamic and static separation:

  1. Improve website performance Since static resources are usually accessed more frequently than dynamic resources, processing them separately can reduce the load on the application server and improve the performance and response speed of the website.

  2. Improve scalability Through dynamic and static separation, we can deploy static resources to multiple servers to achieve better scalability and fault tolerance.

  3. Improve security Because static resources typically do not contain confidential information, handling them separately from dynamic content can reduce potential security risks.

3. Configuring Nginx load balancing to serve back-end data for a front-end/back-end separated project

1. Nginx installation

Visit the Nginx official website and download the Nginx installation package (tar.gz format):

wget http://nginx.org/download/nginx-1.13.7.tar.gz

or

First download it from the official website, then upload it directly to the Linux server.

Install the four dependencies required by Nginx:

yum -y install gcc zlib zlib-devel pcre-devel openssl openssl-devel

Unzip the Nginx installation package

tar -xvf nginx-1.13.7.tar.gz

Install nginx

# Enter the unpacked source directory
cd nginx-1.13.7
# Configure the build; the two extra modules are added in anticipation of installing an SSL certificate later
./configure --with-http_stub_status_module --with-http_ssl_module


Compile and copy the built files to the installation directory

# Compile and install
make && make install

Start nginx

# Start
./nginx
# Restart
./nginx -s reload

# Stop
./nginx -s stop

View the network connection information corresponding to the specified port number

# If lsof is not installed, install it first
yum install -y lsof
# Check (replace <port> with the actual port number)
lsof -i:<port>

Set up the firewall to open port 80

firewall-cmd --zone=public --add-port=80/tcp --permanent

Update firewall rules and query the firewall open port list

firewall-cmd --reload && firewall-cmd --list-ports

Start Nginx and view the effect

2. Tomcat load balancing

Here we prepare two Tomcat instances to demonstrate the load balancing effect.

# Copy a second Tomcat instance, which will listen on port 8082
cp -r apache-tomcat-8.5.20/ apache-tomcat-8.5.20_8082/

To prevent port number conflicts, change the ports in conf/server.xml in apache-tomcat-8.5.20_8082.

Here I add 2 to every port number in this file.
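For reference, these are the lines that usually need changing, assuming the Tomcat defaults of 8005/8080/8009 (the other attributes follow the stock server.xml; closing tags are omitted):

```xml
<!-- Shutdown port: 8005 -> 8007 -->
<Server port="8007" shutdown="SHUTDOWN">

<!-- HTTP connector: 8080 -> 8082 (redirectPort 8443 -> 8445, since every port gets +2) -->
<Connector port="8082" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8445" />

<!-- AJP connector: 8009 -> 8011 -->
<Connector port="8011" protocol="AJP/1.3" redirectPort="8445" />
```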

To make testing easier, modify the content of webapps/ROOT/index.jsp in apache-tomcat-8.5.20_8082 so that the two instances can be told apart.

Start both Tomcat servers.

Next, we need to configure load balancing in Nginx, listing the Tomcat servers to balance across.

Edit /usr/local/nginx/conf/nginx.conf

In the following configuration, upstream tomcat_list defines a server cluster named tomcat_list that contains two backend servers (also called upstream servers). The first backend server has IP address 127.0.0.1, listens on port 8080, and has weight 1; the second also has IP address 127.0.0.1, listens on port 8082, and has weight 2.

The weight ranges from 1 to 65535. The server with the larger weight has a higher probability of being assigned to the request. By default, Nginx will use the round-robin method to evenly distribute requests to each server, but you can change the request distribution ratio between servers by modifying the weight parameters.

In this configuration, the 8082 server has weight 2 while the 8080 server has weight 1, so the 8082 server should receive roughly twice as many requests as the 8080 server.
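The configuration described above can be sketched as follows (only the upstream section is dictated by the text; the server block is filled in along common lines):

```nginx
upstream tomcat_list {
    server 127.0.0.1:8080 weight=1;   # first backend server
    server 127.0.0.1:8082 weight=2;   # second backend server, gets ~2x the traffic
}

server {
    listen 80;
    location / {
        proxy_pass http://tomcat_list;   # forward requests to the cluster
    }
}
```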

Next, restart the nginx proxy service

Demo effect:

You can see that responses from the 8082 server and the 8080 server appear in a ratio of about 2:1, which is determined by the weights configured above.

When user traffic is heavy, we can direct more requests to a stronger server. And when a server goes down or fails, the load balancer automatically forwards requests to the other available servers, achieving high availability and fault tolerance.

Now stop one server and check whether the project is still accessible.


Effect demonstration:

3. Deploying the back-end project behind the load balancer

Place the SSH project under the webapps directory of both Tomcat servers.

Restart and demonstrate the effect; then stop the 8080 service and demonstrate again.

4. Package the front-end project and deploy it to the Nginx server on Linux

1. Front-end project packaging

First test that the front end and back end of the project run normally.

Preparation before packaging

White screen problem

In the index.js file under the config folder of the project directory, change "/" in assetsPublicPath under the build object to "./".

The icon font of element-ui cannot be displayed normally.

// Solve the icon path loading error, in build/utils.js
publicPath: '../../'

Package the front-end project: open a cmd window in the front-end project directory and enter the command

npm run build

After packaging, a dist folder will be generated

The dynamic web project can now be accessed like a static project: open index.html in the dist folder to test it.

Next we can put the static project on the static server

2. Dynamic and static separation and front-end project deployment

How to separate dynamic and static requests

Filter by path:

if the request path carries the api prefix, route it to the dynamic web application server;

if it does not, serve it from the static resource server.

location / {
    # Solves the problem that history-mode routes cannot be opened directly;
    # this snippet is given on the vue-router official website.
    try_files $uri $uri/ /index.html;
}
location ^~/api/ {
    # ^~/api/ matches requests whose path starts with /api/. Because proxy_pass
    # ends with /, the part after /api/ is appended directly, i.e. the /api prefix is stripped.
    proxy_pass http://tomcat_list/;
}

Here we only distinguish whether a request goes to the static server or to the dynamic (back-end) server.

Of course, you can also distinguish which dynamic server to use by module, such as user management interfaces or order management interfaces, for finer-grained routing.

location ^~/xxx/ {
    proxy_pass http://tomcat_list/;
}

Upload front-end project

First create a folder named mypro under /usr/local/

Unzip the front-end project

# Install the unzip tool
yum install -y unzip
# Unzip the front-end project
unzip blog.zip

Nginx dynamic and static separation configuration

/usr/local/nginx/conf/nginx.conf

Restart nginx service

Go to /usr/local/nginx/sbin and restart Nginx

./nginx -s reload

Effect demonstration

3. Add the mapping between IP and domain name

Open C:\Windows\System32\drivers\etc\hosts on the local machine and add the mapping.

Add mapping relationship between IP and domain name
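For example, an entry might look like this (both the IP address and the domain name below are hypothetical placeholders):

```
192.168.126.128   www.mypro.com
```

After saving the file, requests to the domain name resolve to that IP on the local machine without any DNS involvement.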

Effect demonstration (demonstrated separately through IP and domain name)