[Linux] Nginx installation, load balancing and dynamic/static separation (front-end and back-end project deployment), plus front-end project packaging

1. Introduction to Nginx

1. Introduction

Nginx is a high-performance web server and reverse proxy server that also acts as a load balancer, HTTP cache, and security device. It is characterized by small memory footprint, high stability, strong concurrency, and easy expansion, so it has been widely used in the Internet field.

The following three points are summarized:

  1. Load balancing: distributes traffic across multiple servers
  2. Reverse proxy: lets external clients reach services on an internal network
  3. Separation of dynamic and static requests: determines whether a request is dynamic or static and routes it to the appropriate server

2. Usage scenarios

  1. Web server: Deploying web services through Nginx can improve the server’s concurrent processing capabilities, reduce response delays and the impact of network requests. It also supports a variety of load balancing algorithms and can automatically allocate traffic according to actual conditions.

  2. Reverse proxy server: Nginx can be used as a reverse proxy server to achieve load balancing of multiple back-end servers, and can allocate requests according to actual conditions, effectively improving the concurrent processing capabilities of the back-end servers.

  3. HTTP caching: Nginx’s HTTP caching mechanism can cache frequently accessed web pages, pictures, videos and other static resources locally, improving response speed and reducing server load.

  4. Security protection: Through configuration, Nginx can implement web application access control, DoS attack protection, IP blacklists and other functions.

3. Use of Nginx

Before deploying a project with Nginx, note that you need to understand the specific needs of your project and configure Nginx accordingly. Also make sure the server's firewall rules allow the relevant access ports. In addition, it is recommended to back up configuration files and project files to guard against unexpected situations.

The process is as follows:

  1. Install Nginx: First you need to install Nginx on the server. The specific installation method will vary depending on the server operating system. You can refer to the Nginx official documentation or operating system-related tutorials for installation.

  2. Configure Nginx: After the installation is complete, Nginx needs to be configured. It mainly includes setting the listening port, configuring the service proxy, setting up the load balancing, configuring the cache, etc. The Nginx configuration file is located at /etc/nginx/nginx.conf or /usr/local/nginx/conf/nginx.conf. You can make corresponding modifications according to actual needs.

  3. Start Nginx: After the configuration is completed, start Nginx through a terminal command or service management tool, for example by executing sudo service nginx start or /etc/init.d/nginx start.

  4. Deploy the project: Place the project file in the web root directory of Nginx and create corresponding subdirectories as needed. By default, the web root directory of Nginx is specified by the root parameter in the configuration file, usually /usr/share/nginx/html or /var/www/html.

  5. Configure project access: According to the needs of the project, add the corresponding site configuration in the Nginx configuration file. This mainly includes setting the domain name and port, specifying the access path, configuring HTTPS, setting access permissions, etc. (a minimal example is sketched after this list).

  6. Restart Nginx: After the project deployment is completed, you need to reload the Nginx configuration file to make it effective. Restart Nginx by executing sudo service nginx restart or /etc/init.d/nginx restart.

  7. Test access: Enter the IP address or domain name of the Nginx server in the browser, plus the corresponding access path. If the project page can be accessed normally, the deployment is successful.
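For steps 2 and 5 above, a minimal site configuration might look like the sketch below; the domain name and root path are placeholders rather than values tied to a specific project.

    # minimal site configuration (hypothetical domain and paths; adapt to your project)
    server {
        listen       80;                   # port the site listens on
        server_name  example.com;          # replace with your domain or server IP

        location / {
            root   /usr/share/nginx/html;  # web root where the project files are placed
            index  index.html index.htm;   # default pages
        }
    }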

2. Nginx installation

1. Install dependencies

Install the four dependencies (gcc, zlib, pcre and openssl, together with their devel packages) with one command; execute it and wait for the installation to complete.

yum -y install gcc zlib zlib-devel pcre-devel openssl openssl-devel

2. Download and decompress the installation package

If you have already downloaded the package, you can upload it to the server with your client tool.

If you have not downloaded it yet, execute the following command, and remember to run it in the directory where you want the package to be placed.

Download command: wget http://nginx.org/download/nginx-1.13.7.tar.gz
Decompression command: tar -xvf nginx-1.13.7.tar.gz

3. Install nginx

1. Enter the decompressed directory: cd nginx-1.13.7/

2. Generate the build configuration: to allow installing an SSL certificate later, add two extra modules and wait for the configuration to complete.
Command: ./configure --with-http_stub_status_module --with-http_ssl_module

3. Compile and install, then wait for the installation to complete.
Command: make && make install

4. Start

Enter the /usr/local/nginx/sbin/ directory and start: cd /usr/local/nginx/sbin/

Start: ./nginx

Restart: ./nginx -s reload

Shut down: ./nginx -s stop

Or, specify the configuration file to start: ./nginx -c /usr/local/nginx/conf/nginx.conf
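
Check the configuration for syntax errors before reloading: ./nginx -t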

5. Test

Install the lsof utility

Command: yum install -y lsof

After the installation is complete, check which process is listening on port 80 with: lsof -i:80

6. Set firewall port 80

Start the firewall: systemctl start firewalld

Open port 80: firewall-cmd --zone=public --add-port=80/tcp --permanent

Open port 8081: firewall-cmd --zone=public --add-port=8081/tcp --permanent

Reload the firewall rules: firewall-cmd --reload

List the open ports: firewall-cmd --zone=public --list-ports

Or combine the two: firewall-cmd --reload && firewall-cmd --zone=public --list-ports

7. Visit

Visit the server's IP address in a browser; you should see the default Nginx welcome page.

3. Tomcat load balancing

1. Preparation

Make sure there are no leftover projects or files in your Tomcat's apache-tomcat-8.5.20/webapps/ directory.

2. Prepare 2 tomcats

Prepare two Tomcats by executing the following command in the directory that contains apache-tomcat-8.5.20/.

Copy command: cp -r apache-tomcat-8.5.20/ apache-tomcat-8.5.20_8081/

3. Modify port

Open the server.xml file in the second Tomcat's conf/ directory to modify the ports

Edit file: vim server.xml

To avoid conflicts between the two Tomcats, increment the following ports by 1 (see the snippet after this list):

  1. HTTP port: default 8080, change to 8081
  2. Shutdown (remote service) port: default 8005, change to 8006
  3. AJP port: default 8009, change to 8010
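
For reference, after the changes the relevant lines in the second Tomcat's server.xml should look roughly like the following (the surrounding attributes are the Tomcat 8.5 defaults and may differ slightly in your file):

    <!-- shutdown (remote service) port: 8005 -> 8006 -->
    <Server port="8006" shutdown="SHUTDOWN">

    <!-- HTTP connector port: 8080 -> 8081 -->
    <Connector port="8081" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />

    <!-- AJP connector port: 8009 -> 8010 -->
    <Connector port="8010" protocol="AJP/1.3" redirectPort="8443" />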

4. Test port

Start both Tomcats by running ./startup.sh in each one's bin/ directory.

5. Server cluster

Go to /usr/local/nginx/conf/ (cd /usr/local/nginx/conf/) and edit the nginx.conf file

Edit: vim nginx.conf

Add the following code

 #Server cluster
    upstream tomcat_list { #cluster name
        server 127.0.0.1:8080 weight=1; #server 1; the higher the weight, the larger its share of requests
        server 127.0.0.1:8081 weight=2; #server 2
    }

In the default location / block of nginx.conf, comment out the root directive and add proxy_pass http://tomcat_list/; (see the sketch below).
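
After the change, the default location / block looks roughly like this (a sketch of the stock Nginx configuration with the edit applied):

    location / {
        #root   html;                     # original static root, now commented out
        index  index.html index.htm;
        proxy_pass http://tomcat_list/;   # forward requests to the upstream cluster
    }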

Enter /usr/local/nginx/sbin/: cd ../sbin/

Restart nginx: ./nginx -s reload

Finally, access the server's IP address directly in a browser; requests should now be distributed across the two Tomcats.

If one Tomcat stops, Nginx sends all requests to the remaining one.
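
This failover relies on Nginx's passive health checks. To control how quickly a stopped Tomcat is taken out of rotation, the max_fails and fail_timeout parameters can be added to each server line, for example:

    upstream tomcat_list {
        server 127.0.0.1:8080 weight=1 max_fails=2 fail_timeout=10s;  # marked down for 10s after 2 failed attempts
        server 127.0.0.1:8081 weight=2 max_fails=2 fail_timeout=10s;
    }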

4. Backend interface deployment

1. Import project startup

  1. Use your client tool to place the WAR package in the webapps folder of each of the two Tomcats (see the sketch after this list).
  2. If both Tomcats are running, stop them with ./shutdown.sh.
  3. Then start both Tomcat servers with ./startup.sh.
  4. For the MySQL data, see [Linux] Linux project deployment and change the access port number.
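
As a rough sketch, assuming the WAR file is named blog.war (a hypothetical name) and the two Tomcat directories are the ones created earlier, the deployment comes down to:

    # copy the WAR into both Tomcats (blog.war is a hypothetical name)
    cp blog.war apache-tomcat-8.5.20/webapps/
    cp blog.war apache-tomcat-8.5.20_8081/webapps/

    # stop both instances if they are running, then start them so the WAR is unpacked
    apache-tomcat-8.5.20/bin/shutdown.sh
    apache-tomcat-8.5.20_8081/bin/shutdown.sh
    apache-tomcat-8.5.20/bin/startup.sh
    apache-tomcat-8.5.20_8081/bin/startup.sh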

2. Test

Access the backend interface through the server's IP address and port to verify that it returns data.

5. Front-end deployment

1. Front-end packaging

1.1. Introduction

SPA (Single Page Application) is a web application that runs on a single page. It mainly uses JavaScript, Ajax and other technologies to dynamically load page content and provide a user experience like a desktop application. The main purpose of packaging the front-end SPA is to speed up page loading, reduce the number of resource requests, and ensure fast response of front-end applications.

1.2. Steps

  1. Run your project before packaging to make sure it works; this avoids unexpected trouble later.
  2. Modify assetsPublicPath in the build configuration of the SPA front-end project.
  3. Add publicPath in build/utils.js (both changes are sketched after this list).
  4. Open a command window (cmd) in the directory of your front-end SPA project.
  5. Enter the command: npm run build
  6. A dist folder appears in the directory; it contains the packaged front-end project.
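
What exactly needs changing depends on the scaffold; for a vue-cli 2 style project, the two edits from steps 2 and 3 usually look roughly like the sketch below (config/index.js is an assumption about where assetsPublicPath lives in your project):

    // config/index.js (build section): a relative public path lets dist/ work under any Nginx root
    build: {
      assetsPublicPath: './',
      // ...other options unchanged
    }

    // build/utils.js, inside ExtractTextPlugin.extract(): fix asset URLs referenced from extracted CSS
    return ExtractTextPlugin.extract({
      use: loaders,
      publicPath: '../../',
      fallback: 'vue-style-loader'
    })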

2. Front-end deployment

2.1. Import and decompress files

  1. Create a new folder mypro in /usr/local/ and put your front-end project zip in it.
  2. Go inside mypro.
  3. Unzip the front end:
    1. Install the unzip utility: yum install -y unzip
    2. Unzip: unzip blog.zip

2.2. Configuration

Edit the nginx.conf file in /usr/local/nginx/conf/.

Static resource configuration

Change the root path to the directory you just decompressed, and comment out proxy_pass (a sketch follows).
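
Assuming the build was unzipped to /usr/local/mypro/blog (the folder name depends on your zip), the static resource part of the server block would look roughly like this:

    location / {
        root   /usr/local/mypro/blog;     # path of the unzipped front-end build
        index  index.html index.htm;
        #proxy_pass http://tomcat_list/;  # commented out: / now serves static files
    }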

Dynamic resource configuration

Add the block below and save. ^~ /api/ matches requests whose path begins with /api/. Because proxy_pass ends with a /, the part of the path after /api/ is appended directly to the upstream address, i.e. the /api prefix is stripped (for example, /api/users is forwarded as /users).

    location ^~ /api/ {
        proxy_pass http://tomcat_list/;
    }

Enter: cd /usr/local/nginx/sbin/

Restart nginx: ./nginx -s reload

The front end can now be seen by visiting the server's IP address

2.3. Load mapping relationship

Check which address your front-end project sends its requests to. If it already points to the correct server, nothing needs to be modified.

Otherwise, edit the C:\Windows\System32\drivers\etc\hosts file on the host machine and map the domain name that the project requests to the server's IP address.
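
For example, if the server's IP is 192.168.1.100 and the front end sends its requests to www.myblog.com (both values are hypothetical), add a line like:

    192.168.1.100    www.myblog.com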

2.4. Access

With that in place, the host machine can access the deployed project through the configured domain name.