Table of Contents
Introduction
Preparation
1. The node and corresponding IP of each server
2. The architecture of each server
1. Setting up services on node-1
Deploy redis
1. Use yum to install redis
2. Modify configuration file
3. Start the service
Deploy nginx
1. Install dependency packages
2. Download nginx installation package
3. Unpack, compile, and install
4. Modify configuration file
5. Start the service
Deploy filebeat
1. Use wget to download the filebeat installation package
2. Unpack after the download completes
3. Modify configuration file
4. Start the service
5. Check whether the logs arrived in redis
2. Setting up services on node-2
Build es cluster
1. Use wget to download the installation package and install it
2. Give the elasticsearch user a login shell and check the installation path
3. Modify the es configuration file
4. Start the service
3. Setting up services on node-3
Build es cluster
1. Install ES
2. Modify the es configuration file
3. Start the service
4. Deploy the elasticsearch-head-master plug-in
5. Deploy logstash
6. Deploy kibana
Introduction
ELK is a popular open-source log management and data analysis platform. Its three core components are Elasticsearch, Logstash, and Kibana; this build also uses Filebeat and Redis:

- Elasticsearch: a distributed search engine for storing, searching, and analyzing data. Elasticsearch efficiently handles large amounts of structured and unstructured data, letting users search and retrieve it quickly.
- Logstash: the server-side data processing pipeline for data collection, transformation, and transfer. Logstash can collect data from various sources (log files, databases, message queues, etc.), process and format it, and send it on to Elasticsearch or another target store.
- Kibana: the user interface for data visualization and analysis. Kibana provides powerful dashboards, charts, and search capabilities that let users intuitively understand and analyze the data stored in Elasticsearch.
- Filebeat: a lightweight open-source log file shipper. Filebeat is installed on the clients whose data needs collecting, with the target directories and log format specified; it ships the collected data to Logstash for parsing, or directly to Elasticsearch for storage.
- Redis: a buffer between Filebeat and Logstash, caching data between collection and transfer to Elasticsearch. This smooths out bursts of data and reduces the load on Elasticsearch.
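Taken together, the components above form a single pipeline. The sketch below summarizes the flow built in this article; the Redis list name `web_log` and the node-1 IP come from the configuration shown later, so adjust them if yours differ:

```shell
# Data flow assembled in this article:
#   nginx (node-1) -> filebeat (node-1) -> redis list "web_log" (node-1)
#     -> logstash (node-3) -> elasticsearch cluster (node-2/node-3) -> kibana (node-3)
# While the pipeline runs, the Redis list length shows how far Logstash lags behind:
redis-cli -h 192.168.42.130 llen web_log
```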
Preparation
1. The node of each server and the corresponding IP
| hostname | ip |
| --- | --- |
| node-1 | 192.168.42.130 |
| node-2 | 192.168.42.131 |
| node-3 | 192.168.42.132 |
2. The architecture built by each server
| Node | Services built |
| --- | --- |
| node-1 | redis + nginx + filebeat |
| node-2 | es cluster |
| node-3 | es cluster + logstash + kibana |
1. Setting up services on node-1
Deploy redis
1. Use yum to install redis
yum install epel-release -y
yum install redis -y
2. Modify configuration file
vim /etc/redis.conf

bind 0.0.0.0     # line 61
daemonize yes    # line 128
3. Start service
systemctl start redis
Deploy nginx
1. Install dependent packages
yum -y install gcc gcc-c++ autoconf automake libtool make openssl openssl-devel pcre pcre-devel wget
2. Download nginx installation package
wget http://nginx.org/download/nginx-1.8.1.tar.gz -P /usr/local/src/
3. Unpack, compile and install
cd /usr/local/src/
tar xzf nginx-1.8.1.tar.gz
cd /usr/local/src/nginx-1.8.1
./configure \
  --prefix=/usr/local/nginx \
  --with-http_ssl_module \
  --with-http_flv_module \
  --with-http_stub_status_module \
  --with-http_gzip_static_module \
  --with-pcre
make && make install
4. Modify configuration file
Edit the installed configuration (under the --prefix set above, not the source tree) and add the log format inside the http block:

vim /usr/local/nginx/conf/nginx.conf

log_format main '{ "time_local": "$time_local", '
                '"remote_addr": "$remote_addr", '
                '"remote_user": "$remote_user", '
                '"body_bytes_sent": "$body_bytes_sent", '
                '"request_time": "$request_time", '
                '"status": "$status", '
                '"host": "$host", '
                '"request": "$request", '
                '"request_method": "$request_method", '
                '"uri": "$uri", '
                '"http_referrer": "$http_referer", '
                '"http_x_forwarded_for": "$http_x_forwarded_for", '
                '"http_user_agent": "$http_user_agent" '
                '}';
access_log /var/log/nginx/access.log main;
Then create the log directory referenced by the access_log directive:
mkdir -p /var/log/nginx
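Since the log_format above emits one JSON object per request, every access-log line can be machine-parsed downstream. A quick sanity check of the shape (the sample line is invented for illustration, with only a few of the fields):

```shell
# A made-up line in the same shape as the log_format above; valid JSON parses cleanly
sample='{"time_local": "01/Jan/2024:00:00:00 +0800", "remote_addr": "127.0.0.1", "status": "200"}'
echo "$sample" | python3 -m json.tool > /dev/null && echo "valid JSON"
```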
5. Start service
/usr/local/nginx/sbin/nginx
Deploy filebeat
1. Use wget to download the filebeat installation package
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.9.2-linux-x86_64.tar.gz -P /opt
2. Unpack after the download completes
tar -xzf /opt/filebeat-7.9.2-linux-x86_64.tar.gz -C /usr/local/
# rename the unpacked directory
mv /usr/local/filebeat-7.9.2-linux-x86_64 /usr/local/filebeat
3. Modify configuration file
# back up the original configuration file
mv /usr/local/filebeat/filebeat.yml /usr/local/filebeat/filebeat.yml.bak
# write the new configuration file
cat > /usr/local/filebeat/filebeat.yml << EOF
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/*.log
output.redis:
  hosts: ["192.168.42.130:6379"]
  key: "web_log"
EOF
4. Start service
cd /usr/local/filebeat
./filebeat &
5. Check whether the logs arrived in redis
[root@localhost /]# redis-cli
127.0.0.1:6379> keys *
1) "web_log"
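To see what Filebeat actually queued, inspect the `web_log` list directly; each element is a JSON document wrapping one nginx log line:

```shell
# show the oldest queued entry
redis-cli lrange web_log 0 0
# show the current backlog size
redis-cli llen web_log
```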
2. Setting up services on node-2
Build es cluster
1. Use wget to download the installation package and install it
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.12.0-x86_64.rpm -P /opt
rpm -i /opt/elasticsearch-7.12.0-x86_64.rpm
2. Give the elasticsearch user a login shell and check the installation path

chsh -s /bin/bash elasticsearch
rpm -ql elasticsearch
3. Modify es configuration file
# back up first
cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak
Append two configuration lines at the end (they allow the head plug-in deployed later to connect):
cat >> /etc/elasticsearch/elasticsearch.yml << EOF
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF
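The two options above only enable CORS; for node-2 and node-3 to actually form one cluster, `elasticsearch.yml` also needs cluster-forming settings. A hedged sketch for node-2 (the cluster name `my-es` is invented; the IPs come from the node table, and `cluster.initial_master_nodes` matters only for the first bootstrap):

```shell
cat >> /etc/elasticsearch/elasticsearch.yml << EOF
cluster.name: my-es                                   # must match on every node
node.name: node-2                                     # unique per node
network.host: 0.0.0.0
discovery.seed_hosts: ["192.168.42.131", "192.168.42.132"]
cluster.initial_master_nodes: ["node-2", "node-3"]
EOF
```

On node-3, change `node.name` accordingly.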
4. Start the service
systemctl start elasticsearch
curl http://localhost:9200
3. Setting up services on node-3
Build es cluster
1. Install ES

The installation steps are the same as on node-2; only the configuration file differs.
2. Modify es configuration file
# back up the configuration file first
cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak
Append the same two configuration lines at the end:
cat >> /etc/elasticsearch/elasticsearch.yml << EOF
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF
3. Start the service
systemctl start elasticsearch
curl http://localhost:9200
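Once both nodes are up, the cluster health API should report two nodes; a `status` of green (or yellow, with single-replica indices on two nodes) means the cluster formed:

```shell
curl http://localhost:9200/_cluster/health?pretty
# look for "number_of_nodes" : 2 and "status" : "green" (or "yellow")
```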
4. Deploy elasticsearch-head-master plug-in
Install dependencies
# install Node.js (the NodeSource setup script adds the repository)
curl -sL https://rpm.nodesource.com/setup_14.x | sudo bash -
yum install -y nodejs
# check the installed version
npm -v
Use wget to download the required installation package and unpack it
wget https://codeload.github.com/mobz/elasticsearch-head/zip/master -O /opt/elasticsearch-head-master.zip
unzip /opt/elasticsearch-head-master.zip -d /opt
# tidy up the path
mkdir -p /usr/local/head-master
mv /opt/elasticsearch-head-master/* /usr/local/head-master
# enter the directory and install
cd /usr/local/head-master
npm install
# start the service
nohup npm run start &
Check cluster status via http://192.168.42.132:9100/
5. Deploy logstash
Use wget to download the required installation package and install it
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.6.0.tar.gz -P /opt
tar zvxf /opt/logstash-7.6.0.tar.gz -C /usr/local
Optimize paths and modify configuration files
mv /usr/local/logstash-7.6.0 /usr/local/logstash
mkdir /usr/local/logstash/conf.d/
cat > /usr/local/logstash/conf.d/redis_to_elk.conf << EOF
input {
  redis {
    host => "192.168.42.130"
    port => "6379"
    data_type => "list"
    type => "log"
    key => "web_log"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.42.132"]
    index => "beats_log-%{+YYYY.MM.dd}"
  }
}
EOF
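Before running the pipeline it is worth validating the file with Logstash's built-in config check; `--config.test_and_exit` parses the configuration and exits without starting the pipeline:

```shell
cd /usr/local/logstash/bin/
./logstash -f /usr/local/logstash/conf.d/redis_to_elk.conf --config.test_and_exit
```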
Start service
cd /usr/local/logstash/bin/
./logstash -f /usr/local/logstash/conf.d/redis_to_elk.conf &
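If the pipeline works, Logstash drains the Redis list and a dated `beats_log-*` index appears in Elasticsearch:

```shell
# the web_log backlog on node-1 should shrink toward 0
redis-cli -h 192.168.42.130 llen web_log
# a beats_log-YYYY.MM.dd index should be listed
curl http://192.168.42.132:9200/_cat/indices?v
```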
6. Deploy kibana
Use wget to download the installation package and unzip it
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.12.0-x86_64.rpm -P /opt
rpm -i /opt/kibana-7.12.0-x86_64.rpm
Modify configuration file
# back up the configuration file
cp /etc/kibana/kibana.yml /etc/kibana/kibana.yml.bak
sed -i '2s/^#//' /etc/kibana/kibana.yml
sed -i '7s/.*/server.host: "0.0.0.0"/' /etc/kibana/kibana.yml
sed -i '111s/.*/i18n.locale: "zh-CN"/' /etc/kibana/kibana.yml
Start service
systemctl start kibana
curl http://localhost:5601
Open http://192.168.42.132:5601/ in a browser and create the index pattern.