1. Server information
1. System: CentOS 7.9
2. Configuration: 2 cores, 8GB memory, 80GB storage
2. Install Docker
1. Update the installed packages
yum update
2. Add the Docker repository (the command below adds the official Docker CE repo; an Alibaba Cloud mirror of the same repo can be substituted if downloads are slow)
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
3. Install Docker Engine (docker-ce)
yum install docker-ce
4. Check Docker version
docker -v
5. Start Docker
systemctl start docker
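The version check in step 4 can also be scripted, e.g. for provisioning scripts. A minimal sketch, assuming the usual `Docker version X.Y.Z, build …` banner format of `docker -v`:

```shell
# Extract "20.10.9" from a line like "Docker version 20.10.9, build c2ea9bc".
# The banner format is an assumption about the docker CLI's -v output.
docker_version() {
  echo "$1" | sed -n 's/^Docker version \([0-9.]*\),.*/\1/p'
}

# Typical use on the server:
#   docker_version "$(docker -v)"
```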
3. Install Docker Compose on CentOS 7.9
1. Install via yum:
yum install -y docker-compose
2. Install via curl:
curl -L https://get.daocloud.io/docker/compose/releases/download/v2.4.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
3. Download the matching Linux binary directly from the releases page (recommended)
① Releases page: https://github.com/docker/compose/releases
② After downloading, upload the binary to the server's [/usr/local/bin] directory and rename it: sudo mv docker-compose-linux-x86_64 docker-compose
③ Apply executable permissions to the binary: sudo chmod +x /usr/local/bin/docker-compose
④ Create a soft link: sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
⑤ Check the docker-compose version: docker-compose -v
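A pitfall with the curl method: if the URL is wrong or the mirror returns an error page, the saved file is HTML rather than a binary, which later fails with `/usr/local/bin/docker-compose: line 1: html: No such file or directory`. A minimal sanity check for the downloaded file (the function name and path are illustrative):

```shell
# Fail if the file starts with an HTML error page instead of binary content.
check_compose_binary() {
  if head -c 200 "$1" | grep -qi '<html'; then
    echo "BAD: HTML error page, re-download"
    return 1
  fi
  echo "OK: looks like a binary"
}

# Typical use: check_compose_binary /usr/local/bin/docker-compose
```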
4. Install ELK using docker
1. ELK refers to Elasticsearch, Logstash, and Kibana.
2. Create and run an ElasticSearch container:
# For 7.6.2, discovery.type=single-node must be set at startup
docker run -e ES_JAVA_OPTS="-Xms256m -Xmx256m" -e discovery.type=single-node -d -p 9200:9200 -p 9300:9300 --name MyES elasticsearch:7.6.2
3. Browser access test: http://server IP:9200; output similar to the following should appear:
{
  "name": "f8d552739afd",
  "cluster_name": "docker-cluster",
  "cluster_uuid": "OxyNHcD-Q-mzX42mobfkiQ",
  "version": {
    "number": "7.6.2",
    "build_flavor": "default",
    "build_type": "docker",
    "build_hash": "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date": "2020-03-26T06:34:37.794943Z",
    "build_snapshot": false,
    "lucene_version": "8.4.0",
    "minimum_wire_compatibility_version": "6.8.0",
    "minimum_index_compatibility_version": "6.0.0-beta1"
  },
  "tagline": "You Know, for Search"
}
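If the server has no browser access, the same check works from the shell. A minimal sketch that pulls the version number out of the response with grep (no jq required; `jq -r .version.number` would also work if installed):

```shell
# Read the Elasticsearch root response on stdin and print the "number" field.
es_version() {
  grep -o '"number"[[:space:]]*:[[:space:]]*"[^"]*"' | grep -o '[0-9][^"]*'
}

# Typical use: curl -s http://<server IP>:9200 | es_version
```

Against the response above this prints `7.6.2`.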
4. Create and run a Kibana container
<1> Check the IP address of the ES container in Docker, because Kibana needs to connect to ES when it starts
# First use docker ps to find the ES container ID
docker ps
# The output is as follows:
CONTAINER ID   IMAGE                 COMMAND                  CREATED        STATUS        PORTS                                                                                  NAMES
f8d552739afd   elasticsearch:7.6.2   "/usr/local/bin/dock…"   28 hours ago   Up 28 hours   0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp   MyES
# Look up the container's IP address by its ID; f8d552739afd above is our ES container ID
docker inspect --format '{{ .NetworkSettings.IPAddress }}' f8d552739afd
# The output is as follows:
172.17.0.2
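If the Go-template `--format` syntax is not handy, the IP can also be grepped out of the raw `docker inspect` JSON. A rough sketch (the field name follows Docker's inspect output; a bridge-network container is assumed, where the first `IPAddress` entry is populated):

```shell
# Pull the first IPAddress value out of raw `docker inspect` JSON on stdin.
extract_ip() {
  grep -o '"IPAddress": *"[0-9.]*"' | head -n1 | grep -o '[0-9][0-9.]*'
}

# Typical use: docker inspect f8d552739afd | extract_ip
```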
<2> Create and run a Kibana container
# Note: ELASTICSEARCH_URL must be replaced with the IP of the ES container found above, otherwise Kibana cannot connect to ES
docker run -d --name MyKibana -p 5601:5601 -e ELASTICSEARCH_URL=http://172.17.0.2:9200 kibana:6.8.8
# Caveat: kibana:6.8.8 does not match elasticsearch:7.6.2; in general the Kibana image version should
# match the ES version (e.g. kibana:7.6.2, which uses ELASTICSEARCH_HOSTS instead of ELASTICSEARCH_URL).
5. Browser access test: http://server IP:5601
6. Create and run a Logstash container (as with Kibana, the image version should ideally match ES)
docker run -d -p 9600:9600 -p 4560:4560 --name MyLogStash logstash:6.8.8
7. After it is running, enter the container and modify the logstash.yml configuration file
docker exec -it MyLogStash bash
cd config
vi logstash.yml
# Change it to the following configuration
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.url: http://172.17.0.2:9200
8. Modify the logstash.conf file under /usr/share/logstash/pipeline
input {
  tcp {
    # Run the tcp input in server mode
    mode => "server"
    # Fill in the host and port for your environment. The port defaults to 4560,
    # which corresponds to the destination in the logback appender below.
    host => "0.0.0.0"
    port => 4560
    # Decode events as JSON lines
    codec => json_lines
  }
}
output {
  elasticsearch {
    action => "index"
    # The ES address; multiple ES nodes should be written as an array
    hosts => ["172.17.0.2:9200"]
    # Used for filtering in Kibana; you can use the project name
    index => "springboot-logstash-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}
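The `%{+YYYY.MM.dd}` part of the `index` option is a Logstash sprintf date reference evaluated per event, so a new index is created each day. A sketch of what it expands to for a given day (hypothetical helper name, GNU `date` assumed):

```shell
# Print the index name Logstash would write a given day's events to.
logstash_index() {
  printf 'springboot-logstash-%s\n' "$(date -u -d "$1" +%Y.%m.%d)"
}
```

For example, `logstash_index 2021-02-09` prints `springboot-logstash-2021.02.09`, which is the kind of name the Kibana index pattern must match.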
9. Restart logstash
docker restart MyLogStash
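Before wiring up Spring Boot, the tcp input can be smoke-tested by hand: send one JSON line to port 4560 and watch it appear in the rubydebug stdout (`docker logs MyLogStash`). The event below only sketches the shape that logstash-logback-encoder emits; real events carry more fields:

```shell
# Build a minimal json_lines event like the logback encoder would send.
make_event() {
  printf '{"@timestamp":"%s","message":"%s","level":"INFO"}\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%S.000Z)" "$1"
}

# Typical use (replace <server IP> with your host):
#   make_event "manual pipeline test" | nc <server IP> 4560
```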
5. Integrating ELK with Spring Boot
1. Initialize a Maven-managed Spring Boot project (details omitted). The project structure is as follows:
2. The configuration of the pom.xml file is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.6.11</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>logstash-demo</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>elk-demo</name>
    <description>elk-demo</description>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <!-- Logstash encoder for collecting logs -->
        <dependency>
            <groupId>net.logstash.logback</groupId>
            <artifactId>logstash-logback-encoder</artifactId>
            <version>5.3</version>
        </dependency>
        <!-- Convenient for printing logs -->
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.12</version>
        </dependency>
        <!-- Convenient for testing -->
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.13</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                    <encoding>UTF-8</encoding>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
3. The configuration of the application.yml file is as follows:
server:
  port: 8089
spring:
  application:
    name: springboot-logstash # The application name here is used as the ELK index
#  elasticsearch:
#    # Replace with the corresponding server IP here
#    uris: Server IP:9200
4. The configuration of the logback-spring.xml file is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
    <!-- Application name -->
    <property name="APP_NAME" value="springboot-logback-elk-demo"/>
    <!-- Log file save path -->
    <property name="LOG_FILE_PATH" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/logs}"/>
    <contextName>${APP_NAME}</contextName>
    <!-- Appender that records logs to a daily rolling file -->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE_PATH}/${APP_NAME}-%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${FILE_LOG_PATTERN}</pattern>
        </encoder>
    </appender>
    <!-- Appender that ships logs to Logstash -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- The reachable Logstash log-collection port -->
        <destination>Server IP:4560</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <root level="DEBUG">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
        <appender-ref ref="LOGSTASH"/>
    </root>
</configuration>
5. The test controller TestController file is as follows:
package com.example.logstashdemo.controller;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * @author lisw
 * @program elk_project
 * @description
 * @createDate 2021-02-09 13:46:45
 * @slogan The long wind will break the waves in time; hoist the cloud sail and cross the vast sea.
 **/
@RestController
@RequestMapping("/test")
public class TestController {

    private final Logger logger = LoggerFactory.getLogger(getClass());

    @RequestMapping("/elkAdd")
    public String elkAdd() {
        logger.info("Log Recording" + System.currentTimeMillis());
        return "1";
    }

    @RequestMapping("/elkget")
    public String elkget() {
        logger.info("output info");
        logger.debug("output debug");
        logger.error("output error");
        return "1t";
    }
}
6. Create ELK index
<1> Find the corresponding location
<2> Create the index, step 1
<3> Create the index, step 2
<4> The resulting interface after the index is created
7. Run the application first
8. Test results
<1>Interface one test:
<2>Interface two test:
6. Reference URLs
1. Install Docker and Docker Compose on CentOS 7.9.
2. docker-compose -v reports the error [/usr/local/bin/docker-compose: line 1: html: No such file or directory].
3. Use Docker to build ELK and integrate it with Spring Boot.
4. Spring Boot + ELK integration tutorial.
5. Spring Boot integration with ELK 8+ deployed via Docker.