Download and install Elasticsearch, install and use the IK analyzer and Kibana, and demonstrate basic Elasticsearch usage

First, here is a network-disk link for the version used in this article (7.17.14):
Link: https://pan.baidu.com/s/1FSlI9jNf1KRP-OmZlCkEZw
Extraction code: 1234

In practice, Elasticsearch (ES) is rarely used on its own; it usually appears in a stack such as the mainstream ELK combination (Elasticsearch + Logstash + Kibana).

1. Elasticsearch download

Before downloading, check the compatibility with your JDK version. The compatibility matrix is here: Version Support Correspondence Table.

Go to the download page: Download Elasticsearch. This article uses version 7.17.14, which supports JDK 8 and JDK 17. Note: if your Elasticsearch version is 7.17.14, the IK analyzer and Kibana versions installed later must match it.

After downloading and unzipping, look at the directory structure:

Files under config: the two that matter most are these:

① jvm.options configures the JVM runtime parameters. If the server has little memory, lower them accordingly; note that the default JVM parameters differ between versions.
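For instance, on a memory-constrained machine you might pin the heap to a small fixed size. The values below are illustrative only; size them to your server:

```
# Illustrative heap settings for a memory-constrained machine;
# keeping -Xms and -Xmx equal avoids heap resizing at runtime.
-Xms512m
-Xmx512m
```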

② elasticsearch.yml is the main configuration file; here you can set the port, whether external access is allowed, and other options. Among them:

  • path.data: Specify the data storage location
  • path.logs: Specify the log storage location
  • http.port: Specify the running port (default port 9200)

Note that starting with Elasticsearch 8, SSL-related settings are generated in the configuration file automatically the first time the service starts. If you are developing locally and have no SSL setup, change the value of xpack.security.enabled to false, otherwise the service will be inaccessible after it starts.
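A minimal elasticsearch.yml sketch for local development (values are assumptions to adjust for your environment; the xpack line only applies on 8.x):

```yaml
# Port for the REST API (9200 is the default)
http.port: 9200
# Elasticsearch 8.x only: turn off the auto-generated security/SSL setup
# for plain local development
xpack.security.enabled: false
```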

Startup: After the configuration is completed, double-click elasticsearch.bat in the bin directory to start.
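To confirm the node started, open http://localhost:9200 in a browser (or curl it). A healthy node replies with JSON along these lines; the exact names and fields will differ on your machine:

```json
{
  "name" : "YOUR-NODE-NAME",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "7.17.14" },
  "tagline" : "You Know, for Search"
}
```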


If the console output is garbled after startup: open the jvm.options file under config and add the line -Dfile.encoding=GBK.

Then restart.

2. IK word segmenter download

ES full-text search: the default analyzer is StandardAnalyzer, whose Chinese segmentation results are often poor, so we also install the IK analyzer (IKAnalyzer).

An analyzer (word segmenter) splits Chinese or other text into keywords or terms. When we search, both our query text and the data in the database or index are segmented, and the resulting terms are then matched against each other. The default Chinese segmentation treats every character as a separate term: for example, "我爱中国" ("I love China") is split into "我" (I), "爱" (love), "中" and "国" (the two characters of "China"). This obviously does not meet our needs, so we install the Chinese IK analyzer to solve the problem.
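The difference is easy to see with the _analyze API in the Kibana console. This is a sketch; the exact tokens depend on the analyzer version and dictionaries. IK registers two analyzers, ik_smart (coarse-grained) and ik_max_word (fine-grained):

```
# Default analyzer: splits Chinese text into single characters
GET /_analyze
{
  "analyzer": "standard",
  "text": "我爱中国"
}

# With IK installed: ik_smart yields word-level tokens such as 我 / 爱 / 中国
GET /_analyze
{
  "analyzer": "ik_smart",
  "text": "我爱中国"
}
```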

1. Download address: GitHub download link; download the version matching your Elasticsearch.

2. After downloading, unzip it into the plugins folder of the Elasticsearch installation directory. Note: if the archive has no top-level directory, create a subdirectory under plugins and unzip the IK analyzer files into it:

3. Restart Elasticsearch and check whether the IK analyzer was installed successfully:
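One quick check (both endpoints are standard, but the plugin name shown is an assumption about the IK release): list the installed plugins in the Kibana console or via the browser, where IK should appear as analysis-ik:

```
GET /_cat/plugins?v
```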

3. head plug-in in Elasticsearch

Elasticsearch-head is a client tool for Elasticsearch: a front-end project based on Node.js used to monitor Elasticsearch status, visualize data, and perform create, read, update and delete operations. Prerequisite: Node.js must be installed.

1. Download the head plugin: GitHub download link, then unzip it:

2. Enter the directory and run npm install, then npm run start.

3. Visit http://localhost:9100 in a browser; you may find that it fails to connect to Elasticsearch.

Solution: either cross-origin access has not been configured, or the Elasticsearch service is not running at all. To fix the cross-origin problem, edit the elasticsearch.yml configuration file in the config directory of the Elasticsearch installation:

```yaml
# Enable cross-origin support
http.cors.enabled: true
# Allow access from any origin
http.cors.allow-origin: "*"
```

Restart Elasticsearch and visit http://localhost:9100 again; the cross-origin issue is now resolved.

4. Kibana download and installation configuration

1. Kibana is an open-source analytics and visualization platform for Elasticsearch, used to interactively search and view data stored in Elasticsearch indices. With Kibana you can perform advanced data analysis and present it through a variety of charts.

2. Download address: Download Kibana (if the link does not open directly, just follow the steps below). As before, the versions must match.


After decompression:

3. kibana.bat in the bin folder is the startup script; double-click it to start. The access address is http://localhost:5601.

After waiting a moment, a page like the following indicates success:

4. When you open it in the browser, the page is in English by default; you can optionally switch it to Chinese.
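In fact, Kibana (6.7+) ships with built-in translations, so no separate plugin is strictly required; the UI language can be switched in config/kibana.yml:

```yaml
# Switch the Kibana UI to Simplified Chinese
i18n.locale: "zh-CN"
```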

5. Operation demonstration

1. Use Kibana to create, read, update and delete data. Open the Kibana console (Dev Tools) and enter the statements to execute, as shown below. (Note: mapping types are deprecated in Elasticsearch 7.x; the examples keep the original /account/type/... paths, but _doc is the recommended path segment.)

Add data

```
PUT /account/type/1
{
  "name": "Zhang San",
  "age": 20
}
```
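You can also fetch the document back by id to confirm the write (the path mirrors the PUT above); the response includes the stored _source:

```
GET /account/type/1
```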

View added data

① Return to Elasticsearch Head to see that the data has been added successfully, as shown in the figure below:

② You can also query data in Kibana, as shown below:

```
POST /account/type/_search
```
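A _search with no body returns all documents; to search by field, add a standard Query DSL body. A minimal sketch using the match query:

```
POST /account/type/_search
{
  "query": {
    "match": { "name": "Zhang San" }
  }
}
```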

Modify data

```
PUT /account/type/1
{
  "name": "李思",
  "age": "30"
}
```

Delete data

```
DELETE /account/type/1
```