Installing Elasticsearch and Kibana

Hello everyone, I am Su Lin. Today we will install Elasticsearch (es) and Kibana.

Deploy single-node es

Create a network

Since we will also deploy a Kibana container later, the es and Kibana containers need to be able to talk to each other, so first create a Docker network:

docker network create es-net
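If you want to confirm the network was created, you can list it afterwards (a quick optional check, not required by the later steps):

```shell
# List Docker networks whose name matches es-net
docker network ls --filter name=es-net
```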

Image

We can either pull the image with the docker command or load it from a tar package; pick whichever is more convenient.

Pull the image

Official registry: Docker Hub

Command:

docker pull elasticsearch:7.12.1
Load the image

Here we use elasticsearch version 7.12.1. A tar package of the image is provided:

Link: https://pan.baidu.com/s/1gly5GdwN0ccs1nKWodPcZg
Extraction code: 3goa

Upload it to the virtual machine, then run the following command to load it:

# Load the image from the tar package
docker load -i es.tar

Run

Run the following docker command to deploy single-node es:

docker run -d \
  --name es \
  -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" \
  -e "discovery.type=single-node" \
  -v es-data:/usr/share/elasticsearch/data \
  -v es-plugins:/usr/share/elasticsearch/plugins \
  --privileged \
  --network es-net \
  -p 9200:9200 \
  -p 9300:9300 \
  elasticsearch:7.12.1

Command explanation (a few of the options listed here, such as cluster.name, http.host and the es-logs volume, are common extras that are not used in the command above):

  • -e "cluster.name=es-docker-cluster": set the cluster name
  • -e "http.host=0.0.0.0": listen on all addresses so es can be reached from outside
  • -e "ES_JAVA_OPTS=-Xms512m -Xmx512m": set the JVM heap size
  • -e "discovery.type=single-node": run in single-node (non-cluster) mode
  • -v es-data:/usr/share/elasticsearch/data: mount a named volume for the es data directory
  • -v es-logs:/usr/share/elasticsearch/logs: mount a named volume for the es log directory
  • -v es-plugins:/usr/share/elasticsearch/plugins: mount a named volume for the es plugin directory
  • --privileged: grant access to the mounted volumes
  • --network es-net: join the network named es-net
  • -p 9200:9200: map the HTTP port (9300 is the transport port used between nodes)

Started successfully

Enter http://<server-ip>:9200 in the browser to see elasticsearch's response:
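You can also check from the command line with curl, assuming you run it on the host where the container is deployed and the port mapping above is in place:

```shell
# The root endpoint returns cluster info as JSON,
# including the version number (7.12.1 here)
curl http://localhost:9200
```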

Deploy kibana

Kibana provides a visual interface for elasticsearch, which makes it much easier to explore and learn.

Image

Pull the image:

docker pull kibana:7.12.1

Alternatively, upload a tar package of the image to the virtual machine and load it with docker load, as we did for es.

Deployment

Run the docker command to deploy kibana:

docker run -d \
  --name kibana \
  -e ELASTICSEARCH_HOSTS=http://es:9200 \
  --network=es-net \
  -p 5601:5601 \
  kibana:7.12.1

ELASTICSEARCH_HOSTS=http://es:9200 tells kibana where elasticsearch is; because both containers are on the es-net network, kibana can reach es by its container name.
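Kibana usually takes a while to start. You can follow its logs to watch the startup progress:

```shell
# Follow the kibana container logs until the server reports it is ready
docker logs -f kibana
```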

The versions of Kibana and elasticsearch must be the same.

Visit:

http://<server-ip>:5601

Click "Explore on my own", then open Dev Tools.

Install the IK analyzer

Install the ik plugin online

# Enter the container
docker exec -it es /bin/bash
# Download and install the plugin online
./bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.12.1/elasticsearch-analysis-ik-7.12.1.zip
# Exit the container
exit
# Restart the container
docker restart es
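To verify the installation, you can list the plugins inside the container (a quick optional check; the image's default working directory is /usr/share/elasticsearch):

```shell
# List installed plugins; "analysis-ik" should appear in the output
docker exec es bin/elasticsearch-plugin list
```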

Install the ik plugin offline

View data volume directory

To install the plugin offline, you need to know where the elasticsearch plugins directory lives. Since we mounted it as a data volume, check the volume's location on the host with the following command:

docker volume inspect es-plugins

The output shows that the plugins directory is mounted at /var/lib/docker/volumes/es-plugins/_data.
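The inspect output will look roughly like this (the CreatedAt value is elided; the Mountpoint field is the part we need):

```json
[
    {
        "CreatedAt": "...",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/es-plugins/_data",
        "Name": "es-plugins",
        "Options": null,
        "Scope": "local"
    }
]
```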

Upload the ik plugin directory into this data volume directory on the host.

Restart container

# Restart the container
docker restart es
# View es log
docker logs -f es

Test:

The IK analyzer has two modes:

  • ik_smart: coarsest-grained segmentation (fewest tokens)
  • ik_max_word: finest-grained segmentation
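Judging by the token offsets in the responses below, the original test sentence was the Chinese "程序员苏麟学习java太简单" ("programmer Su Lin finds learning java very simple"; the tokens in the responses are shown in translation). In Kibana's Dev Tools, the request looks like the following; swap the analyzer name to compare the two modes:

```json
GET /_analyze
{
  "analyzer": "ik_max_word",
  "text": "程序员苏麟学习java太简单"
}
```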

ik_max_word

{
  "tokens" : [
    {
      "token" : "programmer",
      "start_offset" : 0,
      "end_offset" : 3,
      "type" : "CN_WORD",
      "position" : 0
    },
    {
      "token" : "program",
      "start_offset" : 0,
      "end_offset" : 2,
      "type" : "CN_WORD",
      "position" : 1
    },
    {
      "token" : "member",
      "start_offset" : 2,
      "end_offset" : 3,
      "type" : "CN_CHAR",
      "position" : 2
    },
    {
      "token" : "su",
      "start_offset" : 3,
      "end_offset" : 4,
      "type" : "CN_CHAR",
      "position" : 3
    },
    {
      "token" : "麟",
      "start_offset" : 4,
      "end_offset" : 5,
      "type" : "CN_CHAR",
      "position" : 4
    },
    {
      "token" : "learning",
      "start_offset" : 5,
      "end_offset" : 7,
      "type" : "CN_WORD",
      "position" : 5
    },
    {
      "token" : "java",
      "start_offset" : 7,
      "end_offset" : 11,
      "type" : "ENGLISH",
      "position" : 6
    },
    {
      "token" : "very simple",
      "start_offset" : 11,
      "end_offset" : 14,
      "type" : "CN_WORD",
      "position" : 7
    },
    {
      "token" : "simple",
      "start_offset" : 12,
      "end_offset" : 14,
      "type" : "CN_WORD",
      "position" : 8
    }
  ]
}

ik_smart

{
  "tokens" : [
    {
      "token" : "programmer",
      "start_offset" : 0,
      "end_offset" : 3,
      "type" : "CN_WORD",
      "position" : 0
    },
    {
      "token" : "su",
      "start_offset" : 3,
      "end_offset" : 4,
      "type" : "CN_CHAR",
      "position" : 1
    },
    {
      "token" : "麟",
      "start_offset" : 4,
      "end_offset" : 5,
      "type" : "CN_CHAR",
      "position" : 2
    },
    {
      "token" : "learning",
      "start_offset" : 5,
      "end_offset" : 7,
      "type" : "CN_WORD",
      "position" : 3
    },
    {
      "token" : "java",
      "start_offset" : 7,
      "end_offset" : 11,
      "type" : "ENGLISH",
      "position" : 4
    },
    {
      "token" : "very simple",
      "start_offset" : 11,
      "end_offset" : 14,
      "type" : "CN_WORD",
      "position" : 5
    }
  ]
}

Extension dictionary

As the Internet develops, new coinages appear all the time, and many of them are missing from the analyzer's built-in dictionary, for example "Aoligei".

The dictionary therefore needs to be kept up to date, and the IK analyzer lets you extend it with your own word list.

1) Open the IK analyzer's config directory (inside the es-plugins data volume):

2) Add the following to the IKAnalyzer.cfg.xml configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extended configuration</comment>
    <!-- Configure your own extension dictionary here -->
    <entry key="ext_dict">ext.dic</entry>
</properties>

3) Create a new ext.dic in the same directory (you can copy an existing config file as a template) and add one word per line:

olige
big fool
Yang Ke

4) Restart elasticsearch

docker restart es

# View the log
docker logs -f es

5) Test effect:

{
  "tokens" : [
    {
      "token" : "programmer",
      "start_offset" : 0,
      "end_offset" : 3,
      "type" : "CN_WORD",
      "position" : 0
    },
    {
      "token" : "su",
      "start_offset" : 3,
      "end_offset" : 4,
      "type" : "CN_CHAR",
      "position" : 1
    },
    {
      "token" : "麟",
      "start_offset" : 4,
      "end_offset" : 5,
      "type" : "CN_CHAR",
      "position" : 2
    },
    {
      "token" : "big",
      "start_offset" : 5,
      "end_offset" : 6,
      "type" : "CN_CHAR",
      "position" : 3
    },
    {
      "token" : "儿",
      "start_offset" : 6,
      "end_offset" : 7,
      "type" : "CN_CHAR",
      "position" : 4
    },
    {
      "token" : "Yang Ke",
      "start_offset" : 7,
      "end_offset" : 9,
      "type" : "CN_WORD",
      "position" : 5
    },
    {
      "token" : "love to eat",
      "start_offset" : 9,
      "end_offset" : 11,
      "type" : "CN_WORD",
      "position" : 6
    },
    {
      "token" : "olige",
      "start_offset" : 11,
      "end_offset" : 14,
      "type" : "CN_WORD",
      "position" : 7
    }
  ]
}

Stop word dictionary

Content on the Internet spreads very quickly, and some words, such as sensitive terms related to religion or politics, must not be transmitted, so they should also be ignored when indexing and searching.

The IK analyzer provides a stop word feature for this: any entry in the stop word list is ignored when documents are indexed and searched.

1) Add a stop word dictionary entry to the IKAnalyzer.cfg.xml configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
        <comment>IK Analyzer extended configuration</comment>
        <!--Users can configure their own extended dictionary here-->
        <entry key="ext_dict">ext.dic</entry>
        <!--Users can configure their own extended stop word dictionary here *** Add stop word dictionary-->
        <entry key="ext_stopwords">stopword.dic</entry>
</properties>

2) Create stopword.dic (again, you can copy an existing config file as a template) and add the stop words:

My son

3) Restart elasticsearch:

# Restart the services
docker restart es
docker restart kibana
# View the log
docker logs -f es

4) Test the effect:

I won’t demonstrate it here, you can try it yourself.

That’s it for this issue, see you in the next one!