Use CentOS to build a Kafka server Docker image

  • Overall description
  • Preparation
  • Specific steps
    • 1. Install the Java environment
      • 1.1. Download the Java package
      • 1.2. Unzip the Java package
      • 1.3. Configure environment variables
      • 1.4. Update configuration
    • 2. Kafka files
      • 2.1. Download Kafka
      • 2.2. Upload and decompress Kafka
    • 3. Start ZooKeeper
      • 3.1. Start ZooKeeper
      • 3.2. View startup status
      • 3.3. ZooKeeper startup error
    • 4. Start Kafka
      • 4.1. Modify the configuration file
      • 4.2. Start Kafka
    • 5. Start the producer demo
    • 6. Start the consumer demo
    • 7. Package the Docker image
    • 8. Start Kafka from the image
      • 8.1. Load the image
      • 8.2. Create the container
      • 8.3. Start Kafka
  • Summary

Overall description

Recently, a project needed to use Kafka to integrate with a third-party interface, but the other party had no environment available for debugging, so I decided to build a simple Kafka server myself.
The requirements for the build are as follows:

  1. Deploy the Kafka service with Docker
  2. Build it on top of a plain CentOS image rather than a ready-made third-party Kafka image; the ready-made images are convenient, but their contents are harder to trust
  3. Use the latest version of Kafka, currently 3.4.0
  4. Since this is only for testing, deploy a single node rather than a Kafka cluster

Preparation

Based on the requirements above, some preparation is needed before building. This preparation has little to do with building Kafka itself, so I won't cover it in detail; if you need more background, there are plenty of tutorials on Baidu for each step below. The main thing is that Docker from the first step must be installed.

  1. Since everything is built on the local machine, Docker needs to be installed on Windows; installing Docker Desktop makes it easy to operate and inspect containers
  2. Download a clean CentOS image. I am using 7.6.1810 here, which can be downloaded directly from the official Alibaba Cloud download page: select centos after entering and download the required version
  3. Import the CentOS image; the import command is as follows:
docker load -i centos.7.61810.tar
  4. Create a container. I mapped two ports here: 9092 is used by Kafka and 2181 is the ZooKeeper port. I am not sure whether ZooKeeper's port needs to be mapped, but I mapped both anyway. The command is as follows, where [image ID] is the ID of the CentOS image we just imported:
docker run -itd -p 9092:9092 -p 2181:2181 --privileged --name kafka-server-3.4.0 [image ID] /usr/sbin/init

Once the preparation above is complete, we can get to the main course: building the Kafka service on CentOS.

Specific steps

1. Install the Java environment

1.1. Download the Java package

Kafka runs on Java, so a Java environment needs to be installed on CentOS first. I downloaded Java 8 (the x64 build) from the official website, which requires logging in, and uploaded it to the Docker container after downloading. I put it in the /opt directory.
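For example, assuming the JDK archive was saved on the host under the file name used in the extraction step below, it can be copied into the container (named kafka-server-3.4.0 in the preparation step) with docker cp:

# run on the host machine, not inside the container
docker cp jdk-8u341-linux-x64.tar.gz kafka-server-3.4.0:/opt/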

1.2. Unzip the Java package

Unzip command:

tar -zxvf jdk-8u341-linux-x64.tar.gz

1.3. Configure environment variables

Edit the /etc/profile file and add the following three lines:

export JAVA_HOME=/opt/jdk1.8.0_341
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

The edited file looks like this, with the added lines highlighted:
[Screenshot: /etc/profile with the three added lines highlighted]

1.4. Update configuration

Execute the following command to make the configuration just added take effect:

source /etc/profile

Then execute:

java -version

If the Java version number is printed, the setup was successful.

2. Kafka files

2.1. Download Kafka

Download Kafka from the official Kafka website. I downloaded 3.4.0 here; you can pick the version you need: download address
Select version 3.4.0 and click through to download the file kafka_2.13-3.4.0.tgz.

2.2. Upload and decompress Kafka

Upload the Kafka archive downloaded in the previous step to the Docker container and decompress it. I put it in the /opt directory as well. The decompression command is as follows:

tar -zxvf kafka_2.13-3.4.0.tgz
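Alternatively, if the container has network access and wget installed (an assumption; a minimal CentOS image may not include it), the archive can be fetched directly from the Apache archive instead of being uploaded:

# download Kafka 3.4.0 inside the container; the URL follows the standard Apache archive layout
cd /opt
wget https://archive.apache.org/dist/kafka/3.4.0/kafka_2.13-3.4.0.tgz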

3. Start ZooKeeper

3.1. Start ZooKeeper

Decompressing creates a folder with the same name; enter it and execute the following command:

bin/zookeeper-server-start.sh config/zookeeper.properties &

This prints a long stream of log output; when the startup messages at the end indicate success, ZooKeeper is running, as shown in the figure below:
[Screenshot: ZooKeeper started successfully]

3.2. View startup status

You can also use the following command to see if it is successful:

jps

[Screenshot: jps output]
If the ZooKeeper process appears in the output, ZooKeeper has started successfully.
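For reference, when ZooKeeper is started via zookeeper-server-start.sh it shows up in jps under the class name QuorumPeerMain; the output looks roughly like this (illustrative, the PIDs will differ):

# example jps output
1234 QuorumPeerMain
1300 Jps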

3.3. ZooKeeper startup error

I got an error when starting ZooKeeper: Zookeeper audit is disabled
This happens because ZooKeeper's audit logging is disabled by default. To enable it, edit the configuration file
/opt/kafka_2.13-3.4.0/config/zookeeper.properties and add the following line:

audit.enable=true
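The same change can also be made from the shell instead of editing the file by hand, for example:

# append the audit switch to the ZooKeeper configuration
echo "audit.enable=true" >> /opt/kafka_2.13-3.4.0/config/zookeeper.properties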

After adding and restarting, no error will be reported.

4. Start Kafka

4.1. Modify the configuration file

Before starting Kafka, its configuration file needs to be modified: in /opt/kafka_2.13-3.4.0/config/server.properties, change the host name to the IP address of the host machine:
[Screenshot: server.properties with the host name changed to the host machine's IP]
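As a minimal sketch of what this change usually looks like (the property names come from Kafka's standard server.properties; the IP below is only a placeholder for your host machine's address):

# /opt/kafka_2.13-3.4.0/config/server.properties
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.1.100:9092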

4.2. Start Kafka

After the modification, start Kafka. Note: wait a little while after starting ZooKeeper before running this step, because Kafka needs to connect to ZooKeeper; if Kafka is started too soon, ZooKeeper is not ready yet and the Kafka startup fails.

bin/kafka-server-start.sh config/server.properties &
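If you want to confirm that ZooKeeper is already answering before starting Kafka, one optional check (zookeeper-shell.sh ships with the Kafka distribution) is:

# should list the root znodes, e.g. [zookeeper], if ZooKeeper is up
bin/zookeeper-shell.sh localhost:2181 ls /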

Use the method from 3.2 to check whether the startup succeeded; the following two processes indicate success.
[Screenshot: jps output showing the ZooKeeper and Kafka processes]

5. Start the producer demo

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

After running this, type a test message at the prompt:
[Screenshot: producer console with a test message entered]
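The topic test is created automatically here as long as automatic topic creation is enabled (the default). It can also be created explicitly beforehand, for example:

# optional: create the topic explicitly instead of relying on auto-creation
bin/kafka-topics.sh --create --topic test --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1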

6. Start the consumer demo

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning

After running it, the message sent just now is received:
[Screenshot: consumer console showing the received message]

7. Package the Docker image

Use the following command to package the container we just set up into a tar archive, replacing [container ID] with its ID:

docker export -o kafka-server-3.4.0.tar [container ID]
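Note that docker export only captures the container's filesystem. An alternative (not what was done here) is to commit the container to an image and save that image, which also preserves image metadata:

# alternative: commit the container to an image, then save the image (names are examples)
docker commit [container ID] kafka-server:3.4.0
docker save -o kafka-server-3.4.0-image.tar kafka-server:3.4.0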

8. Start Kafka from the image

The Kafka container has been packaged above; now the archive needs to be loaded as an image and started.

8.1. Load the image

Load the image with a command of the form:
cat [tar archive to load] | sudo docker import - [name of the new image]
The specific command is as follows:

cat kafka-server-3.4.0.tar | sudo docker import - kafka-server-3.4.0

If the loading is successful, use the following command to check:

docker images

[Screenshot: docker images output]
Note the IMAGE ID shown here; it is needed when creating the container in the next step.

8.2. Create the container

Use the following command to create a container, replacing [image ID] with the ID of the image just created:

docker run -itd -p 9092:9092 -p 2181:2181 --privileged --name kafka-server-3.4.0 [image ID] /usr/sbin/init

Use the following command to check whether the container is successfully created:

docker ps -a

[Screenshot: docker ps -a output]
Note the CONTAINER ID here; it is needed in the next step.

8.3. Start Kafka

Since nothing inside the container is configured to start automatically, you need to enter the container and run a few commands. Enter the container with the following command, replacing [container ID] with the CONTAINER ID from the previous step:

docker exec -it [container ID] /bin/bash

After entering the container, execute the following commands in order.
Note that after the third command you still need to modify the Kafka configuration file as described in section 4.1 (set the host name to the host machine's IP) before running the fourth command.

source /etc/profile
cd /opt/kafka_2.13-3.4.0
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties &
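Since these steps have to be repeated every time the container is restarted, one option (a hypothetical helper, not part of the original setup) is to wrap them in a small script inside the container, e.g. /opt/start-kafka.sh:

#!/bin/bash
# /opt/start-kafka.sh, a hypothetical startup helper, not part of the original setup
source /etc/profile
cd /opt/kafka_2.13-3.4.0
bin/zookeeper-server-start.sh config/zookeeper.properties &
sleep 10   # give ZooKeeper time to come up before Kafka tries to connect
bin/kafka-server-start.sh config/server.properties &

After chmod +x /opt/start-kafka.sh, the whole sequence can be run with a single command once you have entered the container.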

Check with the jps command; if the two processes from 4.2 are present, Kafka has started successfully.

Summary

At this point the Kafka Docker container is fully deployed. Receiving Kafka messages from code is relatively simple; you can search Baidu for examples yourself. I will also write an article on integrating Kafka with Spring Boot when I have time.