1. Environment preparation
Note: Skip any tool that is already installed.
- 1. Install the Java runtime (JDK)
Step 1: Upload or download the installation package jdk-8u152-linux-x64.tar.gz
cd /usr/local
Step 2: Unzip the installation package
tar -zxvf jdk-8u152-linux-x64.tar.gz
Step 3: Create a symbolic link
ln -s /usr/local/jdk1.8.0_152/ /usr/local/jdk
Step 4: Modify environment variables
vim /etc/profile
export JAVA_HOME=/usr/local/jdk
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
Run the command source /etc/profile to make the profile file take effect immediately
source /etc/profile
Step 5: Test whether the installation is successful
Run java -version; the output should show java version "1.8.0_152"
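As a quick sanity check, the version string can be parsed out of `java -version`-style output; a minimal sketch (the sample output below is hard-coded for illustration, not captured from a live JVM):

```shell
#!/bin/sh
# Pull the quoted version (e.g. 1.8.0_152) out of `java -version`-style output.
# In a real check you would feed it: java -version 2>&1
parse_java_version() {
  printf '%s\n' "$1" | awk -F '"' '/version/ {print $2; exit}'
}

sample='java version "1.8.0_152"'
parse_java_version "$sample"   # prints 1.8.0_152
```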
- 2. Install Maven
Step 1: Upload or download the installation package
cd /usr/local
apache-maven-3.6.1-bin.tar.gz
Step 2: Unzip the installation package
tar -zxvf apache-maven-3.6.1-bin.tar.gz
Step 3: Create a symbolic link
ln -s /usr/local/apache-maven-3.6.1/ /usr/local/maven
Step 4: Modify environment variables
vim /etc/profile
export MAVEN_HOME=/usr/local/maven
export PATH=$PATH:$MAVEN_HOME/bin
Run the command source /etc/profile to make the profile file take effect immediately
source /etc/profile
Step 5: Test whether the installation is successful
mvn -v
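For reference, the JDK and Maven exports from the two sections above can live together at the end of /etc/profile; a consolidated snippet (paths assume the /usr/local/jdk and /usr/local/maven symlinks created earlier):

```shell
# /etc/profile additions for JDK + Maven (symlink paths as above)
export JAVA_HOME=/usr/local/jdk
export JRE_HOME=$JAVA_HOME/jre
export MAVEN_HOME=/usr/local/maven
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$MAVEN_HOME/bin
```

Remember to `source /etc/profile` after editing, as in the steps above.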
- 3. Install Docker
Environment installation:
yum -y install gcc-c++
Step 1: Install some necessary system tools
yum install -y yum-utils device-mapper-persistent-data lvm2
Step 2: Add software source information
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Step 3: Update and install Docker-CE
yum makecache fast
yum -y install docker-ce
Step 4: Start the Docker service
systemctl start docker
systemctl enable docker
Step 5: Test whether the installation is successful
docker -v
Step 6: Configure a registry mirror (image accelerator)
You can use the accelerator by modifying the daemon configuration file /etc/docker/daemon.json
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ldu6wrsf.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
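To confirm the mirror configuration was written correctly, the URLs can be scraped back out of daemon.json; a rough sketch (a crude pattern match, not a JSON parser, and the demo writes a throwaway copy rather than touching /etc/docker):

```shell
#!/bin/sh
# List every https:// URL found in a daemon.json-style file.
# Crude grep-based scrape; good enough for a one-key config like this.
mirrors_in() {
  grep -o 'https://[^"]*' "$1"
}

# Demo against a throwaway copy of the config written above.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "registry-mirrors": ["https://ldu6wrsf.mirror.aliyuncs.com"]
}
EOF
mirrors_in "$tmp"   # prints https://ldu6wrsf.mirror.aliyuncs.com
rm -f "$tmp"
```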
- 4. Install MySQL
Note: skip if MySQL is already installed or accessible.
Step 1: Pull the image
docker pull mysql:5.7
Step 2: Start
docker run --name mysql --restart=always -v /home/ljaer/mysql:/var/lib/mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=root -d mysql:5.7
Step 3: Test mysql
Enter the container:
docker exec -it mysql /bin/bash
Log in to mysql:
mysql -u root -p
If you enter successfully, the installation is successful.
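The two interactive steps can also be collapsed into a single non-interactive check; a sketch that only assembles the command to run on the Docker host (container name and root password follow the `docker run` above; adjust to your own values):

```shell
#!/bin/sh
# Build a one-shot MySQL health-check command instead of entering the container.
# Run the printed command on the Docker host; a "1" in the output means MySQL is up.
mysql_check_cmd() {
  container=$1
  password=$2
  printf 'docker exec %s mysql -uroot -p%s -e "SELECT 1"\n' "$container" "$password"
}

mysql_check_cmd mysql root
```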
- 5. Install RabbitMQ
Step 1: Pull the image
docker pull rabbitmq:management
Step 2: Start
docker run -d -p 5672:5672 -p 15672:15672 --restart=always --name rabbitmq rabbitmq:management
Step 3: Install the delay queue plugin
- First download the rabbitmq_delayed_message_exchange-3.9.0.ez file and upload it to the server where RabbitMQ is located. Download address: https://www.rabbitmq.com/community-plugins.html
- Switch to the directory where the plug-in is located, execute the docker cp rabbitmq_delayed_message_exchange-3.9.0.ez rabbitmq:/plugins command, and copy the new plug-in to the plugins directory in the container.
- Execute the docker exec -it rabbitmq /bin/bash command to enter the container, and cd plugins to enter the plugins directory
- Execute the ls -l|grep delay command to check whether the plug-in is copied successfully.
- In the plugins directory in the container, execute the rabbitmq-plugins enable rabbitmq_delayed_message_exchange command to enable the plugin.
- The exit command exits the RabbitMQ container, and then executes the docker restart rabbitmq command to restart the RabbitMQ container.
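The container round-trip above can also be scripted without an interactive shell; a sketch that prints the equivalent non-interactive commands (plugin filename as above; run the printed commands from the directory holding the .ez file):

```shell
#!/bin/sh
# Emit the non-interactive equivalent of the manual plugin-install steps:
# copy the .ez into the container, enable the plugin via docker exec, restart.
plugin_install_steps() {
  cat <<EOF
docker cp $1 rabbitmq:/plugins
docker exec rabbitmq rabbitmq-plugins enable $2
docker restart rabbitmq
EOF
}

plugin_install_steps rabbitmq_delayed_message_exchange-3.9.0.ez rabbitmq_delayed_message_exchange
```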
Note: When logging in to the management console from Windows, if it prompts "This is not a private connection, please enter your username and password", do the following inside the container:
Add an account: rabbitmqctl add_user admin 123
Set the role: rabbitmqctl set_user_tags admin administrator
Set permissions: rabbitmqctl set_permissions -p "/" admin ".*" ".*" ".*"
Log in again and the login should succeed (why the default credentials are not accepted is still unclear).
- 6. Install Redis
Note: skip if Redis is already installed or accessible.
Step 1: Pull the image
docker pull redis:latest
Step 2: Start
docker run -d -p 6379:6379 --restart=always redis:latest redis-server
- 7. Install Nacos
Note: skip if Nacos is already installed or accessible.
Step 1: Pull the image
docker pull nacos/nacos-server:1.4.1
Step 2: Start
docker run --env MODE=standalone --name nacos --restart=always -d -p 8848:8848 -e JVM_XMS=512m -e JVM_XMX=512m nacos/nacos-server:1.4.1
- 8. Install Sentinel
Note: skip if Sentinel is already installed or accessible.
Step 1: Pull the image
docker pull bladex/sentinel-dashboard
Step 2: Start
docker run --name sentinel-dashboard --restart=always -p 8858:8858 -d bladex/sentinel-dashboard:latest
- 9. Install Elasticsearch
Note: skip if Elasticsearch is already installed or accessible.
Step 1: Pull the image
docker pull elasticsearch:7.8.0
Step 2: Start
First create two directories and open up permissions on the data directory:
mkdir -p /mydata/elasticsearch/plugins
mkdir -p /mydata/elasticsearch/data
chmod 777 /mydata/elasticsearch/data
docker run -p 9200:9200 -p 9300:9300 --name elasticsearch --restart=always \
  -e "discovery.type=single-node" \
  -e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
  -v /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
  -v /mydata/elasticsearch/data:/usr/share/elasticsearch/data \
  -d elasticsearch:7.8.0
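Note that the command pins the heap by setting -Xms and -Xmx to the same value. A tiny helper to keep the two in sync if you resize later (512m is just the value used above):

```shell
#!/bin/sh
# Render an ES_JAVA_OPTS value with matching min/max heap,
# suitable for the -e ES_JAVA_OPTS=... flag above.
es_heap_opts() {
  printf '%s' "-Xms$1 -Xmx$1"
}

es_heap_opts 512m   # prints -Xms512m -Xmx512m
```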
Step 3: Install the IK Chinese analyzer (word segmenter)
1. Download elasticsearch-analysis-ik-7.8.0.zip
2. Upload and decompress: unzip elasticsearch-analysis-ik-7.8.0.zip -d ik-analyzer
3. Upload to es container: docker cp ./ik-analyzer a24eb9941759:/usr/share/elasticsearch/plugins
4. Restart es: docker restart a24eb9941759
Replace a24eb9941759 with your own Elasticsearch container ID when running these commands.
- 10. Install Kibana
Step 1: Pull the image
docker pull kibana:7.8.0
Step 2: Start
docker run --name kibana --restart=always -e ELASTICSEARCH_URL=http://101.42.15.103:9200 -p 5601:5601 -d kibana:7.8.0
Enter the container to edit the config: docker exec -it kibana /bin/bash
cd config
vi kibana.yml
elasticsearch.hosts: [ "http://101.42.15.103:9200" ]
Restart Kibana: docker restart 1dc0f78d78ad (replace with your own Kibana container ID)
Test: verify that the installed analyzer dictionary works.
GET /.kibana/_analyze
{
  "text": "I am Chinese",
  "analyzer": "ik_max_word"
}
- 11. Install Zipkin
Step 1: Pull the image
docker pull openzipkin/zipkin
Step 2: Start
docker run --name zipkin --restart=always -d -p 9411:9411 openzipkin/zipkin
- 12. Install MinIO
Note: skip if MinIO is already installed or accessible.
Step 1: Pull the image
docker pull minio/minio
Step 2: Start
docker run \
  -p 9000:9000 \
  -p 9001:9001 \
  --name minio \
  -d --restart=always \
  -e "MINIO_ROOT_USER=admin" \
  -e "MINIO_ROOT_PASSWORD=root" \
  -v /home/data:/data \
  -v /home/config:/root/.minio \
  minio/minio server /data --console-address ":9001"
Browser access: http://IP:9000/minio/login (with --console-address set as above, the web console is served on port 9001)
- 13. Install Logstash
Step 1: Pull the image
docker pull logstash:7.8.0
Step 2: Start
docker run --name logstash -p 5044:5044 --restart=always --link elasticsearch:es -v /mydata/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf -d logstash:7.8.0
You need to create /mydata/logstash/logstash.conf on the Linux server in advance:
logstash.conf:
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 5044
    codec => json_lines
  }
}
filter {
}
output {
  elasticsearch {
    hosts => "101.42.15.103:9200"
    index => "gmall-%{+YYYY.MM.dd}"
  }
}
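The index setting `gmall-%{+YYYY.MM.dd}` makes Logstash write one index per day. A sketch of the naming it produces (uses GNU date; the sample date is arbitrary):

```shell
#!/bin/sh
# Mirror Logstash's gmall-%{+YYYY.MM.dd} index naming for a given UTC date.
index_for() {
  printf 'gmall-%s\n' "$(date -u -d "$1" +%Y.%m.%d)"
}

index_for 2023-06-01   # prints gmall-2023.06.01
```

Daily indices keep each index small and make retention easy: old days can be dropped with a single index delete.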
Notice:
Stop all containers:
docker stop $(docker ps -aq)
Delete all containers:
docker rm $(docker ps -aq)
Delete all images:
docker rmi $(docker images -q)
Problem:
Docker container port mapping error
docker: Error response from daemon: driver failed programming external connectivity on endpoint lamp3 (46b7917c940f7358948e55ec2df69a4dec2c6c7071b002bd374e8dbf0d40022c): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 86 -j DNAT --to-destination 172.17.0.2:80 ! -i docker0: iptables: No chain/target/match by that name.
Solution:
The custom DOCKER chain created when the Docker service starts has been cleared; restarting Docker recreates it:
systemctl restart docker