One: Install docker. (Required on all servers)
- Install some necessary system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
- Add software source information
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
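The sed command uses '+' as the substitution delimiter so the slashes in the URLs need no escaping. A quick demo of the same substitution on a throwaway file (the baseurl line here is illustrative):

```shell
# Try the substitution on a temporary copy before touching the real repo file.
tmp=$(mktemp)
echo "baseurl=https://download.docker.com/linux/centos/7/x86_64/stable" > "$tmp"
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' "$tmp"
cat "$tmp"
```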
- Update and install Docker-CE
sudo yum makecache fast
sudo yum -y install docker-ce
- Start Docker service
sudo service docker start
- Set up auto-start at boot
systemctl enable docker
- test
docker version
- Configure the registry mirror accelerator. Note: configure this on every machine except the Harbor registry host.
cat > /etc/docker/daemon.json <<-EOF
{
  "registry-mirrors": [
    "http://74f21445.m.daocloud.io",
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn"
  ],
  "insecure-registries": ["node01"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
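A malformed /etc/docker/daemon.json prevents the Docker daemon from starting, so it is worth validating the JSON before installing it. A minimal sketch using python3 (present on most CentOS hosts); the `install` line is left commented as it needs root:

```shell
# Write the candidate config to a temp file and validate it before installing.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "registry-mirrors": [
    "http://74f21445.m.daocloud.io",
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn"
  ],
  "insecure-registries": ["node01"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool "$tmp" > /dev/null && echo "daemon.json is valid JSON"
# sudo install -m 644 "$tmp" /etc/docker/daemon.json
```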
- Restart the docker service
systemctl restart docker
- Verify
docker info
Two: Install cri-docker. (Required on all servers)
1. Download and install.
mkdir -p /data/softs
cd /data/softs
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.2/cri-dockerd-0.3.2.amd64.tgz
- Extract the software
tar xf cri-dockerd-0.3.2.amd64.tgz
mv cri-dockerd/cri-dockerd /usr/local/bin/
- Check the effect
cri-dockerd --version
- Create cri-docker.service
cat > /etc/systemd/system/cri-docker.service <<-EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/local/bin/cri-dockerd --container-runtime-endpoint fd://
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
- Create cri-docker.socket
cat > /etc/systemd/system/cri-docker.socket <<-EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF
- Start cri-docker
sudo systemctl daemon-reload
sudo systemctl start cri-docker
sudo systemctl status cri-docker
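The commands above only start cri-docker for the current session. To mirror the Docker setup in section One and have it survive reboots, the units can also be enabled (standard systemctl usage):

```shell
# Enable both the socket and the service so cri-dockerd comes up at boot.
sudo systemctl enable cri-docker.socket
sudo systemctl enable cri-docker
systemctl is-enabled cri-docker
```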
Three: Install harbor. (On one designated server)
- Install docker-compose.
yum -y install docker-compose
- Download software.
mkdir -p /data/{softs,server} && cd /data/softs
wget https://ghproxy.com/https://github.com/goharbor/harbor/releases/download/v2.5.0/harbor-offline-installer-v2.5.0.tgz
tar -zxvf harbor-offline-installer-v2.5.0.tgz
mv harbor /data/server/harbor
cd /data/server/harbor/
- Load the image.
docker load < harbor.v2.5.0.tar.gz
docker images
- Back up and edit the configuration.
cp harbor.yml.tmpl harbor.yml
vim harbor.yml
1. Change the hostname
2. Disable the https service
3. Change the admin password
4. Set the data path
./prepare
./install.sh
docker-compose ps
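Once docker-compose ps shows all containers up, Harbor's own health endpoint can be queried as a smoke test; a sketch assuming the hostname node01 set in harbor.yml and plain-http access:

```shell
# Query Harbor's health API; each component should report "healthy".
curl -s http://node01/api/v2.0/health
```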
- Customize the service startup file.
docker-compose down
vim /etc/systemd/system/harbor.service
[Unit]
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service
Requires=docker.service
Documentation=http://github.com/vmware/harbor

[Service]
Type=simple
Restart=on-failure
RestartSec=5
# Note: adjust the paths below to match the Harbor installation location
ExecStart=/usr/bin/docker-compose --file /data/server/harbor/docker-compose.yml up
ExecStop=/usr/bin/docker-compose --file /data/server/harbor/docker-compose.yml down

[Install]
WantedBy=multi-user.target
Load the service configuration file
systemctl daemon-reload
Start the service
systemctl start harbor
Check the status
systemctl status harbor
Set up auto-start at boot
systemctl enable harbor
docker-compose ps
Four: Customize the warehouse through the web UI.
- Create a new user.
- New Project.
- How to submit an image.
Step one: Tag the image.
Format: docker tag <image name> <harbor address>/<project name>/<image name>:<version>
docker tag aaa node01/zzy/aaa:v01
Step 2: Log in to harbor.
Step 3: Submit the image.
docker push <harbor address>/<project name>/<image name>:<version>
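The tag format can be captured in a tiny helper function (the names are illustrative, matching the node01/zzy examples used in this section):

```shell
# Compose a Harbor image reference from registry, project, image, and version.
make_ref() {
  local registry=$1 project=$2 image=$3 version=$4
  echo "${registry}/${project}/${image}:${version}"
}
ref=$(make_ref node01 zzy nginx 1.25.2)
echo "$ref"
# docker tag nginx:latest "$ref" && docker push "$ref"
```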
Example: perform the following verification on all nodes.
https://blog.csdn.net/qq_47354826/article/details/115465461
On node03:
docker pull nginx
docker pull tomcat
docker images
1. Tag
docker history nginx:latest
docker tag nginx:latest node01/zzy/nginx:1.25.2
docker images
2. Log in
docker login node01
3. Push
docker push node01/zzy/nginx:1.25.2
Five: Build k8s.
- Software source customization
Customize Alibaba Cloud's software source for kubernetes (on all three machines)
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
- install software.
Execute the command on both the master and worker nodes.
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
On the master node.
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
- Master node initialization
kubeadm init --apiserver-advertise-address=10.0.0.247 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
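After copying the kubeconfig, kubectl should be able to reach the API server. A quick check (until the network plug-in from the network step is installed, the master typically shows NotReady):

```shell
# Confirm kubectl can talk to the new cluster.
kubectl get nodes
# Expect the master listed with STATUS NotReady until a CNI plug-in is applied.
```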
- Worker nodes join the cluster
https://blog.csdn.net/qq_39261894/article/details/109013696
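The join command printed at the end of kubeadm init expires after 24 hours; if it has been lost or has expired, a fresh one can be generated on the master:

```shell
# On the master: print a new kubeadm join command for worker nodes.
kubeadm token create --print-join-command
# Run the printed command on each worker node.
```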
- Enable autocompletion of commands
yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
vim .bashrc
Add the following two lines:
source <(kubectl completion bash)
source <(kubeadm completion bash)
Then reload:
source .bashrc
- network
Download the yml of the flannel plug-in
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Modify the image warehouse address in kube-flannel.yml to be a domestic source
sed -i 's/quay.io/quay-mirror.qiniu.com/g' kube-flannel.yml
Install network plug-in
kubectl apply -f kube-flannel.yml
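After applying the manifest, the flannel pods and node status can be verified (the namespace depends on the flannel version, so the grep below searches all namespaces):

```shell
# Flannel runs as a DaemonSet; one pod should appear per node.
kubectl get pods --all-namespaces | grep flannel
kubectl get nodes
# All nodes should move to STATUS Ready once flannel is running on each of them.
```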