Table of Contents
1. Environmental planning
2. Things to note:
3. Environment preparation:
1. Turn off firewall rules, turn off selinux, and turn off swap:
2. Modify the host name
3. Modify the hosts file on all nodes:
4. Time synchronization of all nodes:
5. All nodes implement Linux resource limits:
6. Upgrade the kernel on all nodes (optional)
7. Adjust kernel parameters:
8. Load the ip_vs module:
4. Install docker on all nodes:
1. Installation:
2. Change daemon.json configuration:
5. Install kubeadm, kubelet and kubectl:
1. Define kubernetes source:
2. Configure Kubelet to use Alibaba Cloud’s pause image:
3. Start kubelet automatically after booting:
6. Installation and configuration of high-availability components:
1. Deploy Haproxy on all master nodes:
2. Configure haproxy proxy:
3. Deploy keepalived on all master nodes:
4. Configure keepalived high availability:
5. Write a health detection script:
6. Start the high-availability proxy cluster:
7. Deploy K8S cluster:
1. Set the cluster initialization configuration file on the master01 node:
2. Update the cluster initialization configuration file:
3. All nodes pull the image:
4. Initialize the master01 node:
5. Modify the controller-manager and scheduler configuration files:
6. Deploy the network plug-in flannel:
7. All nodes join the cluster:
7.1 All master nodes join the cluster:
7.2 Worker nodes join the cluster:
8. View cluster information:
8. Install the Harbor private registry:
1. Install docker:
2. Modify the configuration file on all worker nodes and add the private registry configuration
3. Install Harbor:
4. Generate certificate:
5. Visit:
1. Environmental Planning
| Server type | IP address |
| --- | --- |
| master01 | 192.168.30.100 |
| master02 | 192.168.30.200 |
| master03 | 192.168.30.203 |
| node01 | 192.168.30.14 |
| node02 | 192.168.30.15 |
| hub.dhj.com | 192.168.30.104 |
2. Things to note:
The number of CPU cores on each master node must be at least 2
The latest version is not necessarily better: compared with older releases, the core features are stable, but new features and interfaces tend to be less stable
Once you have learned the high-availability deployment of one version, other versions follow very similar steps
Upgrade the hosts to CentOS 7.9 where possible
Upgrade the kernel to 4.19+, a stable kernel series
When choosing a k8s version, prefer a patch release of 5 or higher, e.g. 1.xx.5 (these are generally the more stable releases)
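As a quick sanity check against the notes above (a minimal sketch; the expected values depend on your own hosts), you can verify the CPU count, OS release, and kernel version on each node:

nproc                      #Number of CPU cores; each master needs at least 2
cat /etc/redhat-release    #OS release, ideally CentOS 7.9
uname -r                   #Kernel version, ideally 4.19+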
3. Environment preparation:
1. Turn off firewall rules, turn off selinux, and turn off swap:
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
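To confirm the changes took effect, a simple verification (not part of the original steps):

systemctl is-active firewalld    #Should print "inactive"
getenforce                       #Should print "Permissive" now, or "Disabled" after a reboot
free -h | grep -i swap           #The Swap line should show 0B total and 0B used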
2. Modify the host name
#Run the corresponding command on each node:
hostnamectl set-hostname master01
hostnamectl set-hostname master02
hostnamectl set-hostname master03
hostnamectl set-hostname node01
hostnamectl set-hostname node02
3. Modify the hosts file on all nodes:
cat >> /etc/hosts << EOF
192.168.30.100 master01
192.168.30.200 master02
192.168.30.203 master03
192.168.30.14 node01
192.168.30.15 node02
EOF
4. Time synchronization of all nodes:
yum -y install ntpdate
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
ntpdate time2.aliyun.com
systemctl enable --now crond

crontab -e
#Add the following schedule in the crontab editor:
*/30 * * * * /usr/sbin/ntpdate time2.aliyun.com
5. All nodes implement Linux resource limits:
vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
6. Upgrade the kernel on all nodes (optional)
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm -O /opt/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm -O /opt/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
cd /opt/
yum localinstall -y kernel-ml*

#Change the kernel boot entry
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
grubby --default-kernel
reboot
7. Adjust kernel parameters:
cat > /etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

#Apply the parameters
sysctl --system
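After sysctl --system, you can spot-check a couple of the key values (note that the net.bridge.* parameters require the br_netfilter module to be loaded):

sysctl net.ipv4.ip_forward                   #Expect: net.ipv4.ip_forward = 1
sysctl net.bridge.bridge-nf-call-iptables    #Expect: net.bridge.bridge-nf-call-iptables = 1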
8. Load ip_vs module:
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs | grep -o "^[^.]*"); do
  echo $i
  /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i
done
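To verify that the modules were loaded:

lsmod | grep ip_vs           #Should list ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, etc.
lsmod | grep nf_conntrack    #The connection-tracking module that ip_vs relies on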
4. Install docker on all nodes:
1. Installation:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
2. Change daemon.json configuration:
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "500m",
    "max-file": "3"
  }
}
EOF
#Set docker's cgroup driver to systemd, keeping it consistent with kubelet

systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service
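After restarting docker, confirm that the cgroup driver change was applied:

docker info | grep -i "cgroup driver"    #Should report: Cgroup Driver: systemd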
5. Install kubeadm, kubelet and kubectl:
Install kubeadm, kubelet and kubectl on all nodes
1. Define kubernetes source:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.20.15 kubeadm-1.20.15 kubectl-1.20.15
2. Configure Kubelet to use Alibaba Cloud’s pause image:
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF
3. Start kubelet automatically at boot:
systemctl enable --now kubelet
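At this point kubelet will keep restarting in a short loop; this is expected, because it has no cluster configuration until kubeadm init (or kubeadm join) runs. You can still confirm the unit is enabled:

systemctl is-enabled kubelet    #Should print "enabled"
systemctl status kubelet        #"activating (auto-restart)" is normal before initialization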
6. Installation and configuration of high-availability components:
1. Deploy Haproxy on all master nodes:
yum -y install haproxy
2. Configure haproxy proxy:
cat > /etc/haproxy/haproxy.cfg << EOF
global
    log         127.0.0.1 local0 info
    log         127.0.0.1 local1 warning
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                 tcp
    log                  global
    option               tcplog
    option               dontlognull
    option               redispatch
    retries              3
    timeout queue        1m
    timeout connect      10s
    timeout client       1m
    timeout server       1m
    timeout check        10s
    maxconn              3000

frontend monitor-in
    bind *:33305
    mode http
    option httplog
    monitor-uri /monitor

frontend k8s-master
    bind *:16443    #If haproxy runs on the same machine as the apiserver, listening on 6443 would conflict, so use a different port
    mode tcp
    option tcplog
    default_backend k8s-master

backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    server k8s-master1 192.168.30.100:6443 check inter 10000 fall 2 rise 2 weight 1
    server k8s-master2 192.168.30.200:6443 check inter 10000 fall 2 rise 2 weight 1
    server k8s-master3 192.168.30.203:6443 check inter 10000 fall 2 rise 2 weight 1
EOF
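Before starting haproxy, it is worth validating the configuration syntax with haproxy's built-in check mode:

haproxy -c -f /etc/haproxy/haproxy.cfg    #Prints "Configuration file is valid" on success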
3. Deploy keepalived on all master nodes:
yum -y install keepalived
4. Configure keepalived for high availability:
cd /etc/keepalived/
vim keepalived.conf

! Configuration File for keepalived
global_defs {
    router_id LVS_HA1
}
vrrp_script chk_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.30.10    #Set the VIP address
    }
    track_script {
        chk_haproxy
    }
}
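The configuration above is for master01. On master02 and master03 the VRRP instance should run as a backup with a lower priority so the VIP can fail over; a minimal sketch of the differences (the router_id value here is an assumption, only state and priority must actually differ):

#Example for master02 (master03 is similar, e.g. with priority 80):
global_defs {
    router_id LVS_HA2    #Assumed naming; just keep it unique per node
}
vrrp_instance VI_1 {
    state BACKUP            #Backup nodes use BACKUP instead of MASTER
    interface ens33
    virtual_router_id 51    #Must match on all nodes in the VRRP group
    priority 90             #Lower than the MASTER's 100
    ...
}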
5. Write a health detection script:
vim check_haproxy.sh

#!/bin/bash
#killall -0 sends signal 0, which only tests whether a haproxy process exists
if ! killall -0 haproxy; then
    systemctl stop keepalived
fi

chmod +x check_haproxy.sh
6. Start the high-availability proxy cluster:
systemctl enable --now haproxy
systemctl enable --now keepalived
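After both services are started, the VIP should be bound on whichever node currently holds the MASTER state (master01 at first). A simple check:

ip addr show ens33 | grep 192.168.30.10    #The VIP appears on the interface of the current MASTER node
systemctl status haproxy keepalived        #Both services should be active (running)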
7. Deploy K8S cluster:
1. Set the cluster initialization configuration file on the master01 node:
kubeadm config print init-defaults > /opt/kubeadm-config.yaml
cd /opt/
vim kubeadm-config.yaml
...
11 localAPIEndpoint:
12   advertiseAddress: 192.168.30.100    #Specify the IP address of the current master node
13   bindPort: 6443
21 apiServer:
22   certSANs:    #Add a certSANs list under the apiServer section with the IP addresses of all master nodes and the cluster VIP
23   - 192.168.30.10
24   - 192.168.30.100
25   - 192.168.30.200
26   - 192.168.30.203
30 clusterName: kubernetes
31 controlPlaneEndpoint: "192.168.30.10:16443"    #Specify the cluster VIP address and the haproxy listening port
32 controllerManager: {}
38 imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers    #Specify the image download address
39 kind: ClusterConfiguration
40 kubernetesVersion: v1.20.15    #Specify the kubernetes version number
41 networking:
42   dnsDomain: cluster.local
43   podSubnet: "10.244.0.0/16"    #Specify the pod network segment; 10.244.0.0/16 matches flannel's default segment
44   serviceSubnet: 10.96.0.0/16    #Specify the service network segment
45 scheduler: {}

#Add the following content at the end:
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs    #Change the default kube-proxy scheduling mode to ipvs
2. Update the cluster initialization configuration file:
kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml
3. All nodes pull images:
#Copy the yaml configuration file to the other hosts, then pull the images on every node via the configuration file
for i in master02 master03 node01 node02; do scp /opt/new.yaml $i:/opt/; done
kubeadm config images pull --config /opt/new.yaml
4. Initialize the master01 node:
kubeadm init --config new.yaml --upload-certs | tee kubeadm-init.log
After initialization completes, the following information is printed; it contains the commands used to join the k8s cluster:
#Tips:
.........
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
#Run these commands as a regular user on the master01 node.

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf
#If you are the root user, run this command on the master01 node instead. Either option works; pick one.

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:
#Master nodes join with this command; record it!

  kubeadm join 192.168.30.10:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98 \
    --control-plane --certificate-key 0f2a7ff2c46ec172f834e237fcca8a02e7c29500746594c25d995b78c92dde96

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:
#Worker nodes join with this command; record it!

kubeadm join 192.168.30.10:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98
5. Modify controller-manager and scheduler configuration files:
vim /etc/kubernetes/manifests/kube-scheduler.yaml
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
...
#- --port=0    #Search for port=0 and comment out that line in both files

systemctl restart kubelet

#Apply this change on all master nodes
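Once --port=0 is commented out and kubelet restarted on every master, the component status should come back healthy (kubectl get cs is deprecated in 1.20 but still works); the output should look something like:

kubectl get cs
#NAME                 STATUS    MESSAGE             ERROR
#scheduler            Healthy   ok
#controller-manager   Healthy   ok
#etcd-0               Healthy   {"health":"true"}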
6. Deploy network plug-in flannel:
All nodes upload the flannel image flannel.tar and the CNI plug-in package cni-plugins-linux-amd64-v0.8.6.tgz to the /opt directory; the master nodes also upload the kube-flannel.yml file.
cd /opt
docker load < flannel.tar
mv /opt/cni /opt/cni_bak
mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin    #Pay attention to the version you are using
kubectl apply -f kube-flannel.yml
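To confirm flannel came up (a simple check; the namespace and pod names depend on the kube-flannel.yml version you uploaded, older manifests deploy into kube-system):

kubectl get pods -A | grep flannel    #One kube-flannel-ds pod per node, all Running
kubectl get nodes                     #Nodes move from NotReady to Ready once the CNI works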
7. All nodes join the cluster:
7.1 All master nodes join the cluster:
Use the token and hash from your own kubeadm init output:
kubeadm join 192.168.30.10:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98 \
    --control-plane --certificate-key 0f2a7ff2c46ec172f834e237fcca8a02e7c29500746594c25d995b78c92dde96

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
7.2 Worker nodes join the cluster:
kubeadm join 192.168.30.10:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98
8. View cluster information:
#View cluster information on master01
kubectl get nodes
kubectl get pod -A
8. Install the Harbor private registry:
Provision a new server with the IP address 192.168.30.104.
1. Install docker:
//Modify the host name
hostnamectl set-hostname hub.dhj.com

//Add the host name mapping on all nodes
echo '192.168.30.104 hub.dhj.com' >> /etc/hosts

//Install docker
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://hub.dhj.com"]
}
EOF

systemctl start docker
systemctl enable docker
2. Modify the configuration file on all worker nodes and add the private registry configuration
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://hub.dhj.com"]
}
EOF

systemctl daemon-reload
systemctl restart docker
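A quick way to confirm the new daemon.json was picked up (the exact output layout varies by docker version):

docker info | grep -A 3 -i "insecure registries"    #hub.dhj.com should appear in the list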
3. Install Harbor:
cd /opt/
#Upload the harbor-offline-installer-v1.2.2.tgz and docker-compose files to the /opt directory

cp docker-compose /usr/local/bin
chmod +x /usr/local/bin/docker-compose
#Copy the docker-compose orchestration tool to the bin directory and add execute permission

tar -zxvf harbor-offline-installer-v1.2.2.tgz    #Unpack the harbor package
cd harbor
vim harbor.cfg
5  hostname = hub.dhj.com
9  ui_url_protocol = https
24 ssl_cert = /data/cert/server.crt
25 ssl_cert_key = /data/cert/server.key
59 harbor_admin_password = Harbor12345
4. Generate certificate:
mkdir -p /data/cert    #Create the certificate directory
cd /data/cert

openssl genrsa -des3 -out server.key 2048    #Generate the private key
//Enter the password twice: 123456

openssl req -new -key server.key -out server.csr    #Generate the certificate signing request file
//Enter the private key password: 123456
//Enter the country name: CN
//Enter the province name: BJ
//Enter the city name: BJ
//Enter the organization name: www
//Enter the organizational unit name: www
//Enter the domain name: hub.dhj.com
//Enter the administrator's email: [email protected]
//For everything else, just press Enter

cp server.key server.key.org    #Back up the private key
openssl rsa -in server.key.org -out server.key    #Clear the private key password (123456) and regenerate the key file, overwriting the password-protected one

openssl x509 -req -days 1000 -in server.csr -signkey server.key -out server.crt    #Sign the certificate

chmod +x /data/cert/*    #Add execute permission to all certificate files

cd /opt/harbor/
./install.sh    #Run the installation script
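Once install.sh finishes, a quick way to confirm Harbor is up (assuming docker-compose is run from the /opt/harbor directory, where Harbor's docker-compose.yml lives):

cd /opt/harbor
docker-compose ps    #All Harbor containers should show a State of "Up"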
5. Visit:
Use a local browser to visit: https://hub.dhj.com
Add Exception -> Confirm Security Exception
Username: admin
Password: Harbor12345
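To verify the registry end to end from any node that has the insecure-registries entry (a minimal sketch; nginx is just an example image, and "library" is Harbor's default public project):

docker login -u admin -p Harbor12345 hub.dhj.com       #Log in with the admin account set in harbor.cfg
docker pull nginx
docker tag nginx:latest hub.dhj.com/library/nginx:v1    #Retag into the default "library" project
docker push hub.dhj.com/library/nginx:v1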