Table of Contents
1. Basic environment configuration (done on each node)
1. hosts resolution
2. Firewall and SELinux
3. Install basic software and configure time synchronization
4. Disable the swap partition
5. Change kernel parameters
6. Configure ipvs
7. Install k8s
(1) Configure the image repository and install the software
(2) Configure the cgroup driver for kubelet
2. Download containerd (done on each node)
1. Download basic software
2. Add software repository information
3. Change the docker-ce.repo file
4. Download containerd and initialize the configuration
5. Change the cgroup driver for containerd
6. Modify the image source to Alibaba
7. Configure crictl and pull an image for verification
3. Master node initialization (done only on the master)
1. Generate and modify the configuration file
2. Check whether the image addresses in /etc/containerd/config.toml point to Alibaba's address
3. View the required images and pull them
4. Initialization
(1) Initialize with the generated kubeadm.yml file
(2) Pay attention to an error
(3) Operations to perform after initialization
4. Node nodes join the master
1. Join with the command printed after the master initializes successfully
2. node1/node2 join
3. View on the master
4. Pay attention to error reports
(1) Files under /etc/kubernetes already exist, usually because the node has already joined; delete the directory contents or reset with kubeadm reset
(2) A port is already in use; kill the occupying process
5. Install the network plug-in (master does it; nodes are optional)
1. Obtain and modify the file
2. Apply the file and verify
6. Configure kubectl command completion
| IP | Hostname |
|---|---|
| 192.168.2.190 | master |
| 192.168.2.191 | node2-191.com |
| 192.168.2.193 | node4-193.com |
1. Basic environment configuration (done on each node)
1. hosts resolution
[root@master ~]# tail -3 /etc/hosts
192.168.2.190 master
192.168.2.191 node2-191.com
192.168.2.193 node4-193.com
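For repeatable setups, the same three records can be appended in one shot with a heredoc. A minimal sketch, rehearsed on a scratch file created with mktemp rather than the real /etc/hosts:

```shell
# Rehearse the hosts additions on a scratch file; point HOSTS_FILE at
# /etc/hosts (as root) to apply them for real.
HOSTS_FILE="$(mktemp)"
cat >> "$HOSTS_FILE" <<EOF
192.168.2.190 master
192.168.2.191 node2-191.com
192.168.2.193 node4-193.com
EOF
tail -3 "$HOSTS_FILE"
```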
2. Firewall and selinux
[root@master ~]# systemctl status firewalld.service; getenforce
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
Disabled

# temporarily disable
systemctl stop firewalld
setenforce 0
# permanently disable
systemctl disable firewalld
sed -i '/^SELINUX=/ c SELINUX=disabled' /etc/selinux/config
3. Install basic software and configure time synchronization
[root@master ~]# yum install -y wget tree bash-completion lrzsz psmisc net-tools vim chrony
[root@master ~]# vim /etc/chrony.conf
:3,6 s/^/#          # comment out the original server lines
server ntp1.aliyun.com iburst
[root@node1-190 ~]# systemctl restart chronyd
[root@node1-190 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 120.25.115.20                 2   8   341   431   -357us[ -771us] +/-   20ms
4. Disable swap partition
[root@master ~]# swapoff -a && sed -i 's/.*swap.*/# &/' /etc/fstab && free -m
              total        used        free      shared  buff/cache   available
Mem:          10376         943        8875          11         557        9178
Swap:             0           0           0
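The sed expression above comments out every line that mentions swap. It can be rehearsed safely on a scratch copy of fstab before touching the real file (the UUID line below is a made-up example):

```shell
# Rehearse the swap-commenting sed on a scratch fstab copy; only the line
# mentioning swap gets a leading '# '.
FSTAB="$(mktemp)"
printf '%s\n' 'UUID=abcd-1234 / xfs defaults 0 0' \
              '/dev/mapper/centos-swap swap swap defaults 0 0' > "$FSTAB"
sed -i 's/.*swap.*/# &/' "$FSTAB"
cat "$FSTAB"
```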
5. Change kernel parameters
[root@node1-190 ~]# cat >> /etc/sysctl.d/k8s.conf << EOF
vm.swappiness=0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@node1-190 ~]# modprobe br_netfilter && modprobe overlay && sysctl -p /etc/sysctl.d/k8s.conf
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
6. Configure ipvs
[root@node1-190 ~]# yum install ipset ipvsadm -y
[root@node1-190 ~]# cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
# Add execute permission to the script, run it, and verify that the modules loaded
[root@node1-190 ~]# chmod +x /etc/sysconfig/modules/ipvs.modules && /bin/bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4      15053  2
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139264  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
7. Install k8s
(1) Configure the image repository and install the software
[root@master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

[root@master ~]# yum install -y kubeadm kubelet kubectl
[root@master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.2", GitCommit:"89a4ea3e1e4ddd7f7572286090359983e0387b2f", GitTreeState:"clean", BuildDate:"2023-09-13T09:34:32Z", GoVersion:"go1.20.8", Compiler:"gc", Platform:"linux/amd64"}
(2) Configure cgroup on kubelet
[root@master ~]# cat <<EOF > /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
EOF
[root@master ~]# systemctl start kubelet
[root@master ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
2. Download containerd (do it on each node)
1. Download basic software
[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
2. Add software repository information
[root@master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
3. Change the docker-ce.repo file
[root@master ~]# sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo

4. Download containerd and initialize the configuration

[root@master ~]# yum install -y containerd
[root@master ~]# containerd config default | tee /etc/containerd/config.toml
5. Change cgroup on containerd
[root@master ~]# sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
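The effect of this substitution can be checked on a scratch fragment of config.toml before editing the real file. The sketch below uses an equivalent, unescaped form of the same sed expression (CFG is a temp file standing in for /etc/containerd/config.toml):

```shell
# Rehearse the SystemdCgroup flip on a scratch fragment of config.toml.
CFG="$(mktemp)"
printf '%s\n' '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
              '            SystemdCgroup = false' > "$CFG"
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' "$CFG"
grep SystemdCgroup "$CFG"
```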
6. Modify the mirror source to Alibaba
[root@master ~]# sed -i "s#registry.k8s.io#registry.aliyuncs.com/google_containers#g" /etc/containerd/config.toml
7. Configure crictl and pull the image for verification
[root@master ~]# crictl --version
crictl version v1.26.0

[root@master ~]# cat <<EOF | tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

[root@master ~]# systemctl daemon-reload && systemctl start containerd && systemctl enable containerd
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service.
[root@master ~]# crictl pull nginx
Image is up to date for sha256:61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
[root@master ~]# crictl images
IMAGE                     TAG      IMAGE ID        SIZE
docker.io/library/nginx   latest   61395b4c586da   70.5MB
3. Master node initialization (only done on the master)
1. Generate and modify configuration files
[root@master ~]# kubeadm config print init-defaults > kubeadm.yml
[root@master ~]# ll
total 8
-rw-r--r--  1 root root    0 Jul 23 09:59 abc
-rw-------. 1 root root 1386 Jul 23 09:02 anaconda-ks.cfg
-rw-r--r--  1 root root  807 Sep 27 16:18 kubeadm.yml
[root@master ~]# vim kubeadm.yml
Modify advertiseAddress to the IP of your master host
criSocket keeps the default (containerd)
Change name to the hostname of your master
Change imageRepository to Alibaba's address: registry.aliyuncs.com/google_containers
Change kubernetesVersion to the version you actually installed
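Put together, the edited fields of kubeadm.yml look roughly like the fragment below. This is a sketch, not the full file: field names come from kubeadm's init-defaults output, the IP and hostname are the ones from the table at the top, and everything else in the generated file stays as generated.

```yaml
localAPIEndpoint:
  advertiseAddress: 192.168.2.190                          # IP of the master host
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock    # default, containerd
  name: master                                             # hostname of the master
imageRepository: registry.aliyuncs.com/google_containers   # Alibaba's address
kubernetesVersion: 1.28.2                                  # version actually installed
```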
2. Check whether the image addresses in /etc/containerd/config.toml point to Alibaba's address
[root@master ~]# vim /etc/containerd/config.toml
[root@master ~]# systemctl restart containerd
3. View the required image and pull it
[root@master ~]# kubeadm config images list --config kubeadm.yml

[root@master ~]# kubeadm config images pull --config kubeadm.yml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1
4. Initialization
(1) Initialize through the generated kubeadm.yml file
[root@master ~]# kubeadm init --config=kubeadm.yml --upload-certs --v=6
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.190:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:0dbb20609e31e4fe7d8ec76f07e6efd1f56965c5f8aa5d5ae5f1d6e9e958ffbe
(2) Pay attention to an error:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
Solution:
# Edit this file and reload the configuration
[root@master ~]# vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
[root@master net]# modprobe br_netfilter    # load the module
[root@master net]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
(3) Operations that need to be performed after initialization
# If you operate on the master as a regular user
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
# If you operate on the master as root
[root@master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
4. Node nodes join the master
1. Join according to the command after the master is successfully initialized
# You can regenerate this later with: kubeadm token create --print-join-command
kubeadm join 192.168.2.190:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:0dbb20609e31e4fe7d8ec76f07e6efd1f56965c5f8aa5d5ae5f1d6e9e958ffbe
2. node1/node2 join
[root@node2-191 ~]# kubeadm join 192.168.2.190:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:3e56e3aa62b5835b6ed0d16832a4a13d1154ec09fe9c4f82bff9eaaaee2755c2
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node4-193 ~]# kubeadm join 192.168.2.190:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:3e56e3aa62b5835b6ed0d16832a4a13d1154ec09fe9c4f82bff9eaaaee2755c2
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
3. View on master
[root@master ~]# kubectl get nodes
NAME            STATUS   ROLES           AGE     VERSION
master          Ready    control-plane   7m32s   v1.28.2
node2-191.com   Ready    <none>          54s     v1.28.2
node4-193.com   Ready    <none>          11s     v1.28.2
4. Pay attention to error reports
(1) Files under /etc/kubernetes already exist. This usually means the node has already joined; delete the contents of that directory, or reset with kubeadm reset:
[root@node4-193 ~]# rm -rf /etc/kubernetes/*
# or
[root@node4-193 ~]# kubeadm reset
(2) A port is already in use; find and kill the occupying process.
5. Install the network plug-in (master does it; nodes are optional)
1. Get and modify files
[root@master ~]# wget --no-check-certificate https://projectcalico.docs.tigera.io/archive/v3.25/manifests/calico.yaml
[root@master ~]# vim calico.yaml
(1) Find the CLUSTER_TYPE line, add two lines below it, and replace ens33 with your own NIC name
(2) Uncomment this part and modify the address
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
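Assuming the NIC is ens33 as in step (1), the relevant env entries in calico.yaml's calico-node container end up looking like the fragment below. This is a sketch of just those entries, not the whole manifest; the names are as they appear in the v3.25 calico.yaml, and the CIDR must match your cluster's pod subnet.

```yaml
# In the calico-node DaemonSet's env list:
- name: CLUSTER_TYPE
  value: "k8s,bgp"
- name: IP_AUTODETECTION_METHOD   # the two added lines; pin detection to your NIC
  value: "interface=ens33"
- name: CALICO_IPV4POOL_CIDR      # uncommented; must match the pod subnet
  value: "10.244.0.0/16"
```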
2. Apply the file and verify
[root@master ~]# kubectl apply -f calico.yaml
[root@master ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-658d97c59c-k27lr   1/1     Running   0          18s
kube-system   calico-node-bzq6k                          1/1     Running   0          18s
kube-system   calico-node-dcb9c                          1/1     Running   0          18s
kube-system   calico-node-v97ll                          1/1     Running   0          18s
kube-system   coredns-66f779496c-nfxfr                   1/1     Running   0          4m9s
kube-system   coredns-66f779496c-q8s6j                   1/1     Running   0          4m9s
kube-system   etcd-k8s-master                            1/1     Running   12         4m16s
kube-system   kube-apiserver-k8s-master                  1/1     Running   12         4m16s
kube-system   kube-controller-manager-k8s-master         1/1     Running   13         4m16s
kube-system   kube-proxy-7gsls                           1/1     Running   0          4m10s
kube-system   kube-proxy-szdqz                           1/1     Running   0          2m54s
kube-system   kube-proxy-wgrpb                           1/1     Running   0          2m58s
kube-system   kube-scheduler-k8s-master                  1/1     Running   13         4m16s
6. Configure kubectl command completion
[root@k8s-master ~]# yum install -y bash-completion
[root@k8s-master ~]# source /usr/share/bash-completion/bash_completion
[root@k8s-master ~]# source <(kubectl completion bash)
[root@k8s-master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
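An optional extra, not part of the original steps: a short `k` alias wired into the same completion machinery. `__start_kubectl` is the entry function that `kubectl completion bash` defines, so the alias completes exactly like `kubectl` does.

```shell
# Optional: add a 'k' alias for kubectl and hook it into bash completion.
echo 'alias k=kubectl' >> "$HOME/.bashrc"
echo 'complete -o default -F __start_kubectl k' >> "$HOME/.bashrc"
tail -2 "$HOME/.bashrc"
```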