Deploying k8s in an offline CentOS 7 environment
- Prepare the machines
- Install the prerequisite environment (executed on each machine)
  - Basic environment
  - Pass bridged IPv4 traffic to iptables
  - Docker environment
- Install the k8s core components kubectl, kubeadm, kubelet (executed on all machines)
- Initialize the master node (executed on the master node)
  - init master node
- Initialize the worker nodes
Prepare the machines
- Provision three machines that can reach one another over the intranet.
- Do not use localhost as any machine's hostname; hostnames must not contain underscores, dots, or uppercase letters.
Install the prerequisite environment (executed on each machine)
Basic environment
- Turn off the firewall. If it is a cloud server, open the required ports via a security group policy instead:
systemctl stop firewalld
systemctl disable firewalld
- Modify the hostname (run the matching command on each machine):
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2
- View the result of the change:
hostnamectl status
- Set hostname resolution:
echo "127.0.0.1 $(hostname)" >> /etc/hosts
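The scp step near the end of this guide addresses other machines by hostname, so it helps if every node can also resolve its peers. A minimal sketch, assuming the master sits at 10.170.11.8 (the address used in kubeadm init below) and placeholder addresses for the two workers; substitute your own intranet IPs:
# Hypothetical intranet addresses; replace with your own
cat >> /etc/hosts <<'EOF'
10.170.11.8  master
10.170.11.9  node1
10.170.11.10 node2
EOF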
- Turn off selinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
- Turn off swap:
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
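kubelet refuses to start while swap is active, so it is worth confirming the change took effect before moving on:
# The Swap line should show 0 used/total
free -m
# The swap entry in fstab should now be commented out
grep swap /etc/fstab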
Pass bridged IPv4 traffic to iptables
- Modify /etc/sysctl.conf:
# If the keys are already configured, modify them in place
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf

# If the keys are not present, append them
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
- Apply the settings:
sysctl -p
- Check the settings:
sysctl -a | grep call
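On a fresh CentOS 7 install the net.bridge.bridge-nf-call-* keys only exist once the br_netfilter kernel module is loaded, so sysctl -p may complain about unknown keys. A sketch of loading the module now and on every boot:
# Load the module immediately
modprobe br_netfilter
# Have systemd-modules-load load it at boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf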
Docker environment
- Install docker-ce. Baidu network disk link: https://pan.baidu.com/s/1-G2onlwF1uYPJTdByruD4g
Extraction code: 71gm
- Upload the archive to the server and decompress it:
tar -zxvf docker-20.10.16.tgz
cp docker/* /usr/bin/
- Create docker.service:
vi /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
# Open the remote connection
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
- Start docker:
systemctl start docker
systemctl enable docker
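Note that -H tcp://0.0.0.0:2375 exposes the Docker daemon without authentication; on anything but an isolated intranet you may want to drop that flag. Either way, a quick sanity check that the daemon is up:
# The daemon should be active (running)
systemctl status docker
# Should print the server version and cgroup driver without errors
docker info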
Install the k8s core components kubectl, kubeadm, kubelet (executed on all machines)
- rpm package download address. Baidu network disk link: https://pan.baidu.com/s/14_jjVTwCPuq542FyzlVNeQ
Extraction code: zcnp
# Upload the rpm packages to the server and install them with yum
yum -y install ./kubeadm-rpm/*
# Start kubelet on boot
systemctl enable kubelet && systemctl start kubelet
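At this point kubelet will keep restarting in a crash loop; that is expected, since it has no configuration until kubeadm init (or kubeadm join) runs. A quick way to confirm the packages installed cleanly:
# Both should report v1.21.0 to match the images prepared below
kubeadm version
kubectl version --client
# "activating (auto-restart)" is normal before init/join
systemctl status kubelet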
Initialize the master node (executed on the master node)
- Prepare the images (upload them to all nodes). Baidu network disk link: https://pan.baidu.com/s/1QEZCgrKF2RMneGsrELJTLA
Extraction code: xi8b
kube-apiserver:v1.21.0
kube-proxy:v1.21.0
kube-controller-manager:v1.21.0
kube-scheduler:v1.21.0
coredns:v1.8.0
etcd:3.4.13-0
pause:3.4.1
# Network plug-in images; calico is used here
calico-cni
calico-node
calico-kube-controllers
calico-pod2daemon-flexvol
- Upload the downloaded images to the server and import them (a shell script batch-imports them):
vim loadImages.sh

#!/bin/bash
# Directory containing the image archives
archive_dir="/root/images"
# Loop over the archives and load each one into Docker
for archive in "$archive_dir"/*.tar
do
  docker load -i "$archive"
done
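A short usage sketch, assuming the .tar archives really do sit under /root/images:
# Run the import, then confirm the images landed
bash loadImages.sh
docker images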
- Start the local registry container:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
- Tag and push the images:
# docker tag k8s.gcr.io/kube-apiserver:v1.21.0 localhost:5000/kube-apiserver:v1.21.0
# Push the image to the registry
# docker push localhost:5000/kube-apiserver:v1.21.0
Here we tag and push in batches with a script. This script does not need to be modified:
vim imageTagPush.sh

#!/bin/bash
set -e

KUBE_VERSION=v1.21.0
KUBE_PAUSE_VERSION=3.4.1
ETCD_VERSION=3.4.13-0
CORE_DNS_VERSION=v1.8.0

GCR_URL=k8s.gcr.io
LOCALHOST_URL=localhost:5000

images=(
kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION}
)

for imageName in ${images[@]} ; do
  docker tag $GCR_URL/$imageName $LOCALHOST_URL/$imageName
  docker rmi $GCR_URL/$imageName
  docker push $LOCALHOST_URL/$imageName
done
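After running the script, the registry's catalog endpoint is a convenient way to verify every image was pushed:
# Run the batch tag-and-push
bash imageTagPush.sh
# The v2 catalog API should list kube-apiserver, kube-proxy, etcd, coredns, etc.
curl http://localhost:5000/v2/_catalog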
init master node
- Initialize the master node; this step takes a while:
######## kubeadm init a master ########
kubeadm init \
--apiserver-advertise-address=10.170.11.8 \
--image-repository localhost:5000 \
--kubernetes-version v1.21.0 \
--service-cidr=2.2.2.1/16 \
--pod-network-cidr=3.3.3.1/16

# --apiserver-advertise-address: the master's IP (internal network)
# --image-repository: the registry to pull images from; use the local registry just populated, localhost:5000
## Note on --pod-network-cidr and --service-cidr:
# Each specifies a reachable network range. The pod subnet, the service (load balancing) subnet,
# and the host IP subnet must not overlap.
# For example, apiserver-advertise-address=10.170.x.x with pod-network-cidr=10.170.0.0/16 will not work.
# If you are unsure how to choose --pod-network-cidr/--service-cidr, leave them at the values above.

#### Follow the prompts to continue ####
## The first step after init completes: copy the kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

## Export the environment variable
export KUBECONFIG=/etc/kubernetes/admin.conf
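If init fails partway through (a CIDR conflict, a port already in use, a missing image), the partial state will block a second attempt. A common recovery sketch before re-running kubeadm init with corrected flags:
# Wipe the half-initialized control plane state
kubeadm reset -f
# Remove the stale kubeconfig copy as well
rm -rf $HOME/.kube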
- Deploy a pod network component. Baidu network disk link: https://pan.baidu.com/s/13KCSekAsm5ONMv5TCW_GYA
Extraction code: e8t9
vim calico.yaml
# Search for CALICO_IPV4POOL_CIDR and change it to the --pod-network-cidr=3.3.3.1/16 used when initializing the cluster:
# - name: CALICO_IPV4POOL_CIDR
#   value: "3.3.3.1/16"

# Start calico
kubectl apply -f calico.yaml
- View the running status:
kubectl get pod -A   ## list all application Pods deployed in the cluster
kubectl get nodes    ## view the status of all machines in the cluster
Initialize the worker nodes
- After kubeadm init succeeds it prints a kubeadm join command like the one below. If the token has expired, regenerate it on the master node with kubeadm token create --print-join-command.
## Execute on the worker node
kubeadm join 172.24.80.222:6443 --token nz9azl.9bl27pyr4exy2wz4 \
--discovery-token-ca-cert-hash sha256:4bdc81a83b80f6bdd30bb56225f9013006a45ed423f131ac256ffe16bae73a20
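Back on the master, the joined worker should appear within a minute or so (it stays NotReady until the calico pods are running on it):
# Each joined worker shows up here
kubectl get nodes
# If you lost the original join output, print a fresh one
kubeadm token create --print-join-command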
- The kubectl command runs as kubernetes-admin and needs the admin.conf file (created in /etc/kubernetes on the master node by kubeadm init). The worker nodes have no such conf file and no KUBECONFIG environment variable set, so to run kubectl there you must copy the conf file to each worker node and set the environment variable.
# Copy the admin.conf file; execute on the master node. K8s-Master, K8s-Node1 and K8s-Node2 must be resolvable via /etc/hosts
scp /etc/kubernetes/admin.conf root@K8s-Node2:/etc/kubernetes/
# Set the environment variable; execute on the worker node.
# Node1
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
- Verify the cluster:
# Get all nodes
kubectl get nodes

# Label a node (replace <node-hostname> with the node's hostname, e.g. h1)
### Add the worker label
kubectl label node <node-hostname> node-role.kubernetes.io/worker=''
### Remove the label
kubectl label node <node-hostname> node-role.kubernetes.io/worker-
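A quick way to confirm the role label was applied (and removed) as expected:
# The ROLES column should now show "worker" for the labeled node
kubectl get nodes
# Or inspect the full label set
kubectl get nodes --show-labels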