Storing k8s cluster data in an external etcd cluster

1. Preface

A k8s cluster can store its data in an external etcd cluster, or it can build a containerized etcd cluster for that purpose. Since I used a containerized etcd cluster when building my k8s high-availability cluster before, this article covers the external etcd cluster approach instead.

2. Host information

hostname       ip            port        service
k8s-master01   10.1.60.119   2379, 2380  etcd
k8s-master02   10.1.60.120   2379, 2380  etcd
k8s-master03   10.1.60.121   2379, 2380  etcd

3. Build etcd cluster

3.1 Install etcd

yum -y install etcd
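
etcd is installed this way on all three master nodes listed above; if you want to confirm what was installed, the version can be checked afterwards (optional):

etcd --version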

3.2 Edit etcd configuration file

vi /etc/etcd/etcd.conf

#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.1.60.119:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.1.60.119:2379,http://127.0.0.1:2379"
ETCD_NAME="k8s-master01"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.1.60.119:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.1.60.119:2379"
ETCD_INITIAL_CLUSTER="k8s-master01=http://10.1.60.119:2380,k8s-master02=http://10.1.60.120:2380,k8s-master03=http://10.1.60.121:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

This is the configuration for k8s-master01; on the remaining hosts only the IP addresses and ETCD_NAME need to be changed accordingly.
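
For example, on k8s-master02 the same file would look roughly like this (only the addresses and the name differ; ETCD_INITIAL_CLUSTER is identical on every node):

#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.1.60.120:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.1.60.120:2379,http://127.0.0.1:2379"
ETCD_NAME="k8s-master02"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.1.60.120:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.1.60.120:2379"
ETCD_INITIAL_CLUSTER="k8s-master01=http://10.1.60.119:2380,k8s-master02=http://10.1.60.120:2380,k8s-master03=http://10.1.60.121:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"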

3.3 Start the etcd service and enable it at boot

systemctl start etcd && systemctl enable etcd
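
To quickly confirm the service is up on each node, you can optionally run:

systemctl is-active etcd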

3.4 View etcd cluster status

After starting etcd on all nodes, be sure to check the cluster status

etcdctl member list

etcdctl cluster-health
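
Both commands default to the local endpoint; to query all three members explicitly, the endpoints can be passed in (a sketch, assuming the v2 etcdctl that ships with the yum package):

etcdctl --endpoints http://10.1.60.119:2379,http://10.1.60.120:2379,http://10.1.60.121:2379 cluster-health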

4. k8s cluster construction

Basic environment configuration reference: kubeadm deploys k8s version 1.26.0 high-availability cluster_Apex Predator’s Blog-CSDN Blog

The preparation is the same as in the article above, apart from the externally deployed etcd cluster; nothing differs until the k8s cluster initialization step.

4.1 k8s cluster initialization

Export initialization configuration file

kubeadm config print init-defaults > kubeadm.yaml

Edit the configuration file

vi kubeadm.yaml

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.1.60.119 #Configure the control node
  bindPort: 6443 #control node default port
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock #configure to use cri-docker
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
imageRepository: registry.aliyuncs.com/google_containers #Configure the image source address as Alibaba Cloud address
kind: ClusterConfiguration
kubernetesVersion: 1.26.0 #Configure k8s version
controlPlaneEndpoint: 10.1.60.124:16443 #Configure vip address
etcd: #Configure the external etcd cluster address (replaces the default etcd local block)
  external:
    endpoints:
      - http://10.1.60.119:2379
      - http://10.1.60.120:2379
      - http://10.1.60.121:2379
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16 #Configure the pod address segment
  serviceSubnet: 10.96.0.0/12 #Configure service address segment
scheduler: {}
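
Optionally, the images referenced by this configuration can be pre-pulled on each master node before initializing (a sketch; if kubeadm does not pick up the criSocket from the config, add --cri-socket unix:///var/run/cri-dockerd.sock):

kubeadm config images pull --config kubeadm.yaml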

Initialize the k8s cluster using the initialization configuration file

kubeadm init --config kubeadm.yaml --upload-certs

After the initialization succeeds, execute the following commands as prompted in the output

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

export KUBECONFIG=/etc/kubernetes/admin.conf

Check the node status after execution

kubectl get nodes
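
You can also confirm that the API server is pointing at the external etcd cluster by checking the --etcd-servers flag in the kubeadm-generated static pod manifest (a quick sanity check):

grep etcd-servers /etc/kubernetes/manifests/kube-apiserver.yaml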

4.2 Add other control nodes to the cluster

Create the following directory on the 120 master node

mkdir -p /etc/kubernetes/pki/etcd/

Execute the following commands on the 119 master node to copy the certificates to the 120 master node

scp /etc/kubernetes/pki/ca.crt root@10.1.60.120:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/ca.key root@10.1.60.120:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/sa.key root@10.1.60.120:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/sa.pub root@10.1.60.120:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/front-proxy-ca.crt root@10.1.60.120:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/front-proxy-ca.key root@10.1.60.120:/etc/kubernetes/pki/
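
The same six files can also be copied in one loop (a sketch, assuming root SSH access to the 120 node):

# copy the control-plane certificates and keys to the new master
for f in ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key; do
  scp /etc/kubernetes/pki/$f root@10.1.60.120:/etc/kubernetes/pki/
done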

Compared with building the cluster with the built-in (stacked) etcd, the difference here is that the etcd certificates do not need to be copied.

Generate the join command on the 119 master

kubeadm token create --print-join-command

Run the join command output above on the 120 master, adding the extra parameters needed for a control-plane node

kubeadm join 10.1.60.124:16443 --token zj1hy1.ufpwaj7wxhymdw3a --discovery-token-ca-cert-hash sha256:9636d912ddb2a9b1bdae085906c11f6839bcf060f8b9924132f6d82b8aaefecd --control-plane --cri-socket unix:///var/run/cri-dockerd.sock

After joining the cluster, set up the kubeconfig (the join command in the previous step prints these instructions)

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

export KUBECONFIG=/etc/kubernetes/admin.conf

Execute the following command to check whether the join is successful

kubectl get nodes

Repeat the steps above on the remaining 121 master node to join it to the cluster as well

4.3 Join the worker nodes to the cluster

Execute the following command on any master node to generate the join command

kubeadm token create --print-join-command

Run the join command generated on the master node, adding the cri-socket parameter, to join the worker node to the cluster

kubeadm join 10.1.60.124:16443 --token zj1hy1.ufpwaj7wxhymdw3a --discovery-token-ca-cert-hash sha256:9636d912ddb2a9b1bdae085906c11f6839bcf060f8b9924132f6d82b8aaefecd --cri-socket unix:///var/run/cri-dockerd.sock

As you can see, the only difference between a control node and a worker node joining the cluster is the presence or absence of the --control-plane parameter

Execute the following command on any master node to view cluster information

kubectl get nodes

Each remaining worker node can join the cluster by performing the same steps

4.4 Install the network plug-in

Download flannel’s yaml file

wget https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
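
The podSubnet configured in kubeadm.yaml (10.244.0.0/16) should match the Network value in flannel's net-conf.json; this can be checked before applying (optional):

grep -A 4 net-conf.json kube-flannel.yml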

Apply the yaml file

kubectl create -f kube-flannel.yml

Wait until the flannel pods are running; you can check them with the following command

kubectl get pods -n kube-flannel
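
Once every node has joined, there should be one flannel pod per node; adding -o wide shows which node each pod is running on (optional):

kubectl get pods -n kube-flannel -o wide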

After flannel is up, check the cluster status again and the nodes should show as Ready

kubectl get nodes
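
As a final check that the cluster data really lives in the external etcd cluster, you can list a few of the keys kubernetes writes under /registry (a sketch; assumes the etcdctl installed from yum supports the v3 API):

ETCDCTL_API=3 etcdctl --endpoints=http://10.1.60.119:2379 get /registry --prefix --keys-only | head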

At this point, the k8s high-availability cluster backed by an external etcd cluster has been built