02. Install and configure persistent storage (nfs) for Kubernetes and KubeSphere clusters and dynamically generate pv mounts for pods through StatefulSet


  • Introduction
  • 1. Install and configure the pre-environment
    • 1.1 Install nfs file system
      • 1.1.1 Install nfs-server
      • 1.1.2 Configure nfs-client
      • 1.1.3 Test nfs
  • 2. Configure default storage (StorageClass)
    • 2.1 Create a default public storage class
      • 2.1.1 Create nfs-storage storage class
      • 2.1.2 Create RBAC permissions
      • 2.1.3 Create PV Provisioner (storage plug-in)
      • 2.1.4 Create a PVC persistent volume
    • 2.2 Use statefulset to dynamically generate pv for pod
      • 2.2.1 Create a test pod
  • 3. KubeSphere storage persistence and other component installation
    • 3.1 Storage persistence and other component installation
      • 3.1.1 Storage persistence and configuration of other components

Introduction

Persistent storage is a prerequisite for installing KubeSphere. When using KubeKey to build a KubeSphere cluster, different storage systems can be installed as plug-ins. If KubeKey detects that no default storage type is specified, OpenEBS will be installed by default.

Similarly, when using Kubernetes to deploy our own services, we also need to mount pv persistent volumes for the corresponding pods to achieve local persistence of service data.

This chapter demonstrates how to use the nfs file system, configure persistent k8s cluster local file storage, and dynamically generate pv mounts for pods through StatefulSet.

Previous article reference:
01. Provision production-ready Kubernetes and KubeSphere clusters on Linux with KubeKey

The version is as follows

Name         Version
CentOS       7.6+
Kubernetes   1.23.8
KubeSphere   3.3.1

Host assignment

Hostname   IP            Role                          Container runtime   Runtime version
master01   192.168.0.3   control plane, etcd, worker   docker              19.3.8+
node01     192.168.0.5   worker                        docker              19.3.8+
node02     192.168.0.7   worker                        docker              19.3.8+
node03     192.168.0.8   worker                        docker              19.3.8+

1. Install and configure the pre-environment

1.1 Install nfs file system

1.1.1 Install nfs-server

# Run the following on every host in the cluster: install the nfs utilities and create the shared directory
 yum install -y nfs-utils
 mkdir -p /nfs/data
 
# Run the following on master01; in practice, choose the node with the largest available disk as the nfs server
 echo "/nfs/data/ *(rw,insecure,sync,no_subtree_check,no_root_squash)" > /etc/exports

# Execute on master01
 systemctl enable rpcbind
 systemctl enable nfs # systemctl enable nfs-server
 systemctl start rpcbind
 systemctl start nfs
 exportfs -r # Make the configuration take effect
 exportfs # Check whether the configuration is effective

# To share only with a specific network segment, refer to the example below
# mkdir -p /data/volumes/{v1,v2,v3}
# Edit /etc/exports on the master node and share the directory with the 192.168.0.0/24 segment (fill in the segment according to your own environment; the exports file must be configured on each master node)
# vim /etc/exports
# /data/volumes/v1 192.168.0.0/24(rw,no_root_squash,no_all_squash)
# Publish the exports
# exportfs -arv
# exporting 192.168.0.0/24:/data/volumes/v1
# Check
# showmount -e
# Export list for master01:
# /data/volumes/v1 192.168.0.0/24

Common parameters are:
**rw** read-write
**ro** read-only
**sync** data is written to the NFS server's disk before the request returns; safer, but slower
**async** data is buffered in memory first and flushed to disk later; faster, but data may be lost on failure
**root_squash** requests from the client's root user are mapped to the anonymous user on the NFS server
**no_root_squash** requests from the client's root user are mapped to the root user on the NFS server
**all_squash** requests from any client account are mapped to the anonymous user on the NFS server

1.1.2 Configure nfs-client

The IP address below is that of master01 (the nfs server).

# View the directories exported by the nfs server
 showmount -e 192.168.0.3
 
# Every node except the nfs server node must execute the following
# Mount the shared directory from the nfs server
 mount -t nfs 192.168.0.3:/nfs/data /nfs/data
# Set auto mount at boot
 echo "192.168.0.3:/nfs/data /nfs/data nfs defaults 0 0" >> /etc/fstab

1.1.3 Test nfs

# Create a new test-nfs.yml in the /home directory of master01
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pod
spec:
  containers:
    - name: busybox
      image: busybox
      command:
        - sh
        - -c
        - 'echo hello world > /mnt/hello'
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - mountPath: "/mnt"
          name: nfs
  volumes:
    - name: nfs
      nfs: # use NFS storage
        path: /nfs/data # NFS storage path
        server: 192.168.0.3 # NFS server address

# The busybox container above writes "hello world" into /mnt/hello; since /mnt is an NFS mount, a hello file should also appear under /nfs/data on the nfs server (master01).
# After creating test-nfs.yml, run kubectl apply -f test-nfs.yml
# On the nfs server (master01), check whether a hello file exists in the /nfs/data directory
# After a successful test, delete the file: rm -f /nfs/data/hello
# Delete the test Pod with: kubectl delete -f test-nfs.yml
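For convenience, the steps described in the comments above can be run end to end as follows (a minimal sketch; run the cat/rm lines on master01):

 kubectl apply -f test-nfs.yml
 kubectl get pod test-nfs-pod
 cat /nfs/data/hello        # on master01; should print "hello world"
 kubectl delete -f test-nfs.yml
 rm -f /nfs/data/hello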

2. Configure default storage (StorageClass)

A StorageClass removes the need to create PVs by hand. Without one, every PVC that declares a storage request has to be matched by a manually created PV.
A StorageClass provides a mechanism to dynamically create the persistent volume (PV) that matches the storage request (PVC) declared by the user; this is how k8s implements dynamic provisioning of persistent storage.
Currently supported classes are listed at: https://kubernetes.io/zh-cn/docs/concepts/storage/storage-classes/
The core concept of an NFS StorageClass is the Provisioner. So what is a Provisioner?
The Provisioner is a required field of a StorageClass. It is the automatic allocator of storage resources and can be regarded as a back-end storage driver. For NFS, k8s does not provide a built-in Provisioner, but an external one can be used; a Provisioner must conform to the storage volume development specification (CSI). This document uses the Provisioner provided for NFS.
Process breakdown (an inspection sketch follows this list):
① the pod mounts a PVC
② the PVC requests a PV through the configured StorageClass
③ the StorageClass asks its Provisioner to provision a PV
④ the NFS provisioner generates the PV required by the PVC and provides it to the pod as storage
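Once everything in the following subsections is in place, this chain can be inspected with plain kubectl commands (a minimal sketch; the resource names test-pvc and nfs-storage are the ones created later in this chapter):

# ② the PVC records which StorageClass it requested
 kubectl get pvc test-pvc -o jsonpath='{.spec.storageClassName}{"\n"}'
# ③ the StorageClass records which provisioner it delegates to
 kubectl get sc nfs-storage -o jsonpath='{.provisioner}{"\n"}'
# ④ the provisioner has created a PV and bound it to the PVC
 kubectl get pv
 kubectl describe pvc test-pvc | grep -E 'Status|Volume|StorageClass'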

2.1 Create a default public storage class

The following operations can all be performed on the master01 (control-plane) node.
Note that StorageClass is a cluster-scoped (global) resource; deploying the provisioner in the default namespace does not prevent workloads in other namespaces from using it.
If the k8s version is 1.20.x and the nfs-subdir-external-provisioner (nfs-client) version is lower than 4.0, add the following flag to the kube-apiserver command in /etc/kubernetes/manifests/kube-apiserver.yaml:
--feature-gates=RemoveSelfLink=false
Otherwise Kubernetes (e.g. v1.20.13) reports the error: "unexpected error getting claim reference: selfLink was empty, can't make reference"
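For reference, a hedged sketch of where that flag goes (the file path is the standard kubeadm static pod manifest; verify against your own cluster):

# /etc/kubernetes/manifests/kube-apiserver.yaml -- add the flag under the kube-apiserver command, e.g.:
#   spec:
#     containers:
#     - command:
#       - kube-apiserver
#       - --feature-gates=RemoveSelfLink=false
# The kubelet picks up the change and restarts the static pod automatically; verify with:
 kubectl -n kube-system get pod -l component=kube-apiserver -o jsonpath='{.items[0].spec.containers[0].command}'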

2.1.1 Create nfs-storage storage class

# Create sc.yaml and execute kubectl apply -f sc.yaml to create a storage class named nfs-storage
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage ## can be changed to your own storage name
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: 'true'
    storageclass.kubernetes.io/is-default-class: "true" ## true is the default class, false is not the default class
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner ## Specify the source name of the storage provider here
reclaimPolicy: Delete ## Reclaim policy; with Delete, the back-end storage behind the PV deletes the volume when the claim is released
volumeBindingMode: Immediate ## Binding mode. Immediate means the PV is dynamically created and bound as soon as the PVC is created; with "WaitForFirstConsumer", the PV is not created and bound until the PVC is first used by a pod (container group)
parameters:
  archiveOnDelete: "true" ## whether to archive (back up) the PV's data when the PV is deleted
  ##pathPattern: "${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}" ## not set by default; lets different applications use different subdirectories on the NFS share

# View with kubectl get sc
# local is the OpenEBS storage class installed by default by KubeKey
# nfs-storage is the class we just created
[root@master01 ~]# kubectl get sc
NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local (default)         openebs.io/local                              Delete          WaitForFirstConsumer   false                  3d22h
nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate              false                  17h
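Note that the output above shows two classes marked (default). If nfs-storage should be the only default, the annotation on local can be cleared (an optional, hedged sketch):

 kubectl patch storageclass local -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
 kubectl get sc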

2.1.2 Create RBAC permissions

Create a ServiceAccount to control the permissions of the NFS provisioner inside the k8s cluster. RBAC (Role-Based Access Control) associates users with roles and permissions; it is an authentication -> authorization -> access mechanism.

# Create rbac.yaml and execute kubectl apply -f rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistent volumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistent volume claims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
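A quick way to confirm the RBAC objects were created (names as defined in rbac.yaml above):

 kubectl get sa nfs-client-provisioner -n default
 kubectl get clusterrole nfs-client-provisioner-runner
 kubectl get clusterrolebinding run-nfs-client-provisioner
 kubectl get role,rolebinding -n default | grep leader-locking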
  

2.1.3 Create PV Provisioner (storage plug-in)

Create the PV storage plug-in so that PVs can be created automatically: it creates a mount point (volume) under the NFS shared directory, creates the PV, and associates the PV with the NFS mount.
This article uses a high-availability configuration. If you don't need it, set replicas to 1 and comment out the ENABLE_LEADER_ELECTION leader-election environment variable in env.

# Create deployment.yaml and execute kubectl apply -f deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 3 # for high availability keep 3 (generally an odd number >= 3); set to 1 if HA is not needed
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          # Note: the nfs-client image version must be compatible with the k8s version; otherwise volumes cannot be bound
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: ENABLE_LEADER_ELECTION
              value: "True" ## Set high availability to allow election
            - name: NFS_SERVER
              value: 192.168.0.3 ## Specify your own nfs server address (master01 in this setup)
            - name: NFS_PATH
              value: /nfs/data ## The directory shared by the nfs server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.3
            path: /nfs/data

# Check the container logs to see whether the NFS plug-in started normally
# If there are errors, investigate them carefully; otherwise the nfs-client provisioner cannot bind volumes and PVCs will stay in the Pending state
# https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/issues/25
[root@master01 ~]# kubectl get pod | grep nfs
nfs-client-provisioner-86b55c565d-fqmm5 1/1 Running 0 20h
nfs-client-provisioner-86b55c565d-p2sks 1/1 Running 0 20h
nfs-client-provisioner-86b55c565d-r5stq 1/1 Running 0 20h
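To see whether the provisioner is healthy and leader election is working, check the logs (a minimal sketch; the deployment name is the one defined above):

 kubectl logs deployment/nfs-client-provisioner -n default --tail=20
# or follow the logs of every replica by label
 kubectl logs -l app=nfs-client-provisioner -n default --tail=20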

2.1.4 Create a PVC persistent volume

# Create pvc.yaml and execute kubectl apply -f pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs-storage ## if this line is omitted, the cluster's default storage class is used
  
# View with kubectl get pvc
[root@master01 nfs]# kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    pvc-dd852bab-8b86-4d42-8666-9b2b550597aa   200Mi      RWX            nfs-storage    8s
[root@master01 nfs]# kubectl delete -f pvc.yaml
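Behind the scenes, the provisioner backs each dynamically created PV with a subdirectory on the NFS share; a hedged sketch of checking this (the directory name follows the provisioner's default ${namespace}-${pvcName}-${pvName} pattern):

 kubectl get pv
# on master01 (the nfs server): each bound PVC has a matching subdirectory
 ls -l /nfs/data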

2.2 Use statefulset to dynamically generate pv for pod

RC, Deployment, and DaemonSet are designed for stateless services: the IPs, names, and start/stop order of the Pods they manage are all random. What is a StatefulSet? As the name implies, it is a stateful set: it manages stateful services such as MySQL and MongoDB clusters.
A StatefulSet is essentially a variant of Deployment that reached GA in v1.9. To support stateful services, the Pods it manages have fixed names and a fixed start/stop order. In a StatefulSet, the Pod name serves as the network identity (hostname), and shared storage must also be used.
Where a Deployment is exposed through a regular Service, a StatefulSet is paired with a headless service. A headless service differs from an ordinary Service in that it has no Cluster IP; resolving its name returns the Endpoint list of all Pods behind it.
On top of the headless service, the StatefulSet also creates a DNS domain name for each Pod replica it controls, in the format:

$(podname).$(headless service name)
FQDN: $(podname).$(headless service name).$(namespace).svc.cluster.local
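A hedged sketch of checking this per-pod DNS name once the StatefulSet from the next subsection is running (busybox:1.28 is used here because its nslookup behaves well for this kind of test):

 kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup nfs-web-0.nginx.default.svc.cluster.local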

2.2.1 Create a test pod

# Create test.yaml and execute kubectl apply -f test.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nfs-web
spec:
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nfs-web # has to match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: nfs-web
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates: # pvc template
  - metadata:
      name: www
      namespace: default # must be consistent with the namespace of the pod
      annotations:
        volume.beta.kubernetes.io/storage-class: nfs-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Mi
# kubectl get pvc shows that a PVC and PV have been dynamically generated for each pod
# A PV dynamically generated by a StatefulSet is not deleted when its pod is deleted
# Each PV maps one-to-one to its pod; after the pod is recreated, the data is still there
[root@master01 nfs]# kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-nfs-web-0   Bound    pvc-bdd6ff3e-9524-4e21-8cb6-29aebfd67e6f   10Mi       RWO            nfs-storage    4s
www-nfs-web-1   Bound    pvc-9eea90b4-3ac7-4a7d-a037-301c96eb707e   10Mi       RWO            nfs-storage    0s
[root@master01 nfs]#
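Before tearing everything down, a quick way to see the persistence guarantee in action (a minimal sketch using the pod name and mount path from the manifest above):

 kubectl exec nfs-web-0 -- sh -c 'echo persisted > /usr/share/nginx/html/index.html'
 kubectl delete pod nfs-web-0        # the StatefulSet recreates nfs-web-0 with the same PVC
 kubectl exec nfs-web-0 -- cat /usr/share/nginx/html/index.html   # still prints "persisted"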
[root@master01 nfs]# kubectl delete -f test.yaml

3. KubeSphere storage persistence and other component installation

3.1 Storage persistence and other component installation

3.1.1 Storage persistence and configuration of other components

1. Log in to the console as the admin user, click Platform Management in the upper left corner, and select Cluster Management.
2. Click Custom Resource Definition, enter ClusterConfiguration in the search bar, and click the search result to view its detailed page.

3. In the list of custom resources, click the action menu on the right side of ks-installer and select Edit YAML (the same resource can also be edited from the command line, see the sketch below).

4. In the YAML file, modify the configuration, then click OK in the lower right corner to save it; the components will start installing. Installation takes some time, so please wait patiently.
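If you prefer the command line, the same ClusterConfiguration resource can be edited directly (as referenced in step 3 above):

 kubectl -n kubesphere-system edit clusterconfiguration ks-installer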

1. On the KubeSphere platform: enable metrics_server (set enabled to true)

 metrics_server:
    enabled: true

2. On the KubeSphere platform: it is recommended to set the storage configuration to storageClass: "nfs-storage"; if left unchanged, the local (default) storage class will be used.

# If monitoring is enabled, it is recommended to set storageClass to nfs-storage
 monitoring:
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
    node_exporter:
      port: 9100
    storageClass: 'nfs-storage'

3. On the KubeSphere platform: change the network configuration to the following:

 network:
    ippool:
      type: calico
    networkpolicy:
      enabled: true
    topology:
      type: weave-scope

4. On the KubeSphere platform: enable the App Store:

 openpitrix:
    store:
      enabled: true

Reference: the official KubeSphere documentation on enabling pluggable components

5. Run the following kubectl command to watch the installation progress:
 kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
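After the installer reports success, a quick hedged check that the newly enabled components actually landed on nfs-storage:

# PVCs created by the monitoring components should show STORAGECLASS nfs-storage
 kubectl get pvc -A | grep nfs-storage
 kubectl get pod -n kubesphere-monitoring-system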

Previous article reference:
01. Provision production-ready Kubernetes and KubeSphere clusters on Linux with KubeKey