Configuring Kubernetes persistent storage with nfs-provisioner

1. Introduction to nfs-client-provisioner

Kubernetes clusters have no built-in provisioner for NFS storage, but you can configure an external NFS provisioner in the cluster.

nfs-client-provisioner is an open-source external NFS provisioner that uses an NFS server to provide persistent storage for Kubernetes clusters and supports dynamic provisioning of PVs. Note that nfs-client-provisioner does not itself provide NFS; it requires an existing NFS server to supply the storage. Provisioned persistent volume directories are named according to the rule:

${namespace}-${pvcName}-${pvName}

For example, a PVC named nfs-pvc in the test namespace bound to PV pvc-791dc175-c068-4977-a14b-02f8cb153bc3 produces a directory named test-nfs-pvc-pvc-791dc175-c068-4977-a14b-02f8cb153bc3 on the NFS share (see section 3.4).

External NFS drivers for Kubernetes fall into two categories according to how they work (as an NFS client or as an NFS server):

nfs-client:

It mounts the remote NFS server to a local directory through the built-in NFS driver of Kubernetes, then registers itself as a storage provider via a storage class. When a user creates a PVC to request a PV, the provisioner compares the PVC's requirements with its own attributes and, once they are satisfied, creates a subdirectory for the PV in the locally mounted NFS directory, providing dynamic storage for Pods.

nfs-server:

Unlike nfs-client, this driver does not use the Kubernetes NFS driver to mount a remote NFS export locally and re-share it. Instead, it maps local files directly into the container and runs ganesha.nfsd inside the container to provide NFS service externally. Each time a PV is created, the corresponding folder is created directly in the local NFS root directory, and that subdirectory is exported.

This article introduces the use of nfs-client-provisioner: an NFS server acts as the backend for Kubernetes persistent storage, and PVs are provisioned dynamically. The prerequisites are an already installed NFS server and network connectivity between the NFS server and the Kubernetes worker nodes. The nfs-client driver is deployed to the cluster as a Deployment and then provides storage services.

2. Prepare the NFS server

2.0 Current environment information
[root@master1 ~]# kubectl get nodes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master1.k8s.test Ready <none> 5d21h v1.22.17 10.140.20.141 <none> CentOS Linux 7 (Core) 6.3.2-1.el7.elrepo.x86_64 docker://19.3.15
master2.k8s.test Ready <none> 5d21h v1.22.17 10.140.20.142 <none> CentOS Linux 7 (Core) 6.3.2-1.el7.elrepo.x86_64 docker://19.3.15
master3.k8s.test Ready <none> 5d21h v1.22.17 10.140.20.143 <none> CentOS Linux 7 (Core) 6.3.2-1.el7.elrepo.x86_64 docker://19.3.15
node1.k8s.test Ready <none> 5d21h v1.22.17 10.140.20.156 <none> CentOS Linux 7 (Core) 6.3.2-1.el7.elrepo.x86_64 docker://19.3.15
2.1 Install nfs server through yum
[root@master1 ~]# rpm -qa|egrep "nfs|rpc"
[root@master1 ~]# yum -y install nfs-utils rpcbind
2.2 Start service and set boot startup
# Start rpcbind and nfs-server, and enable them at boot
[root@master1 ~]# systemctl start rpcbind.service
[root@master1 ~]# systemctl enable rpcbind.service
[root@master1 ~]# systemctl start nfs
[root@master1 ~]# systemctl enable nfs-server --now
# Check that the NFS server started normally
[root@master1 ~]# systemctl status nfs-server
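As an extra sanity check (not strictly required), you can confirm that the NFS-related RPC services are registered with rpcbind:
[root@master1 ~]# rpcinfo -p | grep -E "nfs|mountd"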
2.3 Edit the configuration file and set the shared directory
[root@master1 ~]# mkdir -p /data/nfs_provisioner
[root@master1 ~]# cat > /etc/exports <<EOF
/data/nfs_provisioner 10.140.20.0/24(rw,no_root_squash)
EOF
# Re-export the shares so the configuration takes effect without restarting the NFS service
[root@master1 ~]# exportfs -arv
exporting 10.140.20.0/24:/data/nfs_provisioner
Common options for the NFS exports configuration file:
option           meaning
ro               read-only access
rw               read-write access
root_squash      when an NFS client accesses as root, map it to the anonymous user on the NFS server
no_root_squash   when an NFS client accesses as root, keep root privileges on the NFS server
all_squash       map every NFS client account to the anonymous user on the NFS server
sync             write data to memory and disk at the same time, ensuring no data loss
async            write data to memory first and flush to disk later; higher throughput, but data can be lost
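Combining these options, an export line that squashes root and forces synchronous writes might look like the following (illustrative only; this article's export keeps no_root_squash so the provisioner can manage directories as root):
/data/nfs_provisioner 10.140.20.0/24(rw,sync,root_squash)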
2.4 Client testing
The client needs the nfs-utils package installed:
[root@master1 ~]# yum -y install nfs-utils
[root@master1 ~]# systemctl enable nfs --now
[root@master1 ~]# systemctl status nfs
Verify the export from a client node:
[root@master2 ~]# showmount -e 10.140.20.141
Export list for 10.140.20.141:
/data/nfs_provisioner 10.140.20.0/24
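As an optional extra check, you can mount the export manually from a client node and confirm it is writable (the /mnt mount point is arbitrary):
[root@master2 ~]# mount -t nfs 10.140.20.141:/data/nfs_provisioner /mnt
[root@master2 ~]# touch /mnt/test-file && ls /mnt
test-file
[root@master2 ~]# rm -f /mnt/test-file && umount /mnt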

3. Deploy nfs-provisioner

3.1.0 Create namespace and working directory
[root@master1 ~]# kubectl create namespace test
[root@master1 ~]# mkdir nfs-provisioner
[root@master1 ~]# cd nfs-provisioner
3.1 Create ServiceAccount
[root@master1 nfs-provisioner]# cat > nfs-sa.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: test
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistent volumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistent volume claims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: test
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: test
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: test
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: test
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF
Apply the manifest:
[root@master1 nfs-provisioner]# kubectl apply -f nfs-sa.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
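Optionally, confirm the ServiceAccount exists before moving on:
[root@master1 nfs-provisioner]# kubectl get serviceaccount nfs-client-provisioner -n test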
3.2 Creating a Deployment
[root@master1 nfs-provisioner]# cat > nfs-deployment.yaml << EOF
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner # must match the "provisioner" field of the StorageClass in section 3.3
            - name: NFS_SERVER
              value: 10.140.20.141
            - name: NFS_PATH
              value: /data/nfs_provisioner
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.140.20.141
            path: /data/nfs_provisioner
EOF
Apply the manifest:
[root@master1 nfs-provisioner]# kubectl apply -f nfs-deployment.yaml
deployment.apps/nfs-client-provisioner created
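Before creating the StorageClass, it is worth confirming that the provisioner pod is running and that its logs show it watching for claims:
[root@master1 nfs-provisioner]# kubectl get pods -n test -l app=nfs-client-provisioner
[root@master1 nfs-provisioner]# kubectl logs -n test deployment/nfs-client-provisioner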
3.3 Create storage class
[root@master1 nfs-provisioner]# cat > nfs-sc.yaml << EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  annotations:
    # set to "true" to make this the cluster's default StorageClass
    storageclass.kubernetes.io/is-default-class: "false"
  name: nfs-storage
  # StorageClass is cluster-scoped, so no namespace field is needed
provisioner: nfs-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Delete
EOF
Apply the manifest:
[root@master1 nfs-provisioner]# kubectl apply -f nfs-sc.yaml
storageclass.storage.k8s.io/nfs-storage created
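Confirm that the StorageClass is registered; the PROVISIONER column should show nfs-provisioner:
[root@master1 nfs-provisioner]# kubectl get storageclass nfs-storage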
3.4 Create pvc
[root@master1 nfs-provisioner]# cat > nfs-pvc.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
  namespace: test
  labels:
    app: nfs-pvc
spec:
  accessModes: # access mode
  - ReadWriteOnce
  volumeMode: Filesystem # volume type
  resources:
    requests:
      storage: 2Gi
  storageClassName: nfs-storage # name of the StorageClass created in 3.3
EOF

# Create the PVC
[root@master1 nfs-provisioner]# kubectl apply -n test -f nfs-pvc.yaml
persistentvolumeclaim/nfs-pvc created
# View the PVC
[root@master1 nfs-provisioner]# kubectl get pvc -n test
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGE CLASS AGE
nfs-pvc Bound pvc-791dc175-c068-4977-a14b-02f8cb153bc3 2Gi RWO nfs-storage 8s
www-web-0 Bound pvc-b666f81e-9723-4e88-8e81-157b9e081577 10Mi RWO nfs-storage 17m
www-web-1 Bound pvc-a900806b-47e0-432e-81fd-865c5ff6e3ba 10Mi RWO nfs-storage 16m
# View the NFS shared directory
[root@master1 nfs-provisioner]# ls /data/nfs_provisioner/
test-nfs-pvc-pvc-791dc175-c068-4977-a14b-02f8cb153bc3
test-www-web-0-pvc-b666f81e-9723-4e88-8e81-157b9e081577
test-www-web-1-pvc-a900806b-47e0-432e-81fd-865c5ff6e3ba


# Summary: create a PVC that references the StorageClass, and a PV is automatically provisioned and bound
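Any Pod can now consume the dynamically provisioned claim. A minimal sketch (the Pod name, image, and paths below are illustrative, not part of the original setup):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod # illustrative name
  namespace: test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data # writes land in the PVC's directory on the NFS share
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nfs-pvc # the PVC created in 3.4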

4. Create an application test and dynamically add PV

4.1 Create an nginx application
[root@master1 nfs-provisioner]# cat > nginx_sts_pvc.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: test
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
  namespace: test
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs-storage" #Use the new sc
      resources:
        requests:
          storage: 10Mi
EOF
Apply the manifest:
[root@master1 nfs-provisioner]# kubectl apply -f nginx_sts_pvc.yaml
service/nginx created
statefulset.apps/web created

4.2 Inspection results
Check the Deployment and StatefulSet status:
[root@master1 nfs-provisioner]# kubectl get sts,deploy -n test
NAME READY AGE
statefulset.apps/web 2/2 4m49s

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nfs-client-provisioner 1/1 1 1 15m

Check pod status
[root@master1 nfs-provisioner]# kubectl get pods -n test
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-fb55999fb-pcrqt 1/1 Running 0 9m30s
web-0 1/1 Running 0 3m40s
web-1 1/1 Running 0 3m15s

Check that a PV has been dynamically created and bound for each PVC:
[root@master1 nfs-provisioner]# kubectl get pvc,pv -n test
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGE CLASS AGE
persistentvolumeclaim/nfs-pvc Bound pvc-791dc175-c068-4977-a14b-02f8cb153bc3 2Gi RWO nfs-storage 2m45s
persistentvolumeclaim/www-web-0 Bound pvc-b666f81e-9723-4e88-8e81-157b9e081577 10Mi RWO nfs-storage 19m
persistentvolumeclaim/www-web-1 Bound pvc-a900806b-47e0-432e-81fd-865c5ff6e3ba 10Mi RWO nfs-storage 19m

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGE CLASS REASON AGE
persistentvolume/pvc-791dc175-c068-4977-a14b-02f8cb153bc3 2Gi RWO Delete Bound test/nfs-pvc nfs-storage 2m45s
persistentvolume/pvc-a900806b-47e0-432e-81fd-865c5ff6e3ba 10Mi RWO Delete Bound test/www-web-1 nfs-storage 19m
persistentvolume/pvc-b666f81e-9723-4e88-8e81-157b9e081577 10Mi RWO Delete Bound test/www-web-0 nfs-storage 19m
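To see exactly which NFS directory a given PV maps to, describe it and look at the Source section (PV name taken from the listing above):
[root@master1 nfs-provisioner]# kubectl describe pv pvc-b666f81e-9723-4e88-8e81-157b9e081577 | grep -A 3 Source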


[root@master1 nfs-provisioner]# kubectl exec -it -n test web-0 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@web-0:/# echo 1 > /usr/share/nginx/html/1.txt
root@web-0:/# exit
[root@master1 nfs-provisioner]# ls /data/nfs_provisioner/
test-nfs-pvc-pvc-791dc175-c068-4977-a14b-02f8cb153bc3
test-www-web-0-pvc-b666f81e-9723-4e88-8e81-157b9e081577
test-www-web-1-pvc-a900806b-47e0-432e-81fd-865c5ff6e3ba
[root@master1 nfs-provisioner]# cat /data/nfs_provisioner/test-www-web-0-pvc-b666f81e-9723-4e88-8e81-157b9e081577/1.txt
1
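Finally, because the StorageClass sets reclaimPolicy: Delete, deleting a PVC also removes its bound PV; whether the backing directory on the NFS share is removed outright or renamed with an archived- prefix depends on the provisioner's archiveOnDelete setting. If you no longer need the test claim, you can observe this as follows:
[root@master1 nfs-provisioner]# kubectl delete pvc nfs-pvc -n test
[root@master1 nfs-provisioner]# kubectl get pv # the PV bound to nfs-pvc should disappear
[root@master1 nfs-provisioner]# ls /data/nfs_provisioner/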