Kubernetes Storage: Creating a Dynamic StorageClass Based on NFS

Summary: Putting this together takes effort. If it helps you, please like and follow.

For more details, please refer to: Enterprise-level K8s cluster operation and maintenance practice


1. Environmental information

IP address     Operating system     K8S version   Cluster role   NFS role
192.168.1.32   Ubuntu 20.04.6 LTS   v1.24.17      master node    client
192.168.1.33   Ubuntu 20.04.6 LTS   v1.24.17      worker node    client
192.168.1.34   Ubuntu 20.04.6 LTS   v1.24.17      worker node    client
192.168.1.35   Ubuntu 20.04.6 LTS   v1.24.17      worker node    client
192.168.1.36   Ubuntu 20.04.6 LTS   v1.24.17      worker node    client
192.168.1.37   Ubuntu 20.04.6 LTS   v1.24.17      worker node    client
192.168.1.38   Ubuntu 20.04.6 LTS   -             -              server

2. Use nfs-client to implement storageclass

2.1. Install nfs server

Note: The following operations only need to be performed on the nfs server host.

1. First, make sure your Ubuntu system is up to date. Execute the following commands in the terminal:

apt update
apt upgrade

Next, install the NFS kernel server package:

apt install nfs-kernel-server -y

2. Configure the NFS shared directory:

Create the directory to be shared, such as /data/k8s/volumes

mkdir -p /data/k8s/volumes

To allow clients to access the directory, adjust its ownership and permissions:

chown nobody:nogroup /data/k8s/volumes
chmod 755 /data/k8s/volumes

Next, configure the NFS export: edit the /etc/exports file to define the directories to share and the clients allowed to access them:

vim /etc/exports
/data/k8s/volumes 192.168.1.0/24(rw,sync,no_subtree_check)

Apply the changes and restart the NFS service:

exportfs -ra
systemctl restart nfs-kernel-server
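To confirm the export took effect, you can list it on the server (this assumes the nfs-kernel-server service from the steps above is running):

```shell
# Run on the NFS server host after restarting the service
exportfs -v              # shows /data/k8s/volumes with its export options
showmount -e localhost   # lists exports exactly as clients will see them
```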

2.2. Install nfs client

Note: The following operations only need to be performed on the nfs client host.

First, install the NFS client package on each client host:

# The client does not need to create a shared directory or edit the configuration file, it only needs to install the service.
apt install nfs-common -y
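Before wiring NFS into Kubernetes, it can save debugging time to verify the mount by hand from any client node. A hedged sketch, assuming the server at 192.168.1.38 from this article is reachable:

```shell
# Manually mount the share, write a test file, then clean up (run on a client node)
mkdir -p /mnt/nfs-test
mount -t nfs 192.168.1.38:/data/k8s/volumes /mnt/nfs-test
touch /mnt/nfs-test/hello && ls -l /mnt/nfs-test
umount /mnt/nfs-test
```

If the mount or the touch fails, fix the export options or firewall before moving on to the provisioner.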

2.3. Install nfs client plug-in based on helm

Note: The following operations only need to be performed on one of the master node hosts in the K8S cluster.

1. helm installation

root@k8s-master-32:~# wget https://get.helm.sh/helm-v3.12.3-linux-amd64.tar.gz
root@k8s-master-32:~# tar -zxvf helm-v3.12.3-linux-amd64.tar.gz
root@k8s-master-32:~# mv linux-amd64/helm /usr/local/bin/helm

2. nfs-client-provisioner installation

# Add repository
root@k8s-master-32:~# helm repo add moikot https://moikot.github.io/helm-charts
root@k8s-master-32:~# helm repo update

# Install chart
helm install --generate-name \
--set nfs.server=192.168.1.38 \
--set nfs.path=/data/k8s/volumes \
--set storageClass.reclaimPolicy=Delete \
--set image.repository=registry.cn-beijing.aliyuncs.com/pylixm/nfs-subdir-external-provisioner \
--set image.tag=v4.0.0 \
moikot/nfs-client-provisioner --version 1.3.0
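After the chart installs, you can check that the provisioner pod is running and that the StorageClass was registered (the release name is generated, so the pod name will vary):

```shell
# Verify the provisioner pod and the StorageClass created by the chart
kubectl get pods | grep nfs-client-provisioner
kubectl get storageclasses   # in this setup the class is named "nfs-client"
```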

The chart supports further configuration parameters and defaults beyond those set above; refer to the chart's documentation for the full list.

Note: Starting from Kubernetes v1.20, the metadata.selfLink field is removed by default. Some applications, such as nfs-client-provisioner, still rely on this field; to keep using them you would need to re-enable it. However, on K8S v1.24.17 enabling this field prevents kube-apiserver from starting. After testing, the default quay.io/external_storage/nfs-client-provisioner:latest image was therefore replaced with a provisioner image that does not depend on selfLink: registry.cn-beijing.aliyuncs.com/pylixm/nfs-subdir-external-provisioner:v4.0.0

3. Test: create a test-claim PVC

vim test-claim.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-client" # list available classes with: kubectl get storageclasses
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
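Apply the claim and confirm it binds; dynamic provisioning should create a matching PV within seconds:

```shell
kubectl apply -f test-claim.yaml
kubectl get pvc test-claim   # STATUS should become Bound
kubectl get pv               # an auto-created pvc-<uid> volume should appear
```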

4. Test: create a test-pod manifest

vim test-pod.yaml

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
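Apply the pod and watch it run to completion; the container simply touches a file on the mounted volume and exits:

```shell
kubectl apply -f test-pod.yaml
kubectl get pod test-pod   # STATUS should reach Completed
```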

5. Check the shared directory /data/k8s/volumes/ on the NFS server; if a SUCCESS file appears in the automatically created subdirectory, the deployment succeeded.


2.4. Manually install the nfs client plug-in

1. Download the nfs client resource file

wget https://github.com/kubernetes-retired/external-storage/archive/refs/heads/master.zip
unzip master.zip
cd external-storage-master/nfs-client/deploy/

The deploy/ directory contains the manifests used below, including rbac.yaml, deployment.yaml, and class.yaml.


2. Modify deployment.yaml

In deployment.yaml, set the NFS_SERVER and NFS_PATH env values, and the matching nfs: volume definition, to 192.168.1.38 and /data/k8s/volumes. Also, as explained in the selfLink note in section 2.3, replace the default quay.io/external_storage/nfs-client-provisioner:latest image with registry.cn-beijing.aliyuncs.com/pylixm/nfs-subdir-external-provisioner:v4.0.0


3. Create the service account

root@k8s-master-32:~/external-storage-master/nfs-client/deploy# kubectl create -f rbac.yaml

4. Create nfs-client

Configure NFS as a StorageClass by installing the corresponding automatic provisioner, nfs-client, which creates PersistentVolumes (PVs) automatically. Whenever a claim against the storage class is created, a PV is automatically created in Kubernetes and a folder is automatically created under the NFS export, eliminating tedious manual creation.

root@k8s-master-32:~/external-storage-master/nfs-client/deploy# kubectl create -f deployment.yaml
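The provisioner names each auto-created directory after the claim, typically ${namespace}-${pvcName}-${pvName} under the export root. An illustrative sketch of the resulting path (the pv name here is a made-up example; real PV names carry the claim's UID):

```shell
# Illustrative only: compose the directory name the provisioner would create
ns=default; pvc=test-claim; pv=pvc-1a2b3c4d
echo "/data/k8s/volumes/${ns}-${pvc}-${pv}"
```

Knowing this pattern makes it easy to locate a claim's data on the NFS server later.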

5. Create nfs-client kubernetes storage class

vim class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
reclaimPolicy: Delete
provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's PROVISIONER_NAME env value
parameters:
  archiveOnDelete: "false"

root@k8s-master-32:~/external-storage-master/nfs-client/deploy# kubectl create -f class.yaml
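Note that the volume.beta.kubernetes.io/storage-class annotation used below is deprecated; on newer clusters the same claim can reference the class via spec.storageClassName instead, e.g.:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
```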

6. Test: create a test-claim PVC

vim test-claim.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi

7. Test: create a test-pod manifest

vim test-pod.yaml

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim

8. Check the shared directory /data/k8s/volumes/ on the NFS server; if a SUCCESS file appears in the automatically created subdirectory, the deployment succeeded.

