14-k8s-Basic storage EmptyDir, HostPath, NFS

Article directory

    • 1. Related concepts
    • 2. EmptyDir storage
    • 3. HostPath storage
    • 4. NFS storage

1. Related concepts

  1. Overview

    A Volume is defined at the Pod level and then mounted into specific directories by one or more containers in that Pod. This enables data sharing between containers in the same Pod as well as persistent storage of data. A Volume's life cycle is independent of the life cycle of any single container in the Pod; how it relates to the Pod's life cycle depends on the storage type.
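
    The general pattern looks like the minimal sketch below (the names are placeholders, not from the examples later in this article): the volume is declared once under spec.volumes and referenced by name in each container's volumeMounts.

    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-demo
    spec:
      containers:
        - name: app
          image: nginx:latest
          volumeMounts:          # reference the volume by name and choose a mount point
            - name: shared-data
              mountPath: /data
      volumes:                   # declare the volume once at the Pod level
        - name: shared-data
          emptyDir: {}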

  2. Common Volume types supported by kubernetes

    Basic storage: EmptyDir, HostPath, NFS
    Advanced storage: PV, PVC
    Configuration storage: ConfigMap, Secret

2. EmptyDir storage

  1. Overview

    An EmptyDir volume is created when a Pod is assigned to a Node. Its initial content is empty, and there is no need to specify a directory on the host, because kubernetes allocates one automatically. When the Pod is destroyed, the data in the EmptyDir is permanently deleted as well, so it is also called temporary storage.
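
    As a side note (not used in the example below), the emptyDir block accepts optional fields such as medium and sizeLimit; a hedged sketch:

      volumes:
        - name: cache-volume
          emptyDir:
            medium: Memory     # back the volume with tmpfs (RAM) instead of node disk
            sizeLimit: 100Mi   # optional cap; exceeding it causes the Pod to be evicted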

  2. Practical logic

    1. First declare a volume of the EmptyDir storage type
    2. Prepare two containers, nginx and busybox, in one Pod
    3. Mount the volume into a directory of each container
    4. The nginx container writes its access log to the volume, and busybox reads the log file from the volume and prints it to the console
    
  3. Write the yaml file: vi /opt/volume-emptydir.yaml

    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-emptydir
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - name: nginx-port
              containerPort: 80
              protocol: TCP
          volumeMounts: # Mount logs-volume to the /var/log/nginx directory of the nginx container. nginx will write the user's access log to the access.log file in this directory.
            - name: logs-volume
              mountPath: /var/log/nginx
        - name: busybox
          image: busybox:latest
          command: ["/bin/sh", "-c", "tail -f /logs/access.log"]
          volumeMounts: # Mount logs-volume to the /logs directory in the busybox container
            - name: logs-volume
              mountPath: /logs
      volumes: # declare volume
        - name: logs-volume
          emptyDir: {}
    
  4. Run: kubectl apply -f /opt/volume-emptydir.yaml

  5. Check whether it is started: kubectl get pod -o wide
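
    The curl in the next step targets the Pod IP shown by the command above; if you only want that field, a jsonpath query works too (a small optional sketch):

    kubectl get pod volume-emptydir -o jsonpath='{.status.podIP}'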

  6. Test access and generate logs: curl 192.169.189.78

  7. View the log (-c is the specified container): kubectl logs volume-emptydir -c busybox -f
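
    To confirm that the two containers really share the same data, you can also list the mounted directory from each side (an optional check, not part of the original steps):

    # both paths are backed by the same logs-volume
    kubectl exec volume-emptydir -c nginx -- ls -l /var/log/nginx
    kubectl exec volume-emptydir -c busybox -- ls -l /logs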

3. HostPath storage

  1. Overview

    A HostPath volume mounts an actual directory on the Node host into the Pod for the containers to use. Even if the Pod is destroyed, the data remains on the Node host, so it is also called local storage.

  2. Practical logic

    1. First declare a volume of the HostPath storage type; the data is stored in a directory on the Node host
    2. Prepare two containers, nginx and busybox, in one Pod
    3. Mount the volume into a directory of each container
    4. The nginx container writes its access log to the volume, and busybox reads the log file from the volume and prints it to the console
    
  3. Write the yaml file: vi /opt/volume-hostpath.yaml

    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-hostpath
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - name: nginx-port
              containerPort: 80
              protocol: TCP
          volumeMounts: # Mount logs-volume to the /var/log/nginx directory of the nginx container. nginx will write the user's access log to the access.log file in this directory.
            - name: logs-volume
              mountPath: /var/log/nginx
        - name: busybox
          image: busybox:latest
          command: ["/bin/sh", "-c", "tail -f /logs/access.log"]
          volumeMounts: # Mount logs-volume to the /logs directory in the busybox container
            - name: logs-volume
              mountPath: /logs
      volumes: # declare volume
        - name: logs-volume
          hostPath:
            path: /root/logs
            type: DirectoryOrCreate
    

    ps: analysis of the type field values

    DirectoryOrCreate: Use the directory if it exists, create it if it does not exist.
    Directory: The directory must exist
    FileOrCreate: Use the file if it exists, create it if it does not exist
    File: The file must exist
    Socket: unix socket must exist
    CharDevice: character device must exist
    BlockDevice: The block device must exist
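
    For example, to mount a single existing file instead of a directory, the volume could be declared as follows (a sketch; /etc/localtime is just an illustrative host path, not part of the example above):

      volumes:
        - name: host-time
          hostPath:
            path: /etc/localtime   # illustrative host file
            type: File             # the file must already exist on the node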
    
  4. Start the pod: kubectl apply -f /opt/volume-hostpath.yaml

  5. View pod: kubectl get pod volume-hostpath -o wide

  6. Visit nginx: curl 192.169.235.141

  7. View the log (-c is the specified container): kubectl logs volume-hostpath -c busybox -f

  8. View which node the Pod runs on: kubectl describe pod volume-hostpath

  9. If the Pod is on node 11, check the log files on that host: ls /root/logs
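
    To verify the persistence described in the overview, you can delete the Pod and confirm the log files are still on the node (a hedged sketch; run the ls on the node that hosted the Pod):

    kubectl delete -f /opt/volume-hostpath.yaml
    # the log files survive on the node host even though the Pod is gone
    ls -l /root/logs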

4. NFS storage

  1. Overview

    Although HostPath solves data persistence, problems arise once the Node fails and the Pod is moved to another Node. At that point a separate network storage system is needed; the more commonly used options are NFS and CIFS.

    NFS is a network file storage system. You can build an NFS server and connect the storage in the Pod directly to it. No matter which node the Pod is scheduled to, as long as the connection between the Node and the NFS server is healthy, the data can be accessed.

  2. Build the NFS server (nfs-utils is needed on all k8s servers)

    ps1: In production, nfs + keepalived can be used for high availability to avoid a single point of failure, and rsync + inotify can be used to synchronize the shared data between the master and backup servers.
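
    A hedged sketch of such a sync command (backup-host is a placeholder, not part of this setup; in practice it would be triggered by inotifywait):

    # push the shared directory from the master NFS server to the standby
    rsync -az --delete /root/data/nfs/ root@backup-host:/root/data/nfs/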

    Only one NFS server is built in the test environment here.

    1) Install the NFS service (the k8s nodes acting as clients also only need nfs-utils installed, to provide the NFS client driver): yum install -y nfs-utils

    2) Prepare the shared directory: mkdir -p /root/data/nfs

    3) Expose the shared directory with read-write permission to all hosts in the 192.168.248.0/24 network segment: vi /etc/exports

    Add: /root/data/nfs 192.168.248.0/24(rw,no_root_squash)

    ps1: /etc/exports is empty by default

    ps2: no_root_squash means that if the NFS client connects to the server as root, it also has root permission on the server's shared directory.

    4) Start the NFS service and enable it at boot: systemctl start nfs && systemctl enable nfs

    5) View the exports: exportfs -v

    6) If nfs-utils is also installed on the other nodes, you can mount the NFS server's shared path on them as well (optional): mount -t nfs 192.168.248.11:/root/data/nfs /root/data/nfs
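
    Before wiring the share into a Pod, you can check the export from any client (a small optional check; the address is the NFS server used in the manifest below):

    # list the directories exported by the NFS server
    showmount -e 192.168.248.10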

  3. Practical logic

    1. First declare a volume of the NFS storage type; the data is stored in a directory on the remote host
    2. Prepare two containers, nginx and busybox, in one Pod
    3. Mount the volume into a directory of each container
    4. The nginx container writes its access log to the volume, and busybox reads the log file from the volume and prints it to the console
    
  4. Write the yaml file: vi /opt/volume-nfs.yaml

    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-nfs
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - name: nginx-port
              containerPort: 80
              protocol: TCP
          volumeMounts: # Mount logs-volume to the /var/log/nginx directory of the nginx container. nginx will write the user's access log to the access.log file in this directory.
            - name: logs-volume
              mountPath: /var/log/nginx
        - name: busybox
          image: busybox:latest
          command: ["/bin/sh", "-c", "tail -f /logs/access.log"]
          volumeMounts: # Mount logs-volume to the /logs directory in the busybox container
            - name: logs-volume
              mountPath: /logs
      volumes: # declare volume
        - name: logs-volume
          nfs:
            server: 192.168.248.10 # NFS server address
            path: /root/data/nfs # The shared directory of the NFS server
            readOnly: false # Whether to read only
    
  5. Start: kubectl apply -f /opt/volume-nfs.yaml

  6. View pod: kubectl get pod volume-nfs -o wide

  7. Access Nginx in the Pod: curl 192.169.189.68

  8. View the log (-c is the specified container): kubectl logs volume-nfs -c busybox -f

  9. View the files on the NFS server: ls /root/data/nfs
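
    Because the data lives on the NFS server rather than on any one node, deleting and recreating the Pod does not lose the log (a hedged sketch):

    kubectl delete -f /opt/volume-nfs.yaml
    kubectl apply -f /opt/volume-nfs.yaml
    # the access log written by the previous Pod is still on the NFS share
    ls -l /root/data/nfs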