Storage issues between containers and hosts

Introduction

Purpose: facilitate data backup and recovery, and enable data sharing between machines.

1. On a single machine

Mount the data into the container with a data volume or bind mount.
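As a sketch of the single-machine case (the paths, ports, and container names here are illustrative, not from the original walkthrough), a host directory can be shared with a container either as a bind mount or through a named volume:

```shell
# Illustrative paths/names: share host data with a container

# Prepare some data on the host
mkdir -p /data/web
echo "hello from host" > /data/web/index.html

# Bind mount: the host directory appears directly inside the container
docker run -d -p 8080:80 -v /data/web:/usr/share/nginx/html --name web1 nginx

# Named volume: Docker manages the storage under /var/lib/docker/volumes
docker volume create web-data
docker run -d -p 8081:80 -v web-data:/usr/share/nginx/html --name web2 nginx
```

Either way, changes made on the host side are immediately visible inside the container, which is what makes backup and sharing straightforward.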

2. Across multiple machines

ssh

Set up a passwordless (key-based) SSH channel, scp the data between hosts, then mount it into the container with a volume.
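The ssh/scp approach can be sketched as follows (the addresses and paths are illustrative assumptions, not from the original text):

```shell
# Illustrative sketch: passwordless SSH + scp + volume mount

# 1. Generate a key pair once (no passphrase) and install the public key on the peer
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id root@192.168.1.124

# 2. Copy the data to the other host without a password prompt
scp -r /data/web root@192.168.1.124:/data/

# 3. On the receiving host, mount the copied directory into a container
docker run -d -p 8080:80 -v /data/web:/usr/share/nginx/html --name web nginx
```

The drawback of this approach is that the copies are point-in-time snapshots: the hosts do not stay in sync, which is what motivates NFS below.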

NFS (Network File System)

Build an NFS server, then mount the share from the clients.

The process is as follows:

1. Install software package

yum install -y nfs-utils
2. Start the service
[root@nfs-server ~]# systemctl start nfs
[root@nfs-server ~]# systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@nfs-server ~]#
3. Check the port and process of rpcbind
[root@nfs-server ~]# ps aux|grep nfs
root 1919 0.0 0.0 0 0 ? S< 17:24 0:00 [nfsd4_callbacks]
root 1925 0.0 0.0 0 0 ? S 17:24 0:00 [nfsd]
root 1926 0.0 0.0 0 0 ? S 17:24 0:00 [nfsd]
root 1927 0.0 0.0 0 0 ? S 17:24 0:00 [nfsd]
root 1928 0.0 0.0 0 0 ? S 17:24 0:00 [nfsd]
root 1929 0.0 0.0 0 0 ? S 17:24 0:00 [nfsd]
root 1930 0.0 0.0 0 0 ? S 17:24 0:00 [nfsd]
root 1931 0.0 0.0 0 0 ? S 17:24 0:00 [nfsd]
root 1932 0.0 0.0 0 0 ? S 17:24 0:00 [nfsd]
root 1964 0.0 0.0 112824 972 pts/0 S+ 17:25 0:00 grep --color=auto nfs
[root@nfs-server ~]# ss -anplut|grep rpcbind
udp UNCONN 0 0 *:797 *:* users:(("rpcbind",pid=1894,fd=7))
udp UNCONN 0 0 *:111 *:* users:(("rpcbind",pid=1894,fd=6))
udp UNCONN 0 0 [::]:797 [::]:* users:(("rpcbind",pid=1894,fd=10))
udp UNCONN 0 0 [::]:111 [::]:* users:(("rpcbind",pid=1894,fd=9))
tcp LISTEN 0 128 *:111 *:* users:(("rpcbind",pid=1894,fd=8))
tcp LISTEN 0 128 [::]:111 [::]:* users:(("rpcbind",pid=1894,fd=11))
[root@nfs-server ~]#
The nfsd service delegates port registration to the rpcbind process: clients first query rpcbind on port 111 to discover the ports on which the NFS services listen.
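This registration can be inspected directly by querying rpcbind (a sketch; the exact version/port columns vary by system):

```shell
# Ask rpcbind which RPC services are registered and on which ports
rpcinfo -p localhost
# Typical output includes lines such as:
#   100000  4  tcp    111  portmapper
#   100003  4  tcp   2049  nfs
#   100005  1  udp  20048  mountd
```

This is also a useful first check when a client's mount attempt hangs: if `rpcinfo -p <server>` fails, the problem is rpcbind or the firewall, not NFS itself.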
4. Create nfs shared directory
[root@nfs-server ~]# mkdir /web
[root@nfs-server ~]# cd /web
[root@nfs-server web]# echo "welcome to sanchuang changsha nongda" >index.html
[root@nfs-server web]# ls
index.html
[root@nfs-server web]#
5. Edit the /etc/exports file. Note: after modifying the configuration file, reload it with exportfs -arv.
/web 192.168.1.0/24(rw,sync,all_squash)

/web — path of the shared directory
192.168.1.0/24 — network segment whose machines are allowed access
(rw,sync,all_squash) — the share options: rw allows clients to read and write; sync commits writes to the NFS server's disk before the call returns; all_squash maps every connecting user, from any machine, to the unprivileged user nobody.

[root@nfs-server web]# exportfs -av    # make the shared directory take effect
exporting 192.168.1.0/24:/web
[root@nfs-server web]#
6. Set the permissions of the shared directory
[root@localhost web]# cat /etc/passwd|grep nfs
rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin
nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin
[root@localhost web]#
[root@nfs-server web]# chown nobody:nobody /web
[root@nfs-server web]# ll -d /web
drwxr-xr-x 2 nobody nobody 24 August 26 17:28 /web
[root@nfs-server web]#

Effective access is the intersection of the NFS share permissions (set in /etc/exports) and the Linux filesystem permissions on the shared directory.

[root@nfs-server web]# service firewalld stop
Redirecting to /bin/systemctl stop firewalld.service
[root@nfs-server web]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@nfs-server web]# getenforce
Disabled
[root@nfs-server web]#
===========
On all clients
1. Install the software
[root@sc-docker _data]# yum install nfs-utils -y
[root@sc-docker2 /]#yum install nfs-utils -y
2. Create a new mounting directory and then mount it
[root@sc-docker ~]# mkdir /nfs-web
[root@sc-docker2 ~]# mkdir /nfs-web

[root@sc-docker2 ~]# mount 192.168.1.133:/web /nfs-web/
[root@sc-docker ~]# df -Th|grep nfs
192.168.1.133:/web nfs4 17G 1.5G 16G 9% /nfs-web
[root@sc-docker ~]#
[root@sc-docker ~]# cd /nfs-web/
[root@sc-docker nfs-web]# ls
index.html
[root@sc-docker nfs-web]# cat index.html
welcome to sanchuang changsha nongda
[root@sc-docker nfs-web]#
[root@sc-docker2 ~]# cd /nfs-web/
[root@sc-docker2 nfs-web]# ls
index.html
[root@sc-docker2 nfs-web]# cat index.html
welcome to sanchuang changsha nongda
[root@sc-docker2 nfs-web]#


[root@sc-docker2 nfs-web]# mount|grep nfs
192.168.1.133:/web on /nfs-web type nfs4 (rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.124,local_lock=none,addr=192.168.1.133)
[root@sc-docker2 nfs-web]#
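A mount made this way does not survive a reboot. One way to make it persistent (a sketch; the server address and paths follow the example above, and the _netdev option is an assumption to delay mounting until the network is up) is an /etc/fstab entry:

```shell
# Persistent NFS mount via /etc/fstab (illustrative addresses/paths)
echo '192.168.1.133:/web  /nfs-web  nfs  defaults,_netdev  0 0' >> /etc/fstab

# Mount everything listed in fstab now, to verify the entry is correct
mount -a
df -Th | grep nfs
```

Verifying with `mount -a` before rebooting is worthwhile: a bad fstab entry can stall the boot process while the system waits for the mount.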

NFS client: create a volume and mount it into a container

docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=<NFS server address>,nolock,soft,rw,sync \
  --opt device=:<full path to the shared directory> \
  <volume name>

Create a volume and mount it to the shared directory /web on the nfs server 192.168.1.133
[root@sc-docker nfs-web]# docker volume create --driver local --opt type=nfs --opt o=addr=192.168.1.133,nolock,soft,rw,sync --opt device=:/web nfs-web-6
nfs-web-6
[root@sc-docker nfs-web]# docker volume ls
DRIVER VOLUME NAME
local 2f1f1ac5ccdde7a9d80e277a974eeb3c2b6ff98b7126349f52caaef0042bbf9f
local 5103a4c07fe6745fba25c88320cc42a385c1e44b9cd461ed6b7be894e77bf357
local b9eeeb4fc5d95a184919ad9dbdb6a329771950a85a98e3729ad911107eb681d4
local nfs-web
local nfs-web-2
local nfs-web-6
local nginx-web
[root@sc-docker nfs-web]# docker volume inspect nfs-web-6
[
    {
        "CreatedAt": "2022-08-26T18:24:33 + 08:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/nfs-web-6/_data",
        "Name": "nfs-web-6",
        "Options": {
            "device": ":/web",
            "o": "addr=192.168.1.133,nolock,soft,rw,sync",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
[root@sc-docker nfs-web]#
Create a container using the nfs-web-6 volume:
docker run -d -p 8818:80 -v nfs-web-6:/usr/share/nginx/html --name siyx-nginx-8 nginx
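The setup can then be checked end to end (a sketch; the port and expected page content follow the earlier steps in this walkthrough):

```shell
# Verify that the container serves the file stored on the NFS server
curl http://localhost:8818/
# Expected to return the contents of /web/index.html on the NFS server,
# i.e. "welcome to sanchuang changsha nongda"

# Any edit made on the NFS server, e.g.
#   (on nfs-server) echo "updated" >> /web/index.html
# is immediately visible to every container that uses the volume.
```

Because all containers on all Docker hosts mount the same NFS export, the data is shared and survives container removal.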

Reference: https://www.ibm.com/cn-zh/topics/storage-area-network

NAS Network Attached Storage

NAS (Network Attached Storage) transfers data over standard network protocols and provides file sharing and data backup to computers running various operating systems (Windows, Linux, macOS) on the network.

A NAS can also be a personal storage device placed at home or in the office.

It uses a conventional TCP/IP network to store and share files, connecting storage devices directly to the network to provide data and file services.

NAS is particularly convenient for office work in an enterprise. Install a NAS device in the LAN and every user on that LAN can connect to it; the device then appears as an additional large array disk, similar to a local one. Public files can be stored there, so nobody needs to keep their own copies: everyone can search and modify them directly on the shared storage, which greatly improves work efficiency.

NAS also provides many security mechanisms: per-user permissions can be set, users can be grouped by role or department, and further mechanisms keep the data safe.

Features:

1. Requires dedicated storage equipment (a NAS appliance, typically exposing NFS)
2. Uses a conventional TCP/IP network
3. Accessible from computers, phones, tablets, and other devices

SAN Storage Area Network

Storage Area Network (SAN) is the most commonly used storage network architecture for enterprises. Business-critical services that require high throughput and low latency often run on this architecture. Today, the number of SAN deployments using all-flash storage is growing rapidly. All-flash storage delivers better performance, consistently low latency, and lower total cost than spinning disk. SANs store data in centralized shared storage, allowing enterprises to apply consistent methods and tools for security, data protection and disaster recovery.

SAN is short for Storage Area Network. A SAN uses Fibre Channel technology and Fibre Channel switches to connect storage arrays and server hosts, establishing a dedicated network for data storage. It is a high-speed network or sub-network that provides data transfer between computers and storage systems.

Computer memory and local storage resources may not provide adequate storage, storage protection, multi-user access, or speed and performance for enterprise applications. Therefore, most organizations employ some form of SAN in addition to network-attached storage (NAS) for increased efficiency and better data management.

Traditionally, there has been a limited number of storage devices that can be connected to a server, limiting the storage capacity of the network. But SANs introduce network flexibility, enabling a single server or many heterogeneous servers across multiple data centers to share a common storage utility. SANs eliminate the traditional dedicated connections between network file servers and storage, as well as the concept of servers effectively owning and managing storage devices, which eliminates bandwidth bottlenecks. SANs eliminate single points of failure to increase storage reliability and availability.

SANs are also the best choice for disaster recovery (DR) because the network may contain many storage devices, including disk, tape, and optical storage. The utility may also be stored far away from the server it is used on.

Advantages:

  1. Improved application availability: storage exists independently of applications and can be accessed through multiple paths, improving reliability, availability, and serviceability.
  2. Better application performance: SANs offload storage processing from servers onto a separate network.
  3. Central and comprehensive: SANs enable simpler management, scalability, flexibility, and high availability.
  4. Remote-site data transfer and storage: SANs protect data from disasters and malicious attacks through remote replication.
  5. Simple centralized management: SANs simplify management by presenting a single image of the storage media.

NAS and SAN

Unlike direct-attached storage (DAS), network-based storage allows multiple computers to access it over a network, allowing for better data sharing and collaboration. Its off-site storage capabilities also make it better suited for backup and data protection. Network-attached storage (NAS) and storage area network (SAN) are two typical network-based storage setups.

A NAS is typically a single device composed of redundant storage containers or a redundant array of independent disks (RAID). SAN storage can be a network of multiple devices, including SSD and flash storage, hybrid storage, hybrid cloud storage, backup software and appliances, and cloud storage. It’s important to choose storage that’s right for your use case. Here are the differences between NAS and SAN:

SAN

  • Multi-device network
  • Block storage system
  • Fibre Channel network
  • Optimized for many users
  • Faster performance
  • Highly scalable
  • More expensive and complex to set up

NAS

  • Single storage device or RAID
  • File storage system
  • TCP/IP Ethernet
  • Limited number of users
  • Limited speed
  • Limited expansion options
  • Lower cost and easier to set up

Cloud storage

Cloud storage is an online storage model that keeps data on multiple virtual servers, usually hosted by third parties, rather than on dedicated servers. Hosting companies operate large data centers, and customers who need storage hosting can meet their needs by buying or renting space from them. The data center operator virtualizes storage resources on the back end according to customer demand and offers them as a storage resource pool, which customers then use to store files or objects. In practice these resources may be distributed across many server hosts.
