Linux disk management (partitioning, SWAP, LVM, RAID, hot backup)

Partition

Concept:

  1. Data Organization and Management: Disk partitioning allows hard disk space to be divided into logical units, which helps organize and manage data more efficiently. Different types of data can be stored in different partitions, making the data easier to maintain and back up.

  2. Improving performance: Partitions can be used to separate operating system files, application files, and user data to improve system performance. For example, separating operating system files from user data can reduce fragmentation and increase file access speeds.

  3. Data security: If the entire disk has only one partition, the risk of file system damage or data loss is greater. Partitioning allows you to isolate different types of data to reduce the risk of potential data loss. If one partition becomes corrupted, data in other partitions is usually not affected.

  4. Backup and Restore: Partitions can help make data backup and restore easier. Specific partitions can be selectively backed up instead of the entire hard drive, saving time and storage space.

  5. Multi-operating system support: If you need to install multiple operating systems (such as Linux and Windows) on the same computer, partitioning is necessary. Each operating system usually requires a separate partition to be installed and run.

  6. Disk Space Utilization: Each partition can be resized as needed to use disk space more efficiently. This is important to avoid wasted space and ensure proper system operation.

  7. Disk Maintenance: Partitions also help simplify disk maintenance tasks such as file system checks and defragmentation. Each partition can be checked and maintained regularly without having to operate the entire hard drive.

In short, disk partitioning plays a key role in Linux systems, helping users better manage disk space, improve performance, ensure data security, and achieve flexibility. Different types of partitions can meet different needs. Partitioning according to specific conditions will help improve the efficiency and maintainability of the system.

MBR partition

  1. 4 primary partition limit: MBR partition table supports up to 4 primary partitions. This means that the hard drive can be divided into up to 4 independent primary partitions, each of which can contain a file system.

  2. Logical and extended partitions: If you need more partitions, you can use a primary partition to create an extended partition, and then create multiple logical partitions within the extended partition. This approach allows overcoming the 4 primary partition limit, but requires additional steps and management.

  3. 2 TB partition size limit: The MBR partition table supports a maximum partition size of about 2 TB (2^41 bytes). If the hard disk capacity exceeds 2 TB, the space beyond that limit cannot be used.

  4. UEFI not supported: MBR partition tables work fine on legacy BIOS systems, but are not supported on modern UEFI (Unified Extensible Firmware Interface) systems, which typically use GPT (GUID Partition Table) instead of MBR.

It should be noted that MBR partition tables are gradually being replaced by GPT partition tables in modern computers, because GPT supports more partitions, larger hard disk capacities, and better data integrity and security. If your computer supports UEFI, it is usually recommended to use the GPT partition table. But for old BIOS systems, MBR partitioning is still a common partitioning scheme.

Partition instance

Install an additional hard drive

Find settings

 Refresh the hardware device and check the disk to confirm that the disk has been recognized

Set an alias to facilitate refresh and other operations
alias scan='echo "- - -" > /sys/class/scsi_host/host0/scan; echo "- - -" > /sys/class/scsi_host/host1/scan; echo "- - -" > /sys/class/scsi_host/host2/scan'
//Define an alias that triggers a rescan of the SCSI hosts, so the hardware refresh can be run with a single command later
View disk
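
A minimal sketch of commands commonly used at this step to confirm that the new disk is visible; the device name /dev/sdb is an assumption and will differ per machine:

lsblk                # list all block devices; the new disk (e.g. /dev/sdb) should appear without partitions
fdisk -l /dev/sdb    # print details and the (still empty) partition table of the assumed new disk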

Create a primary partition, use the fdisk tool

How to delete partition

Follow the same steps as above, but type d instead of n at the fdisk prompt.
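
A hedged sketch of the interactive fdisk workflow described above; the device /dev/sdb and the +2G size are assumptions:

fdisk /dev/sdb       # open the assumed new disk in fdisk
# At the fdisk prompt:
#   n   create a new partition (choose p for primary, accept the defaults or enter a size such as +2G)
#   d   delete a partition instead of creating one
#   p   print the current partition table
#   w   write the changes to disk and exit (q quits without saving)
partprobe /dev/sdb   # ask the kernel to re-read the new partition table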

Partition above 2T

For disks larger than 2 TB, fdisk is not suitable; use gdisk instead.

Install gdisk

sudo yum install gdisk
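
A minimal sketch of using gdisk on a large disk, assuming the device is /dev/sdd:

gdisk /dev/sdd       # open the assumed large disk with gdisk (GPT partition table)
# At the gdisk prompt:
#   n   create a new partition (GPT has neither the 4-primary-partition nor the 2 TB limit)
#   p   print the partition table
#   w   write the changes and exit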

Create file system

File system

  • Organization and storage: the file system provides a structure for organizing data and can divide files into different directories, so that the organized data can be stored on physical devices.

  • Data access: the file system provides an access interface to the storage device; data on the device is accessed through this interface.

  • Data management: The file system provides data management functions, including copying, moving, deleting, and renaming files or directories.

  • Data protection: Files use permission control mechanisms to limit the access rights of different users.

Create (format) file system

Check the file type and confirm that the operation is successful
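
A sketch of formatting the new partition and verifying the result, assuming the partition /dev/sdb1 and an xfs file system (pick the file system your distribution prefers):

mkfs.xfs /dev/sdb1   # create an xfs file system on the assumed partition
blkid /dev/sdb1      # confirm the result: TYPE="xfs" should be reported, along with the UUID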

Mount

Mount point: The mount point must already exist, and it is best to be a newly created empty directory (to avoid overwriting the data in the directory)

Temporarily mounted, disappears after restart
First create an empty folder data as the mount point

Check if the mount is successful
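
A sketch of the temporary mount, assuming the partition /dev/sdb1 and the mount point /data:

mkdir /data              # create an empty directory to use as the mount point
mount /dev/sdb1 /data    # temporary mount; it disappears after a reboot
df -h /data              # verify the mount; the partition and its size should be listed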

Unmount (cancel the mount)

Force unmount: umount -lf <mount point>

Permanent mounting
vim /etc/fstab
// /etc/fstab (file system table) is a configuration file in the Linux system. It contains information about the file systems to be mounted and their mount options. This file is usually used to define the file systems that are mounted automatically when the system starts.
//Be careful not to make any mistakes here; otherwise the virtual machine may fail to boot after restarting.

0 0: the first 0 (dump field) means the partition is not backed up by dump, and the second 0 (fsck pass number) means the file system is not checked at boot (the purpose is to improve efficiency)
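
A sketch of what the /etc/fstab entry might look like; the device, mount point and file system type are assumptions and must match your own setup:

# device (or UUID=...)   mount point   type   options    dump  fsck
/dev/sdb1                /data         xfs    defaults   0     0
# after editing, verify the file before rebooting:
mount -a                 # mounts everything listed in /etc/fstab; mistakes show up immediately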

SWAP swap partition

When physical memory runs low, the kernel frees up RAM by moving less-active pages out to the swap area on disk, so that the freed memory can be given to running programs. When those swapped-out pages are needed again, they are read back from swap into physical memory.

The main functions of Swap swap partition include:

  1. Memory expansion: Allows the system to continue running applications when physical memory is insufficient, although performance may be reduced but will not cause crashes.

  2. Process Staging: Move inactive processes or data to the Swap partition so that physical memory can be used for more urgent tasks.

  3. System stability: Ensure system stability and availability by avoiding memory exhaustion.

When choosing the size of your Swap partition, the usual advice is:

  • If the physical memory is less than 4GB, the Swap partition size should be at least 2 times the physical memory.

  • If the physical memory is between 4GB and 16GB, the Swap partition size can be set to 1.5 times the physical memory size.

  • If the physical memory is greater than 16GB, the Swap partition size can be set to a relatively small proportion of the physical memory, such as 16GB or 32GB.

Swap mount the new partition

mkswap /dev/sdc1 //Create a swap file system on the partition
swapon /dev/sdc1 //Activate the swap partition
//swap expansion is complete
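
A sketch of how the expansion can be verified and, optionally, made persistent, keeping the /dev/sdc1 device from above:

free -h          # the Swap total should have grown by the size of /dev/sdc1
swapon -s        # summary of the active swap areas
echo '/dev/sdc1 swap swap defaults 0 0' >> /etc/fstab    # optional: activate this swap partition at boot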

LVM

Concept

LVM (Logical Volume Manager) is an advanced tool for managing disk storage on Linux systems. It provides disk virtualization and logical volume management functions. LVM makes disk management more flexible, allowing you to create, resize and manage logical volumes without relying on physical disk partitions.

Common operations of LVM:

  1. Create Physical Volume: Use the pvcreate command to create a physical volume, which can be a hard disk partition or other block device.

  2. Create Volume Group: Use the vgcreate command to create a volume group and add one or more physical volumes to the volume group.

  3. Create Logical Volume: Use the lvcreate command to create a logical volume within the volume group. You can specify the size and name.

  4. Format and mount logical volume (Logical Volume): Use standard file system tools (such as mkfs and mount) to format the logical volume and mount it into the file system.

  5. Resize logical volumes: Use the lvresize command to expand or reduce the size of a logical volume.

  6. Remove logical volumes or volume groups: Use the lvremove command to delete logical volumes that are no longer needed, and the vgremove command to delete volume groups.

  7. Extend volume group: Add new physical volumes to an existing volume group to expand storage capacity.

  8. Migrate data: LVM allows you to move data between physical volumes, which is useful for replacing hard drives or rebalancing storage.

The main advantages of LVM include flexibility, data protection and the ability to dynamically adjust. It makes storage management easier and adaptable to changing needs. However, caution is required when using LVM as incorrect operation may result in data loss. Therefore, before using LVM, it is recommended to back up important data and read LVM’s documentation carefully to understand how it works.

Basic composition

  1. Physical Volume (PV): A physical volume is the basic building block of LVM, which can be a hard disk partition, a disk drive or a network storage device. These physical volumes are added to the LVM system to create logical volumes.

  2. Volume Group (VG): A volume group is a collection of physical volumes that acts as a container for logical volumes. You can add one or more physical volumes to a volume group and represent the volume group as a single device in the operating system. This enables the physical volumes within the volume group to work together.

  3. Logical Volume (LV): A logical volume is a virtual volume created within a volume group. It is similar to a traditional hard disk partition, but is more flexible. Logical volumes can be dynamically resized without repartitioning the hard drive.

  4. Physical Extent (PE): A physical extent is the smallest allocation unit into which a physical volume is divided. Logical volumes in a volume group are built from physical extents, and the extent size is part of the LVM configuration.

Main commands

Create a volume group (provided there is an existing physical volume)

The volume group name is jz1, followed by the address of the physical volume.

View volume groups
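
A sketch of the physical volume and volume group steps, using the volume group name from this section (jz1); the member partitions /dev/sdb1 and /dev/sdc1 are assumptions:

pvcreate /dev/sdb1 /dev/sdc1       # initialize the partitions as physical volumes
vgcreate jz1 /dev/sdb1 /dev/sdc1   # create volume group jz1 from those physical volumes
vgs                                # short summary of all volume groups
vgdisplay jz1                      # detailed information about jz1 (size, free extents, ...)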

Create a logical volume

-L 2G: Specifies the size of the logical volume. If it is larger than the free space in the volume group, the creation will fail.

-n: Specify the logical volume name

Finally, give the name of the volume group in which the logical volume should be created.

View logical volumes
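
A sketch of creating and inspecting the logical volume lj1 inside volume group jz1, matching the options explained above:

lvcreate -L 2G -n lj1 jz1     # create a 2 GB logical volume named lj1 in volume group jz1
lvs                           # short summary of all logical volumes
lvdisplay /dev/jz1/lj1        # detailed information about lj1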

Create a file system and mount it

Create a file system
Mount

View
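
A sketch of formatting and mounting the logical volume; the xfs file system and the /lvdata mount point are assumptions:

mkfs.xfs /dev/jz1/lj1        # create a file system on the logical volume
mkdir /lvdata                # assumed mount point
mount /dev/jz1/lj1 /lvdata   # mount the logical volume
df -h /lvdata                # verify the mount and the size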

Logical volume expansion (when the logical volume space is full)

View

Expansion

lvextend: This is the command itself, used to perform an extend operation of a logical volume.

-L: This is the parameter that specifies the new size of the logical volume to be extended.

+10M: This is the value after the -L parameter, indicating that the logical volume is to be extended by 10 MB; the + means add 10 MB to the current size.

/dev/jz1/lj1: This is the path to the logical volume to be extended. /dev/jz1/lj1 represents the path of the logical volume, where /dev is the device directory, jz1 is the name of the volume group (Volume Group), and lj1 is the name of the logical volume.

So, the meaning of this command is to extend the logical volume lj1 located in volume group jz1, increasing its size by 10 MB.

Refresh (grow the file system to the new size); the expansion succeeds.
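
A sketch of the extension and the refresh step; which grow command to use depends on the file system type (xfs_growfs for xfs, resize2fs for ext4), and the /lvdata mount point is the assumption from above:

lvextend -L +10M /dev/jz1/lj1    # grow the logical volume by 10 MB
xfs_growfs /lvdata               # xfs: grow the file system to fill the logical volume
# resize2fs /dev/jz1/lj1         # ext4 alternative
df -h /lvdata                    # confirm the new size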

Shrinking (an xfs file system cannot be shrunk; ext4 can be)

Unmount first, then reduce the size
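
A hedged sketch of shrinking, assuming the logical volume carries an ext4 file system (xfs cannot be shrunk) and is mounted at /lvdata; the 1G target size is an assumption, and a backup beforehand is strongly recommended:

umount /lvdata                   # unmount before shrinking
e2fsck -f /dev/jz1/lj1           # force a file system check (required before resize2fs)
resize2fs /dev/jz1/lj1 1G        # first shrink the ext4 file system to 1 GB
lvreduce -L 1G /dev/jz1/lj1      # then shrink the logical volume to the same size
mount /dev/jz1/lj1 /lvdata       # remount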

Disk quota

Preparation

Turn off detection

Grant permissions

Unmount

The file system must be mounted with quota options enabled; otherwise the quota commands will report an error.
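
A sketch of remounting with quota options enabled, matching the xfs_quota usage later in this section; the device /dev/sdb1 is an assumption:

umount /data                                  # unmount first
mount -o usrquota,grpquota /dev/sdb1 /data    # remount with user and group quotas enabled
mount | grep /data                            # the output should show the quota options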

Restrict disk

Limit the number of files
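
A sketch of setting both kinds of limits with xfs_quota; the user name testuser and the limit values are assumptions:

xfs_quota -x -c 'limit -u bsoft=80M bhard=100M testuser' /data   # disk space: 80 MB soft, 100 MB hard limit
xfs_quota -x -c 'limit -u isoft=3 ihard=5 testuser' /data        # number of files (inodes): 3 soft, 5 hard limit
xfs_quota -x -c 'report -ubih' /data                             # report current usage and limits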

Simulate creating a file with a size of 120M.

dd if=/dev/zero of=/test/123.txt bs=10M count=12
//dd is a block-copy command: if= specifies the input file (/dev/zero is a device that provides an endless stream of null bytes, which makes it useful for generating a file of a specific size), of= specifies the output file, bs=10M copies 10 MB per block, and count=12 copies 12 blocks, producing a 120 MB file.

Cancel quota

xfs_quota -x -c 'disable -up' /data

RAID

It is a data storage technology designed to improve the availability, redundancy and performance of data. It provides data protection and performance gains by combining multiple hard drives to create a single logical storage unit. Different RAID levels provide different data protection and performance characteristics.

  1. RAID 0:
  • Also called striping, data is evenly distributed across two or more hard drives.

  • Provides excellent performance because data can be read from and written to multiple disks simultaneously.

  • No redundancy is provided; if one hard drive fails, all data is lost.

  • Fastest in reading and writing

  2. RAID 1:
  • Also called mirroring, data is copied to two or more hard drives.

  • Provides data redundancy so that if one hard drive fails, the data is still available.

  • Performance is generally not as good as RAID 0 because data must be written to multiple hard drives simultaneously.

  • The number of hard drives must be an even number

  • The writing speed is slightly slower, and the reading performance is similar to RAID 0

  3. RAID 5:
  • Use distributed parity for data redundancy and performance.

  • The data is divided into blocks and stored on multiple hard drives, and the parity of each block is also stored on a different hard drive.

  • If a hard drive fails, the data can be rebuilt using parity.

  • Provides good performance and moderate redundancy.

  • At least three hard disks, disk utilization: (n-1)/n

  • Poor writing performance, strong reading performance

  4. RAID 6:
  • Similar to RAID 5, but uses two independent parity blocks, providing a higher level of redundancy.

  • Can tolerate the failure of two hard drives.

  • Provides good performance and higher levels of data redundancy.

  • At least four disks

  • The read performance is okay, but the write performance is particularly poor.

  5. RAID 10:
  • Also known as RAID 1 + 0, it is a combination of RAID 0 and RAID 1.

  • Data is first mirrored and then striped, providing high performance and redundancy.

  • It can tolerate one failure in each base group and has high availability.

  • At least four disks, and the total must be an even number

  6. RAID 50:
  • Also known as RAID 5+0, it is the striped version of RAID 5.

  • Data is divided into blocks and stored in multiple RAID 5 groups, which are then striped.

  • Provides high performance and moderate redundancy.

  7. RAID 60:
  • Similar to RAID 50, but uses two independent RAID 6 groups, providing a higher level of redundancy.

  • Can tolerate multiple hard drive failures.

Choosing the appropriate RAID level depends on specific needs, including performance, data redundancy and availability. RAID technology can be implemented at both the hardware and software levels and is typically used in environments such as servers, storage arrays, and data centers to protect and improve data availability.

Hot backup

In Linux systems, disk hot backup is usually implemented through software RAID (Redundant Array of Independent Disks). Software RAID combines multiple physical disks into one logical device, improving data redundancy and availability. Within such an array, a hot spare (hot backup) disk is a standby disk that the array can switch to quickly when a member disk fails.

Here are the general steps for configuring hot backup in Linux:

  1. Install mdadm: First, make sure you have mdadm installed on your system, which is a tool for managing software RAID on Linux. You can install it using a package manager such as apt (Debian/Ubuntu) or yum (CentOS/RHEL).

  2. Add physical disks: If you have additional physical disks, connect them to the system and identify them. You can use the command fdisk -l to list all disks.

  3. Create a RAID array: Use the mdadm command to create a RAID array. For example, if you want to create a RAID 1 (mirror) array, you can run the following command:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX /dev/sdY

Among them, /dev/md0 is the name of the new RAID device, --level specifies the RAID level (1 means mirroring), and --raid-devices specifies the number of member disks, followed by the physical disks that make up the array.

4. Create a file system: Once the RAID array is created, you need to create a file system on it. You can use the mkfs command to perform this operation, such as:

mkfs.ext4 /dev/md0

5. Mount the RAID array: Mount the newly created RAID array to a directory in the system to access the data. You can mount it using the mount command, or add it to /etc/fstab to automatically mount it at system startup.

6. Add hot backup: For hot backup, you can use the mdadm command to add other physical disks as spare devices. For example:

mdadm /dev/md0 --add /dev/sdZ

This way, if the primary disk fails, the backup disk takes over.

7. Monitoring and maintenance: Regularly check the status of the RAID array to ensure that everything is normal. You can use the mdadm command to view array status and event logs.
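
A sketch of the monitoring commands typically used for this step; /dev/md0 follows the example above, and the mdadm.conf path varies by distribution:

cat /proc/mdstat                            # quick overview of all md arrays and their sync status
mdadm --detail /dev/md0                     # detailed state: active, degraded and spare devices
mdadm --detail --scan >> /etc/mdadm.conf    # optional: persist the array configuration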

Hot disk backup provides data redundancy and availability, but requires careful configuration and maintenance to ensure it works properly. Please note that the commands and options in the above steps may vary depending on your needs and Linux distribution. Before performing any RAID operations, be sure to back up important data and consult relevant documentation carefully.

Comprehensive experiment example (RAID5 hot backup)

Create

View creation progress

Details

Format, mount

Test

Write data

Forced offline

/dev/sdb1 is forced offline here. You can see that /dev/sde1 takes over, and the hot backup is successful.
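
A hedged sketch of the whole experiment; the array name /dev/md5, the member partitions (/dev/sdb1, /dev/sdc1 and /dev/sdd1 as active members, /dev/sde1 as the hot spare, matching the devices mentioned above) and the /raid5 mount point are assumptions:

# Create: RAID 5 with three members and one hot spare
mdadm --create /dev/md5 --level=5 --raid-devices=3 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
cat /proc/mdstat                     # view the creation/sync progress
mdadm --detail /dev/md5              # details: three active devices plus one spare

# Format and mount
mkfs.xfs /dev/md5
mkdir /raid5
mount /dev/md5 /raid5

# Test: write some data
cp /etc/passwd /raid5/
dd if=/dev/zero of=/raid5/test.img bs=10M count=5

# Force a member offline and watch the spare take over
mdadm /dev/md5 --fail /dev/sdb1      # mark /dev/sdb1 as failed (forced offline)
mdadm --detail /dev/md5              # /dev/sde1 should now be rebuilding as an active member
mdadm /dev/md5 --remove /dev/sdb1    # optionally remove the failed device afterwards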
