RAID Storage

RAID Concepts

RAID stands for Redundant Array of Inexpensive Disks.

Large-capacity disks were once expensive; the core idea of RAID is to combine multiple small, low-cost disks into one large virtual disk with higher capacity, performance, and reliability.

As disk prices fell sharply, "inexpensive" lost its meaning, and the RAID Advisory Board (RAB) replaced "Inexpensive" with "Independent". RAID has since stood for Redundant Array of Independent Disks.

RAID Implementation Types

From an implementation standpoint, RAID falls into three categories, distinguished mainly by whether they rely on dedicated hardware:

  • Software RAID: no dedicated RAID controller or I/O processor chip; all RAID functions are performed by the operating system and CPU. Cheapest to implement, but lowest performance.
  • Hardware RAID: equipped with a dedicated RAID controller chip, I/O processor chip, and array cache; consumes no host CPU cycles. Best performance, but the hardware is expensive.
  • Hybrid (hardware-assisted) RAID: has a RAID controller chip but no dedicated I/O processor, so the CPU and driver must assist. Performance and cost sit between software and hardware RAID.

With hardware RAID, the device the operating system sees is the virtual device exported by the RAID card; the OS does not manage the underlying disks directly.

RAID Levels

RAID achieves high performance, high reliability, fault tolerance, and scalability through three core techniques: data striping, mirroring, and parity. Different combinations of these techniques form the various RAID levels, each suited to different workloads.

The original paper by D. A. Patterson et al. defined RAID 1 through RAID 5; RAID 0 and RAID 6 were added after 1988. Later vendor-specific levels such as RAID 7, RAID 10/01, and RAID 50 have no unified standard. The industry-recognized core levels are RAID 0 through RAID 5; in practice RAID 0, RAID 1, RAID 4, RAID 5, RAID 6, and RAID 10 are the most common.

RAID levels are not ranked "better" or "worse"; choose the level and implementation that match your workload's availability, performance, and cost requirements.
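The parity technique can be illustrated with the XOR operator. This is a toy sketch with made-up byte values, not the md driver's actual implementation:

```shell
# RAID 5-style parity is the XOR of the data blocks in a stripe; any one
# lost block can be rebuilt by XOR-ing the surviving blocks with the parity.
d1=$((0xA5)); d2=$((0x3C)); d3=$((0x5F))   # toy "blocks"
parity=$(( d1 ^ d2 ^ d3 ))
printf 'parity     = 0x%02X\n' "$parity"   # prints 0xC6
rebuilt=$(( d1 ^ d3 ^ parity ))            # pretend the disk holding d2 died
printf 'rebuilt d2 = 0x%02X\n' "$rebuilt"  # prints 0x3C, the original d2
```

Real arrays do this per stripe with the parity rotated across all member disks, which is why RAID 5 tolerates exactly one failed member.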


RAID in Practice

The lab environment requires adding six 20G disks to the virtual machine, with device names sdb, sdc, sdd, sde, sdf, and sdg, for creating and testing the RAID arrays below.

On Linux, software RAID is created, configured, monitored, and maintained with the mdadm tool. The following walks through the core RAID levels.

Managing RAID 0
1 Create RAID 0
# Install the mdadm tool first
[root@centos ~ 10:03:30]# yum install -y mdadm
# Create a RAID 0 array: device /dev/md0, level 0, 2 member disks (sdb, sdc)
[root@centos ~ 10:04:00]# mdadm --create /dev/md0 --level 0 --raid-devices 2 /dev/sd{b,c}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Note: --level specifies the RAID level, --raid-devices the number of member disks; /dev/sd{b,c} is shell shorthand for /dev/sdb and /dev/sdc.
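The brace expansion is performed by the shell (bash) before mdadm ever sees the arguments, which is easy to verify with echo:

```shell
# The shell expands the braces first, so mdadm receives ordinary paths.
echo /dev/sd{b,c}     # prints: /dev/sdb /dev/sdc
echo /dev/sd{b..e}    # prints: /dev/sdb /dev/sdc /dev/sdd /dev/sde
```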

2 Check RAID 0 Status
# View the RAID summary (kernel-side RAID state)
[root@centos7 ~ 10:25:18]# cat /proc/mdstat 
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] 
md0 : active raid0 sdc[1] sdb[0]
      41908224 blocks super 1.2 512k chunks

unused devices: <none>
# View detailed information about the RAID device
[root@centos ~ 10:26:18]# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Apr  9 10:26:18 2026
        Raid Level : raid0
        Array Size : 41908224 (39.97 GiB 42.91 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Thu Apr  9 10:26:18 2026
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : centos.7.song.cloud:0  (local to host centos.7.song.cloud)
              UUID : 2b98a5df:a88d3fed:c1839d4b:aaf4bfd7
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

Note: focus on Raid Level, State (clean means healthy), Chunk Size (stripe chunk size), and the member-disk states.

# View the mapping between the RAID device and the physical disks
[root@centos7 ~ 10:26:18]# lsblk /dev/md0
NAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
md0    9:0    0  40G  0 raid0

[root@centos7 ~ 10:27:18]# lsblk /dev/sdb /dev/sdc
NAME  MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sdb     8:16   0  20G  0 disk  
└─md0   9:0    0  40G  0 raid0 
sdc     8:32   0  20G  0 disk  
└─md0   9:0    0  40G  0 raid0 
3 Format and Mount RAID 0
# Format the RAID device with an XFS filesystem
[root@centos ~ 10:27:06]# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# Create the mount point
[root@centos ~ 10:27:23]# mkdir -p /raid/raid0
# Mount the RAID device on the mount point
[root@centos ~ 10:27:43]# mount /dev/md0 /raid/raid0/
# Verify the mount
[root@centos ~ 10:28:04]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 979M     0  979M   0% /dev
tmpfs                    991M     0  991M   0% /dev/shm
tmpfs                    991M  9.6M  981M   1% /run
tmpfs                    991M     0  991M   0% /sys/fs/cgroup
/dev/mapper/centos-root   50G  1.9G   49G   4% /
/dev/sda1               1014M  171M  844M  17% /boot
/dev/mapper/centos-home  147G   33M  147G   1% /home
tmpfs                    199M     0  199M   0% /run/user/0
/dev/md0                  40G   33M   40G   1% /raid/raid0
# Test writing data
[root@centos ~ 10:28:10]# cp /etc/ho* /raid/raid0/
[root@centos ~ 10:28:32]# ls /raid/raid0/
host.conf  hostname  hosts  hosts.allow  hosts.deny
4 Delete RAID 0
# Unmount the mount point
[root@centos ~ 10:28:38]# umount /raid/raid0
# Stop the RAID array (destroys it)
[root@centos ~ 10:28:56]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
# Wipe the RAID superblock from the physical disks (restores them to plain disks)
[root@centos ~ 10:29:10]# mdadm --zero-superblock /dev/sd{b,c}

# Verify the superblocks are gone
[root@centos ~ 10:29:54]# wipefs /dev/sd{b,c}

5 Notes
  • RAID 0 does not support adding member disks to grow capacity:
[root@centos7 ~]# mdadm --add /dev/md0 /dev/sdd
mdadm: add new device failed for /dev/sdd as 2: Invalid argument
  • RAID 0 cannot mark a single disk as failed (no redundancy; one disk failing fails the whole array):
[root@centos7 ~]# mdadm --fail /dev/md0 /dev/sdc
mdadm: Cannot remove /dev/sdc from /dev/md0, array will be failed.
Managing RAID 1
1 Create RAID 1
# Create a RAID 1 array: device /dev/md1, level 1, 2 member disks (sdb, sdc)
[root@centos ~ 10:34:32]# mdadm --create /dev/md1 --level 1 --raid-devices 2 /dev/sd{b,c}
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

Note: RAID 1 metadata is stored at the start of the disk by default. If the array will hold the /boot partition, specify --metadata=0.90 for compatibility with older boot loaders.

2 Check RAID 1 Status
[root@centos ~ 10:57:40]# mdadm --detail /dev/md1 
/dev/md1:
           Version : 1.2
     Creation Time : Thu Apr  9 10:57:40 2026
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Thu Apr  9 10:58:06 2026
             State : clean, resyncing 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

     Resync Status : 30% complete    # wait until 100% before formatting

              Name : centos.7.song.cloud:1  (local to host centos.7.song.cloud)
              UUID : 1c3a6683:9319d75f:19a612b8:4b5eb857
            Events : 4

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

Note: resyncing means the mirror is still synchronizing; wait for it to reach 100% before formatting.
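Instead of re-running cat /proc/mdstat by hand, the wait can be scripted. A small polling helper, sketched here with the status file passed as an argument so it can run anywhere:

```shell
# Poll a status file until no resync/recovery is in progress, then report.
wait_for_sync() {
    while grep -qE 'resync|recovery' "$1" 2>/dev/null; do
        sleep 5
    done
    echo "sync complete"
}
# Typical use on a real system:
# wait_for_sync /proc/mdstat
```

mdadm itself also has a --wait mode that blocks until a given array finishes resyncing.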

# View the RAID-to-physical-disk mapping
[root@centos7 ~]# lsblk /dev/md1
NAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
md1    9:1    0  20G  0 raid1

[root@centos7 ~]# lsblk /dev/sdb /dev/sdc
NAME  MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sdb     8:16   0  20G  0 disk  
└─md1   9:1    0  20G  0 raid1 
sdc     8:32   0  20G  0 disk  
└─md1   9:1    0  20G  0 raid1 
3 Format and Mount RAID 1
# After the resync completes, format the RAID device
[root@centos ~ 10:59:25]# mkfs.xfs /dev/md1
mkfs.xfs: /dev/md1 appears to contain an existing filesystem (xfs).
mkfs.xfs: Use the -f option to force overwrite.   # old metadata remains; use -f to force the format
[root@centos ~ 10:59:43]# mkfs.xfs -f /dev/md1
meta-data=/dev/md1               isize=512    agcount=4, agsize=1309632 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5238528, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# Create the mount point
[root@centos ~ 10:59:55]# mkdir /raid/raid1
# Mount the device
[root@centos ~ 11:00:15]# mount /dev/md1 /raid/raid1
# Verify the mount
[root@centos ~ 11:00:39]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 979M     0  979M   0% /dev
tmpfs                    991M     0  991M   0% /dev/shm
tmpfs                    991M  9.6M  981M   1% /run
tmpfs                    991M     0  991M   0% /sys/fs/cgroup
/dev/mapper/centos-root   50G  1.9G   49G   4% /
/dev/sda1               1014M  171M  844M  17% /boot
/dev/mapper/centos-home  147G   33M  147G   1% /home
tmpfs                    199M     0  199M   0% /run/user/0
/dev/md1                  20G   33M   20G   1% /raid/raid1
# Test writing data
[root@centos ~ 11:00:42]# cp /etc/ho* /raid/raid1
[root@centos ~ 11:01:21]# ls /raid/raid1
host.conf  hostname  hosts  hosts.allow  hosts.deny
4 Add a Hot Spare
# Add sdd to RAID 1 as a hot spare
[root@centos ~ 11:01:43]# mdadm --add /dev/md1 /dev/sdd
mdadm: added /dev/sdd
# Check the hot spare's status (spare = standby)
[root@centos ~ 11:02:26]# mdadm --detail /dev/md1 
/dev/md1:
           Version : 1.2
     Creation Time : Thu Apr  9 10:57:40 2026
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Thu Apr  9 11:02:33 2026
             State : clean 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

              Name : centos.7.song.cloud:1  (local to host centos.7.song.cloud)
              UUID : 1c3a6683:9319d75f:19a612b8:4b5eb857
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

       2       8       48        -      spare   /dev/sdd

5 Simulate a Disk Failure
# Manually mark sdc as failed
[root@centos ~ 11:02:50]# mdadm --fail /dev/md1 /dev/sdc
mdadm: set /dev/sdc faulty in /dev/md1

# Check the post-failure state (sdd takes over automatically and resyncs)
[root@centos ~ 11:03:21]# mdadm --detail /dev/md1 
/dev/md1:
           Version : 1.2
     Creation Time : Thu Apr  9 10:57:40 2026
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Thu Apr  9 11:03:27 2026
             State : clean, degraded, recovering 
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 1

Consistency Policy : resync

    Rebuild Status : 7% complete

              Name : centos.7.song.cloud:1  (local to host centos.7.song.cloud)
              UUID : 1c3a6683:9319d75f:19a612b8:4b5eb857
            Events : 21

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       2       8       48        1      spare rebuilding   /dev/sdd

       1       8       32        -      faulty   /dev/sdc


# Verify the data is still accessible
[root@centos ~ 11:05:06]# ls /raid/raid1
host.conf  hostname  hosts  hosts.allow  hosts.deny
6 Remove the Failed Disk
# Remove the failed disk sdc
[root@centos ~ 11:05:36]# mdadm --remove /dev/md1 /dev/sdc
mdadm: hot removed /dev/sdc from /dev/md1
# Verify the removal
[root@centos ~ 11:05:54]# mdadm --detail /dev/md1 
/dev/md1:
           Version : 1.2
     Creation Time : Thu Apr  9 10:57:40 2026
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Thu Apr  9 11:06:04 2026
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : centos.7.song.cloud:1  (local to host centos.7.song.cloud)
              UUID : 1c3a6683:9319d75f:19a612b8:4b5eb857
            Events : 40

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       2       8       48        1      active sync   /dev/sdd

7 Simulate Another Failure
# Manually mark sdd as failed
[root@centos ~ 11:06:12]# mdadm --fail /dev/md1 /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md1

[root@centos ~ 11:06:38]# mdadm -D /dev/md1 
/dev/md1:
           Version : 1.2
     Creation Time : Thu Apr  9 10:57:40 2026
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Thu Apr  9 11:06:38 2026
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : resync

              Name : centos.7.song.cloud:1  (local to host centos.7.song.cloud)
              UUID : 1c3a6683:9319d75f:19a612b8:4b5eb857
            Events : 42

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       -       0        0        1      removed

       2       8       48        -      faulty   /dev/sdd

# The files can still be read
[root@centos ~ 11:07:02]# ls /raid/raid1
host.conf  hostname  hosts  hosts.allow  hosts.deny
# Files can still be written
[root@centos ~ 11:07:25]# echo hello > /raid/raid1/test.txt
# The filesystem can be unmounted and remounted
[root@centos ~ 11:07:44]# umount /raid/raid1 
[root@centos ~ 11:08:00]# mount /dev/md1 /raid/raid1
[root@centos ~ 11:08:14]# ls /raid/raid1
host.conf  hostname  hosts  hosts.allow  hosts.deny  test.txt

Summary: in a RAID 1 array, the failure of any single member does not affect data integrity.

8 Delete RAID 1
# Unmount the mount point
[root@centos ~ 11:08:19]# umount /dev/md1 

# Stop the RAID array
[root@centos ~ 11:08:44]# mdadm --stop /dev/md1 
mdadm: stopped /dev/md1

# Wipe the superblocks from the physical disks
[root@centos ~ 11:08:57]# mdadm --zero-superblock /dev/sd{b..d}

Overwriting with dd wipes the disk more thoroughly.

[root@centos7 ~ 11:22:09]# dd if=/dev/zero of=/dev/sdb bs=1M count=1024

This fills the first 1 GiB of /dev/sdb with zeros.

  • if: Input File; /dev/zero produces endless zeros
  • of: Output File
  • bs: Block Size, how much data per write
  • count: how many blocks to write in total
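The total amount written is bs × count, which the shell can confirm:

```shell
# bs=1M count=1024 writes 1024 blocks of 1 MiB = 1 GiB from the start of
# the disk, enough to cover the md superblock and partition table.
bs=$(( 1024 * 1024 ))      # 1M in bytes
count=1024
echo $(( bs * count ))     # prints 1073741824 (1 GiB)
```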
9 Notes

The core value of RAID 1 is redundancy, not extra capacity: because of mirroring, the array's total capacity equals a single disk's capacity no matter how many disks are added, so adding disks cannot increase usable space.
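The capacity difference is easy to see numerically (20G members, as in this lab):

```shell
# RAID 1 usable capacity = one member's capacity, regardless of count;
# RAID 0 usable capacity = sum of all members.
disk_g=20; members=2
echo "raid1 usable: ${disk_g}G"                 # prints: raid1 usable: 20G
echo "raid0 usable: $(( disk_g * members ))G"   # prints: raid0 usable: 40G
```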

Managing RAID 5
1 Create RAID 5
# Create a RAID 5 array: device /dev/md5, level 5, 4 member disks (sdb, sdc, sdd, sde)
[root@centos ~ 11:34:50]# mdadm --create /dev/md5 --level 5 --raid-device 4 /dev/sd{b..e}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.

Note: RAID 5 requires at least 3 disks; here 4 are used, with one disk's worth of capacity devoted to distributed parity.
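RAID 5's usable capacity is (N − 1) × member size, since one disk's worth of space holds the distributed parity:

```shell
# With four 20G members, one disk's worth goes to parity:
members=4; disk_g=20
echo "raid5 usable: $(( (members - 1) * disk_g ))G"   # prints: raid5 usable: 60G
```

This matches the roughly 60G Array Size that mdadm --detail reports for this array.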

2 Check RAID 5 Status
[root@centos ~ 11:35:46]# mdadm --detail /dev/md5 
/dev/md5:
           Version : 1.2
     Creation Time : Thu Apr  9 11:35:46 2026
        Raid Level : raid5
        Array Size : 62862336 (59.95 GiB 64.37 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Apr  9 11:36:32 2026
             State : clean, degraded, recovering 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 45% complete

              Name : centos.7.song.cloud:5  (local to host centos.7.song.cloud)
              UUID : fcdb16be:28944a98:61093f8e:254430ce
            Events : 8

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      spare rebuilding   /dev/sde

Note: recovering means the array is still building/syncing; wait for it to finish before formatting.

# View the RAID-to-physical-disk mapping
[root@centos ~ 11:36:34]# lsblk /dev/md5 
NAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
md5    9:5    0  60G  0 raid5 

[root@centos ~ 11:37:01]# lsblk /dev/sd{b..e}
NAME  MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sdb     8:16   0  20G  0 disk  
└─md5   9:5    0  60G  0 raid5 
sdc     8:32   0  20G  0 disk  
└─md5   9:5    0  60G  0 raid5 
sdd     8:48   0  20G  0 disk  
└─md5   9:5    0  60G  0 raid5 
sde     8:64   0  20G  0 disk  
└─md5   9:5    0  60G  0 raid5 

3 Format and Mount RAID 5
# After the array finishes syncing, format the device
[root@centos ~ 11:37:20]# mkfs.xfs /dev/md5 
meta-data=/dev/md5               isize=512    agcount=16, agsize=982144 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=15714304, imaxpct=25
         =                       sunit=128    swidth=384 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=7680, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# Create the mount point
[root@centos ~ 11:37:37]# mkdir /raid/raid5
# Mount the device
[root@centos ~ 11:37:47]# mount /dev/md5 /raid/raid5
# Verify the mount
[root@centos ~ 11:38:04]# df -h /raid/raid5
Filesystem      Size  Used Avail Use% Mounted on
/dev/md5         60G   33M   60G   1% /raid/raid5

# Test writing data
[root@centos ~ 11:38:16]# cp /etc/ho* /raid/raid5
[root@centos ~ 11:38:40]# ls /raid/raid5
host.conf  hostname  hosts  hosts.allow  hosts.deny
4 Add a Hot Spare
# Add sdf to RAID 5 as a hot spare
[root@centos ~ 11:38:49]# mdadm --add /dev/md5 /dev/sdf
mdadm: added /dev/sdf

# Check the hot spare's status
[root@centos ~ 11:39:10]# mdadm --detail /dev/md5 
/dev/md5:
           Version : 1.2
     Creation Time : Thu Apr  9 11:35:46 2026
        Raid Level : raid5
        Array Size : 62862336 (59.95 GiB 64.37 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Thu Apr  9 11:39:10 2026
             State : clean 
    Active Devices : 4
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : centos.7.song.cloud:5  (local to host centos.7.song.cloud)
              UUID : fcdb16be:28944a98:61093f8e:254430ce
            Events : 19

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde

       5       8       80        -      spare   /dev/sdf

5 Simulate a Disk Failure
# Mark sdb as failed
[root@centos ~ 11:39:23]# mdadm --fail /dev/md5 /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md5

# Check the post-failure state (sdf takes over automatically and rebuilds)
[root@centos ~ 11:39:56]# mdadm --detail /dev/md5 
/dev/md5:
           Version : 1.2
     Creation Time : Thu Apr  9 11:35:46 2026
        Raid Level : raid5
        Array Size : 62862336 (59.95 GiB 64.37 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Thu Apr  9 11:40:04 2026
             State : clean, degraded, recovering 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 1
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 12% complete

              Name : centos.7.song.cloud:5  (local to host centos.7.song.cloud)
              UUID : fcdb16be:28944a98:61093f8e:254430ce
            Events : 24

    Number   Major   Minor   RaidDevice State
       5       8       80        0      spare rebuilding   /dev/sdf
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde

       0       8       16        -      faulty   /dev/sdb

# Verify the data is still accessible
[root@centos ~ 11:40:09]# ls /raid/raid5
host.conf  hostname  hosts  hosts.allow  hosts.deny
6 Remove the Failed Disk
# Remove the failed disk sdb
[root@centos ~ 11:40:21]# mdadm --remove /dev/md5 /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md5

# Verify the removal
[root@centos ~ 11:40:40]# mdadm --detail /dev/md5 
/dev/md5:
           Version : 1.2
     Creation Time : Thu Apr  9 11:35:46 2026
        Raid Level : raid5
        Array Size : 62862336 (59.95 GiB 64.37 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Apr  9 11:40:48 2026
             State : clean, degraded, recovering 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 52% complete

              Name : centos.7.song.cloud:5  (local to host centos.7.song.cloud)
              UUID : fcdb16be:28944a98:61093f8e:254430ce
            Events : 34

    Number   Major   Minor   RaidDevice State
       5       8       80        0      spare rebuilding   /dev/sdf
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde
7 Grow RAID 5

Note: RAID 5 only supports growing (adding member disks), not shrinking. Growing is only allowed while the array is in a clean state; it must not be attempted while the array is degraded or rebuilding.
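When scripting an expansion, it helps to gate the --grow on the reported state. A sketch; the commented fetch line is an assumption about the English output format of mdadm --detail:

```shell
# Return success only when the array state is exactly "clean";
# degraded/recovering/reshaping arrays must not be grown.
can_grow() {
    case "$1" in
        clean) return 0 ;;
        *)     return 1 ;;
    esac
}
can_grow "clean" && echo "ok to grow"                        # prints: ok to grow
can_grow "clean, degraded, recovering" || echo "not clean"   # prints: not clean
# On a live system the state string could be fetched with something like:
# state=$(mdadm --detail /dev/md5 | sed -n 's/^ *State : *//p')
```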

# Add 2 disks (sdb, sdg) to the RAID 5 array
[root@centos ~ 11:41:06]# mdadm --add /dev/md5 /dev/sdb /dev/sdg
mdadm: added /dev/sdb
mdadm: added /dev/sdg

# Check the new disks' status (spare = standby)
[root@centos ~ 11:42:34]# mdadm --detail /dev/md5 
/dev/md5:
           Version : 1.2
     Creation Time : Thu Apr  9 11:35:46 2026
        Raid Level : raid5
        Array Size : 62862336 (59.95 GiB 64.37 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Thu Apr  9 11:42:34 2026
             State : clean 
    Active Devices : 4
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : centos.7.song.cloud:5  (local to host centos.7.song.cloud)
              UUID : fcdb16be:28944a98:61093f8e:254430ce
            Events : 47

    Number   Major   Minor   RaidDevice State
       5       8       80        0      active sync   /dev/sdf
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde

       6       8       16        -      spare   /dev/sdb
       7       8       96        -      spare   /dev/sdg

# Grow the array to 5 active members (--grow performs the reshape)
[root@centos ~ 11:42:41]# mdadm --grow /dev/md5 --raid-device 5

# Wait for the reshape to finish (check progress); do not proceed until it completes
[root@centos ~ 11:43:07]# mdadm --detail /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Thu Apr  9 11:35:46 2026
        Raid Level : raid5
        Array Size : 62862336 (59.95 GiB 64.37 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 5
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Thu Apr  9 11:43:20 2026
             State : clean, reshaping 
    Active Devices : 5
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Reshape Status : 15% complete
     Delta Devices : 1, (4->5)

              Name : centos.7.song.cloud:5  (local to host centos.7.song.cloud)
              UUID : fcdb16be:28944a98:61093f8e:254430ce
            Events : 77

    Number   Major   Minor   RaidDevice State
       5       8       80        0      active sync   /dev/sdf
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde
       7       8       96        4      active sync   /dev/sdg

       6       8       16        -      spare   /dev/sdb

# Verify the array capacity (grown from 60G to 80G)
[root@centos ~ 11:45:47]# lsblk /dev/md5 
NAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
md5    9:5    0  80G  0 raid5 /raid/raid5

# Grow the filesystem (XFS uses xfs_growfs)
[root@centos ~ 13:25:01]# xfs_growfs /raid/raid5
meta-data=/dev/md5               isize=512    agcount=16, agsize=982144 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=15714304, imaxpct=25
         =                       sunit=128    swidth=384 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=7680, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 15714304 to 20954112
[root@centos ~ 13:31:41]# df -h /dev/md5
Filesystem      Size  Used Avail Use% Mounted on
/dev/md5         80G   34M   80G   1% /raid/raid5
8 Simulate Another Disk Failure
# First remove the hot spare
[root@centos ~ 13:31:54]# mdadm --remove /dev/md5 /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md5
# Fail sdg
[root@centos ~ 13:44:53]# mdadm --fail /dev/md5 /dev/sdg
mdadm: set /dev/sdg faulty in /dev/md5
# Check the result
[root@centos ~ 13:45:14]# mdadm -D /dev/md5 |tail -5
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde
       -       0        0        4      removed

       7       8       96        -      faulty   /dev/sdg
# The files can still be read
[root@centos ~ 13:45:34]# ls /raid/raid5
host.conf  hostname  hosts  hosts.allow  hosts.deny
# Files can still be written
[root@centos ~ 13:47:01]# echo hello > /raid/raid5/test.txt
# The filesystem can be unmounted and remounted
[root@centos ~ 13:47:36]# umount /raid/raid5 
[root@centos ~ 13:47:49]# mount /dev/md5 /raid/raid5
[root@centos ~ 13:48:08]# ls /raid/raid5
host.conf  hostname  hosts  hosts.allow  hosts.deny  test.txt

9 Delete RAID 5
# Unmount the mount point
[root@centos ~ 13:48:16]# umount /dev/md5
# Stop the RAID array
[root@centos ~ 13:48:41]# mdadm --stop /dev/md5 
mdadm: stopped /dev/md5

# Wipe the superblocks from the physical disks
[root@centos ~ 13:49:41]# mdadm --zero-superblock /dev/sd{b..g}

dd can also be used for a more thorough wipe:

[root@centos ~ 13:49:23]# for device in /dev/sd{b..g}
> do
> dd if=/dev/zero of=$device bs=1M count=1024
> done
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.90595 s, 563 MB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.668338 s, 1.6 GB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.658689 s, 1.6 GB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.681235 s, 1.6 GB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.658485 s, 1.6 GB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.660326 s, 1.6 GB/s

10 Reassemble RAID 5

Note: if the array was stopped but the superblocks were not wiped, it can be reassembled with the following command, with no data loss:

[root@centos7 ~]# mdadm --assemble /dev/md5 /dev/sd{b..g}
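For the array to be reassembled automatically at boot, its definition can be recorded in mdadm's configuration file. A common approach (a sketch, run after the array is assembled; this is a configuration step, not something to test in isolation):

```shell
# Append ARRAY lines describing the currently running arrays to the
# config file that mdadm reads at assembly time.
mdadm --detail --scan >> /etc/mdadm.conf
# Afterwards the array can be assembled without naming every member disk:
# mdadm --assemble --scan
```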

LVM Storage

The lab environment requires adding three 20G disks to the virtual machine, named sdb, sdc, and sdd. Their details can be checked with:

[root@centos7 ~]# lsblk /dev/sd{b..d}
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb    8:16   0  20G  0 disk 
sdc    8:32   0  20G  0 disk 
sdd    8:48   0  20G  0 disk

Basic Logical Volume Management

1 Drawbacks of Traditional Partitioning

Traditional disk partitioning falls short in flexibility and reliability, mainly because:

  1. A partition must occupy a physically contiguous region of the disk, making it hard to grow (especially when there is no free space immediately after it);
  2. A filesystem built on a partition cannot span multiple disks, so it is limited by a single disk's capacity;
  3. When a disk physically fails, all data in its partitions is lost outright; there is no redundancy.
2 Advantages of Logical Volume Management

LVM is a flexible storage-management layer that addresses the shortcomings of traditional partitioning. Its core advantages:

  1. Flexible resizing: logical volumes can be grown and shrunk online, without re-planning the disk layout;
  2. Spanning disks: a logical volume can pool space from multiple disks, making very large filesystems easy to create;
  3. Redundancy: mirrored (RAID) logical volumes are supported, so data survives the loss of a single disk;
  4. Snapshots: a logical volume snapshot preserves the dataset at a point in time, much like a VM snapshot, which is convenient for recovery.
3 LVM Core Concepts

LVM pools multiple disks/partitions into a unified storage pool, then carves logical volumes out of it on demand. The core concepts:

  • Physical Volume (PV): LVM's basic storage unit, created from a block device such as a disk, a partition, or a RAID array; it carries LVM-specific metadata;
  • Volume Group (VG): a logical storage pool made up of one or more physical volumes; think of it as a "virtual disk";
  • Logical Volume (LV): a slice of space carved from a volume group; think of it as a "virtual partition" on which a filesystem can be created and mounted.

In short: a volume group pools physical volumes into storage, and logical volumes are carved from the volume group for use.


4 Basic LVM Workflow

The core LVM operations follow the flow "create physical volumes → create a volume group → create logical volumes":
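The flow, plus formatting and mounting, condenses to the following outline (a sketch using the device and volume names of this lab; /mnt is an arbitrary mount point):

```shell
pvcreate /dev/sdb                  # 1. initialize the disk as a physical volume
vgcreate webapp /dev/sdb           # 2. pool the PV(s) into a volume group
lvcreate -n webapp01 -L 5G webapp  # 3. carve a 5G logical volume from the VG
mkfs.xfs /dev/webapp/webapp01      # 4. put a filesystem on the LV
mount /dev/webapp/webapp01 /mnt    # 5. mount it for use
```

Each step is shown in detail, with its verification commands, in the sections that follow.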


5 Create Physical Volumes (PV)

Physical volumes are the foundation of LVM; a block device must be initialized as a PV before LVM can manage it:

# 1. Create a single physical volume
[root@centos ~ 17:03:45]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.

# 2. Create the remaining physical volumes in one batch
[root@centos ~ 17:07:03]# pvcreate /dev/sd{c..g}
  Physical volume "/dev/sdc" successfully created.
  Physical volume "/dev/sdd" successfully created.
  Physical volume "/dev/sde" successfully created.
  Physical volume "/dev/sdf" successfully created.
  Physical volume "/dev/sdg" successfully created.

# 3. List all PVs (brief)
[root@centos ~ 17:07:26]# pvs
  PV         VG     Fmt  Attr PSize    PFree 
  /dev/sda2  centos lvm2 a--  <199.00g  4.00m
  /dev/sdb          lvm2 ---    20.00g 20.00g
  /dev/sdc          lvm2 ---    20.00g 20.00g
  /dev/sdd          lvm2 ---    20.00g 20.00g
  /dev/sde          lvm2 ---    20.00g 20.00g
  /dev/sdf          lvm2 ---    20.00g 20.00g
  /dev/sdg          lvm2 ---    20.00g 20.00g
# 4. Show details for a single PV (/dev/sdb as an example)
[root@centos ~ 17:07:35]# pvdisplay /dev/sdb
  "/dev/sdb" is a new physical volume of "20.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb
  VG Name               
  PV Size               20.00 GiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               3J5SC0-2DkD-NY5y-0fE7-m8ST-qelU-rus9Nb
6 Create a Volume Group (VG)

A volume group pools physical volumes into one storage pool; a VG may be created from a single PV or from several:

# 1. Create a VG containing a single PV (VG name: webapp, PV: /dev/sdb)
[root@centos ~ 17:08:23]# vgcreate webapp /dev/sdb
  Volume group "webapp" successfully created

# 2. Create a VG containing multiple PVs (VG name: dbapp, PVs: /dev/sdc, /dev/sdd)
[root@centos ~ 17:09:38]# vgcreate dbapp /dev/sd{c,d}
  Volume group "dbapp" successfully created

# 3. Check PV membership (verify each PV joined its VG)
[root@centos ~ 17:10:05]# pvs
  PV         VG     Fmt  Attr PSize    PFree  
  /dev/sda2  centos lvm2 a--  <199.00g   4.00m
  /dev/sdb   webapp lvm2 a--   <20.00g <20.00g
  /dev/sdc   dbapp  lvm2 a--   <20.00g <20.00g
  /dev/sdd   dbapp  lvm2 a--   <20.00g <20.00g
  /dev/sde          lvm2 ---    20.00g  20.00g
  /dev/sdf          lvm2 ---    20.00g  20.00g
  /dev/sdg          lvm2 ---    20.00g  20.00g

# 4. List all VGs (brief)
[root@centos ~ 17:10:34]# vgs
  VG     #PV #LV #SN Attr   VSize    VFree  
  centos   1   3   0 wz--n- <199.00g   4.00m
  dbapp    2   0   0 wz--n-   39.99g  39.99g
  webapp   1   0   0 wz--n-  <20.00g <20.00g

# 5. Show details for a single VG (dbapp as an example)
[root@centos ~ 17:10:56]# vgdisplay dbapp
  --- Volume group ---
  VG Name               dbapp
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               39.99 GiB
  PE Size               4.00 MiB
  Total PE              10238
  Alloc PE / Size       0 / 0   
  Free  PE / Size       10238 / 39.99 GiB
  VG UUID               pVOr3i-vmuf-kArV-MsVE-UZCS-OokX-Sn438D
7 Create Logical Volumes (LV)

Logical volumes are carved from a volume group; a filesystem can be created on them directly, and their space may span multiple PVs:

# 1. Create an LV (VG: webapp, LV name: webapp01, size: 5G)
[root@centos ~ 17:11:28]# lvcreate -n webapp01 -L 5G webapp 
  Logical volume "webapp01" created.

# 2. Create an LV in the multi-PV VG (VG: dbapp, LV name: data01, size: 10G)
[root@centos ~ 17:13:13]# lvcreate  -n data01 -L 10G dbapp 
  Logical volume "data01" created.

# 3. List all LVs (brief)
[root@centos ~ 17:13:59]# lvs
  LV       VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home     centos -wi-ao---- 146.99g                                                    
  root     centos -wi-ao----  50.00g                                                    
  swap     centos -wi-ao----   2.00g                                                    
  data01   dbapp  -wi-a-----  10.00g                                                    
  webapp01 webapp -wi-a-----   5.00g                                                   

A logical volume can be accessed through three paths, all of which point at the same device:

[root@centos ~ 17:14:03]# ls -l /dev/dbapp/data01 /dev/mapper/dbapp-data01 
lrwxrwxrwx 1 root root 7 Apr  9 17:13 /dev/dbapp/data01 -> ../dm-4
lrwxrwxrwx 1 root root 7 Apr  9 17:13 /dev/mapper/dbapp-data01 -> ../dm-4
  • /dev/<VG名>/<LV名>:直观易记,推荐使用;
  • /dev/mapper/<VG名>-<LV名>:系统映射路径;
  • /dev/dm-N:内核设备映射器路径(N为数字)。

查看逻辑卷详细信息

[root@centos ~ 17:15:28]# lvdisplay /dev/dbapp/data01 
  --- Logical volume ---
  LV Path                /dev/dbapp/data01 # LV 访问路径
  LV Name                data01 #卷名
  VG Name                dbapp # 所属卷组
  LV UUID                q3blD8-IWOj-t6AO-HolL-F5yC-RbDf-YlYNAv
  LV Write Access        read/write
  LV Creation host, time centos.7.song.cloud, 2026-04-09 17:13:59 +0800
  LV Status              available # 可用状态
  # open                 0 # 未挂载
  LV Size                10.00 GiB # LV 大小
  Current LE             2560 # LE(逻辑扩展单元)数量,1LE=1PE
  Segments               1 # 段数为 1:空间连续分配,仅落在单个 PV(sdc)上
  Allocation             inherit # 继承VG的分配策略
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:4 # 块设备号
   
# 验证 PV 空间使用  
[root@centos ~ 17:17:26]# pvs
  PV         VG     Fmt  Attr PSize    PFree  
  /dev/sda2  centos lvm2 a--  <199.00g   4.00m
  /dev/sdb   webapp lvm2 a--   <20.00g <15.00g
  /dev/sdc   dbapp  lvm2 a--   <20.00g <10.00g
  /dev/sdd   dbapp  lvm2 a--   <20.00g <20.00g
  /dev/sde          lvm2 ---    20.00g  20.00g
  /dev/sdf          lvm2 ---    20.00g  20.00g
  /dev/sdg          lvm2 ---    20.00g  20.00g
 # 验证各 LV 的落盘位置(webapp01 位于 sdb,data01 仅位于 sdc,并未跨盘)
[root@centos ~ 17:17:44]# lsblk /dev/sd{b..d}
NAME              MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb                 8:16   0  20G  0 disk 
└─webapp-webapp01 253:3    0   5G  0 lvm  
sdc                 8:32   0  20G  0 disk 
└─dbapp-data01    253:4    0  10G  0 lvm  
sdd                 8:48   0  20G  0 disk 
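上文 lvdisplay 输出中的 Current LE 与 PE Size 可以直接换算出 LV 大小(1 LE = 1 PE)。下面用 shell 算术做一个小验证,数值取自 data01 的输出:

```shell
# 数值取自上文 lvdisplay /dev/dbapp/data01 的输出
LE_COUNT=2560      # Current LE
PE_SIZE_MIB=4      # PE Size = 4.00 MiB
# LV 大小(GiB)= LE 数 × PE 大小(MiB)÷ 1024
echo "$(( LE_COUNT * PE_SIZE_MIB / 1024 )) GiB"   # 输出:10 GiB
```

计算结果 10 GiB 与 lvdisplay 中的 LV Size 一致。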
8 在逻辑卷上创建文件系统

逻辑卷创建后需格式化文件系统并挂载,才能供业务使用:

# 1. 格式化 XFS 文件系统(CentOS7 推荐)
[root@centos ~ 17:18:02]# mkfs.xfs /dev/webapp/webapp01 
meta-data=/dev/webapp/webapp01   isize=512    agcount=4, agsize=327680 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1310720, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# 2. 临时挂载逻辑卷(重启失效)
[root@centos ~ 17:21:23]# mkdir /usr/share/nginx/html -p
[root@centos ~ 17:22:48]# mount /dev/webapp/webapp01 /usr/share/nginx/html/

# 3. 持久化挂载(修改 /etc/fstab,重启生效)
# 需在 /etc/fstab 中添加如下行:
# /dev/webapp/webapp01  /usr/share/nginx/html  xfs  defaults  0 0
[root@centos ~ 17:23:48]# mount -a # 挂载 fstab 中所有未挂载的条目,验证配置是否正确
9 LVM 清理

如需清理 LVM 配置,需按“卸载文件系统 → 删除 LV → 删除 VG → 删除 PV”的顺序操作:

# 1. 卸载已挂载的逻辑卷
[root@centos ~ 17:28:27]# umount /dev/webapp/webapp01

# 2. 删除逻辑卷(需确认,输入y)
[root@centos ~ 17:29:21]# lvremove /dev/webapp/webapp01 /dev/dbapp/data01 
Do you really want to remove active logical volume webapp/webapp01? [y/n]: y
  Logical volume "webapp01" successfully removed
Do you really want to remove active logical volume dbapp/data01? [y/n]: y
  Logical volume "data01" successfully removed

# 3. 删除卷组
[root@centos ~ 18:11:46]# vgremove webapp dbapp 
  Volume group "webapp" successfully removed
  Volume group "dbapp" successfully removed

# 4. 删除物理卷(清除 LVM 元数据)
[root@centos ~ 18:11:58]# pvremove /dev/sd{b..d}
  Labels on physical volume "/dev/sdb" successfully wiped.
  Labels on physical volume "/dev/sdc" successfully wiped.
  Labels on physical volume "/dev/sdd" successfully wiped.

卷组的扩展与缩减

卷组的容量可通过添加/移除 PV 灵活调整,满足业务存储需求变化。

1 环境准备

先创建基础卷组和逻辑卷,用于后续扩展/缩减测试:

# 1. 创建卷组 webapp(关联 PV /dev/sdb,自动初始化 PV)
[root@centos ~ 18:12:20]# vgcreate webapp /dev/sdb
  Physical volume "/dev/sdb" successfully created.
  Volume group "webapp" successfully created

# 2. 在 webapp 中创建 10G 逻辑卷 webapp01
[root@centos ~ 18:22:59]# lvcreate -n webapp01 -L 10G webapp 
# 新 LV 复用了此前格式化过 XFS 的磁盘空间,lvcreate 检测到残留的文件系统签名,确认擦除(y)即可
WARNING: xfs signature detected on /dev/webapp/webapp01 at offset 0. Wipe it? [y/n]: y
  Wiping xfs signature on /dev/webapp/webapp01.
  Logical volume "webapp01" created.

2 扩展卷组(VG)

当卷组空间不足时,可添加新 PV 扩展容量:

# 将 /dev/sdc、/dev/sdd 加入 webapp 卷组(自动初始化 PV)
[root@centos ~ 18:23:23]# vgextend webapp /dev/sd{c..d}
  Physical volume "/dev/sdc" successfully created.
  Physical volume "/dev/sdd" successfully created.
  Volume group "webapp" successfully extended

3 缩减卷组(VG)

如需移除卷组中的 PV(如更换更大硬盘),需确保 PV 未被使用,步骤如下:

# 1. 查看 PV 使用状态(/dev/sdb 已分配10G,sdc/sdd 空闲)
[root@centos ~ 18:24:19]# pvs
  PV         VG     Fmt  Attr PSize    PFree  
  /dev/sda2  centos lvm2 a--  <199.00g   4.00m
  /dev/sdb   webapp lvm2 a--   <20.00g <10.00g
  /dev/sdc   webapp lvm2 a--   <20.00g <20.00g
  /dev/sdd   webapp lvm2 a--   <20.00g <20.00g

# 2. 直接移除已使用的 PV 会报错
[root@centos ~ 18:24:36]# vgreduce webapp /dev/sdb
  Physical volume "/dev/sdb" still in use

# 3. 迁移 PV 数据(pvmove 未指定目标 PV,由 LVM 自动选择卷组内的空闲 PV)
[root@centos ~ 18:24:55]# pvmove /dev/sdb
  /dev/sdb: Moved: 3.71%
  /dev/sdb: Moved: 100.00%
# 注意:pvmove 不指定目标 PV 时,数据会被迁移到卷组内任意空闲 PV(本例实际移到了 /dev/sde)
# 如需指定迁移目标,应执行:pvmove /dev/sdb /dev/sdd

# 4. 再次查看 PV 状态(/dev/sdb 已空闲,数据实际迁移到了 /dev/sde)
[root@centos ~ 18:25:43]# pvs
  PV         VG     Fmt  Attr PSize    PFree  
  /dev/sda2  centos lvm2 a--  <199.00g   4.00m
  /dev/sdb   webapp lvm2 a--   <20.00g <20.00g
  /dev/sdc   webapp lvm2 a--   <20.00g <20.00g
  /dev/sdd   webapp lvm2 a--   <20.00g <20.00g
  /dev/sde   webapp lvm2 a--   <20.00g <10.00g

# 5. 从 webapp 卷组中移除已空闲的 /dev/sdb
[root@centos ~ 18:26:12]# vgreduce webapp /dev/sdb
  Removed "/dev/sdb" from volume group "webapp"

# 6. 验证移除结果(/dev/sdb 已脱离 webapp)
[root@centos ~ 18:26:36]# pvs
  PV         VG     Fmt  Attr PSize    PFree  
  /dev/sda2  centos lvm2 a--  <199.00g   4.00m
  /dev/sdb          lvm2 ---    20.00g  20.00g
  /dev/sdc   webapp lvm2 a--   <20.00g <20.00g
  /dev/sdd   webapp lvm2 a--   <20.00g <20.00g
  /dev/sde   webapp lvm2 a--   <20.00g <10.00g
  /dev/sdf   webapp lvm2 a--   <20.00g <20.00g
  /dev/sdg   webapp lvm2 a--   <20.00g <20.00g

逻辑卷的扩展与缩减

逻辑卷的容量可直接调整,需结合卷组空闲空间操作。

1 扩展逻辑卷(LV)

卷组有空闲空间时,可直接扩展 LV 容量:

# 给 webapp01 增加 2G 空间(总容量变为 12G)
[root@centos ~ 18:26:42]# lvextend -L +2G /dev/webapp/webapp01 
  Size of logical volume webapp/webapp01 changed from 10.00 GiB (2560 extents) to 12.00 GiB (3072 extents).
  Logical volume webapp/webapp01 successfully resized.

# 验证 LV 大小
[root@centos ~ 18:42:45]# lvs /dev/webapp/webapp01 
  LV       VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  webapp01 webapp -wi-a----- 12.00g 
2 缩减逻辑卷(LV)

当逻辑卷分配的空间有富余时,可缩减其容量(注意:若卷上已有文件系统,直接缩减 LV 会导致数据丢失,需谨慎):

[root@centos ~ 18:44:09]# lvreduce -L -2G /dev/webapp/webapp01 
  WARNING: Reducing active logical volume to 10.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce webapp/webapp01? [y/n]: y
  Size of logical volume webapp/webapp01 changed from 12.00 GiB (3072 extents) to 10.00 GiB (2560 extents).
  Logical volume webapp/webapp01 successfully resized.

# 验证 LV 大小
[root@centos ~ 18:48:32]# lvs /dev/webapp/webapp01 
  LV       VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  webapp01 webapp -wi-a----- 10.00g 

文件系统的扩展与缩减

LV 容量调整后,需同步调整文件系统大小,否则无法使用新增空间(或缩减后空间异常)。

1 扩展 XFS 文件系统

XFS 是 CentOS7 默认文件系统,仅支持扩展,不支持缩减,步骤如下:

环境准备
# 1. 格式化 LV 为 XFS
[root@centos7 ~]# mkfs.xfs /dev/webapp/webapp01

# 2. 创建挂载点并挂载
# 挂载点在前文已创建;若不存在,先执行:mkdir -p /usr/share/nginx/html
[root@centos ~ 18:49:46]# mount /dev/webapp/webapp01 /usr/share/nginx/html/

# 3. 写入测试数据
[root@centos ~ 18:50:41]# cp /etc/host* /usr/share/nginx/html/
[root@centos ~ 18:51:09]# ls /usr/share/nginx/html/
host.conf  hostname  hosts  hosts.allow  hosts.deny
扩展操作
# 第一步:扩展 LV 到 15G
[root@centos ~ 18:51:18]# lvextend -L 15G /dev/webapp/webapp01 
  Size of logical volume webapp/webapp01 changed from 10.00 GiB (2560 extents) to 15.00 GiB (3840 extents).
  Logical volume webapp/webapp01 successfully resized.
# XFS 支持在线扩展:扩展期间文件系统可保持已挂载、可读写、激活状态,无需停机

# 第二步:扩展 XFS 文件系统(指定挂载点)
[root@centos ~ 18:52:08]# xfs_growfs /usr/share/nginx/html/
meta-data=/dev/mapper/webapp-webapp01 isize=512    agcount=4, agsize=655360 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 2621440 to 3932160

# 验证文件系统大小(已扩展到15G,数据未丢失)
[root@centos ~ 18:52:24]# df -h /usr/share/nginx/html/
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/webapp-webapp01   15G   33M   15G   1% /usr/share/nginx/html

# 快捷方式:LV + 文件系统一键扩展(-r 参数自动调用 xfs_growfs)
[root@centos ~ 18:52:54]# lvextend -rL 20G /dev/webapp/webapp01
  Size of logical volume webapp/webapp01 changed from 15.00 GiB (3840 extents) to 20.00 GiB (5120 extents).
  Logical volume webapp/webapp01 successfully resized.
meta-data=/dev/mapper/webapp-webapp01 isize=512    agcount=6, agsize=655360 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=3932160, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 3932160 to 5242880

# 验证最终大小(20G)
[root@centos ~ 18:55:07]# lvs /dev/webapp/webapp01 
  LV       VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  webapp01 webapp -wi-ao---- 20.00g                                                    
[root@centos ~ 18:55:37]# df -h /usr/share/nginx/html/
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/webapp-webapp01   20G   33M   20G   1% /usr/share/nginx/html
2 扩展 EXT4 文件系统

EXT4 支持扩展和缩减,扩展步骤如下:

环境准备
# 1. 卸载原有 XFS 文件系统
[root@centos ~ 18:55:50]# umount /usr/share/nginx/html

# 2. 格式化 LV 为 EXT4
[root@centos ~ 18:56:45]# mkfs.ext4 /dev/webapp/webapp01 
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5242880 blocks
262144 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2153775104
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done 

# 3. 重新挂载并写入测试数据
[root@centos ~ 18:56:57]# mount /dev/webapp/webapp01 /usr/share/nginx/html/
[root@centos ~ 18:57:31]# df -h /usr/share/nginx/html/
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/webapp-webapp01   20G   45M   19G   1% /usr/share/nginx/html
[root@centos ~ 18:58:36]# cp /etc/host* /usr/share/nginx/html/
[root@centos ~ 18:59:01]# ls /usr/share/nginx/html/
host.conf  hostname  hosts  hosts.allow  hosts.deny  lost+found
扩展操作
# 第一步:扩展 LV 到 25G
[root@centos7 ~]# lvextend -L 25G /dev/webapp/webapp01
[root@centos7 ~]# lvs /dev/webapp/webapp01
  LV       VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  webapp01 webapp -wi-ao---- 25.00g 

# 可以看到逻辑卷已扩展到 25G,但文件系统容量尚未同步
[root@centos ~ 19:02:20]# lsblk /dev/mapper/webapp-webapp01 
NAME            MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
webapp-webapp01 253:3    0  25G  0 lvm  /usr/share/nginx/html
[root@centos ~ 19:02:34]# df -h /usr/share/nginx/html/
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/webapp-webapp01   20G   45M   19G   1% /usr/share/nginx/html


# 第二步:扩展 EXT4 文件系统(指定 LV 设备名),使文件系统容量与 LV 同步
[root@centos ~ 19:02:57]# resize2fs /dev/webapp/webapp01 
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/webapp/webapp01 is mounted on /usr/share/nginx/html; on-line resizing required
old_desc_blocks = 3, new_desc_blocks = 4
The filesystem on /dev/webapp/webapp01 is now 6553600 blocks long.

# 验证文件系统大小(25G,数据未丢失)
[root@centos ~ 19:04:38]# df -h /usr/share/nginx/html/
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/webapp-webapp01   25G   44M   24G   1% /usr/share/nginx/html
# 查看目录内容,确认原有数据未丢失
[root@centos ~ 19:05:19]# ls /usr/share/nginx/html/
host.conf  hostname  hosts  hosts.allow  hosts.deny  lost+found


# 快捷方式:LV + 文件系统一键扩展(-r 参数自动调用 resize2fs)
[root@centos ~ 19:05:36]# lvextend -rL 30G /dev/webapp/webapp01
  Size of logical volume webapp/webapp01 changed from 25.00 GiB (6400 extents) to 30.00 GiB (7680 extents).
  Logical volume webapp/webapp01 successfully resized.
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/mapper/webapp-webapp01 is mounted on /usr/share/nginx/html; on-line resizing required
old_desc_blocks = 4, new_desc_blocks = 4
The filesystem on /dev/mapper/webapp-webapp01 is now 7864320 blocks long.

# 验证最终大小(30G)
[root@centos ~ 19:06:18]# lvs /dev/webapp/webapp01
  LV       VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  webapp01 webapp -wi-ao---- 30.00g                                                    
[root@centos ~ 19:06:40]# df -h /usr/share/nginx/html
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/webapp-webapp01   30G   44M   28G   1% /usr/share/nginx/html
3 缩减 EXT4 文件系统(自学)

EXT4 缩减需离线操作(卸载文件系统),且缩减后容量不能小于已用容量,步骤如下:

# 第一步:卸载文件系统(必须离线)
[root@centos ~ 19:06:46]# umount /usr/share/nginx/html

# 第二步:检查文件系统完整性(-f 强制检查)
[root@centos ~ 19:07:26]# e2fsck -f /dev/webapp/webapp01 
e2fsck 1.42.9 (28-Dec-2013)
第一步: 检查inode,块,和大小
第二步: 检查目录结构
第3步: 检查目录连接性
Pass 4: Checking reference counts
第5步: 检查簇概要信息
/dev/webapp/webapp01:14/1966080 文件(0.0% 为非连续的), 167445/7864320 块

# 第三步:缩减文件系统到 10G(需指定目标大小)
[root@centos ~ 19:07:42]# resize2fs /dev/webapp/webapp01 10G
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/webapp/webapp01 to 2621440 (4k) blocks.
The filesystem on /dev/webapp/webapp01 is now 2621440 blocks long.


# 第四步:缩减 LV 到 10G(需确认,输入y)
[root@centos ~ 19:08:21]# lvreduce -L 10G /dev/webapp/webapp01
  WARNING: Reducing active logical volume to 10.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce webapp/webapp01? [y/n]: y
  Size of logical volume webapp/webapp01 changed from 30.00 GiB (7680 extents) to 10.00 GiB (2560 extents).
  Logical volume webapp/webapp01 successfully resized.

# 第五步:挂载并验证(数据未丢失,容量为10G)
[root@centos ~ 19:08:45]# mount /dev/webapp/webapp01 /usr/share/nginx/html/
[root@centos ~ 19:09:01]# df -h /usr/share/nginx/html/
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/webapp-webapp01  9.8G   37M  9.2G   1% /usr/share/nginx/html
[root@centos ~ 19:09:11]# ls /usr/share/nginx/html/
host.conf  hostname  hosts  hosts.allow  hosts.deny  lost+found
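df 显示容量约为 9.8G 而非 10G,这与文件系统自身开销有关。按上文 resize2fs 输出的块数可以验证文件系统总容量确为 10 GiB(数值取自上文输出):

```shell
# 数值取自上文 resize2fs 输出:2621440 个 4KiB 块
BLOCKS=2621440
BLOCK_SIZE=4096
echo "$(( BLOCKS * BLOCK_SIZE / 1024 / 1024 / 1024 )) GiB"   # 输出:10 GiB
```

df 少显示的部分被 EXT4 元数据(inode 表、日志)以及 mkfs.ext4 默认为超级用户预留的 5% 块所占用。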

补充说明:若文件系统建立在传统磁盘分区上,文件系统层面的扩展/缩减命令(xfs_growfs、resize2fs 等)与上述 LVM 场景完全一致,区别仅在于底层块设备(分区)容量的调整方式不同。
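以扩展为例,分区场景的一个最小流程示意如下(假设待扩展分区为 /dev/sdb1;growpart 来自 cloud-utils-growpart 包;该流程未在本文实验中执行,仅供参考):

```shell
# 假设:/dev/sdb1 为待扩展分区;演示环境缺少真实分区时跳过
if command -v growpart >/dev/null 2>&1 && [ -b /dev/sdb1 ]; then
  growpart /dev/sdb 1                # 第一步:将 1 号分区扩展到磁盘可用空间末尾
  xfs_growfs /usr/share/nginx/html   # 第二步(XFS):按挂载点在线扩展文件系统
  # resize2fs /dev/sdb1             # 第二步(EXT4):按设备名扩展文件系统
else
  echo "growpart or /dev/sdb1 not available, skip"
fi
```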

逻辑卷快照

LVM 快照可保留逻辑卷某一时刻的完整数据,用于数据备份或恢复。快照采用写时复制(CoW)机制,实际只需容纳快照存续期间发生变化的数据;若按原 LV 同等容量创建快照,则可保证快照在任何写入量下都不会因空间耗尽而失效。

# 1. 创建快照(-s 表示快照,-n 快照名,-L 快照大小,指定原 LV)
[root@centos ~ 19:13:24]# lvcreate -s -n webapp01-snap1 -L 10G /dev/webapp/webapp01
  Logical volume "webapp01-snap1" created.

# 2. 挂载快照(查看快照中的数据)
# 快照与原卷的文件系统 UUID 相同;XFS 默认拒绝挂载两个 UUID 相同的文件系统,因此快照和原卷同一时刻只能挂载一个
# 若要同时挂载,需先修改快照文件系统的 UUID(XFS 也可用 mount -o nouuid 临时绕过)
[root@centos ~ 19:14:37]# uuidgen
5c500531-2774-4782-a335-a62f6071e0b7

[root@centos ~ 19:14:43]# xfs_admin -U 5c500531-2774-4782-a335-a62f6071e0b7 /dev/webapp/webapp01-snap1 
Clearing log and setting UUID
writing all SBs
new UUID = 5c500531-2774-4782-a335-a62f6071e0b7

# 若是 EXT4 文件系统,则使用 tune2fs 命令修改 UUID
[root@centos7 ~]# tune2fs -U 5c500531-2774-4782-a335-a62f6071e0b7 /dev/webapp/webapp01-snap1

# 挂载测试
[root@centos ~ 19:16:28]# mkdir /webapp/webapp01-snap1 -p
[root@centos ~ 19:16:31]# mount /dev/webapp/webapp01 /usr/share/nginx/html/
[root@centos ~ 19:16:52]# mount /dev/webapp/webapp01-snap1 /webapp/webapp01-snap1/

# 3. 确认快照和原卷均已挂载
[root@centos ~ 19:17:23]# df -h
Filesystem                          Size  Used Avail Use% Mounted on
devtmpfs                            979M     0  979M   0% /dev
tmpfs                               991M     0  991M   0% /dev/shm
tmpfs                               991M  9.6M  981M   1% /run
tmpfs                               991M     0  991M   0% /sys/fs/cgroup
/dev/mapper/centos-root              50G  1.9G   49G   4% /
/dev/sda1                          1014M  171M  844M  17% /boot
/dev/mapper/centos-home             147G   33M  147G   1% /home
tmpfs                               199M     0  199M   0% /run/user/0
/dev/mapper/webapp-webapp01          10G   33M   10G   1% /usr/share/nginx/html
/dev/mapper/webapp-webapp01--snap1   10G   33M   10G   1% /webapp/webapp01-snap1

# 4. 在快照 LV 中写入新数据(验证快照独立性)
[root@centos ~ 19:19:25]# echo hello world > /webapp/webapp01-snap1/hello.txt
[root@centos ~ 19:20:17]# cat /webapp/webapp01-snap1/hello.txt
hello world
# 原 LV(挂载于 /usr/share/nginx/html)上没有该文件,说明写入快照不影响原卷
[root@centos ~ 19:20:22]# cat /usr/share/nginx/html/hello.txt
cat: /usr/share/nginx/html/hello.txt: No such file or directory
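快照的另一个用途是回滚:通过 lvconvert --merge 可把快照时刻的数据合并回原卷,合并完成后快照自动删除。下面是一个流程示意,路径沿用上文;合并前需先卸载相关挂载点,该流程未在本文实验中执行:

```shell
# 假设:快照 /dev/webapp/webapp01-snap1 沿用上文;演示环境无该快照时跳过
SNAP=/dev/webapp/webapp01-snap1
if [ -e "$SNAP" ]; then
  umount /webapp/webapp01-snap1   # 卸载快照
  umount /usr/share/nginx/html    # 卸载原卷(原卷在用时,合并会推迟到其下次激活)
  lvconvert --merge "$SNAP"       # 将原卷回滚到快照时刻,快照随之删除
else
  echo "snapshot not found, skip"
fi
```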

RAID 逻辑卷(自学)(待补充)

LVM 支持创建 RAID 类型的逻辑卷(如 RAID1、RAID5),在逻辑卷层面提供数据冗余保护,实战示例待补充。
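一个创建 RAID 类型逻辑卷的最小示意(假设卷组 webapp 中有足够多带空闲空间的 PV:RAID1 至少 2 块,RAID5 至少 3 块;命令未在本文实验中执行):

```shell
# 假设:卷组 webapp 已存在且空闲 PV 充足;演示环境无该卷组时跳过
if vgs webapp >/dev/null 2>&1; then
  # --type 指定 RAID 类型,-m 1 表示 1 份镜像副本(共占用 2 块 PV)
  lvcreate --type raid1 -m 1 -n lvraid1 -L 2G webapp
  # -i 2 表示 2 个数据条带,加 1 份校验共需 3 块 PV
  lvcreate --type raid5 -i 2 -n lvraid5 -L 2G webapp
  # 查看各子卷与同步进度(Cpy%Sync 列)
  lvs -a webapp
else
  echo "volume group webapp not found, skip"
fi
```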
