S/W RAID mirroring

Mirror setup

  1. Partition disks:

# for pv in sdb sdc
> do
> fdisk -uc /dev/${pv} << eof
> n
> p
> 1
>
>
> t
> fd
> w
> eof
> done
[[snip]]
# fdisk -l /dev/sd[bc] | grep ^/dev
/dev/sdb1               2       11530    10484736   fd  Linux raid autodetect
/dev/sdc1               2       11530    10484736   fd  Linux raid autodetect
  2. Create md array:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
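
The initial resync runs in the background and the array is usable while it proceeds; a quick way to watch progress (not part of the original steps) is /proc/mdstat:

# cat /proc/mdstat
# watch -n 5 cat /proc/mdstat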
  3. Set up mdadm.conf:

# mdadm --detail --scan | tee /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=guest:0 UUID=53e000db:7723b759:ea37062a:db5d2fc1
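
As an optional sanity check, the array can be stopped and re-assembled from the new config; only do this while nothing on the array is mounted or in use:

# mdadm -S /dev/md0
# mdadm --assemble --scan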
  4. Create the pv, vg, and lv as normal:

# pvcreate /dev/md0
  Writing physical volume data to disk "/dev/md0"
  Physical volume "/dev/md0" successfully created
# vgcreate -s 4m vgmirror /dev/md0
  Volume group "vgmirror" successfully created
# lvcreate -L 4g -n mirror vgmirror
  Logical volume "mirror" created
# mkfs.ext4 /dev/vgmirror/mirror
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
[[snip]]
This filesystem will be automatically checked every 25 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
# mkdir -p -m 755 /mnt/mirror
# mount /dev/vgmirror/mirror /mnt/mirror
# echo "This is a test of the emergency broadcasting system." > /mnt/mirror/README
# md5sum /mnt/mirror/README
9a75a8122c57b3ae0e4831d30b744593  /mnt/mirror/README
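
To make the mount persist across reboots, an /etc/fstab entry along these lines should work (the mount options are an assumption; adjust as needed):

# assumed options; adjust to taste
/dev/vgmirror/mirror   /mnt/mirror   ext4   defaults   0   2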

Splitting the mirror

  1. We can split the mirror for protection; at the moment, however, it does not appear that the split mirror can be imported. This would be an effective backup, but it does not have the same flexibility as LVM.

a.  Set one drive to failed and remove it from the original array:

# mdadm -f /dev/md0 /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md0
# mdadm -r /dev/md0 /dev/sdc1
mdadm: hot removed /dev/sdc1 from /dev/md0
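
At this point /dev/md0 should report a degraded state with a single active device; a quick check:

# cat /proc/mdstat
# mdadm --detail /dev/md0 | grep -E 'State|Devices'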

b.  Create a new array using the 'failed' drive:
# mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdc1
[[output snipped]]
# mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 name=guest:0 UUID=53e000db:7723b759:ea37062a:db5d2fc1
ARRAY /dev/md1 metadata=1.2 name=guest:1 UUID=20591d51:ecf7a5c9:2e7d494c:afd2a99f
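
If needed, the member superblocks can be examined directly to confirm the two halves now belong to different arrays (an optional check, not in the original notes):

# mdadm --examine /dev/sdb1
# mdadm --examine /dev/sdc1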

c.  If the upgrade/testing/patching is successful:
    1.  Stop and delete both arrays:
# mdadm -S /dev/md1
mdadm: stopped /dev/md1
# mdadm -S /dev/md0
mdadm: stopped /dev/md0

    2.  Recreate the original array using the original device:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
[[output snipped]]
# mdadm --add /dev/md0 /dev/sdc1
mdadm: added /dev/sdc1
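
The re-added device resyncs in the background; to block until the rebuild completes (handy in scripts), or simply to watch it:

# mdadm --wait /dev/md0
# cat /proc/mdstat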

d.  If the upgrade/testing/patching failed:
    1.  Stop and delete both arrays:
# mdadm -S /dev/md1
mdadm: stopped /dev/md1
# mdadm -S /dev/md0
mdadm: stopped /dev/md0

    2.  Recreate the original array using the split device:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdc1
[[output snipped]]
# mdadm --add /dev/md0 /dev/sdb1
mdadm: added /dev/sdb1
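
If the old metadata on /dev/sdb1 gets in the way when it is added back (it did not in the run above), the stale superblock can be wiped first; its contents are resynced from /dev/sdc1 anyway:

# mdadm --zero-superblock /dev/sdb1
# mdadm --add /dev/md0 /dev/sdb1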

e.  Reactivate the LV and remount:
# lvchange -a y vgmirror/mirror
# mount /dev/vgmirror/mirror /mnt/mirror
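
A quick integrity check: the checksum of the test file should match the value recorded during the initial setup:

# md5sum /mnt/mirror/README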

Error conditions: simulated disk failure

  1. Most troubleshooting details will come from the mdadm --detail report.

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Mar 24 16:20:06 2013
     Raid Level : raid1
     Array Size : 10483640 (10.00 GiB 10.74 GB)
  Used Dev Size : 10483640 (10.00 GiB 10.74 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sun Mar 24 16:45:27 2013
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 78% complete

           Name : guest:0  (local to host guest)
           UUID : 728ec06f:492cca76:edaef5cd:cbab6e39
         Events : 41

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       2       8       33        1      spare rebuilding   /dev/sdc1
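
For reference, the degraded/recovering state shown above can be reproduced by failing, removing, and re-adding one member, using the same commands as in the split-mirror section:

# mdadm -f /dev/md0 /dev/sdc1
# mdadm -r /dev/md0 /dev/sdc1
# mdadm --add /dev/md0 /dev/sdc1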
  2. The biggest problem arises when one of the devices shows as 'removed'. There does not appear to be a good way to clean that up short of recreating the s/w RAID as described above (a re-add attempt is sketched after the report below).

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Mar 24 15:39:57 2013
     Raid Level : raid1
     Array Size : 10483640 (10.00 GiB 10.74 GB)
  Used Dev Size : 10483640 (10.00 GiB 10.74 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sun Mar 24 16:08:26 2013
          State : active, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : guest:0  (local to host guest)
           UUID : 53e000db:7723b759:ea37062a:db5d2fc1
         Events : 26

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       0        0        1      removed
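
Before resorting to recreating the array, it may be worth trying to bring the removed member back in, first with --re-add and then with a plain --add if that is rejected; whether either works depends on why the device was dropped (untested here), and if both fail, recreate the array as described above:

# mdadm /dev/md0 --re-add /dev/sdc1
# mdadm /dev/md0 --add /dev/sdc1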