Linux RAID: comparing LVM vs S/W

Title:

Linux RAID (LVM vs S/W)

Author:

Douglas O’Leary <dkoleary@olearycomputers.com>

Description:

Comparison of LVM and software RAID implementations on Redhat/CentOS Linux

Date created:

03/22/13

Date updated:

04/18/14

Disclaimer:

Standard: Use the information that follows at your own risk. If you screw up a system, don’t blame it on me…



Overview

There seems to be little hard data on the net comparing and contrasting Linux LVM and software RAID implementations. Most of the posts answering this question suggest that the two solutions have different intended purposes: software RAID is for redundancy while LVM is for flexibility. My take, particularly for mirroring, is that this is a distinction without a difference.

I started researching this question when a client asked me which solution to use to mirror two solid state disks which, due to architectural issues, couldn’t be put behind a h/w RAID controller. Being a long-term HPUX admin, I naturally said LVM, not even really aware at the time that there was another option. Someone clued me in to the existence of software mirroring, so off I went researching, testing, and comparing the two solutions. This document is the result of that research.

A description of the test environment follows the summary.

Conclusion

The table below shows the capabilities of LVM and software RAID side by side. Most of the questions that I’ve seen, and certainly the ones that I posted, have been asking for exactly that. Discussion of the various elements follows in the next section.

Function              LVM      S/W RAID
----------------------------------------
RAID 0                X        X
RAID 1                X [1]    X
RAID 4                X [1]    X
RAID 5                X [1]    X
RAID 6                X [1]    X
RAID 10               X [2]    X
Automatic resilver    X [4]    X
Split mirror          X        X [3]
Use split mirror      X

Discussion

  • Redundancy vs flexibility: As stated above, the commonly accepted purpose for s/w RAID is redundancy whereas LVM’s primary purpose is flexibility. Redhat Enterprise Linux (and CentOS) versions prior to 6.3 already supported mirroring and striping (RAID 1 and RAID 0 respectively). As of Redhat Enterprise Linux 6.3, LVM supports RAID levels 4, 5, and 6, and the recently released Redhat Enterprise Linux 6.4 adds RAID 10. So, as of the current versions of Redhat and CentOS, LVM supports the same RAID levels as software RAID.
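
    For reference, a minimal sketch of creating logical volumes at these RAID levels; the volume group (vgtest), LV names, and sizes are illustrative and not taken from the original tests:

    # RAID 1: one mirror copy
    lvcreate --type raid1 -m 1 -L 1G -n lv_r1 vgtest
    # RAID 5: two stripes plus parity (needs at least three PVs in the VG)
    lvcreate --type raid5 -i 2 -L 1G -n lv_r5 vgtest
    # RAID 10 (Redhat Enterprise Linux/CentOS 6.4 and later): striped mirrors (needs at least four PVs)
    lvcreate --type raid10 -i 2 -m 1 -L 1G -n lv_r10 vgtest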

  • Automatic resilvering: If one drive from a mirrored set fails, both mirroring technologies support continuing operations on the surviving drive w/o service interruption. When the dead drive gets replaced, though, the recovery process in LVM is, by default, manual, whereas software RAID will realize the new drive is available and automatically start resilvering the mirror set. One response (thanks, Alex) indicated that automatic resilvering can be configured in LVM by setting raid_fault_policy = "allocate" in lvm.conf.
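
    A hedged sketch of where that setting lives; "allocate" tells LVM to rebuild a failed RAID image from spare space automatically, while the default "warn" leaves recovery to the admin:

    # /etc/lvm/lvm.conf (excerpt)
    activation {
        raid_fault_policy = "allocate"
    }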

  • Split mirror: Both mirroring technologies support splitting the mirror; however, LVM’s flexibility in what you can do with the split mirror disk is orders of magnitude better than s/w RAID’s. Based on my tests and on questions posted to the CentOS forums, it appears that there’s no way to effectively use the split disk in a software RAID setup.

  • Using the split mirror:

    1. Purposes for which the split disk could be used include:

      1. Recovery option when upgrading or patching an operating system.

      2. Warm backup of files for quicker recovery (snapshots are probably a better approach).

      3. Testing on a copy of data instead of the live data.

      4. etc

    2. Software RAID split disks can seemingly only be used for recovery:

      1. Splitting the mirror, in s/w RAID, means ‘failing’ a drive and removing it from the array, then creating a second array using the split disk. You then have two arrays with the exact same information:

        # mdadm --detail --scan
        ARRAY /dev/md0 metadata=1.2 name=guest:0 UUID=53e000db:7723b759:ea37062a:db5d2fc1
        ARRAY /dev/md1 metadata=1.2 name=guest:1 UUID=20591d51:ecf7a5c9:2e7d494c:afd2a99f
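
        For reference, a hedged sketch of the commands behind such a split; device and array names are illustrative:

        # Fail one leg of the mirror and pull it out of the array
        mdadm /dev/md0 --fail /dev/sdc1
        mdadm /dev/md0 --remove /dev/sdc1
        # Build a second, degraded array on the removed disk
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 missing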
        
      2. If LVM is used on top of the software RAID, the LVM tools will complain that two disks have the same VGID. Equally unfortunately, Linux’s version of LVM doesn’t appear to have an equivalent of the HPUX vgchgid command, so there’s no apparent way to change a VGID, which would be required before the second disk could be imported as a different VG.

      3. The constraint listed above isn’t applicable if filesystems are built directly on disk slices; however, with disks being hundreds of gigs if not terabytes in size, not using LVM isn’t very practical.

    3. LVM, on the other hand, is able to split the mirrored logical volume (lvconvert), split the volume group (vgsplit), recombine the split volume groups (vgmerge), then remerge the logical volumes (lvconvert).
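
    A hedged sketch of that sequence; the VG, LV, and device names are illustrative and assume the leg being split lives entirely on /dev/sdc:

    # Split one leg of the mirrored LV off as its own logical volume
    lvconvert --splitmirrors 1 --name lvtest_copy vgtest/lvtest
    # Move the split copy and its disk into a separate volume group
    vgsplit vgtest vgtest_copy /dev/sdc
    # ... use the copy, then fold the volume group back in ...
    vgmerge vgtest vgtest_copy
    # Re-add the leg to restore the original mirror
    lvconvert -m 1 vgtest/lvtest /dev/sdc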

Summary

My research and tests concentrated on mirroring specifically - and, specifically, not mirroring of the OS. However, considering that Linux LVM now supports the same RAID levels as software RAID and is significantly more flexible, there doesn’t seem to be much of an argument for using software RAID.

The one potential caveat to this is the disks supporting the operating system. Linux still requires the boot device to be a hard partition. Most clients with whom I work have h/w RAID controllers for their OS disks, so this isn’t a large scale problem. For those who don’t use h/w RAID controllers, s/w RAID may make sense for the OS. Your mileage may vary.

I’d be more than happy to discuss this further, particularly with anyone who may have a different opinion. Feel free to email me at the address in the header. Should any data call this conclusion into doubt, I’ll post it here along with attribution.

Test Environment

  • The test system was a Kernel Virtual Machine running CentOS 6.3, configured with 2 gigs of RAM and 2 vcpus. I was able to create and destroy two 10-gig virtual disks at will, providing an adequate test bed for adding, deleting, and manipulating disk mirrors. LVM was used on top of the software RAID because my production target devices will be multiple terabytes, and creating one filesystem on a hard partition for a disk that size doesn’t seem reasonable to me.
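
    As a hedged sketch, layering LVM on top of the md device amounts to the following; the VG/LV names and sizes are illustrative:

    # Treat the md array as a single physical volume
    pvcreate /dev/md0
    vgcreate vgdata /dev/md0
    lvcreate -L 5G -n lvdata vgdata
    mkfs.ext4 /dev/vgdata/lvdata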

Setup:

  • Verify the guest is shut down:

# virsh list --all
 Id    Name                           State
 ----------------------------------------------------
 9     guest1                         running
 10    python                         running
 -     guest                          shut off
  • Verify no pre-existing disks:

# virsh vol-list default
Name                 Path
-----------------------------------------
guest.img            /var/lib/libvirt/images/guest.img
guest1.img           /var/lib/libvirt/images/guest1.img
python.img           /var/lib/libvirt/images/python.img
  • Create volumes, add them to guest, and restart guest

for x in 1 2
do
echo virsh vol-create-as default guest-${x}.img 10g
virsh vol-create-as default guest-${x}.img 10g
done
virsh vol-create-as default guest-1.img 10g
Vol guest-1.img created

virsh vol-create-as default guest-2.img 10g
Vol guest-2.img created

# virsh vol-list default | grep guest
guest-1.img          /var/lib/libvirt/images/guest-1.img
guest-2.img          /var/lib/libvirt/images/guest-2.img
guest.img            /var/lib/libvirt/images/guest.img
guest1.img           /var/lib/libvirt/images/guest1.img

# virsh attach-disk guest /var/lib/libvirt/images/guest-1.img hdb --persistent
Disk attached successfully

# virsh attach-disk guest /var/lib/libvirt/images/guest-2.img hdc --persistent
Disk attached successfully

# virsh domblklist guest
Target     Source
------------------------------------------------
hda        /var/lib/libvirt/images/guest.img
hdb        /var/lib/libvirt/images/guest-1.img
hdc        /var/lib/libvirt/images/guest-2.img
# virsh start guest
Domain guest started
  • Verify new disks on guest

# hostname
guest
# grep sd /proc/partitions | sort -k 4
   8        0   20971520 sda
   8        1     512000 sda1
   8        2   20458496 sda2
   8       16   10485760 sdb
   8       32   10485760 sdc
  • LVM tests were all successful:

    1. Creating a volume group.

    2. Simulating disk failure and recovery.

    3. Creating a mirrored logical volume.

    4. Splitting the mirrored logical volume.

    5. Splitting the volume group.

    6. Merging the volume group.

    7. Remirroring the original logical volume.
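
    A hedged sketch of the disk failure simulation and recovery (step 2); the device names and SCSI host number are illustrative, and this is only one way to simulate the failure:

    # Simulate a failure by deleting one disk from the SCSI layer
    echo 1 > /sys/block/sdc/device/delete
    # Repair the now-degraded mirrored LV and drop the missing PV from the VG
    lvconvert --repair vgtest/lvtest
    vgreduce --removemissing vgtest
    # Bring the "replacement" disk back (SCSI rescan; host0 is illustrative), wipe it, and re-add it
    echo "- - -" > /sys/class/scsi_host/host0/scan
    pvcreate -ff /dev/sdc
    vgextend vgtest /dev/sdc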

  • S/W RAID tests: Obviously, I wouldn’t be able to split a logical volume, as the mirroring would be happening below the level of LVM. I was surprised, however, to find that I couldn’t really do anything with the software RAID split disk after it was split.

    1. Creating a mirrored array (successful)

    2. Simulating disk failure and recovery (successful)

    3. Splitting the mirror (successful w/caveats expressed above)

    4. Attempting to use the split mirror as a backup/test bed (failed)
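
    A hedged sketch of steps 1 and 2; device names are illustrative:

    # 1. Create a two-disk RAID 1 array
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    # 2. Simulate a failure, remove the disk, then add it back;
    #    md notices the re-added disk and resilvers automatically
    mdadm /dev/md0 --fail /dev/sdb
    mdadm /dev/md0 --remove /dev/sdb
    mdadm /dev/md0 --add /dev/sdb
    # Watch the resync progress
    cat /proc/mdstat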