=======================================================
RHEL6: thinly provisioned luns
=======================================================
:Title:        RHEL6: thinly provisioned luns
:Author:       Douglas O'Leary <dkoleary@olearycomputers.com>
:Description:  Process for using thinly provisioned luns
:Date created: 08/2013
:Date updated: 08/2013
:Disclaimer:   Standard: Use the information that follows at your own risk.  If you screw up a system, don't blame it on me...

Overview
=========

Thin provisioning of disk space is a method of *virtually* allocating disk 
space; it helps avoid the overallocation of disk space that tends to happen 
in practice.  I'm sure any of us who have been doing this job for more than 
a few days have seen the filesystems with 100s of gigs, if not terabytes, 
of disk space free - or the volume groups with that much space unallocated.  
Disk space has gotten significantly cheaper over the years; at one point, I 
calculated the cost of the unallocated disk space for a rather large SAP 
environment and realized it cost more than my house.  While disk space is 
cheaper, though, it still costs money.  Go figure: bean counters (and 
anyone responsible for a budget) want to avoid that cost.

Enter thin provisioning.

As you may imagine, the various disk vendors implement thin provisioning 
differently, and how visible those implementations are to the OS differs 
between physical systems and virtuals.

Differences, in no particular order, that I've noticed to date:

*   HDS thinly provisioned disks presented to rhel environments **prior** to 
    6.4 don't appear to act any differently than thickly provisioned disks.  
    To be completely open, this is more hearsay than established fact, 
    although I have no reason to suspect the SAN manager of lying when he 
    said that they'd been thinly provisioning HDS disks for months.
*   Regardless of vendor, thinly provisioned disks presented to a vmware 
    server act exactly like thickly provisioned disks to the guests.  This 
    makes sense, as the whole point of vmware is to virtualize the 
    resources.  I'd be curious about the performance hit associated with 
    having to allocate more disk space on the vmware server when a guest 
    starts filling up a filesystem, though.
*   Thinly provisioned 3-par disks are apparently visible to rhel6.4 
    installed on physical hardware **as** thinly provisioned disks, 
    resulting in the lessons learned below.  We haven't yet been able to 
    determine whether this is the case with versions of the OS prior to 6.4.

So, the steps that follow are applicable to rhel/centos 6.4 physical systems 
to which thinly provisioned disks have been presented.

Detail
======

Official details on manipulating thinly provisioned disks on rhel6.4 are 
available here_.  This is a condensed version of that document with some 
examples added.

.. _here: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/Logical_Volume_Manager_Administration/index.html#thinly_provisioned_volume_creation  

You will get I/O errors when attempting to create a filesystem on a thinly 
provisioned disk using the standard commands:

::

    # vgcreate appvg /dev/mapper/mpathv
      Volume group "appvg" successfully created
    # lvcreate -L 2g -n temp appvg
      Logical volume "temp" created
    # mkfs.ext4 /dev/appvg/temp
    mke2fs 1.41.12 (17-May-2010)
    Discarding device blocks: failed - Input/output error
    [[snip]]

And, you can't mount it:

::

    # mount -t ext4 /dev/appvg/temp /mnt
    mount: wrong fs type, bad option, bad superblock on /dev/mapper/appvg-temp,
           missing codepage or helper program, or other error
           In some cases useful info is found in syslog - try
           dmesg | tail  or so

The short version is that we need to create a *thin provisioned pool* and then 
create LVs from that.  Example follows:

::

    # lvcreate -L 10g -T appvg/appvg
      Logical volume "appvg" created
    # lvs appvg
      LV    VG    Attr      LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
      appvg appvg twi-a-tz- 10.00g               0.00
    # lvcreate -V 2g -T appvg/appvg -n temp
      Logical volume "temp" created
    # lvs appvg
      LV    VG    Attr      LSize  Pool  Origin Data%  Move Log Cpy%Sync Convert
      appvg appvg twi-a-tz- 10.00g                0.00
      temp  appvg Vwi-a-tz-  2.00g appvg          0.00
    # mkfs.ext4 /dev/appvg/temp
    mke2fs 1.41.12 (17-May-2010)
    Discarding device blocks: done
    Filesystem label=
    [[snip]]
    # mount /dev/appvg/temp /mnt
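
A side note on reclaiming space: deleting files from the filesystem won't, 
by itself, hand blocks back to the thin pool.  Assuming discards make it 
down the stack (my understanding is that dm-thin on rhel6.4 passes them 
through, but verify on your gear), ``fstrim`` returns the freed blocks, or 
the filesystem can be mounted with the ``discard`` option to do it on the 
fly:

::

    # fstrim -v /mnt
    # mount -o discard /dev/appvg/temp /mnt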

Now, let's see what happens when we fill the filesystem up:

::

    # dd if=/dev/zero of=/mnt/dkoleary bs=1024k count=2064208
    dd: writing `/mnt/dkoleary': No space left on device
    1949+0 records in
    1948+0 records out
    2043461632 bytes (2.0 GB) copied, 4.15765 s, 491 MB/s
    # lvs appvg
      LV    VG    Attr      LSize  Pool  Origin Data%  Move Log Cpy%Sync Convert
      appvg appvg twi-a-tz- 10.00g               19.91
      temp  appvg Vwi-aotz-  2.00g appvg         99.53
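
Note that the virtual LV is full while the pool itself is only at 19.91% - 
which points at the real danger with thin provisioning: the pool filling 
up underneath otherwise healthy looking LVs.  Keep an eye on the pool's 
Data% column and grow it before it hits 100.  Extending a thin pool is a 
regular ``lvextend`` against the pool LV; a sketch, assuming the VG still 
has free extents:

::

    # vgs appvg                      # verify the VG has free extents
    # lvextend -L +5g appvg/appvg    # grow the pool from 10g to 15g
    # lvs appvg                      # the pool's Data% drops accordingly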

Summary
=======

Not overly difficult; however, I can see lots of places where this can bite 
you in the ass quite hard.  What happens, for instance, if you extend a 
filesystem on a thickly provisioned lun with space from a thinly provisioned 
lun?  Is there a way, from the OS, to know whether a specific lun is thick 
or thin, or do we just rely on the existence or absence of I/O errors when 
creating a filesystem?  NOTE:  That seems like an incredibly bad idea to me...
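
For what it's worth, there does appear to be a better answer than waiting 
for I/O errors: thinly provisioned luns advertise SCSI logical block 
provisioning, which the OS exposes as discard support.  A sketch using an 
example device name (``sg_vpd`` and ``sg_readcap`` come from the sg3_utils 
package; I haven't verified this against every array, so treat it as a 
lead rather than gospel):

::

    # cat /sys/block/sdc/queue/discard_max_bytes   # non-zero: discards supported
    # sg_vpd -p lbpv /dev/sdc                      # logical block provisioning VPD page
    # sg_readcap -l /dev/sdc                       # look for lbpme=1 in the output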

So, given that a thinly provisioned lun is presented to an rhel6.4 system, the 
steps are very simple:

1.  Create the vg as normal
2.  Create a thin provisioned pool.  My suggestion, pending further 
    experience, is to use all of the space assigned to the VG as the pool.  
    For instance: ``lvcreate -L 10g -T appvg/appvg``.  (See the autoextend 
    sketch after this list for keeping the pool from filling up.)
3.  Create *virtual* logical volumes using the thin provisioned pool:
    ``lvcreate -V 2g -T appvg/appvg -n temp``
4.  Use as you otherwise would.
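
As mentioned in step 2, lvm can also babysit the pool for you: dmeventd 
monitoring plus a couple of settings in the activation section of 
/etc/lvm/lvm.conf will grow the pool automatically as it fills.  A sketch; 
the values below are examples, and the shipped lvm.conf documents the 
exact defaults for your release:

::

    # /etc/lvm/lvm.conf, activation section
    thin_pool_autoextend_threshold = 80    # kick in at 80% full (100 disables)
    thin_pool_autoextend_percent = 20      # grow the pool by 20% each time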

Hope that helps.

Doug O'Leary