RedHat Linux SAN Add - LVM

Examine the current system

First, get some information about the LVM configuration on the system. In the examples below, the storage can be XP, EVA, or 3PAR (arrays manufactured by HP and Hitachi),
so you'll need the matching RPMs installed, such as xpinfo, evainfo and HP3PARInfo.

# pvs ; vgs ; lvs

Check the file system type (ext3 or ext4) in the /etc/fstab file. Example:
# cat /etc/fstab | grep vg01
/dev/vg01/lvol01        /app       ext3    defaults        1 2
/dev/vg01/lvol02        /data      ext3    defaults        1 2
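
The type can also be cross-checked against the live system; a quick sketch, assuming the /app and /data mount points and the vg01 volumes from the example above (df -T prints the type of each mounted file system, blkid reads it from the superblock):

# df -T /app /data
# blkid /dev/vg01/lvol01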

Use xplist to see current LUNs from XP storage.
# xplist -h
usage: xplist [-adhnvVl]
  where:
    -H             dont display header
    -a             show all devs
    -d             dump in comma seperated format
    -n             sort devs by name
    -v             display version
    -V             display volume
    -l             display lunid
    -h             this listing

# xplist -aVl 
Device     Volume  Size    LUN   Port   CU:LDev  Type       Serial #
====================================================================
/dev/sda    vg01   65536   000   CL3E   08:c8    OPEN-V     00018595
/dev/sdb    vg01   65536   000   CL4E   08:c8    OPEN-V     00018595

Note: The identical CU:LDev values in this example tell you /dev/sda and /dev/sdb are the same LUN seen over two paths; xplist itself has no notion of multipath.
Use xplist and xpinfo for more details about a LUN, including the XP array model.
 
# xplist
Device               Size        Port   CU:LDev  Type       Serial #
====================================================================
/dev/sdb             65536       CL4E   08:c8    OPEN-V     00018595

# xpinfo

Device File : /dev/sda              Model : XP24000
       Port : CL3E                  Serial # : 00018595
Host Target : 00                    Code Rev : 6008
  Array LUN : 00                    Subsystem : 000c
    CU:LDev : 08:c8                 CT Group : ---
       Type : OPEN-V                CA Volume : SMPL
       Size : 65536 MB              BC0 (MU#0) : SMPL
       ALPA : cc                    BC1 (MU#1) : SMPL
    Loop Id : 11                    BC2 (MU#2) : SMPL
    SCSI Id : ---
 RAID Level : RAID5                  RAID Type  : ---
 RAID Group : 1-7                   ACP Pair : 1
 Disk Mechs : R0006   R0106   R0206   R0306
     FC-LUN : 000048a3000008c8        Port WWN : 50060e800548a324
HBA Node WWN: 200000e08b8e75a8     HBA Port WWN: 210000e08b8e75a8
 Vol Group  : ---                  Vol Manager : ---
Mount Points: ---
  DMP Paths : ---
  SLPR : 0                        CLPR : 0

If the Veritas file system (with its own DMP multipathing) is used, native RHEL multipath can be disabled, but with LVM we'll use it. Back up /etc/multipath.conf before making changes.
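
A minimal sketch of such a backup (the dated suffix is just a naming convention):

# cp -p /etc/multipath.conf /etc/multipath.conf.$(date +%b.%d.%Y)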

Also back up the LVM metadata for the specific VG.
Example (back up the metadata for vg01 to an ASCII file):
 
# vgcfgbackup -v -f /etc/lvm/backup/vg01.Aug.02.2013 vg01

The backup can be inspected by viewing the ASCII file or by using the vgcfgrestore command with the --list option.
  
# vgcfgrestore -f /etc/lvm/backup/vg01.Aug.02.2013 --list vg01

Discovery of new LUNs

Once new LUNs are presented to a system, discover them by rescanning the system.
 
# for h in /sys/class/fc_host/host*
> do
>   echo "Perform Loop Initialization Protocol LIP FC adapter ${h##*/} to rescan fabric."
>   echo "1" > ${h}/issue_lip
> done

Perform Loop Initialization Protocol LIP FC adapter host3 to rescan fabric.
Perform Loop Initialization Protocol LIP FC adapter host4 to rescan fabric.
Perform Loop Initialization Protocol LIP FC adapter host5 to rescan fabric.
Perform Loop Initialization Protocol LIP FC adapter host6 to rescan fabric.

Alternatively, install the sg3_utils RPM (utilities for Linux's SCSI generic driver and raw devices). The installation provides the script /usr/bin/rescan-scsi-bus.sh; run it to scan for new LUNs.

On an HP ProLiant server, you may also have the tool below available:
 
# /usr/local/bin/hp_rescan -h
hp_rescan: rescans LUNs on device mapper managed FC adapters
Usage: hp_rescan -a|-i|-l
-a: rescan all adapters
-i: rescan a specific adapter instance
-l: lists all FC adapters
-h: help

#  hp_rescan -a 
Issuing LIP to FC adapter host0 to rescan fabric...
Issuing LIP to FC adapter host1 to rescan fabric...
Issuing LIP to FC adapter host2 to rescan fabric...
Issuing LIP to FC adapter host3 to rescan fabric...
Issuing LIP to FC adapter host4 to rescan fabric...
Issuing LIP to FC adapter host5 to rescan fabric...

# /usr/local/bin/hp_rescan -l
----------------------------------------------------------------
Adapter        WWN                    Speed          LinkState
----------------------------------------------------------------
host0          0x210000e08b8e75a8     2 Gbit         Online
host1          0x210100e08bae75a8     unknown        Online
host2          0x210000e08b8e26ad     2 Gbit         Online
host3          0x210100e08bae26ad     unknown        Online
host4          0x210000e08b8ebcb7     unknown        Online
host5          0x210100e08baebcb7     unknown        Online
----------------------------------------------------------------

What if your RHEL system is a VMware guest

In this case, a virtual or raw LUN is added using the vSphere Client. To scan for both RDM (Raw Device Mapping) and virtual disks:
 
# for d in /sys/class/scsi_host/host*/scan; do echo $d ; done
# for d in /sys/class/scsi_host/host*/scan; do echo "- - -" > ${d} ; done

After increasing a virtual disk via the vSphere Client, run:
 
# for d in /sys/block/sd*/device/rescan; do echo "1" > $d; done  

If you're adding a dedicated virtual disk, use fdisk to see the new virtual local disks. Also use the command below to scan for all devices visible to LVM2:
 
#  lvmdiskscan 
...
/dev/sdb  [50.00 GiB] LVM physical volume ----> existing 
/dev/sdd  [210.00 GiB] ----->  just added dedicated virtual disk
..
9 disks
23 partitions
2 LVM physical volume whole disks
1 LVM physical volume

The VM's dedicated virtual disk can also be increased via the vSphere Client (instead of adding a new one). After increasing the size, run:
 
# for d in /sys/block/sd*/device/rescan; do echo "1" > $d; done  

Then check the PV with pvdisplay. Example:

# pvdisplay /dev/sdc

#  pvresize /dev/sdc 
  Physical volume "/dev/sdc" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

Adding new LUNs to existing Volume Group (VG)

Add the new LUNs to multipath. Example of the multipaths section in /etc/multipath.conf:
 
multipaths {
        multipath {
                wwid    360060e800548a300000048a3000003dc
                alias   dedicated-64G-000
                path_grouping_policy multibus
        }
}
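
The wwid for a new entry can be read from the device itself; a sketch, assuming the new LUN appeared as /dev/sdX (the scsi_id location and flags shown are the RHEL 6 form and differ on older releases):

# /lib/udev/scsi_id --whitelisted --device=/dev/sdX
# multipath -ll | grep -i 360060e8

The second command simply confirms multipath already knows the WWID (the 360060e8 prefix matches the XP LUNs in this example).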

After multipath is configured, the block device that represents the LUN appears under /dev/mapper.
Reload the multipath daemon:
 
# service multipathd reload

You may have to flush the unused multipath maps and restart the daemon:
 
# multipath -F ; service multipathd restart
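
Verify that the new alias is visible (alias name taken from the example above):

# multipath -ll dedicated-64G-000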

Now prepare new LUNs for LVM use (create Physical Volume).

[ Local & S2S Cluster ] - Do on each node

Example for physical server:
#  pvcreate  /dev/mapper/dedicated-16G-001

Example for VMware guest:
#  pvcreate  /dev/sdX
# pvs

Now, a VG can be extended.

[Local & S2S Cluster] - Do ONLY on master node (where corresponding file system is mounted)

Example for physical server:
#  vgextend vg01 /dev/mapper/dedicated-16G-001 

Example for VMware guest:
#  vgextend vg01 /dev/sdX 

Verify:
# vgs 

Growing a LUN

For a Site-to-Site (Metrocluster) cluster: once the LUN has been grown on the storage side, rescan the SAN layer.
# /usr/bin/rescan-scsi-bus.sh

OR

# for h in /sys/class/fc_host/host*
> do
>   echo "Perform Loop Initialization Protocol LIP FC adapter ${h##*/} to rescan fabric."
>   echo "1" > ${h}/issue_lip
> done

# xplist
Device       Size    Port   CU:LDev  Type       Serial #
=============================================================
/dev/sdh     71680   CL4J   21:72    OPEN-V     00066681 <--LUN extended and new size is 70G
/dev/sdi     69632   CL4L   21:73    OPEN-V     00066681
/dev/sdf     69632   CL6L   21:74    OPEN-V     00066681
/dev/sdg     69632   CL6J   21:75    OPEN-V     00066681

Multipath also needs to see the new size of the grown LUN.
#  multipath -ll 
mpatha (3600508b1001cda26cffc03380cdfbfce) dm-3 HP,LOGICAL VOLUME
size=279G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 0:0:0:0 sda 8:0   active ready running
dedicated-68G-003 (360060e80160479000001047900002175) dm-2 HP,OPEN-V
size=68G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 2:0:1:0 sdc 8:32  active ready running
  `- 3:0:1:0 sdg 8:96  active ready running
dedicated-68G-002 (360060e80160479000001047900002174) dm-1 HP,OPEN-V
size=68G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 2:0:0:0 sdb 8:16  active ready running
  `- 3:0:0:0 sdf 8:80  active ready running
dedicated-68G-001 (360060e80160479000001047900002173) dm-4 HP,OPEN-V
size=68G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 2:0:3:0 sde 8:64  active ready running
  `- 3:0:3:0 sdi 8:128 active ready running
dedicated-68G-000 (360060e80160479000001047900002172) dm-0 HP,OPEN-V <- this is extended, multipath still sees old 68G  
size=68G features='1 queue_if_no_path' hwhandler='0' wp=rw 
`-+- policy='round-robin 0' prio=1 status=active 
  |- 2:0:2:0 sdd 8:48  active ready running  <- first SAN path
  `- 3:0:2:0 sdh 8:112 active ready running  <- second SAN path

Rescan the SCSI layer on both paths of the grown LUN (sdd and sdh):
# echo "1" > /sys/block/sdd/device/rescan
# echo "1" > /sys/block/sdh/device/rescan
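
To confirm the kernel already sees the new size before touching multipath, one option is to query the block device size in bytes:

# blockdev --getsize64 /dev/sdd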

Resize multipath
#  multipathd -k 
multipathd> help
multipath-tools v0.4.9 (04/04, 2009)
CLI commands reference:
 list|show paths
 list|show paths format $format
 list|show status
 list|show daemon
 list|show maps|multipaths
 list|show maps|multipaths status
 list|show maps|multipaths stats
 list|show maps|multipaths format $format
 list|show maps|multipaths topology
 list|show topology
 list|show map|multipath $map topology
 list|show config
 list|show blacklist
 list|show devices
 list|show wildcards
 add path $path
 remove|del path $path
 add map|multipath $map
  remove|del map|multipath $map
 switch|switchgroup map|multipath $map group $group
 reconfigure
 suspend map|multipath $map
 resume map|multipath $map
 resize map|multipath $map
 disablequeueing map|multipath $map
 restorequeueing map|multipath $map
 disablequeueing maps|multipaths
 restorequeueing maps|multipaths
 reinstate path $path
 fail path $path
 paths count
 forcequeueing daemon
 restorequeueing daemon
 quit|exit

multipathd> resize multipath dedicated-68G-000 
ok
multipathd> quit
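
The same resize can also be issued non-interactively by passing the command to multipathd; a one-liner using the map name from this example:

# multipathd -k"resize map dedicated-68G-000"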

Verify new size of multipath
# multipath -ll dedicated-68G-000
dedicated-68G-000 (360060e80160479000001047900002172) dm-0 HP,OPEN-V
size=70G features='1 queue_if_no_path' hwhandler='0' wp=rw  <- Now shows 70G
`-+- policy='round-robin 0' prio=1 status=active
  |- 2:0:2:0 sdd 8:48  active ready running
  `- 3:0:2:0 sdh 8:112 active ready running

Resize Physical Volume
# pvresize /dev/mapper/dedicated-68G-000 
  Physical volume "/dev/mapper/dedicated-68G-000" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

# pvdisplay /dev/mapper/dedicated-68G-000 
  --- Physical volume ---
  PV Name          /dev/mapper/dedicated-68G-000
  VG Name          vg01
  PV Size          70.00 GiB / not usable 3.31 MiB  <- shows 70G
  Allocatable      yes
  PE Size          4.00 MiB
  Total PE         17919
  Free PE          639
  Allocated PE     17280
  PV UUID          xlYZ7n-majJ-AMsS-LgMD-0dKu-47dY-svbIR0

Verify new size using fdisk:
# fdisk -l /dev/mapper/dedicated-68G-000

Disk /dev/mapper/dedicated-68G-000: 70.0 GB, 73014444032 bytes <-new size 70G
255 heads, 63 sectors/track, 10704 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Back up /etc/multipath.conf and change the multipath alias so the name reflects 70G rather than 68G, then reload the multipath daemon.
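
The updated entry would look something like this (wwid and alias taken from the outputs above, path_grouping_policy as in the earlier example):

multipath {
        wwid    360060e80160479000001047900002172
        alias   dedicated-70G-000
        path_grouping_policy multibus
}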
# service multipathd reload
Reloading multipathd:            [  OK  ]

# pvs
  PV                            VG   Fmt  Attr PSize   PFree
  /dev/mapper/dedicated-68G-001 vg01 lvm2 a--  68.00g 508.00m
  /dev/mapper/dedicated-68G-002 vg01 lvm2 a--  68.00g 508.00m
  /dev/mapper/dedicated-68G-003 vg01 lvm2 a--  68.00g 508.00m
  /dev/mapper/dedicated-70G-000 vg01 lvm2 a--  70.00g   2.50g <--looks good, shows 70G
  /dev/mapper/mpathap5          vg00 lvm2 a--  246.62g 218.81g

#  vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  vg00   1   6   0 wz--n- 246.62g 218.81g
  vg01   4   5   0 wz--n- 273.98g   3.98g  <--- shows extra free space

Extension of LVM Logical Volumes (lvol) - online

[Local & S2S Cluster] - Do ONLY on master node (where corresponding file system is mounted)

Example of extending volume to 60G:
# lvextend -L60G /dev/vg01/lvol01

Example of adding 40G to lvol01:
# lvextend -L+40G /dev/vg01/lvol01

The file system also needs to be resized. Online resizing should work on a mounted file system with a 2.6 kernel and ext3.
# resize2fs /dev/vg01/lvol01
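
On recent LVM versions the extend and file system resize can be combined using the -r (--resizefs) option of lvextend, which calls fsadm for you; a sketch:

# lvextend -r -L +40G /dev/vg01/lvol01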

Note: A logical volume can be reduced only if its corresponding file system is unmounted.

Mirroring of Logical Volumes and releasing old LUNs

The examples below are for a physical server, where multipath is configured.

[Local & S2S Cluster] - Do ONLY on master node (where corresponding file system is mounted)

This mirrors the current data onto the provided PVs (you may run this in the background):
# lvconvert -m1 /dev/$VG/$LV --corelog /dev/mapper/$DISK_00 /dev/mapper/$DISK_01
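
Before removing the original leg, wait until the mirror is fully synced; progress can be watched with lvs (the Cpy%Sync column), for example:

# lvs -a -o +devices,copy_percent vg01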

This removes the provided PVs from the mirror:
# lvconvert -m0 /dev/$VG/$LV /dev/mapper/disk(s)_to_be_removed 

Removing old PVs from VG
# vgreduce vg01 /dev/mapper/disk(s)_to_be_removed

# vgreduce vg01 /dev/mapper/dedicated-4G-000 /dev/mapper/dedicated-4G-001
  Removed "/dev/dm-0" from volume group "vg01"
  Removed "/dev/dm-1" from volume group "vg01"

Removing old LUNs from LVM
# pvremove /dev/mapper/dedicated-4G-000 /dev/mapper/dedicated-4G-001 
  Labels on physical volume "/dev/mapper/dedicated-4G-000" successfully wiped
  Labels on physical volume "/dev/mapper/dedicated-4G-001" successfully wiped

Finally, remove the old LUNs from /etc/multipath.conf and reload the multipath daemon.
The SAN team can then reclaim the old LUNs.
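
Before the LUNs are unpresented, the stale map and SCSI devices can also be removed from the host; a sketch, assuming the old alias and its path devices are known (sdX/sdY are placeholders):

# multipath -f dedicated-4G-000
# echo 1 > /sys/block/sdX/device/delete
# echo 1 > /sys/block/sdY/device/delete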

Creating new VG from new LUN

Create the Physical Volume.
Physical server:
# pvcreate /dev/mapper/device_name_1

VMware guest:
# pvcreate /dev/device_name_1

Create the Volume Group:
# vgcreate vg# /dev/mapper/device_name_1

Create the Logical Volume:
# lvcreate vg# --name lvol# --size size_#

Note: If you see the error:
Not activating vg03/lvol01 since it does not pass activation filter.
Failed to activate new LV.

make sure vg03 exists in /etc/lvm/lvm.conf in the volume_list line, e.g. volume_list = [ "vg00", "vg01", "vg02", "vg03" ]

Create the file system and add the corresponding entries to /etc/fstab:
# mkfs -t ext4 /dev/vg#/lvol# 
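
A worked sketch of the whole sequence for a hypothetical vg03 built on one 50G LUN (device name, sizes and mount point are illustrative only):

# pvcreate /dev/mapper/dedicated-50G-000
# vgcreate vg03 /dev/mapper/dedicated-50G-000
# lvcreate vg03 --name lvol01 --size 49G
# mkfs -t ext4 /dev/vg03/lvol01
# mkdir /newapp
# echo "/dev/vg03/lvol01  /newapp  ext4  defaults  1 2" >> /etc/fstab
# mount /newapp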


