1. How do I reduce an LVM logical volume in Red Hat Enterprise Linux?
First of all, make sure the filesystem has enough free space to give up before reducing the logical volume (otherwise data loss will result), and make sure you have a valid data backup before going forward with any changes. It is important to shrink the filesystem before reducing the logical volume to prevent data loss or corruption. The resize2fs program resizes ext2, ext3, or ext4 filesystems; it can enlarge or shrink an unmounted filesystem located on a device. Refer to the following steps to reduce a logical volume, in this example by 500GB.
1) Unmount the filesystem:
# umount /dev/VG00/LV00
2) Scan and check the filesystem to be on the safe side (resize2fs requires a forced check before it will shrink a filesystem):
# e2fsck -f /dev/VG00/LV00
3) Shrink the filesystem with resize2fs as follows:
# resize2fs /dev/VG00/LV00 500M
Here 500M is the new size of the filesystem; it must be no larger than the size the logical volume will have after the reduction.
4) Reduce the logical volume by 500GB with lvreduce:
# lvreduce -L -500G VG00/LV00
This reduces the size of logical volume LV00 in volume group VG00 by 500GB.
5) Mount the filesystem and check the disk space with the df -h command.
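As a consolidated sketch of steps 1 to 5 above, assuming an ext4 filesystem on /dev/VG00/LV00 that is normally mounted at /data (the mount point is hypothetical):
# umount /data
# e2fsck -f /dev/VG00/LV00 (forced check; resize2fs refuses to shrink without it)
# resize2fs /dev/VG00/LV00 500M (shrink the filesystem to its target size first)
# lvreduce -L -500G VG00/LV00 (then reduce the logical volume)
# mount /dev/VG00/LV00 /data
# df -h /data (confirm the new size)
Recent versions of lvreduce also accept -r (--resizefs), which shrinks the filesystem and the logical volume in a single step.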
2. What is the difference between “Linux” and “Linux LVM” partition types?
Neither offers a specific advantage; both partition types work with LVM. The type id is only for informational purposes. Logical volumes do not have a concept of a "type"; they are just block devices and have no partition ID or type. They are composed of physical extents (PEs), which may be spread over multiple physical volumes (PVs), each of which could be a partition or a whole disk. LVM logical volumes are normally treated like individual partitions, not disks, so there is no partition table and therefore no partition type id to look for.
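For illustration, the type id of an MBR partition can be set with fdisk's 't' command (83 for "Linux", 8e for "Linux LVM"), and pvcreate accepts the partition either way. A sketch, with /dev/sdb1 as a hypothetical partition:
# fdisk /dev/sdb (use 't' to set the type id to 8e, or leave it at 83)
# pvcreate /dev/sdb1 (succeeds regardless of the partition type id)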
3. How do we log all LVM commands that we execute on the machine?
The default LVM configuration does not log the commands that are run from a shell or from a GUI (e.g. system-config-lvm), but logging can be activated in lvm.conf. To activate it, follow these steps.
Make a copy of the original lvm.conf file:
# cp /etc/lvm/lvm.conf /root
Edit the lvm.conf file and find the log section. It starts with 'log {'. The default configuration looks like the following:
log {
verbose = 0
syslog = 1
# file = "/var/log/lvm2.log"
overwrite = 0
level = 0
indent = 1
command_names = 0
prefix = "  "
# activation = 0
}
Only two modifications are needed to activate LVM logging:
- Uncomment the line file = "/var/log/lvm2.log"
- Change level = 0 to a value between 2 and 7 (7 is more verbose than 2).
Save and exit the file.
It is not necessary to restart any service; the file /var/log/lvm2.log will be created as soon as any LVM command runs (e.g. lvs, lvextend, lvresize).
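After the two changes, the log section might look like the following (a sketch; level 6 is one reasonable choice in the 2 to 7 range):
log {
verbose = 0
syslog = 1
file = "/var/log/lvm2.log"
overwrite = 0
level = 6
indent = 1
command_names = 0
prefix = "  "
}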
4. How do I create LVM-backed raw devices with udev in RHEL6?
Edit /etc/udev/rules.d/60-raw.rules and add lines similar to the following:
ACTION=="add", ENV{DM_VG_NAME}=="VolGroup00", ENV{DM_LV_NAME}=="LogVol00", RUN+="/bin/raw /dev/raw/raw1 %N"
where VolGroup00 is your volume group name and LogVol00 is your logical volume name. To set permissions on these devices, add a rule as usual:
ACTION=="add", KERNEL=="raw*", OWNER=="username", GROUP=="groupname", MODE=="0660"
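To apply the new rules without rebooting and check the result, something like the following can be used (a sketch):
# udevadm control --reload-rules
# udevadm trigger
# raw -qa (lists the raw devices that are currently bound)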
5. What is the optimal stripe count for better performance in LVM?
The maximum number of stripes in LVM is 128. The optimal number of stripes depends on the storage devices backing the logical volume.
For local physical disks connected via SAS or another protocol, the optimal number of stripes equals the number of disks.
For SAN storage presented to the Linux machine as LUNs, a striped logical volume may or may not offer an advantage; a single LUN may already provide optimal performance. A LUN is often a chunk of storage carved from a SAN volume, and that SAN volume is frequently a RAID volume with its own striping and/or parity, in which case additional striping in LVM provides no advantage.
Where the SAN characteristics are unknown or changing, performance testing of LVM volumes with differing numbers of stripes may be worthwhile. For simple sequential I/O, dd can be used (random I/O requires another tool). Create a second logical volume with twice as many stripes as the first and compare performance; continue increasing the stripe count and comparing in this manner until there is no noticeable improvement.
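A sketch of such a comparison, assuming a volume group named vgdata with at least four physical volumes (all names and sizes here are hypothetical, and dd overwrites the test volumes' contents):
# lvcreate -i 2 -I 64 -L 10G -n lvtest2 vgdata (2 stripes, 64KB stripe size)
# lvcreate -i 4 -I 64 -L 10G -n lvtest4 vgdata (4 stripes)
# dd if=/dev/zero of=/dev/vgdata/lvtest2 bs=1M count=4096 oflag=direct
# dd if=/dev/zero of=/dev/vgdata/lvtest4 bs=1M count=4096 oflag=direct
Compare the throughput reported by the two dd runs and keep increasing the stripe count while it still improves.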
6. LVM commands are failing with this error: Can’t open exclusively. Mounted filesystem?
This problem often appears when trying to create a physical volume on a disk that is part of multipathed storage, after creating a new partition with parted. Multipathing uses the mpath* name to refer to the storage rather than the sd* disk name, since multiple sd* names can refer to the same storage.
The error appears because the multipath daemon is holding the sd* device open, so the LVM commands cannot open the device exclusively. Commands such as "pvcreate /dev/sd*" fail with:
Can't open exclusively. Mounted filesystem?
To resolve the issue:
1) Run "fuser -m -v /dev/sd*" to see which processes are accessing the device.
2) If multipathd appears, run "multipath -ll" to determine which mpath* device maps to that disk.
3) Run "pvcreate /dev/mapper/mpath*" to create the physical volume on the multipath device.
Continue creating the volume group, logical volume, and filesystem using the correct path to the disk.
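A sketch of the resolution, assuming the new partition is /dev/sdc1 and multipath maps the disk to mpath3 (both names hypothetical):
# fuser -m -v /dev/sdc1 (shows multipathd holding the device)
# multipath -ll (identify which mpath device contains sdc)
# pvcreate /dev/mapper/mpath3p1 (create the PV on the multipath partition instead)
Partitions on multipath devices typically appear as /dev/mapper/mpath3p1 or mpath3-part1, depending on the configuration.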
7. What is the recommended region size for mirrored LVM volumes?
Region size can impact performance: with larger region sizes there are fewer writes to the log device, which can improve performance, while smaller region sizes lead to faster recovery after a machine crash. The default region size of 512KB balances these considerations fairly well. Changing the region size for a mirrored LVM volume usually does not yield much performance gain, but if you can simulate your workload, try a few different region sizes to confirm.
There is also a limitation in the cluster infrastructure: cluster mirrors larger than 1.5TB cannot be created with the default 512KB region size. Users who require larger mirrors should increase the region size from its default; failing to do so causes the LVM creation to hang and may hang other LVM commands as well.
As a general guideline for mirrors larger than 1.5TB, take the mirror size in terabytes, round it up to the next power of 2, and use that number as the -R argument to lvcreate. For example, for a 1.5TB mirror specify -R 2, for a 3TB mirror specify -R 4, and for a 5TB mirror specify -R 8. For more information, refer to section 4.4.3, Creating Mirrored Volumes.
The same calculation can be used to choose the region size for large mirrored volumes of 16 to 20TB. For example, when creating a 20TB cluster mirror, set a region size of 32 with the -R 32 argument to lvcreate, as shown below:
$ lvcreate -m1 -L 20T -R 32 -n mirror vol_group
8. How do I recreate an accidentally deleted partition table that contained an LVM physical volume on Red Hat Enterprise Linux?
NOTE: This is a very difficult procedure and does not guarantee that data can be recovered. Where possible, try the procedure on a snapshot of the data first, or engage a data recovery company to assist with restoring the data.
The location of the LVM2 label can be found on the disk with "hexdump -C /dev/<device> | grep LABELONE" (be sure to locate the correct label, not another one that might have been added by mistake). From the location of the label we can work out the cylinder at which the partition holding the LVM PV starts. Recreating the partition at the correct location allows the LVM2 tools to find the PV, and the volume group can then be reactivated. If you cannot locate the LVM2 label, this procedure will not help you. The same approach can be used for other data located on the disk (such as ext3 filesystems).
If the following symptoms are observed, this solution may apply to you.
When scanning for the volume group, it is not found:
# vgchange -an vgtest
Volume group "vgtest" not found
Looking through the LVM volume group history (in /etc/lvm/archive/<vgname>_<seqno>-<id>.vg), the device backing this volume group used to contain partitions:
$ grep device /etc/lvm/archive/vgtest_00004-313881633.vg
device = "/dev/vdb5" # Hint only
The device that should contain the LVM PV now has no partitions.
Try parted's rescue mode first, as it may be able to detect the start of the lost partitions and restore the partition table. If parted rescue does not work, the following procedure can help you rebuild the partition table manually.
Using hexdump, locate the LVM label on the device whose partition table was removed (in the output below, the LABELONE label is at hex offset 0fc08000 bytes into the device):
# hexdump -C /dev/vdb | grep LABELONE
0fc08000 4c 41 42 45 4c 4f 4e 45 01 00 00 00 00 00 00 00 |LABELONE……..|
Converting the byte-address of the LVM2 label to decimal:
0x0fc08000 = 264273920
Run fdisk -l against the device to find out how many bytes there are per cylinder:
# fdisk -l /dev/vdb
Disk /dev/vdb: 2113 MB, 2113929216 bytes
16 heads, 63 sectors/track, 4096 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes <-- 516096 bytes per cylinder
The cylinder at which the partition holding the LVM PV starts can now be computed:
byte position of LVM label (decimal) = 264273920
number of bytes per cylinder = 516096
264273920 / 516096 = 512.063492063 <-- round this down to cylinder 512
Add one because cylinders are numbered from 1, not 0: 512 + 1 = 513, the starting cylinder.
Create a partition table with a partition starting at cylinder 513:
# fdisk /dev/vdb
…
Command (m for help): n
Command action
e extended
p primary partition (1-4)
e
Partition number (1-4): 4
First cylinder (1-4096, default 1): 513
Last cylinder or +size or +sizeM or +sizeK (513-4096, default 4096):
Using default value 4096
Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (513-4096, default 513):
Using default value 513
Last cylinder or +size or +sizeM or +sizeK (513-4096, default 4096): 1024
Command (m for help): p
Disk /dev/vdb: 2113 MB, 2113929216 bytes
16 heads, 63 sectors/track, 4096 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Device Boot Start End Blocks Id System
/dev/vdb4 513 4096 1806336 5 Extended
/dev/vdb5 513 1024 258016+ 83 Linux
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Rescan and activate the volume group:
# pvscan
PV /dev/vdb5 VG vgtest lvm2 [248.00 MB / 0 free]
Total: 1 [8.84 GB] / in use: 1 [8.84 GB] / in no VG: 0 [0 ]
# vgchange -ay vgtest
1 logical volume(s) in volume group “vgtest” now active
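The offset-to-cylinder arithmetic above can also be done directly in the shell; a minimal sketch using the same example values:
# printf '%d\n' 0x0fc08000 (convert the hex offset to decimal: 264273920)
# echo $(( 264273920 / 516096 + 1 )) (integer division rounds down; adding 1 gives the starting cylinder, 513)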
9. LVM2 volume group in partial mode with physical volumes marked missing even though they are available in RHEL
Sometimes an attempt to modify a volume group or logical volume fails because of devices reported missing that are not actually missing:
# lvextend -l+100%PVS /dev/myvg/lv02 /dev/mapper/mpath80
WARNING: Inconsistent metadata found for VG myvg - updating to use version 89
Missing device /dev/mapper/mpath73 reappeared, updating metadata for VG myvg to version 89.
Device still marked missing because of allocated data on it, remove volumes and consider vgreduce --removemissing.
Any attempt to change the VG or an LV claims PVs are missing:
Cannot change VG myvg while PVs are missing.
Consider vgreduce --removemissing.
LVM physical volumes are marked with the missing (m) flag in pvs output even though they are healthy and available:
PV VG Fmt Attr PSize PFree
/dev/mapper/mpath24 myvg lvm2 a-m 56.20G 0
The volume group is marked as 'partial', which causes LVM commands to fail:
VG #PV #LV #SN Attr VSize VFree
myvg 42 10 0 wz-pn- 2.31T 777.11G
Restore each missing physical volume with:
# vgextend --restoremissing <volume_group> <physical_volume>
# vgextend --restoremissing myvg /dev/mapper/mpath24
Volume group “myvg” successfully extended
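After restoring each physical volume, the flags can be re-checked; a brief sketch:
# pvs (the restored PV should no longer show the 'm' flag in its Attr column)
# vgs myvg (the volume group's Attr should no longer contain the partial 'p' flag)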
10. How do I mount LVM partitions on SAN storage connected to a newly-built server?
Scan for Physical Volumes, scan those PVs for Volume Groups, scan those VGs for Logical Volumes, change Volume Groups to active.
# pvscan
# vgscan
# lvscan
# vgchange -ay
The volumes are now ready to mount as usual with the mount command and/or to add to the /etc/fstab file.
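For example, a logical volume in one of the volume groups shown below could be mounted as follows (a sketch; the LV name lv_data, the mount point /data and the ext4 filesystem type are assumptions):
# mkdir -p /data
# mount /dev/VolG_CFD/lv_data /data
and made persistent with an /etc/fstab entry such as:
/dev/VolG_CFD/lv_data /data ext4 defaults 0 0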
Check that the multipath storage is actually available to the host:
[root@host ~]# multipath -l
mpath1 (350011c600365270c) dm-8 HP 36.4G,ST336754LC
[size=34G][features=1 queue_if_no_path][hwhandler=0][rw]
_ round-robin 0 [prio=0][active]
_ 0:0:1:0 sda 8:0 [active][undef]
mpath5 (3600508b4001070510000b00001610000) dm-11 HP,HSV300
[size=15G][features=1 queue_if_no_path][hwhandler=0][rw]
_ round-robin 0 [prio=0][active]
_ 4:0:0:3 sde 8:64 [active][undef]
_ 5:0:0:3 sdo 8:224 [active][undef]
_ round-robin 0 [prio=0][enabled]
_ 4:0:1:3 sdj 8:144 [active][undef]
_ 5:0:1:3 sdt 65:48 [active][undef]
mpath11 (3600508b4000f314a0000400001600000) dm-13 HP,HSV300
[size=500G][features=1 queue_if_no_path][hwhandler=0][rw]
_ round-robin 0 [prio=0][active]
_ 4:0:0:5 sdg 8:96 [active][undef]
_ 5:0:0:5 sdq 65:0 [active][undef]
_ round-robin 0 [prio=0][enabled]
_ 4:0:1:5 sdl 8:176 [active][undef]
_ 5:0:1:5 sdv 65:80 [active][undef]
mpath4 (3600508b4001070510000b000000d0000) dm-10 HP,HSV300
[size=750G][features=1 queue_if_no_path][hwhandler=0][rw]
_ round-robin 0 [prio=0][active]
_ 4:0:0:2 sdd 8:48 [active][undef]
_ 5:0:0:2 sdn 8:208 [active][undef]
_ round-robin 0 [prio=0][enabled]
_ 4:0:1:2 sdi 8:128 [active][undef]
_ 5:0:1:2 sds 65:32 [active][undef]
mpath10 (3600508b4000f314a0000400001090000) dm-12 HP,HSV300
[size=350G][features=1 queue_if_no_path][hwhandler=0][rw]
_ round-robin 0 [prio=0][active]
_ 4:0:0:4 sdf 8:80 [active][undef]
_ 5:0:0:4 sdp 8:240 [active][undef]
_ round-robin 0 [prio=0][enabled]
_ 4:0:1:4 sdk 8:160 [active][undef]
_ 5:0:1:4 sdu 65:64 [active][undef]
mpath3 (3600508b4001070510000b000000a0000) dm-9 HP,HSV300
[size=750G][features=1 queue_if_no_path][hwhandler=0][rw]
_ round-robin 0 [prio=0][active]
_ 4:0:0:1 sdc 8:32 [active][undef]
_ 5:0:0:1 sdm 8:192 [active][undef]
_ round-robin 0 [prio=0][enabled]
_ 4:0:1:1 sdh 8:112 [active][undef]
_ 5:0:1:1 sdr 65:16 [active][undef]
Check that the physical volumes are being scanned and seen by LVM:
[root@host ~]# pvdisplay
--- Physical volume ---
PV Name /dev/dm-14
VG Name VolG_CFD
PV Size 750.00 GB / not usable 4.00 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 191999
Free PE 0
Allocated PE 191999
PV UUID 0POViC-2Pml-AmfI-W5Mh-s6fC-Ei18-hCOOoJ
--- Physical volume ---
PV Name /dev/dm-13
VG Name VolG_CFD
PV Size 500.00 GB / not usable 4.00 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 127999
Free PE 1
Allocated PE 127998
PV UUID RcDER4-cUwa-sDGF-kieA-44q9-DLm2-1CMOh4
--- Physical volume ---
PV Name /dev/dm-15
VG Name VolG_FEA
PV Size 750.00 GB / not usable 4.00 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 191999
Free PE 0
Allocated PE 191999
PV UUID 5DprQD-OOs9-2vxw-MGT1-13Nl-YTTt-BnxGhq
--- Physical volume ---
PV Name /dev/dm-12
VG Name VolG_FEA
PV Size 350.00 GB / not usable 4.00 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 89599
Free PE 0
Allocated PE 89599
PV UUID uQIqyq-0PiC-XT2e-J90h-tRBk-Nb8L-MdleF5
--- Physical volume ---
PV Name /dev/dm-11
VG Name vgnbu
PV Size 15.00 GB / not usable 4.00 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 3839
Free PE 0
Allocated PE 3839
PV UUID aZNgCY-eRYe-3HmZ-bnAD-4kGN-9DhN-8I5R7D
--- Physical volume ---
PV Name /dev/sdb4
VG Name vg00
PV Size 33.24 GB / not usable 16.86 MB
Allocatable yes (but full)
PE Size (KByte) 32768
Total PE 1063
Free PE 0
Allocated PE 1063
PV UUID PKvqoX-hWfx-dQUv-9NCL-Re78-LyIa-we69rm
--- Physical volume ---
PV Name /dev/dm-8
VG Name vg00
PV Size 33.92 GB / not usable 12.89 MB
Allocatable yes
PE Size (KByte) 32768
Total PE 1085
Free PE 111
Allocated PE 974
PV UUID GnjUsb-NxJR-aLgC-fga8-Ct1q-cf89-xhaaFs
11. How to Grow an LVM Physical Volume after resizing the disk?
Note: This procedure has the potential to lose data on the disk if done improperly; we strongly recommend performing a backup before proceeding.
For example, suppose we have resized a disk from 50GB to 120GB:
# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb1 VolGroup02 lvm2 a-- 50.00g 2.00g
While the underlying storage (e.g. /dev/sdb) has been resized, the partition used as a physical volume (e.g. /dev/sdb1) remains at the smaller size. The partition must be resized first, then the physical volume.
First, confirm the actual storage size with "fdisk -ul /dev/sdb" and observe the increased disk size. Depending on how the storage is presented, a reboot may be needed for the new size to appear.
Next, resize the partition on the disk: note the starting sector in fdisk -ul /dev/sdb, delete the partition with fdisk, and re-create it with the same starting sector and the (default) last sector of the drive as the ending sector. Write the partition table and confirm the change (and the correct starting sector) with fdisk -ul /dev/sdb.
Now run pvresize /dev/sdb1 to grow the PV onto the rest of the expanded partition. This creates free extents within the volume group into which a logical volume can then be grown. If the logical volume is resized with lvresize -r, the filesystem inside the logical volume is grown as well.
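A condensed sketch of the command sequence, assuming the disk is /dev/sdb, the PV is /dev/sdb1, and the logical volume to grow is /dev/VolGroup02/lv_data (the LV name is hypothetical):
# fdisk -ul /dev/sdb (confirm the new disk size and note the partition's starting sector)
# fdisk /dev/sdb (delete /dev/sdb1 and re-create it with the same starting sector, ending at the last sector)
# partprobe /dev/sdb (re-read the partition table; a reboot may be required instead)
# pvresize /dev/sdb1 (grow the PV into the enlarged partition)
# lvresize -r -l +100%FREE /dev/VolGroup02/lv_data (grow the LV and its filesystem into the new free extents)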
12. Delete an LVM partition
Remove the LVM partition's entry from /etc/fstab. For example:
/dev/sda2 / ext3 defaults 1 1
/dev/sda1 /boot ext3 defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/sda3 swap swap defaults 0 0
/dev/volumegroup/lvm /var ext3 defaults 0 0
Unmount the LVM partition:
# umount /dev/volumegroup/lvm
Deactivate the logical volume:
# lvchange -an /dev/volumegroup/lvm
Delete the logical volume:
# lvremove /dev/volumegroup/lvm
Deactivate the volume group:
# vgchange -an volumegroup
Delete the volume group:
# vgremove volumegroup
Delete the physical volume(s):
# pvremove /dev/sdc1 /dev/sdc2
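To confirm the cleanup, the standard reporting commands can be run afterwards (a quick sketch):
# lvs (the removed logical volume should no longer be listed)
# vgs (the removed volume group should no longer be listed)
# pvs (the wiped physical volumes should no longer be listed)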
13. Restore a volume group in Red Hat Enterprise Linux when one of its physical volumes has failed.
NOTE: These commands have the potential to corrupt data and should be executed at one's own discretion.
This procedure requires a recent backup of the LVM configuration. Such a backup can be generated with the vgcfgbackup command and is stored in /etc/lvm/backup/<volume_group_name>. The /etc/lvm/archive directory also contains recent configurations, created whenever the volume group metadata is modified. It is recommended that these files be regularly backed up to a safe location so that they are available if needed for recovery.
Assuming a physical volume that was part of a volume group has been lost, the following procedure may be used. It replaces the failed physical volume with a new disk, making the remaining logical volumes accessible for recovery purposes.
1. Execute the following command to display information about the volume group in question:
# vgdisplay --partial --verbose
The output will be similar to the following. Note that the --partial flag is required to activate or manipulate a volume group with one or more physical volumes missing, and that using this flag with LVM2 activation commands (vgchange -a) forces the volumes to be activated read-only.
Partial mode. Incomplete volume groups will be activated read-only.
Finding all volume groups
Finding volume group “volGroup00”
Couldn't find device with uuid '9eWicl-1HSB-Fkcz-wrMf-DzMd-Dgx2-Kyc11j'.
Couldn't find device with uuid '9eWicl-1HSB-Fkcz-wrMf-DzMd-Dgx2-Kyc11j'.
Couldn't find device with uuid '9eWicl-1HSB-Fkcz-wrMf-DzMd-Dgx2-Kyc11j'.
Couldn't find device with uuid '9eWicl-1HSB-Fkcz-wrMf-DzMd-Dgx2-Kyc11j'.
--- Volume group ---
VG Name volGroup00
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 33
VG Access read
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 5
Act PV 5
VG Size 776.00 MB
PE Size 4.00 MB
Total PE 194
Alloc PE / Size 194 / 776.00 MB
Free PE / Size 0 / 0
VG UUID PjnqwZ-AYXR-BUyo-9VMN-uSRZ-AFlj-WOaA6z
--- Logical volume ---
LV Name /dev/volGroup00/myLVM
VG Name volGroup00
LV UUID az6REi-mkt5-sDpS-4TyH-GBj2-cisD-olf6SW
LV Write Access read/write
LV Status available
# open 0
LV Size 776.00 MB
Current LE 194
Segments 5
Allocation inherit
Read ahead sectors 0
Block device 253:0
--- Physical volumes ---
PV Name /dev/hda8
PV UUID azYDV8-e2DT-oxGi-5S9Q-yVsM-dxoB-DgC4qN
PV Status allocatable
Total PE / Free PE 48 / 0
PV Name /dev/hda10
PV UUID SWICqb-YIbb-g1MW-CY60-AkNQ-gNBu-GCMWOi
PV Status allocatable
Total PE / Free PE 48 / 0
PV Name /dev/hda11
PV UUID pts536-Ycd5-kNHR-VMZY-jZRv-nTx1-XZFrYy
PV Status allocatable
Total PE / Free PE 48 / 0
PV Name /dev/hda14
PV UUID OtIMPe-SZK4-arxr-jGlp-eiHY-2OA6-kyntME
PV Status allocatable
Total PE / Free PE 25 / 0
PV Name unknown device
PV UUID 9eWicl-1HSB-Fkcz-wrMf-DzMd-Dgx2-Kyc11j
PV Status allocatable
Total PE / Free PE 25 / 0
Note the PV UUID line:
PV UUID 9eWicl-1HSB-Fkcz-wrMf-DzMd-Dgx2-Kyc11j
This line contains the universally unique identifier (UUID) of the physical volume that failed, which is needed in the next step.
2. If the physical volume failed, it must be replaced with a disk or partition that is equal in size to or larger than the failed volume. If the disk did not fail but was overwritten or corrupted, the same volume can be re-used. Run the following command to re-initialize the physical volume:
# pvcreate --restorefile /etc/lvm/backup/<volume_group_name> --uuid <UUID> <device>
The UUID is the value taken from the output in step 1. In this example the full command would be:
# pvcreate --restorefile /etc/lvm/backup/volGroup00 --uuid 9eWicl-1HSB-Fkcz-wrMf-DzMd-Dgx2-Kyc11j /dev/hda15
Couldn't find device with uuid 9eWicl-1HSB-Fkcz-wrMf-DzMd-Dgx2-Kyc11j.
Physical volume "/dev/hda15" successfully created
Note that when overwriting a previously-used LVM2 physical volume (for example when recovering from a situation where the volume had been inadvertently overwritten), the -ff option must be given to the pvcreate command.
3. The new physical volume has now been initialized with the UUID of the old physical volume. The volume group metadata may be restored with the following command:
# vgcfgrestore --file /etc/lvm/backup/<volume_group_name> <volume_group_name>
Continuing the earlier example, the exact command would be:
# vgcfgrestore --file /etc/lvm/backup/volGroup00 volGroup00
Restored volume group volGroup00
4. To check that the new physical volume is intact and the volume group is functioning correctly, execute vgdisplay -v.
Note: This procedure will not restore any data lost from a physical volume that has failed and been replaced. If a physical volume has only been partially overwritten (for example, the label or metadata regions were damaged or destroyed), user data may still exist in the data area of the volume and may be recoverable with standard tools once access to the volume group is restored using these steps.
14. What is a Logical Volume Manager (LVM) snapshot and how do we use it?
The Logical Volume Manager (LVM) provides the ability to take a snapshot of any logical volume for the purpose of obtaining a backup of a partition in a consistent state. Traditionally the solution has been to mount the partition read-only, apply table-level write locks to databases, or shut down the database engine; all measures that adversely impact availability (though not as much as losing data without a backup would). With LVM snapshots it is possible to obtain a consistent backup without compromising availability.
An LVM snapshot works by logging the changes made to the filesystem to the snapshot volume rather than mirroring the whole partition. When you create a snapshot you therefore do not need space equal to the size of the volume you are snapshotting, only enough for the changes it will undergo during the lifetime of the snapshot. How much that is depends on how much data is being written to the volume and how long you intend to keep the snapshot.
The example below shows LVM snapshot creation. Here we create a 500MB logical volume for the snapshot, which allows up to 500MB of changes on the origin volume during the lifetime of the snapshot. The following command creates /dev/ops/dbbackup as a snapshot of /dev/ops/databases:
# lvcreate -L500M -s -n dbbackup /dev/ops/databases
lvcreate -- WARNING: the snapshot must be disabled if it gets full
lvcreate -- INFO: using default snapshot chunk size of 64 KB for "/dev/ops/dbbackup"
lvcreate -- doing automatic backup of "ops"
lvcreate -- logical volume "/dev/ops/dbbackup" successfully created
Now we create the mount point and mount the snapshot:
# mkdir /mnt/ops/dbbackup
# mount /dev/ops/dbbackup /mnt/ops/dbbackup
mount: block device /dev/ops/dbbackup is write-protected, mounting read-only
After performing the backup from the snapshot we release it. The snapshot is automatically released when it fills up, but maintaining it incurs a system overhead in the meantime.
# umount /mnt/ops/dbbackup
# lvremove /dev/ops/dbbackup
lvremove -- do you really want to remove "/dev/ops/dbbackup"? [y/n]: y
lvremove -- doing automatic backup of volume group "ops"
lvremove -- logical volume "/dev/ops/dbbackup" successfully removed
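The backup itself is taken from the mounted snapshot before it is released; a minimal sketch (the archive path /backup/databases.tar.gz is hypothetical):
# tar -czf /backup/databases.tar.gz -C /mnt/ops/dbbackup .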
15. How do we create a new LVM volume from an LVM snapshot?
Create a sparse file to provide room for the volume group:
# dd if=/dev/zero of=file bs=1 count=1 seek=3G
Set up the sparse file as a block device:
# losetup -f file
Create a volume group and a volume:
# pvcreate /dev/loop0
# vgcreate vgtest /dev/loop0
# lvcreate -l 10 -n lvoriginal vgtest
Create a filesystem and put some content on it:
# mkdir /mnt/tmp /mnt/tmp2
# mkfs.ext4 /dev/vgtest/lvoriginal
# mount /dev/vgtest/lvoriginal /mnt/tmp
# echo state1 >> /mnt/tmp/contents
Convert the volume to a mirror (the volume group has to have enough free PEs):
# lvconvert -m 1 /dev/vgtest/lvoriginal
# lvconvert --splitmirrors 1 -n lvclone /dev/vgtest/lvoriginal
Change the contents on the original volume:
# echo state2 >> /mnt/tmp/contents
Now mount the clone volume and verify that it represents the original's old state:
# mount /dev/vgtest/lvclone /mnt/tmp2
# cat /mnt/tmp2/contents
# cat /mnt/tmp/contents
16. How can I boot from an LVM snapshot on Red Hat Enterprise Linux?
The snapshot has to be in the same volume group as the original root logical volume. Often, other filesystems (e.g. /var, /usr) should be snapshotted at the same time if they are separate filesystems from root.
Procedure:
Step 1: Create a snapshot of any local filesystems (for RHEL6 it is recommended not to put a '-' in the name, as it makes addressing the volume more complicated):
# lvcreate -s -n varsnapshot -L 1G /dev/VolGroup00/var
# lvcreate -s -n rootsnapshot -L 2G /dev/VolGroup00/root
Step 2: Mount the root snapshot so we can change the /etc/fstab of the snapshot version:
# mkdir /mnt/snapshot
# mount /dev/VolGroup00/rootsnapshot /mnt/snapshot
# vi /mnt/snapshot/etc/fstab
Step 3: Change the entries in /mnt/snapshot/etc/fstab to point to the snapshot volumes rather than the original devices:
/dev/VolGroup00/rootsnapshot / ext3 defaults 1 1
/dev/VolGroup00/varsnapshot /var ext3 defaults 1 2
Step 4: Now unmount the snapshot:
# cd /tmp
# umount /mnt/snapshot
Step 5: Add an entry in grub to boot into the snapshot.
Step 5a: For Red Hat Enterprise Linux 5, copy the current default grub.conf entry and make a new entry pointing to the snapshot version.
/boot/grub/grub.conf entry before:
…
default=0
…
title Red Hat Enterprise Linux 5 (2.6.18-194.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/root
initrd /initrd-2.6.18-194.el5.img
After:
…
default=0
…
title Snapshot (2.6.18-194.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/rootsnapshot
initrd /initrd-2.6.18-194.el5.img
title Red Hat Enterprise Linux 5 (2.6.18-194.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/root
initrd /initrd-2.6.18-194.el5.img
Step 5b: For Red Hat Enterprise Linux 6, copy the default grub.conf entry and make a new entry pointing to the snapshot version.
/boot/grub/grub.conf before:
…
default=0
…
title Red Hat Enterprise Linux Server (2.6.32-279.9.1.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-279.9.1.el6.x86_64 ro root=/dev/mapper/VolGroup00-rootvol rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD quiet SYSFONT=latarcyrheb-sun16 rhgb crashkernel=auto rd_LVM_LV=VolGroup00/rootvol KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM
initrd /initramfs-2.6.32-279.9.1.el6.x86_64.img
/boot/grub/grub.conf after:
…
default=0
…
title Snapshot (2.6.32-279.9.1.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-279.9.1.el6.x86_64 ro root=/dev/mapper/VolGroup00-rootsnapshot rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD quiet SYSFONT=latarcyrheb-sun16 rhgb crashkernel=auto rd_LVM_LV=VolGroup00/rootvol KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM
initrd /initramfs-2.6.32-279.9.1.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-279.9.1.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-279.9.1.el6.x86_64 ro root=/dev/mapper/VolGroup00-rootvol rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD quiet SYSFONT=latarcyrheb-sun16 rhgb crashkernel=auto rd_LVM_LV=VolGroup00/rootvol KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM
initrd /initramfs-2.6.32-279.9.1.el6.x86_64.img
NOTE: In the RHEL6 grub menu entry, change "root=" to point to the snapshot but DO NOT change rd_LVM_LV to point to the snapshot, because that would prevent both the real and the snapshot devices from activating on boot. Snapshots cannot be activated without the real volume being activated as well.
Step 6: You can now boot into the snapshot by choosing the new grub menu entry. To boot back onto the real LVM device, simply select the original grub menu entry.
Step 7: You can verify that you are booted into the snapshot version by checking which LVM device is mounted:
# mount | grep VolGroup00
/dev/mapper/VolGroup00-rootsnapshot on / type ext4 (rw)
/dev/mapper/VolGroup00-varsnapshot on /var type ext4 (rw)
You can remove the snapshot with the following procedure:
Step 1) Remove the grub entry for your snapshot volume from /boot/grub/grub.conf.
Step 2) Boot into (or ensure you are already booted into) the real LVM volume:
# mount | grep VolGroup00
/dev/mapper/VolGroup00-root on / type ext4 (rw)
/dev/mapper/VolGroup00-var on /var type ext4 (rw)
Step 3) Remove the snapshot volumes:
# lvremove /dev/VolGroup00/rootsnapshot
# lvremove /dev/VolGroup00/varsnapshot
Summary: To boot into an LVM snapshot of the root filesystem, you must change only the following locations:
/etc/fstab on the LVM snapshot volume (do not change fstab on the real volume)
/boot/grub/grub.conf to add an entry that points to the snapshot device as the root disk.
There is no need to rebuild initrd to boot into the snapshot.
17. How much memory is consumed by an LVM snapshot?
An LVM snapshot implements a "clone" of an existing logical volume (LV) by tracking "exceptions" to the data on the origin volume (a write request to an area of the origin volume triggers the creation of an exception on the snapshot). An exception tracks one or more "chunks" of data that have changed since the snapshot was taken. The chunk size is set at snapshot creation time (see the lvcreate man page, "--chunksize" option) and must be a power of 2 between 4kB and 512kB; the default chunk size is 4k.
Each exception consumes a small amount of kernel memory. The memory consumed by one exception can be found by examining the dm-snapshot-ex slab cache statistics in /proc/slabinfo; a single object in this cache is a single exception.
The LVM snapshot implementation is efficient and, where possible, stores more than one chunk of data per exception. In the worst case only one chunk is stored per exception; in the best case 255 consecutive chunks are stored per exception. The write I/O pattern determines whether LVM can store consecutive chunks in a single exception: mostly sequential I/O pushes memory usage towards the best case, while mostly random I/O pushes it towards the worst case.
- i386 (32-bit) : 145 exceptions / page (assuming 4k page size, approximately 28 bytes / exception)
- x86_64 (64-bit) : 112 exceptions / page (assuming 4k page size, approximately 36.5 bytes / exception)
Approximate calculation of memory usage:
The memory needs of an LVM snapshot can be estimated with a simple calculation involving the number of exceptions (assuming each exception stores only one chunk of data), the number of exceptions per page, and the page size:
W = (N / E) * P
where:
W is the worst-case memory overhead, in bytes
N is the worst case number of exceptions
E is the number of exceptions per page of memory
P is the page size, in bytes
The number of exceptions is calculated from the size of the snapshot logical volume and the chunk size:
Worst case: N = S / C
Best case: N = S / (C * 255)
where:
S is the size of the snapshot logical volume
C is the chunksize
In the best case, 255 chunks can be stored per exception.
For example:
Architecture: i386
Origin LV size: 100GB
Snapshot LV size: 10GB
Chunk size: 512k
Worst-case memory overhead: 10*1024*1024*1024 / 524288 / 145 * 4096 = 578,524 bytes
Best-case memory overhead: 10*1024*1024*1024 / (524288*255) / 145 * 4096 = 2,268 bytes
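The per-exception memory cost on a given kernel can be read directly from the slab statistics mentioned above; a brief sketch:
# grep dm-snapshot-ex /proc/slabinfo (the objsize column shows the bytes consumed per exception, and objperslab how many exceptions fit in one slab)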