Here is the situation: we have an FC SAN with one volume dedicated to virtual machines' disks. That SAN volume is visible to the server via two paths (two storage controllers, two FC switches and two HBAs), with failover handled by multipath-tools. The volume has no partition table; it is an LVM physical volume (PV) belonging to a volume group (VG) named lvm-guests-shared-VG, and the LVM logical volumes (LVs) in this VG are the virtual disks of libvirt/KVM virtual machines.
The problem: suppose we need to expand one virtual machine's disk, and by so much that the SAN volume has to be expanded too; and we need to do all of that online, without restarting anything.
Step 1 – Expand SAN Volume
No explanation is needed here, as this is a basic function of practically every storage array.
Step 2 – Make the SAN Volume’s New Size Visible on the Server
First we have to rescan the volume's geometry. We use the multipath -l command to list the block devices through which the SAN volume is reachable on each path. Example output:
lvm-guests-shared (360080e500023edb4000004a751662caf) dm-8 IBM,1746 FAStT
size=1.0T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=-1 status=active
| `- 0:0:2:2 sde 8:64 active undef running
`-+- policy='round-robin 0' prio=-1 status=enabled
`- 0:0:1:2 sda 8:0 active undef running
Devices sda and sde are the ones we were looking for. To make the kernel aware of their new size, we write the character "1" to /sys/block/DEVICE/device/rescan for each of them. For example:
# echo 1 >/sys/block/sda/device/rescan; echo 1 >/sys/block/sde/device/rescan
Second, we let multipath know about the change by running multipath -r.
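If there are more paths, or you would rather not read the path device names off the multipath output by hand, the kernel exposes them under the multipath map's slaves directory. A minimal sketch, assuming the map is dm-8 as in the example above (the dm name is shown in the multipath -l header):
# Rescan every path (slave device) of the dm-8 multipath map;
# each entry under slaves/ is a path device such as sda or sde.
for path in /sys/block/dm-8/slaves/*; do
    echo 1 > "$path/device/rescan"
done
# Then reload the multipath map:
multipath -r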
Step 3 – Utilizing New Space in LVM
To make the PV consume the whole SAN volume's space, we execute pvresize -v /dev/DEVICE. The DEVICE can be found in the PV Name field of pvdisplay output; of course, if there are many PVs on the server, we take into account only the one contributing to our lvm-guests-shared-VG (see the pvs one-liner after the example output below). Example pvresize output looks like this:
# pvresize -v /dev/dm-8
Using physical volume(s) on command line
Archiving volume group "lvm-guests-shared-VG" metadata (seqno 16).
Resizing physical volume /dev/dm-8 from 262143 to 524287 extents.
Resizing volume "/dev/dm-8" to 4294966912 sectors.
Updating physical volume "/dev/dm-8"
Creating volume group backup "/etc/lvm/backup/lvm-guests-shared-VG" (seqno 17).
Physical volume "/dev/dm-8" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
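As promised above: if the server has many PVs and the right one is not obvious, it can also be picked out non-interactively. A minimal sketch, assuming the standard LVM reporting tools:
# List every PV with its VG and size, keeping only those in our VG:
pvs -o pv_name,vg_name,pv_size | grep lvm-guests-shared-VG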
Now vgdisplay will show the VG's new size.
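For a quicker one-line check of how much room the resize gained, vgs works too; a sketch, again assuming the standard LVM reporting tools:
# Show the VG with a free-space column appended:
vgs -o +vg_free lvm-guests-shared-VG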
The last LVM step is to expand the LV. We do this with lvresize -L 800G /dev/lvm-guests-shared-VG/resized-virtual-machine, where 800G is the new LV size (making use of the bigger VG) and resized-virtual-machine is the name of the LV holding our virtual machine's disk.
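If all of the newly gained space should go to this single LV, the target size does not have to be computed by hand; a sketch using standard lvresize options:
# Grow the LV by every remaining free extent in the VG:
lvresize -l +100%FREE /dev/lvm-guests-shared-VG/resized-virtual-machine
# Verify the result:
lvs lvm-guests-shared-VG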
Step 4 – Refresh Disk Size on the Guest Machine
This is quite tricky, since the VirtIO driver providing the block device with the guest's disk has no rescan file under the /sys/block/ tree. Refreshing the virtual disk size was implemented in libvirt 0.9.8 and can be triggered by the block_resize command in the qemu monitor. The command requires the qemu disk name as an argument; to get this name, issue virsh qemu-monitor-command resized-virtual-machine --hmp "info block". The output should look like this:
# virsh qemu-monitor-command resized-virtual-machine --hmp "info block"
drive-virtio-disk0: removable=0 io-status=ok file=/dev/lvm-guests-shared-VG/resized-virtual-machine ro=0 drv=raw encrypted=0
drive-ide0-1-0: removable=1 locked=0 tray-open=0 io-status=ok [not inserted]
Of course, resized-virtual-machine is the name of our virtual machine (domain in libvirt terminology).
So the disk name we were looking for is drive-virtio-disk0. Now we can refresh the disk size on the guest side by executing virsh qemu-monitor-command resized-virtual-machine --hmp "block_resize drive-virtio-disk0 800G".
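Libvirt also exposes this operation directly as virsh blockresize, which saves the monitor detour. A sketch, assuming the disk is identified by its source path and that your virsh accepts a size suffix (older versions expect the size as a bare number in KiB):
# Resize the disk of a running domain through libvirt directly:
virsh blockresize resized-virtual-machine /dev/lvm-guests-shared-VG/resized-virtual-machine 800G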
Now we can log in to the guest and make use of the new space.
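What exactly that means depends on the guest's layout. As one example, assuming the guest sees the LV as /dev/vda with an ext4 filesystem sitting directly on the whole device (no partition table, mirroring the host-side layout), the filesystem can be grown online:
# On the guest: check that the kernel noticed the new capacity of vda...
dmesg | tail
# ...then grow the mounted ext4 filesystem to fill the enlarged disk:
resize2fs /dev/vda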