/dev/urandom

Pseudorandom thoughts generator.

I'm Niccolò Maggioni.
Student, geek and developer.

Reducing LVM-Thin volumes in Proxmox

I knew the day would come: the way-overprovisioned (hey, it’s called a homelab for a reason) LVM-Thin storage pool I use to back my LXC containers in Proxmox has filled up.
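
For the record, how full the pool itself is can be checked straight from lvs; a minimal sketch, assuming the same vmdata volume group that backs my containers:

# Show size and data/metadata usage of every LV (pool included) in "vmdata":
lvs -o lv_name,lv_size,data_percent,metadata_percent vmdata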

By looking at the actual disk usage with this handy snippet, I figured I could easily reclaim some space, though:

for vmid in $(pct list | cut -d' ' -f1 | grep -v VMID); do
    echo "${vmid} - $(pct df ${vmid} | grep rootfs | awk '{ print "TOT:", $3, " USED:", $4 }')"
done
[...]
118 - TOT: 7.8G USED: 2.0G
119 - TOT: 62.5G USED: 17.1G
120 - TOT: 31.2G USED: 3.5G
121 - TOT: 7.8G USED: 1.3G
122 - TOT: 7.8G USED: 2.0G
[...]

It’s obvious that some containers actually needed way less space than I initially expected when I created them, but it turns out Proxmox’s WebUI is only equipped to extend LVM “disks” (LVs), not shrink them - rightfully so I’d say, since the latter is a far riskier operation, as we’ll see in a moment.
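
As an aside, the CLI mirrors this limitation: pct resize happily grows a disk but refuses to shrink one. A minimal sketch of the growing case, with my usual placeholders:

# Grow a container's root disk by 4G - the only direction pct resize supports:
pct resize <ID> rootfs +4G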

Having researched the issue in the past I remembered the resizing operations being quite tedious, but luckily I quickly found an article on Yomi’s blog with the exact steps on how to do this swiftly. Yomi seems to be operating on a standard LVM pool though, not a thin one, so some adjustments needed to be made; specifically, the newly gained space needed to be reclaimed (discarded) both to actually free it and to prevent the LVM logical volume from extending past the boundaries of its inner filesystem.

For the sake of posterity, the offending error that both pct start <ID> and lvs were throwing was roughly the same:

WARNING: Thin volume vmdata/vm-<ID>-disk-<N> maps 12386828288 while the size is only 4294967296

These two Proxmox forum threads pointed me to the solution on Red Hat’s Bugzilla. This has apparently been a known issue since 2017, but given its mix of difficulty to solve and low priority, it looks like we’ll have to make do with the workaround mentioned there. The trick here is to:

  1. Shut down the LXC container.
  2. Check the path of the container’s root LV.
  3. Check the container’s filesystem for integrity.
  4. Resize the filesystem itself.
  5. Reduce the LVM LV to the desired size.
  6. Check if lvs complains (if you don’t see any warnings like the one above, the thin volume probably never needed to grow much and you can jump straight to step 11).
  7. Take note of the offset that lvs gave you and resize the LV to its original size.
  8. Discard the extra space at the end of the LV.
  9. Shrink the LV back down.
  10. Ensure that lvs is now happy.
  11. Edit Proxmox’s config to reflect the new disk size.
  12. Restart the container.

Here are the above steps translated to LVM commands, but beware: most of them are potentially highly destructive, so be sure to have a recent backup of your containers handy and proceed with caution, scrutinizing the output of each command since there can be subtle but important differences for each container:

pct shutdown <ID>                                                 # step 1
lvdisplay | grep "LV Path\|LV Size" | grep <ID>                   # step 2
e2fsck -fy /dev/vmdata/vm-<ID>-disk-<N>                           # step 3
resize2fs /dev/vmdata/vm-<ID>-disk-<N> <NEW_SIZE>G                # step 4
lvreduce -L <NEW_SIZE>G /dev/vmdata/vm-<ID>-disk-<N>              # step 5
lvs | head && echo -e "\nOffset is $({ lvs --units b 2>&1 | grep -Po '(?<=is only )[0-9]+'; } || echo '0') bytes"  # step 6
lvextend -L <OLD_SIZE>G /dev/vmdata/vm-<ID>-disk-<N>              # step 7
blkdiscard -f --offset <OFFSET> /dev/vmdata/vm-<ID>-disk-<N>      # step 8
lvreduce -L <NEW_SIZE>G /dev/vmdata/vm-<ID>-disk-<N>              # step 9
lvs | head                                                        # step 10
vim /etc/pve/lxc/<ID>.conf  # step 11, or scripted: sed -i 's/\(^rootfs:.*,size=\)<OLD_SIZE>/\1<NEW_SIZE>/' /etc/pve/lxc/<ID>.conf
pct start <ID>                                                    # step 12

I made them purposefully ugly so that you can’t simply copy-paste them; you’ll be glad I did when you don’t obliterate the wrong LV.

For maximum clarity regarding step 8, the exact offset to start discarding from is the second size mentioned by lvs:

$ lvs --units b | head
[...]
WARNING: Thin volume vmdata/vm-<ID>-disk-<N> maps 12386828288 B while the size is only 4294967296 B .
[...]
# --------->--------->--------->--------->--------->--------->--------->--------->------!!!!!!!!!!----

$ blkdiscard -f --offset 4294967296 /dev/vmdata/vm-<ID>-disk-<N>

The last handy tip I have for whoever will cross these seas is a snippet to check whether you’ve remembered to edit all the needed Proxmox LXC configs or have forgotten a couple along the way; obvious discrepancies indicate that the config file hasn’t been updated to reflect the new disk size:

for vmid in $(pct list | cut -d' ' -f1 | grep -v VMID); do
    echo "${vmid} - $(pct df ${vmid} | grep rootfs | awk '{ print $3 }') == $(grep -Po '(?<=size=)[0-9]+G' /etc/pve/lxc/${vmid}.conf)"
done
[...]
116 - 7.8G == 8G
117 - 11.7G == 12G
118 - 7.8G == 8G
119 - 31.0G == 32G
120 - 7.6G == 8G
121 - 3.9G == 8G # <-- !
122 - 7.8G == 8G
[...]

EDIT - What if this problem comes back up randomly?

If, after moving some VMs/containers around, a system crash, or generally toying with the underlying storage, LVM starts complaining again that WARNING: Thin volume vmdata/vm-<ID>-disk-<N> maps <size1> B while the size is only <size2> B, a handy and generally safe thing to try is to simply back up and restore the affected guests directly through Proxmox. This usually fixes issues with the thin allocations, since the volumes are deleted and recreated based on the (raw) contents of the backup archive.
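
In commands, that roughly boils down to the following; a sketch assuming the default local storage for the archive and the vmdata pool as the restore target, so adjust names to your setup:

# Back the container up to the "local" storage, stopping it for consistency:
vzdump <ID> --storage local --mode stop --compress zstd
# Restore it in place, recreating the thin volume from the archive:
pct restore <ID> /var/lib/vz/dump/vzdump-lxc-<ID>-<TIMESTAMP>.tar.zst --storage vmdata --force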


