I recently started migrating servers with relatively low storage space requirements to SSDs. In many cases the HDDs that get replaced are much bigger than required and unfortunately the zpools have been configured to use all the available space. Since shrinking a pool is not supported by ZFS directly, the procedure is a little more elaborate.
Preconditions & caveats
This assumes that you're running a ZFS root pool named oldpool using a "classic" disk layout, e.g.:
# gpart show
=>       34  488397101  ada0  GPT  (232G)
         34        128     1  freebsd-boot  (64k)
        162    8388608     2  freebsd-swap  (4.0G)
    8388770  480008365     3  freebsd-zfs  (228G)
And a simple pool layout, e.g.:
NAME               USED  AVAIL  REFER  MOUNTPOINT
oldpool            659M   224G    21K  none
oldpool/root       659M   224G   421M  /
oldpool/root/tmp  2.15M   224G  2.15M  /tmp
oldpool/root/usr   182M   224G   182M  /usr
oldpool/root/var  54.3M   224G  54.3M  /var
It also assumes that you're replacing one disk with another (HDD to SSD in this case).
This example uses a single-disk pool, but the general principle works regardless of the type of pool (mirror, raidz, ...). /dev/ada0 is the existing drive, /dev/ada1 the new SSD. The procedure ignores trying to force 4K block sizes (ashift 12).
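As an aside, if you did want to force 4K blocks, one common approach on FreeBSD (assuming a version recent enough to have this sysctl, roughly 10.1 and later) is to raise the minimum ashift before creating the pool:

```shell
# Assumption: FreeBSD with the vfs.zfs.min_auto_ashift sysctl available.
# Make ZFS use at least 4K (2^12) blocks for newly created vdevs.
sysctl vfs.zfs.min_auto_ashift=12
```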
Creating a new, smaller pool
Create partitions and install ZFS boot code
gpart create -s GPT ada1
gpart add -t freebsd-boot -s 128 ada1
gpart add -t freebsd-swap -s 4G -l newswap ada1
gpart add -t freebsd-zfs -s 105G -l newdisk ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
Create new zpool
zpool create -o cachefile=/tmp/zpool.cache newpool gpt/newdisk
Take a snapshot and transfer it to the new pool
zfs snapshot -r oldpool@shrink
zfs send -vR oldpool@shrink | zfs receive -vFd newpool
zfs destroy -r oldpool@shrink
zfs destroy -r newpool@shrink
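To double-check that everything arrived, you can compare the dataset lists on both pools:

```shell
# The USED/REFER columns of corresponding datasets should match.
zfs list -r -o name,used,refer oldpool
zfs list -r -o name,used,refer newpool
```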
Make new zpool bootable
zpool set bootfs=newpool/root newpool
Export and re-import while preserving cache
This mounts the new pool at /mnt:
cp /tmp/zpool.cache /tmp/newpool.cache
zpool export newpool
zpool import -c /tmp/newpool.cache -R /mnt newpool
zfs set mountpoint=/ newpool/root
cp /tmp/newpool.cache /mnt/boot/zfs/zpool.cache
Getting rid of legacy mounts
If your old pool still used legacy mounts, this is your chance to get rid of them. This is done by:
Removing all mount entries for the pool from /mnt/etc/fstab
Making sure that zfs_enable="YES" is part of /mnt/etc/rc.conf
Making sure that mountpoints are inherited, e.g.:
zfs inherit mountpoint newpool/root/tmp
zfs inherit mountpoint newpool/root/var
zfs inherit mountpoint newpool/root/usr
/mnt/etc/fstab should contain:
/dev/gpt/newswap.eli none swap sw 0 0
This will create an encrypted swap partition. If you prefer unencrypted swap simply remove .eli from the device name.
Modify loader configuration
Only do this if you want to keep the new pool name.
Change vfs.root.mountfrom in /mnt/boot/loader.conf so it points to the correct root dataset:
vfs.root.mountfrom="zfs:newpool/root"
If you decided to keep the new pool name, you're basically done now. Shut down the computer, remove the old drive, and boot up again.
Renaming the pool to its original name
To rename newpool to oldpool, boot from removable media. I prefer mfsBSD for this purpose, but any live FreeBSD image will do.
After booting the image do:
zpool import -o cachefile=/tmp/zpool.cache -R /mnt newpool oldpool
cp /tmp/zpool.cache /mnt/boot/zfs/.
zpool set bootfs=oldpool/root oldpool
reboot
Shrinking a ZFS pool takes quite some effort. Save yourself the hassle by not oversizing your pools in the first place. You can always expand them later using gpart resize and zpool online -e pool device.
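Growing the pool later could look roughly like this sketch (the partition index 3 and the gpt/newdisk label are assumptions based on the layout used above):

```shell
# Grow the freebsd-zfs partition (index 3) to use all remaining free space.
gpart resize -i 3 ada1
# Tell ZFS to expand the vdev onto the newly available space.
zpool online -e newpool gpt/newdisk
```

Note that the pool must have autoexpand enabled or be explicitly told to grow with -e; the partition resize alone does not change the pool size.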