- ( Linux Skill Level: Intermediate to Advanced )
- Linux OS: Ubuntu 14.04 64-Bit // VM Environment: Virtualbox for Linux v5.1
Recently, I had an issue with a P2V VM (physical PC converted to virtual machine) where /home was running out of space and needed to be expanded. My /home in this case was a completely separate virtual disk (sdb) that was 10GB, and I needed to restore a multiple-GB Thunderbird email archive.
I was originally going to expand the virtual drive and partition with Virtualbox tools and gparted, but I decided to do things the ZFS way instead, because: A) it won't be as much of an issue in the future (easier to add more/larger disks if needed); B) I could take advantage of ZFS filesystem compression to save disk space as part of the deal; and C) I get free filesystem snapshots. NOTE: For best results, the VM should be 64-bit and have at least 2GB of RAM if you intend to use ZFS. And as always, more RAM is better for performance – at least 4GB RAM on the VM host is a good start.
You can refer to my previous series of articles if you would like to learn more about ZFS on Linux and you can also read up on datasets here *(1) before proceeding.
*(1) https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/
https://blogs.oracle.com/orasysat/entry/so_what_makes_zfs_so
Goal: Move a /home partition that is running low on space to a separate single-drive ZFS pool with compression – without having to reboot
First of all, in my case I had /home on a separate virtual drive and needed to either expand the disk or add a new one. So while the VM was still powered on, I added a new SATA drive (sdc) that was 20GB in size to my Virtualbox VM:
This is a nice feature of working with VMs – and with Ubuntu Linux 14.04, you don’t need to issue any separate commands to re-read the SCSI bus, it should auto-detect the new drive right away!
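To double-check that the new disk was detected (the device name sdc is from my setup; yours may differ), you can peek at the kernel log or the block device list:
dmesg | tail   ## look for a new "Attached SCSI disk" line for the drive
lsblk          ## the new 20GB disk should appear, with no partitions yet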
If you are using another Linux distro (likely RPM-based) and need to rescan the SCSI bus to detect new disks without rebooting, refer to these: *(2)
*(2) https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/rescan-scsi-bus.html
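( For the impatient: on most distros this boils down to a rescan trigger like the one below – the host number varies per system, so check what exists under /sys/class/scsi_host/ first. )
echo "- - -" > /sys/class/scsi_host/host0/scan   ## rescan all channels/targets/LUNs on host0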
Before going ahead, you need to have ZFS on Linux up and running. Switch from your non-admin user to root and issue:
sudo su -
zpool status
no pools available
That should be what you get if you haven’t defined any ZFS pools yet; if you get an error, try:
modprobe zfs
…and then redo the above zpool command. If you still don’t have a working ‘ zpool status ‘ then you can follow my previous article on how to get ZFS installed on Ubuntu Linux 14.04.
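A quick way to verify the kernel module actually loaded:
lsmod |grep zfs   ## should list zfs along with its companion modules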
We need to partition the new disk; issue ‘ # fdisk -l ‘ as root to list all disks and then issue:
parted -s /dev/sdc mklabel gpt
fdisk -l /dev/sdc
Disk /dev/sdc: 21.5 GB, 21474836480 bytes
256 heads, 63 sectors/track, 2600 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1    41943039    20971519+  ee  GPT
You may need to ‘ # apt-get install parted ‘ if you get a command not found error.
We need to find the virtual HD’s long-form name to define the ZFS pool:
ls -l /dev/disk/by-id
lrwxrwxrwx 1 root root  9 Apr 14 10:34 ata-VBOX_HARDDISK_VB7957d4c1-b4a9da8a -> ../../sda
lrwxrwxrwx 1 root root 10 Apr 14 10:34 ata-VBOX_HARDDISK_VB7957d4c1-b4a9da8a-part1 -> ../../sda1
lrwxrwxrwx 1 root root  9 Apr 14 10:42 ata-VBOX_HARDDISK_VB8a5a6cd1-37a97479 -> ../../sdc
lrwxrwxrwx 1 root root  9 Apr 14 10:34 ata-VBOX_HARDDISK_VBb8b78e91-6b4c1ca2 -> ../../sdb
lrwxrwxrwx 1 root root 10 Apr 14 10:34 ata-VBOX_HARDDISK_VBb8b78e91-6b4c1ca2-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Apr 14 10:34 ata-VBOX_HARDDISK_VBb8b78e91-6b4c1ca2-part2 -> ../../sdb2
I already knew that disk “sda” is my Linux root and “sdb” is /home, so I issued:
zp=zhome
zpool create -o ashift=12 -o autoexpand=on -o autoreplace=on \
 -O atime=off -O compression=lz4 \
 $zp \
 ata-VBOX_HARDDISK_VB8a5a6cd1-37a97479
zpool status |awk 'NF>0'
  pool: zhome
 state: ONLINE
  scan: none requested
config:
        NAME                                     STATE     READ WRITE CKSUM
        zhome                                    ONLINE       0     0     0
          ata-VBOX_HARDDISK_VB8a5a6cd1-37a97479  ONLINE       0     0     0
errors: No known data errors
This is a bit of a PROTIP on how to create a ZFS pool that some of the other articles won’t show you. The lower-case “-o” options set the ZFS pool properties to use 4K sector sizes regardless of physical sector size, and if you add another disk (or set of disks) it will auto-expand and use the extra space.
For a VM, you could probably get away with the default ‘ ashift=9 ‘ for 512-byte sectors, but using ‘ ashift=12 ‘ everywhere matches modern 4K-sector drives in the real world and is good practice. It will also save you a lot of hassle when it comes time to replace a failing disk, or to expand the pool size in-place by using larger disks.
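If you ever want to confirm which ashift a pool was created with, zdb can show you (using my pool name zhome here; if ‘ zdb -C ‘ complains on your system, try plain ‘ zdb |grep ashift ‘):
zdb -C zhome |grep ashift   ## ashift: 12 = 4K sectors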
The upper-case “-O” options set defaults for all current and future datasets in the pool: access-time updates on every READ are disabled for files/directories (equivalent to “noatime” in /etc/fstab), and fast, efficient LZ4 compression is enabled at the entire-pool level. If you need to, you can override these defaults at the dataset level later; otherwise, they will be inherited. In my experience, setting “noatime” saves quite a bit of wear and tear on disks!
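You can verify these pool-wide defaults at any time with ‘ zfs get ‘:
zfs get compression,atime $zp   ## should show lz4 and off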
Now to create our new /home dataset on the single-disk pool (we don’t really need disk Mirroring since this is a VM; if doing this on a physical box, I would definitely recommend allocating at least 1 mirror disk of the same make/model/capacity to guard against disk failure):
zfs create -o sharesmb=off $zp/home
df
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sda1       24122780 12098696  10775676  53% /
/dev/sdb1       10190136  5771220   4297676  58% /home
zhome           20188288        0  20188288   1% /zhome
zhome/home      20188288        0  20188288   1% /zhome/home
Since this is Ubuntu, if you haven’t defined a root password yet you should do so with this before proceeding: (NOTE – these instructions are for home/personal use and I assume you own the PC you’re doing this on – DON’T do this on a shared/production system!!)
sudo passwd root
At this point, since I was logged into X windows and we’re moving /home entirely, I needed to LOGOUT of my X window manager all the way before proceeding, and then verify that nothing was holding on to any files in /home. So in Virtualbox, I hit right-Ctrl+F1 and switched to text console TTY1. If doing this on a physical box, you would hit Ctrl+Alt+F1.
( On tty1, login as root with the root password, and issue: )
lsof |grep home
This will list any open files under /home – you will need to log out all non-root users and/or issue kill commands to make them let go of open files. You can find more info on how to do that here: *(3)
*(3) https://unix.stackexchange.com/questions/18043/how-do-i-kill-all-a-users-processes-using-their-uid
http://www.tecmint.com/how-to-kill-a-process-in-linux/
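As a quick sketch (substitute your actual username for the placeholder ‘youruser’ here):
pkill -u youruser                 ## politely TERMinate all of that user's processes
sleep 5; pkill -KILL -u youruser  ## then force-kill any stragglers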
Once the above ” lsof ” command returned nothing, I migrated my /home to ZFS with Midnight Commander. If you’re familiar with Norton Commander or other 2-pane text-based file managers, then you already know that MC is a Godsend – I use it especially to delete entire directory trees safely without using ‘ rm -f ‘. Verify that MC is installed with:
apt-get update; apt-get install mc
## you only need to do this ONCE
mc /home /$zp/home
## copy everything over, using the Insert key to mark files/directories and the F5 key to copy
( Note the right pane in the screenshot should read ” /zhome/home ” at the top )
You can learn more about Midnight Commander here: *(4)
*(4) https://en.wikipedia.org/wiki/Midnight_Commander
-If you would rather do the file copying programmatically, without using MC:
cd /home
time tar cpf - * |(cd /$zp/home; tar xpf - )
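( Another option I didn’t use here: rsync preserves hard links, ACLs, and extended attributes that a plain tar pipe can miss, and it can safely be re-run to pick up stragglers. Note the trailing slashes: )
rsync -aHAX /home/ /$zp/home/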
df
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sda1       24122780 12098712  10775660  53% /
/dev/sdb1       10190136  5771220   4297676  58% /home
zhome           15614336      128  15614208   1% /zhome
zhome/home      20188288  4574080  15614208  23% /zhome/home
You can see the benefits of ZFS data compression right away there, in the Used column.
Now once all the files are copied over, we don’t need the old /home anymore:
cd /; mv home home--old   ## this should work if your /home is just a directory in "/"
/bin/mv: cannot move ‘home’ to ‘home--old’: Device or resource busy

pstree -A
|-lightdm-+-Xorg
|         |-lightdm-+-fluxbox-+-gnome-terminal-+-bash---su---bash---screen
|         |         |         |                |-bash
|         |         |         |                |-gnome-pty-helpe
|         |         |         |                `-3*[{gnome-terminal}]
|         |         |         `-ssh-agent
|         |         `-{lightdm}
|         `-2*[{lightdm}]
Ah, the X window manager is still running.
service lightdm stop
And then I remembered /home is on a separate drive/partition:
umount /home
I then modified my /etc/fstab to no longer mount the original home (I still have a swap partition on sdb, but I can switch that to a swapfile and remove the disk from the VM later):
# (EXAMPLE fstab line)
LABEL=home  /home  ext4  defaults,noauto,noatime,errors=remount-ro  0  1
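Since I mentioned switching that swap partition to a swapfile, here’s a rough sketch of how that would go on the ext4 root (the size is just an example):
dd if=/dev/zero of=/swapfile bs=1M count=2048   ## 2GB swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
## and add to /etc/fstab:  /swapfile  none  swap  sw  0  0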
Now to set things up permanently and move /home to ZFS:
zfs set mountpoint=/home $zp/home
zfs mount $zp/home
## this proved to be extraneous, but I’m including it for completeness; you don’t have to enter it in.
df
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sda1       24122780 12098712  10775660  53% /
zhome           20188288      128  15614208   1% /zhome
zhome/home      20188288  4574080  15614208  23% /home
For comparison – original ext4 home before ZFS:
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sdb1       10190136 5771220   4297676  58% /home
Home after migrating the original data to the compressed ZFS pool/dataset, before Thunderbird restore:
zhome/home 20188288 4574080 15614208 23% /home
Home pool AFTER the Thunderbird restore, still coming in well under my original 10GB limit:
zhome/home 20187776 8912256 11275520 45% /home
Since my needs for this VM are light, I didn’t bother creating separate ZFS datasets for my individual users first, but you can certainly do that before copying data over. It will give you more fine-grained control over snapshots, and you can do per-dataset options like file sharing over Samba or NFS.
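If you do go the per-user route, the idea looks like this (hypothetical usernames – and do this BEFORE copying the data over):
zfs create $zp/home/alice
zfs create $zp/home/bob
zfs list -r $zp   ## the children inherit compression/atime from the pool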
Five Cool things you can do with a ZFS-based /home now:
1) Replace existing 20GB disk in-place with 30GB (or arbitrarily larger size) and remove the old 20GB disk from the pool in TWO STEPS:
Add the new HD to the VM
ls -l /dev/disk/by-id ## to find the new device long-form name; or use ' fdisk -l '
time zpool replace $zp \
 ata-VBOX_HARDDISK_VB8a5a6cd1-37a97479 \
 ata-VBOX_HARDDISK_VB8afce7ed-7f6e6a7c

real    0m16.749s
zpool status
  pool: zhome
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Apr 14 15:30:35 2017
        286M scanned out of 8.56G at 17.9M/s, 0h7m to go
        286M resilvered, 3.27% done
config:
        NAME                                       STATE     READ WRITE CKSUM
        zhome                                      ONLINE       0     0     0
          replacing-0                              ONLINE       0     0     0
            ata-VBOX_HARDDISK_VB8a5a6cd1-37a97479  ONLINE       0     0     0
            ata-VBOX_HARDDISK_VB8afce7ed-7f6e6a7c  ONLINE       0     0     0  (resilvering)
errors: No known data errors
Before:
Filesystem     1K-blocks    Used Available Use% Mounted on
zhome/home      20187520 8975488  11212032  45% /home
After: (Once resilvering is complete; you can keep checking every minute or so with ‘ zpool status ‘)
  pool: zhome
 state: ONLINE
  scan: resilvered 8.56G in 0h8m with 0 errors on Fri Apr 14 15:38:41 2017
config:
        NAME                                     STATE     READ WRITE CKSUM
        zhome                                    ONLINE       0     0     0
          ata-VBOX_HARDDISK_VB8afce7ed-7f6e6a7c  ONLINE       0     0     0
errors: No known data errors
df
Filesystem     1K-blocks    Used Available Use% Mounted on
zhome           21370112       0  21370112   0% /zhome
zhome/home      30345600 8975488  21370112  30% /home
zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
zhome       8.56G  20.4G    96K  /zhome
zhome/home  8.56G  20.4G  8.56G  /home
Note that this increase in space has been done in real-time, with no “downtime”, and the new free space is available immediately without rebooting or having to resize disk partitions. 🙂
2) Add an additional 30GB disk and ‘ zpool add ‘ the extra space (NOT a mirror disk) to double the space pretty much instantly. NOTE you probably wouldn’t want to run this way on a physical box because it provides no Redundancy if any disk fails, but it’s OK in a VM (especially since I’m using mirrored ZFS disks for /home on the “host” side for Virtualbox):
( Example )
parted -s /dev/sde mklabel gpt ## on the NEW disk
zpool add -o ashift=12 $zp /dev/sde
You would of course want to use the “long-form” disk name in /dev/disk/by-id or /dev/disk/by-path, and MAKE SURE you’ve got the right disk – or risk data loss!
( As an example, I took a Virtualbox snapshot of the VM before doing this – creating an empty 1GB file and adding it to the pool for extra free space ):
cd /; time (dd if=/dev/zero of=zdisk2 bs=1M count=1024;sync)
1073741824 bytes (1.1 GB) copied, 38.5534 s, 27.9 MB/s

real    0m53.252s

zpool add -o ashift=12 $zp /zdisk2
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses disk and new vdev is file
zpool add -f -o ashift=12 zhome /zdisk2
zpool status
  pool: zhome
 state: ONLINE
  scan: resilvered 8.56G in 0h8m with 0 errors on Fri Apr 14 15:38:41 2017
config:
        NAME                                     STATE     READ WRITE CKSUM
        zhome                                    ONLINE       0     0     0
          ata-VBOX_HARDDISK_VB8afce7ed-7f6e6a7c  ONLINE       0     0     0
          /zdisk2                                ONLINE       0     0     0
errors: No known data errors
zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
zhome       8.56G  21.3G    96K  /zhome
zhome/home  8.56G  21.3G  8.56G  /home
You can see the free space has increased. I reverted to the “before” VM snapshot because well, that configuration’s just not stable. 😉 It might not even survive a reboot; I’m just showing what’s possible.
Use the flexibility and capabilities of ZFS with solemn responsibility, and know the difference between ‘ zpool add ‘ and ‘ zpool attach ‘! If you issue the wrong command, you may have to backup your pool and recreate it to get it configured the way you intended!
3) Virtually Instant snapshot + rollback (restore) of ZFS datasets
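For example, protecting /home before a risky change is a one-liner each way (the snapshot name is arbitrary):
zfs snapshot $zp/home@before-upgrade   ## near-instant, consumes no space up front
zfs list -t snapshot
zfs rollback $zp/home@before-upgrade   ## return /home to exactly that point in time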
4) Set disk Quotas on a per-dataset basis to avoid overfilling the pool / runaway user data
http://stackoverflow.com/questions/18143081/how-to-set-default-user-quota-in-zfs-filesystem
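As a quick sketch (hypothetical dataset and username), a hard cap on a whole dataset or a per-user cap within one looks like:
zfs set quota=5G $zp/home/alice
zfs set userquota@youruser=2G $zp/home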
This comes in handy if, say, you want to set aside some space for burning Blu-Ray backups:
zfs create -o compression=off -o atime=off \
 -o mountpoint=/mnt/bluraytemp25 \
 -o quota=23.5G $zp/bluraytemp
chown yournonrootuser /mnt/bluraytemp25
cd /mnt/bluraytemp25 && truncate -s 23652352K bdiscimage.udf ## Quickly create the UDF image file of size ~23GB
mkudffs --vid="YOURLABEL20170414" bdiscimage.udf ## Format the file with a descriptive label, as UDF
mkdir /mnt/bluray-ondisk ## Only once!
mount -t udf -o loop,noatime /mnt/bluraytemp25/bdiscimage.udf /mnt/bluray-ondisk
df
/dev/loop0      24413000    1500  24411500   1% /mnt/bluray-ondisk
Now you can copy files over to /mnt/bluray-ondisk and not have to worry about over-burning.
MAKE SURE TO UMOUNT BLURAY-ONDISK BEFORE BURNING THE IMAGE TO DISC!!
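( The actual burn – assuming your burner shows up as /dev/sr0 and you have growisofs from the dvd+rw-tools package – would look something like this: )
umount /mnt/bluray-ondisk
growisofs -Z /dev/sr0=/mnt/bluraytemp25/bdiscimage.udf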
5) Have separate uncompressed / Samba-shared datasets without having to set up a full Active Directory environment:
zfs create -o compression=off -o sharesmb=on $zp/nocomprshare
chown youruser /$zp/nocomprshare
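On ZFS-on-Linux, ‘ sharesmb=on ‘ works through Samba usershares, so assuming Samba is installed and usershares are enabled, you can sanity-check the result with:
net usershare list           ## the new share should be listed here
smbclient -L localhost -N    ## or browse the shares anonymously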
I hope this article has been useful in showing the possibilities of migrating something like /home to ZFS. The filesystem’s capabilities, and the easier disk management and expansion it allows, are pretty much incredible – but I highly recommend doing some research first, because you do need to know what you’re doing to leverage that potential the right way.