FS-ZFS
Working with ZFS
- Requires NetBSD 9 or later
- Ideally 1GB RAM per 1TB storage
- If running as a Xen DomU, the kernel module must be built with
  src/sys/modules/zfs/Makefile.zfsmod
  containing (NetBSD 9 only, not needed on NetBSD 10):
  CPPFLAGS+= -DMAXPHYS=32768
- A zpool is an overarching data area. It can be subdivided into datasets, which share the pool's space (i.e. no size needs to be given, but watch out when copying between datasets: each dataset is a separate filesystem, so moving files between them copies the data rather than simply renaming it)
- Make sure you have run
  cd /dev; sh MAKEDEV all
  on a machine that has been updated: when using full disks, ZFS wants you to specify device nodes without a partition letter on the end (i.e. xbd1, not xbd1d). You may need to create nodes for extra devices (e.g. xbd4 to xbd7)
Docs
- https://www.ixsystems.com/documentation/freenas/11.2/zfsprimer.html
- https://docs.freebsd.org/en/books/handbook/zfs/
Getting started
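Before the first pool is created, ZFS needs to be enabled so the kernel module is loaded at boot; the same steps appear in the full command list at the end of this page:

# echo zfs=YES > /etc/rc.conf.d/zfs
# /etc/rc.d/zfs start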
Create pool (single disk or concatenated)
# zpool create tank xbd1
[ 237.7301090] WARNING: ZFS on NetBSD is under development
[ 237.7700987] ZFS filesystem version: 5
[ 237.9601018] xbd1: WARNING: cache flush not supported by backend
# df -t zfs
Filesystem   1K-blocks      Used      Avail %Cap Mounted on
tank         101072846        23  101072823   0% /tank
# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          xbd1      ONLINE       0     0     0

errors: No known data errors
Specify extra disks to concatenate without any mirroring.
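For example, listing several devices on the command line creates one large concatenated pool (the extra device names here are only illustrative):

# zpool create tank xbd1 xbd2 xbd3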
Create pool (RAID 10)
# zpool create tank mirror wd0 wd1 mirror wd2 wd3
[ 237.7700987] ZFS filesystem version: 5
# zpool status
  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            wd0     ONLINE       0     0     0
            wd1     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            wd2     ONLINE       0     0     0
            wd3     ONLINE       0     0     0

errors: No known data errors
Create dataset
# zfs create tank/shares
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
tank          103K  96.4G    23K  /tank
tank/shares    23K  96.4G    23K  /tank/shares
Or with compression:
# zfs create -o compression=lz4 tank/shares
Mount a dataset at a given location
# zfs create -o mountpoint=/usr/shares tank/shares
# df -t zfs
Filesystem     1K-blocks     Used        Avail %Cap Mounted on
tank         15082716844  1526040  15081190804   0% /tank
tank/shares  15081190892       88  15081190804   0% /usr/shares
Or with compression:
# zfs create -o mountpoint=/usr/shares -o compression=lz4 tank/shares
Turn on compression and test
gzip:
# zfs set compression=gzip tank/shares
# dd if=/dev/zero of=/tank/shares/zero bs=1m count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 0.401 secs (261490274 bytes/sec)
# ls -l /tank/shares/zero
-rw-r--r--  1 root  wheel  104857600 Feb 12 10:45 /tank/shares/zero
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
tank          103K  96.4G    23K  /tank
tank/shares    23K  96.4G    23K  /tank/shares
# dd if=/dev/zero of=/tank/zero bs=1m count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 0.261 secs (401753256 bytes/sec)
# df -t zfs
Filesystem   1K-blocks    Used      Avail %Cap Mounted on
tank         101072811  102435  100970376   0% /tank
tank/shares  100970399      23  100970376   0% /tank/shares
lz4 is preferred to gzip for performance reasons, but gzip would offer higher compression ratios at the expense of speed:
# zfs set compression=lz4 tank/compressed
# dd if=/dev/zero of=/tank/compressed/zero bs=1m count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 1.761 secs (595443498 bytes/sec)
# zfs get used,compressratio,compression,logicalused tank/compressed
NAME             PROPERTY       VALUE  SOURCE
tank/compressed  used           88K    -
tank/compressed  compressratio  1.00x  -
tank/compressed  compression    lz4    local
tank/compressed  logicalused    36.5K  -
Viewing compression state:
# zfs get compression tank/backup
NAME         PROPERTY     VALUE  SOURCE
tank/backup  compression  lz4    local
# zfs get compressratio tank/backup
NAME         PROPERTY       VALUE  SOURCE
tank/backup  compressratio  1.19x  -
Unmount and remount datasets
# zfs umount tank/shares
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
tank          100M  96.3G   100M  /tank
tank/shares    23K  96.3G    23K  /tank/shares
# ls /tank/shares/
# df -t zfs
Filesystem   1K-blocks    Used      Avail %Cap Mounted on
tank         101072811  102435  100970376   0% /tank
# zfs mount tank/shares
# df -t zfs
Filesystem   1K-blocks    Used      Avail %Cap Mounted on
tank         101072811  102435  100970376   0% /tank
tank/shares  100970399      23  100970376   0% /tank/shares
Alter mount point of pool or dataset
# mount -t zfs
tank on /tank type zfs (local)
tank/shares on /data/shares type zfs (local)
# zfs set mountpoint=/data tank
# zfs set mountpoint=/usr/shares tank/shares
# mount -t zfs
tank on /data type zfs (local)
tank/shares on /usr/shares type zfs (local)
Export and import
N.B. the whole-device nodes must exist, so run cd /dev; sh MAKEDEV all first
# zpool export tank
# zpool import tank
#
Resize when underlying devices are resized
You need to specify the device that has been resized, and the name must match the device path used in the pool configuration exactly.
# zpool set autoexpand=on tank
# zpool list
NAME   SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
tank   99.5G  164K   99.5G  20G       0%    0%   1.00x  ONLINE  -
# zpool status tank
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          xbd1      ONLINE       0     0     0

errors: No known data errors
# zpool online -e tank xbd1
# zpool list
NAME   SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
tank   120G   160K   119G   -         0%    0%   1.00x  ONLINE  -
Add additional disk to expand capacity
# df -h /tank
tank  1.9T  23K  1.9T  0%  /tank
# zpool status tank
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          xbd8      ONLINE       0     0     0

errors: No known data errors
# zpool add tank xbd9
# zpool status tank
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          xbd8      ONLINE       0     0     0
          xbd9      ONLINE       0     0     0

errors: No known data errors
# df -h /tank
Filesystem  Size  Used  Avail  %Cap  Mounted on
tank        3.0T   23K   3.0T    0%  /tank
See audit trail
# zpool history
History for 'tank':
2021-02-12.11:34:30 zpool create tank /dev/xbd1
2021-02-12.11:48:31 zpool export tank
2021-02-12.11:49:03 zpool import tank
2021-02-12.11:59:43 zpool set autoexpand=on tank
2021-02-12.12:25:55 zpool online -e tank xbd1
Convert single disk into a mirrored pair (RAID1)
# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          xbd1      ONLINE       0     0     0

errors: No known data errors
# zpool attach tank xbd1 xbd2
# zpool status
  pool: tank
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Feb 18 10:00:42 2021
        3.90G scanned out of 2.30T at 143M/s, 4h41m to go
        3.90G resilvered, 0.17% done
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            xbd1    ONLINE       0     0     0
            xbd2    ONLINE       0     0     0  (resilvering)

errors: No known data errors
Scrubbing
# zpool scrub tank
# zpool status
  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Fri Feb 12 12:59:15 2021
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            xbd1    ONLINE       0     0     0
            xbd2    ONLINE       0     0     0

errors: No known data errors
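Scrubs are not scheduled automatically. One way to run them regularly is a root crontab entry (added with crontab -e); the path and weekly schedule below are only examples:

0 3 * * 0 /sbin/zpool scrub tank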
Dealing with a failed component of a mirror and taking components offline
# zpool offline tank xbd2
# zpool status
  pool: tank
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 81.5K in 0h0m with 0 errors on Fri Feb 12 16:40:27 2021
config:

        NAME                      STATE     READ WRITE CKSUM
        tank                      DEGRADED     0     0     0
          mirror-0                DEGRADED     0     0     0
            xbd1                  ONLINE       0     0     0
            10172050788259219391  OFFLINE      0     0     0  was /dev/xbd2

errors: No known data errors
# zpool online tank xbd2
# zpool status
  pool: tank
 state: ONLINE
  scan: resilvered 4.50K in 0h0m with 0 errors on Fri Feb 12 16:44:22 2021
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            xbd1    ONLINE       0     0     0
            xbd2    ONLINE       0     0     0

errors: No known data errors
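If the component has actually failed and is being swapped for a new disk rather than brought back online, zpool replace resilvers onto the replacement device (xbd3 below is a hypothetical new disk):

# zpool replace tank xbd2 xbd3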
Working with hot spares
Add a spare:
# zpool add tank spare xbd3
# zpool status
  pool: tank
 state: ONLINE
  scan: resilvered 4.50K in 0h0m with 0 errors on Fri Feb 12 16:44:22 2021
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            xbd1    ONLINE       0     0     0
            xbd2    ONLINE       0     0     0
        spares
          xbd3      AVAIL

errors: No known data errors
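A hot spare that is no longer wanted can be removed from the pool again, assuming it is not currently in use:

# zpool remove tank xbd3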
Dealing with underlying devices changing
The underlying device names are stored in the zpool configuration. This can cause problems if device names change, as the zpool will fail to configure. A way around this is to create a directory named after the zpool in /etc/zfs containing symlinks to the devices, e.g.:
backup 1# ls -l /etc/zfs/tank
total 0
lrwxr-xr-x  1 root  wheel  9 Oct 27 11:46 xbd1 -> /dev/xbd1
lrwxr-xr-x  1 root  wheel  9 Oct 27 11:46 xbd2 -> /dev/xbd2
lrwxr-xr-x  1 root  wheel  9 Oct 27 11:46 xbd3 -> /dev/xbd3
lrwxr-xr-x  1 root  wheel  9 Oct 27 11:46 xbd4 -> /dev/xbd4
lrwxr-xr-x  1 root  wheel  9 Oct 27 11:46 xbd5 -> /dev/xbd5
You can then use
zpool import -d /etc/zfs/tank tank
to import the pool, and those device nodes will be used:
backup 2# zpool status tank
  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 8h36m with 0 errors on Mon May 24 20:16:09 2021
config:

        NAME                  STATE     READ WRITE CKSUM
        tank                  ONLINE       0     0     0
        /etc/zfs/tank/xbd1    ONLINE       0     0     0
        /etc/zfs/tank/xbd2    ONLINE       0     0     0
        /etc/zfs/tank/xbd3    ONLINE       0     0     0
        /etc/zfs/tank/xbd4    ONLINE       0     0     0
        /etc/zfs/tank/xbd5    ONLINE       0     0     0

errors: No known data errors
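To set this up from scratch, create the directory and symlinks before importing; the same steps appear in the full command list below:

# mkdir /etc/zfs/tank
# ln -s /dev/xbd1 /etc/zfs/tank
# ln -s /dev/xbd2 /etc/zfs/tank
# ln -s /dev/xbd3 /etc/zfs/tank
# ln -s /dev/xbd4 /etc/zfs/tank
# ln -s /dev/xbd5 /etc/zfs/tank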
NetManager backup server (virtual)
Full list of commands for a 5-disk set:
echo zfs=YES > /etc/rc.conf.d/zfs
mkdir /etc/zfs/tank
ln -s /dev/xbd1 /etc/zfs/tank
ln -s /dev/xbd2 /etc/zfs/tank
ln -s /dev/xbd3 /etc/zfs/tank
ln -s /dev/xbd4 /etc/zfs/tank
ln -s /dev/xbd5 /etc/zfs/tank
zpool import -d /etc/zfs/tank
/etc/rc.d/zfs start
cd /dev
sh MAKEDEV xbd4 xbd5 xbd6 xbd7
zpool create tank xbd1 xbd2 xbd3 xbd4 xbd5
zpool export tank
zpool import -d /etc/zfs/tank tank
zpool status tank
mkdir /data
zfs set mountpoint=/data tank
zfs create tank/backup
zfs set compression=lz4 tank/backup
zfs set mountpoint=/data/backup tank/backup