ZFS
- zpool — configure ZFS storage pools
- zpool-status — Display detailed health status for the given ZFS storage pools
- zpool-list — Lists ZFS storage pools along with a health status and space usage
- zpool-create — Creates a new ZFS storage pool
- zpool-destroy — Destroys the given ZFS storage pool, freeing up any devices for other use
- zpool-import — Lists ZFS storage pools available to import or import the specified pools
- zpool-export — Exports the given ZFS storage pools from the system
- zpool-get — Retrieves properties for the specified ZFS storage pool(s)
- zpool-set — Sets the given property on the specified pool
- zpool-resilver — resilver devices in ZFS storage pools
- zpool-scrub — begin or resume scrub of ZFS storage pools
- zpool-clear — clear device errors in ZFS storage pool
- zpool-history — inspect command history of ZFS storage pools
- zpoolprops — properties of ZFS storage pools
- zfs — configures ZFS file systems
- zfsprops — native and user-defined properties of ZFS datasets
- zfs list
- zfs clone
- zfs create/destroy
- zfs-set — set properties on ZFS datasets
- zfs rename
- zfs-snapshot — create snapshots of ZFS datasets
- zfs-rollback — roll ZFS dataset back to snapshot
- zfs send/receive
- zfs-allow — delegate ZFS administration permissions to unprivileged users
- Encryption
- zfs share/unshare
- zfs hold/holds
- zdb — display zpool debugging and consistency information
- zfs-auto-snapshot – take regular ZFS snapshots
- Documentation
- Installation
- Misc
- Random
zpool — configure ZFS storage pools
The zpool command configures ZFS storage pools. A storage pool is a collection of devices that provides physical storage and data replication for ZFS datasets. All datasets within a storage pool share the same space.
Virtual Devices (vdevs): A “virtual device” describes a single device or a collection of devices organized according to certain performance and fault characteristics.
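For example, a pool built from two mirror vdevs (a sketch; the device names are placeholders):
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd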
zpool-status
— Display detailed health status for the given ZFS storage pools
zpool status
(also shows if a scrub is in progress)
zpool-list
— Lists ZFS storage pools along with a health status and space usage
zpool list
zpool list -v
zpool-create
— Creates a new ZFS storage pool
zpool create SAN200 /dev/mapper/mpathl /dev/mapper/mpathk /dev/mapper/mpathm /dev/mapper/mpathn
zpool create -o ashift=12 storage raidz1 /dev/sda /dev/sdb
zpool create -o ashift=12 wdExternPassport /dev/mapper/luks-5295ac32-4ce8-4ce3-9d54-c60d28522031
zpool-destroy
— Destroys the given ZFS storage pool, freeing up any devices for other use
zpool destroy SAN200
zpool-import
— Lists ZFS storage pools available to import or import the specified pools
zpool-export
— Exports the given ZFS storage pools from the system
Rename a zpool:
zpool export [poolname]
zpool import [poolname] [newpoolname]
Sometimes zpool import does not find anything even though there is a ZFS pool right there. List the available disk identifiers:
ls -l /dev/disk/by-id/
Then import explicitly using the right partition:
zpool import -a -d /dev/disk/by-id/wwn-0x5000c500ccdce51c-part1
Just weird…
Cannot export ‘mypool’: pool is busy
Find which processes still have a dataset from the pool mounted:
grep -i mydataset /proc/*/mounts
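A follow-up sketch (pool and dataset names are placeholders): the matching /proc entries point at the PIDs that still hold the mount.
for f in $(grep -li mydataset /proc/[0-9]*/mounts); do
  pid=$(echo "$f" | cut -d/ -f3)
  ps -o pid,comm,args -p "$pid"   # show what is still using the dataset
done
# once those processes are stopped, the export should succeed
zpool export mypool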
zpool-get
— Retrieves properties for the specified ZFS storage pool(s)
zpool-set
— Sets the given property on the specified pool
https://openzfs.github.io/openzfs-docs/man/5/zpool-features.5.html
zpool get all tank-s-p-w
zpool get size tank
zpool get feature@large_dnode tank
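For completeness, a minimal zpool set sketch (the pool name tank is a placeholder):
zpool set autoexpand=on tank
zpool set comment="main storage pool" tank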
zpool-resilver
— resilver devices in ZFS storage pools
zpool resilver SAN300
zpool-scrub — begin or resume scrub of ZFS storage pools
zpool scrub [-s|-p] [-w] pool…
-s | Stop scrubbing. |
-p | Pause scrubbing. Scrub pause state and progress are periodically synced to disk. If the system is restarted or pool is exported during a paused scrub, even after import, scrub will remain paused until it is resumed. Once resumed the scrub will pick up from the place where it was last checkpointed to disk. To resume a paused scrub issue zpool scrub again. |
-w | Wait until scrub has completed before returning. |
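Usage sketch (the pool name tank is a placeholder):
zpool scrub tank      # start (or resume) a scrub
zpool scrub -p tank   # pause it
zpool scrub -s tank   # stop it entirely
zpool scrub -w tank   # start and block until it finishes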
zpool-clear
— clear device errors in ZFS storage pool
zpool clear SAN300
zpool-history — inspect command history of ZFS storage pools
zpool history wwn-0x5000c500ccdce51c | tail -50
zpoolprops — properties of ZFS storage pools
ashift=ashift | Pool sector size exponent, to the power of 2 (internally referred to as ashift). I/O operations will be aligned to the specified size boundaries. The typical case for setting this property is when performance is important and the underlying disks use 4KiB sectors but report 512B sectors to the OS (for compatibility reasons); in that case, set ashift=12 (which is 1<<12 = 4096). |
autoexpand=on|off | Controls automatic pool expansion when the underlying LUN is grown. |
https://openzfs.github.io/openzfs-docs/man/7/zpoolprops.7.html?highlight=zpoolprops
zfs — configures ZFS file systems
zfsprops — native and user-defined properties of ZFS datasets
https://openzfs.github.io/openzfs-docs/man/7/zfsprops.7.html
acltype=off|nfsv4|posix | Controls whether ACLs are enabled and if so what type of ACL to use. |
atime=on|off | Controls whether the access time for files is updated when they are read. |
compression=on|off|gzip|gzip-N|lz4|lzjb|zle|zstd|zstd-N|zstd-fast|zstd-fast-N | Controls the compression algorithm used for this dataset. When set to on (the default), indicates that the current default compression algorithm should be used. |
dnodesize=legacy|auto|1k|2k|4k|8k|16k | Specifies a compatibility mode or literal value for the size of dnodes in the file system. The default value is legacy. Setting this property to a value other than legacy requires the large_dnode pool feature to be enabled. |
recordsize=size | Specifies a suggested block size for files in the file system. This property is designed solely for use with database workloads that access files in fixed-size records. ZFS automatically tunes block sizes according to internal algorithms optimized for typical access patterns. |
relatime=on|off | Controls the manner in which the access time is updated when atime=on is set. |
sharesmb=on|off|opts | Controls whether the file system is shared by using Samba USERSHARES and what options are to be used. |
sharenfs=on|off|opts | Controls whether the file system is shared via NFS, and what options are to be used. A file system with a sharenfs property of off is managed with the exportfs(8) command and entries in the /etc/exports file. |
xattr=on|off|sa | Controls whether extended attributes are enabled for this file system. |
encryption=off|on|aes-128-ccm|aes-192-ccm|aes-256-ccm|aes-128-gcm|aes-192-gcm|aes-256-gcm | Controls the encryption cipher suite (block cipher, key length, and mode) used for this dataset. |
keyformat=raw|hex|passphrase | Controls what format the user’s encryption key will be provided as. This property is only set when the dataset is encrypted. |
keylocation=prompt|file://</absolute/file/path>|https://<address> |http://<address> | Controls where the user’s encryption key will be loaded from by default for commands such as zfs load-key and zfs mount -l. This property is only set for encrypted datasets which are encryption roots. If unspecified, the default is prompt. |
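These properties can also be set at dataset creation time; a sketch, assuming a pool named tank:
zfs create -o compression=lz4 -o atime=off -o xattr=sa -o acltype=posixacl -o recordsize=1M tank/data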
zfs list
-H | Used for scripting mode. Do not print headers and separate fields by a single tab instead of arbitrary white space. |
-o property | A comma-separated list of properties to display. |
-r | Recursively display any children of the dataset on the command line. |
-s property | A property for sorting the output by column in ascending order based on the value of the property. |
-t type | A comma-separated list of types to display, where type is one of filesystem, snapshot, volume, bookmark, or all. |
zfs list
zfs list -t volume
zfs list -t snapshot
zfs list -t snapshot tank/dataset
zfs list -r -t snapshot tank/dataset
zfs list -H -r -t snapshot -o name
zfs clone
zfs clone [-p] [-o property=value]… snapshot filesystem|volume
Creates a clone of the given snapshot. See the Clones section for details. The target dataset can be located anywhere in the ZFS hierarchy, and is created as the same type as the original.
zfs clone tank/dataset@morning tank/dataset_morning
zfs create/destroy
-p | Creates all the non-existing parent datasets. |
-r | Destroy (or mark for deferred deletion) all snapshots with this name in descendent file systems. |
zfs create -p tank/backup/test
zfs destroy -r SAN200/backup
Create a ZVOL
-s | Creates a sparse volume with no reservation. See volsize in the Native Properties section for more information about sparse volumes. |
-V size | Creates a volume (ZVOL) of the given size |
zfs create -s -V 4GB tank/vol
zfs-set — set properties on ZFS datasets
zfs get compression
zfs set compression=lz4 SAN200
zfs set compression=on pool/home/anne
zfs set compression=lz4 pool/home/anne
zfs get all pool/home/bob
zfs get xattr,compression,atime,recordsize,acltype,dnodesize localZFS
zfs get compression,xattr,acltype,dnodesize,atime tank/smbtest
zfs set compression=lz4 xattr=sa acltype=posixacl dnodesize=auto atime=on tank/smbtest
zfs get xattr,compression,atime,relatime,recordsize,acltype,dnodesize localZFS
zfs set xattr=sa compression=lz4 atime=off relatime=on recordsize=1M acltype=posixacl dnodesize=auto localZFS
zfs get mountpoint storage/music
zfs set mountpoint=/home/bob/Music storage/music
inherit
zfs inherit [-rS] property filesystem|volume|snapshot…
Clears the specified property, causing it to be inherited from an ancestor, restored to default if no ancestor has the property set, or with the -S option reverted to the received value if one exists. See zfsprops(7) for a listing of default values, and details on which properties can be inherited.
-r | Recursively inherit the given property for all children. |
-S | Revert the property to the received value, if one exists; otherwise, for non-inheritable properties, to the default; otherwise, operate as if the -S option was not specified. |
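Sketch (dataset name is a placeholder): clear a locally set compression value so it is inherited again:
zfs inherit -r compression tank/dataset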
sharenfs
zfs get sharenfs backup/proxmox/vm
zfs get -t filesystem -r sharenfs backup
zfs set sharenfs="rw=@192.168.0.0/24,rw=@10.0.0.0/24" pool-name/dataset-name
zfs set sharenfs="rw=@192.168.0.0/24,rw=@10.0.0.0/24,anonuid=1001,anongid=1001" pool-name/dataset-name
https://blog.programster.org/sharing-zfs-datasets-via-nfs
sharesmb
zfs get sharesmb tank/smbtest
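Setting it, as a sketch on the same dataset (requires Samba with usershares enabled):
zfs set sharesmb=on tank/smbtest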
zfs rename
-f | Force unmount any filesystems that need to be unmounted in the process. |
-p | Creates all the nonexistent parent datasets. |
-r (snapshots) | Recursively rename the snapshots of all descendant datasets. |
zfs rename -p tank/projects tank/data/projects
zfs-snapshot — create snapshots of ZFS datasets
All previous modifications by successful system calls to the file system are part of the snapshots.
zfs snapshot [-r] [-o property=value]… dataset@snapname…
-o property=value | Set the specified property; see zfs create for details. |
-r | Recursively create snapshots of all descendent datasets |
Create snapshot (must be filesystem@snapname or volume@snapname):
zfs snapshot tank/backup/projects@version1
zfs destroy tank/backup@zfs-auto-snap_frequent-2021-04-23-1900
zfs list -t snapshot -o name | grep ^BACKUP/projects@zfs-auto-snap_frequent | xargs -n 1 zfs destroy
zfs list -r -t snapshot tank
Look into the snapshots
The snapshot contents are available at the root of the dataset, in the hidden .zfs/snapshot directory.
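For example, assuming tank/backup/projects is mounted at /tank/backup/projects:
ls /tank/backup/projects/.zfs/snapshot/
ls /tank/backup/projects/.zfs/snapshot/version1/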
zfs-rollback — roll ZFS dataset back to snapshot
-R | Destroy any more recent snapshots and bookmarks, as well as any clones of those snapshots. |
-f | Used with the -R option to force an unmount of any clone file systems that are to be destroyed. |
-r | Destroy any snapshots and bookmarks more recent than the one specified. |
Remove all modifications since snapshot
zfs rollback tank/backup/test@zfs-auto-snap_removeme-2021-04-30-1637
zfs rollback -r tank/backup/test@zfs-auto-snap_removeme-2021-04-30-1637
zfs send/receive
zfs-send — generate backup stream of ZFS dataset
-I snapshot | Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. |
-R, --replicate | Generate a replication stream package, which will replicate the specified file system, and all descendent file systems, up to the named snapshot. |
-v, --verbose | Print verbose information about the stream package generated. This information includes a per-second report of how much data has been sent. |
-L, --large-block | Generate a stream which may contain blocks larger than 128KB. |
zfs-receive — create snapshot from backup stream
-F | Force a rollback of the file system to the most recent snapshot before performing the receive operation. If receiving an incremental replication stream (for example, one generated by zfs send -R [-i|-I]), destroy snapshots and file systems that do not exist on the sending side. |
-o property=value | Sets the specified property as if the command zfs set property=value was invoked immediately before the receive. |
-v | Print verbose information about the stream and the time required to perform the receive operation. |
zfs send [-DLPRbcehnpvw] [[-I|-i] snapshot] snapshot
zfs send [-LPcenvw] [-i snapshot|bookmark] filesystem|volume|snapshot
zfs receive [-Fhnsuv] [-o origin=snapshot] [-o property=value] [-x property] filesystem|volume|snapshot
zfs receive [-Fhnsuv] [-d|-e] [-o origin=snapshot] [-o property=value] [-x property] filesystem
first time:
zfs send tank/backup/projects@zfs-auto-snap_daily-2021-04-24-0425 | pv -rtab | zfs recv SAN200/backup/projects
after that:
zfs send -R -I @zfs-auto-snap_daily-2021-04-24-0425 tank/backup/projects@zfs-auto-snap_daily-2021-04-25-0425 | pv -rtab | zfs receive -F SAN200/backup/projects
Send snapshot between machines
- dest is a dataset and must exist
- port 8000 is arbitrary but must be the same on both sides
nc -l 8000 | mbuffer -q -m 1G | pv -rtab | zfs receive -vF dest
- @base snapshot must exist on both incoming and destination datasets
- inc is a dataset and @transfer is the snapshot we copy to the dest
- dest_comp_ip is the IP of the receiving computer waiting for transfer
zfs send -R -I @base inc@transfer | mbuffer -q -m 1G | pv -b | nc dest_comp_ip 8000
src: https://www.polyomica.com/improving-transfer-speeds-for-zfs-sendreceive-in-a-local-network/
https://ithero.eu/documentation/networking/netcat/
zfs-allow — delegate ZFS administration permissions to unprivileged users
-d | Allow only for the descendent file systems. |
-l | Allow "locally", only for the specified file system. |
-u user[,user]… | Explicitly specify that permissions are delegated to the user. |
-g group[,group]… | Explicitly specify that permissions are delegated to the group. |
send | Allows sending snapshots (zfs send) |
receive | Must also have the mount and create ability |
hold | Allows adding a user hold to a snapshot |
mount | Allows mounting/umounting ZFS datasets |
snapshot | Must also have the mount ability |
create | Must also have the mount ability |
destroy | Must also have the mount ability |
rollback | Must also have the mount ability |
compression | Allows setting this property |
If neither of the -d or -l options is specified, or both are, then the permissions are allowed for the file system or volume and all of its descendants.
Displays permissions
zfs allow filesystem|volume
Give send permissions
zfs allow -u localuser send,hold,mount,snapshot,destroy rpool
Give receive permissions
zfs allow -u remoteuser compression,mountpoint,create,mount,receive,rollback,destroy tank/backup/rpool
Taken from: https://github.com/jimsalterjrs/sanoid/wiki/Syncoid
Encryption
encryption=off|on|aes-128-ccm|aes-192-ccm|aes-256-ccm|aes-128-gcm|aes-192-gcm|aes-256-gcm | Controls the encryption cipher suite (block cipher, key length, and mode) used for this dataset. |
keyformat=raw|hex|passphrase | Controls what format the user’s encryption key will be provided as. This property is only set when the dataset is encrypted. |
keylocation=prompt|file://</absolute/file/path>|https://<address> |http://<address> | Controls where the user’s encryption key will be loaded from by default for commands such as zfs load-key and zfs mount -l. This property is only set for encrypted datasets which are encryption roots. If unspecified, the default is prompt. |
zfs create -o encryption=on -o keyformat=passphrase mydataset
Enter new passphrase:
Re-enter new passphrase:
zfs get encryption,keyformat,keylocation mydataset
NAME PROPERTY VALUE SOURCE
mydataset encryption aes-256-gcm -
mydataset keyformat passphrase -
mydataset keylocation prompt local
/usr/sbin/syncoid --identifier=usbdrive --recvoptions="x encryption" mydataset usbdrive/mydataset
zfs load-key mydataset
zfs mount -a
zfs unmount mydataset
zfs unload-key mydataset
zpool export myzpool
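A key-file variant, as a sketch (dataset and key path are placeholders; keyformat=raw expects exactly 32 bytes of key material):
dd if=/dev/urandom of=/root/tank-secure.key bs=32 count=1
zfs create -o encryption=on -o keyformat=raw -o keylocation=file:///root/tank-secure.key tank/secure
zfs get encryption,keyformat,keylocation tank/secure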
zfs share/unshare
zfs share -a | filesystem
zfs unshare -a | filesystem|mountpoint
zfs hold/holds
cannot destroy snapshot [snapshot]: dataset is busy
$ zfs holds [snapshot]
NAME TAG TIMESTAMP
[snapshot] .send-2964719-1 Mon Mar 14 15:17 2022
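Once the tag is known, releasing the hold lets the destroy proceed (a sketch using the tag from the output above):
zfs release .send-2964719-1 [snapshot]
zfs destroy [snapshot]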
zdb — display zpool debugging and consistency information
-l device : Read the vdev labels from the specified device (obtained with zpool list -v)
# zdb -l mpathp
-C : Display information about the configuration
# zdb -C tank
-d : Display information about datasets
# zdb -d tank/backup
sudo zdb -d backup | grep "backup/venus/KINGSTON_SA1000M8480G/wine/wow@autosnap_2022" | cut -d ' ' -f 2 | xargs -n 1 sudo zfs destroy
zfs-auto-snapshot – take regular ZFS snapshots
/sbin/zfs-auto-snapshot -q -g --label=hourly --keep=24 tank/backup/projects
Helper script to run after zfs-auto-snapshot (note that the --keep switch takes effect after the --post-snapshot script has run):
#!/bin/sh
# zfs-auto-snapshot passes the dataset as $1 and the snapshot name as $2
#echo "dataset = $1"
#echo "snapname = $2"
# most recent snapshot already present on the destination (keep only the @name part)
latest_synced_snapname=`zfs list -H -o name -S creation -t snapshot SAN200/test | head -1 | sed 's/.*\(@.*\)/\1/'`
# send everything between that snapshot and the one just taken
zfs send -R -I "$latest_synced_snapname" "$1@$2" | zfs receive -F SAN200/test
Documentation
https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/index.html
https://openzfs.github.io/openzfs-docs/man/8/index.html
https://wiki.archlinux.org/title/ZFS
Installation
sudo apt install zfsutils-linux
Misc
zfs tuning: https://jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/
advanced format drives: https://wiki.lustre.org/Optimizing_Performance_of_SSDs_and_Advanced_Format_Drives
official FAQ: https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html
Record size: https://blog.programster.org/zfs-record-size
Root on ZFS: https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubuntu/Ubuntu%2020.04%20Root%20on%20ZFS.html
Cheat sheet: https://www.thegeekdiary.com/solaris-zfs-command-line-reference-cheat-sheet/
System administration: https://openzfs.org/wiki/System_Administration
Random
$ more /etc/cron.hourly/zfs-auto-snapshot
#!/bin/sh
# Only call zfs-auto-snapshot if it's available
which zfs-auto-snapshot > /dev/null || exit 0
exec zfs-auto-snapshot --quiet --syslog --label=hourly --keep=24 --recursive --post-snapshot=/home/technician/zfs/myscript SAN300/projects
#exec zfs-auto-snapshot --quiet --syslog --label=hourly --keep=24 --recursive SAN300/projects
$ more /home/technician/zfs/myscript
#!/bin/sh
#echo "dataset = $1"
#echo "snapname = $2"
# create the destination dataset hierarchy if it does not exist yet
zfs create -p "BACKUP/$1"
# most recent snapshot already present on the destination (keep only the @name part)
latest_synced_snapname=`zfs list -H -o name -S creation -t snapshot "BACKUP/$1" | head -1 | sed 's/.*\(@.*\)/\1/'`
if [ -z "$latest_synced_snapname" ]
then
    # nothing synced yet: send a full stream
    zfs send -R "$1@$2" | zfs receive -F "BACKUP/$1"
else
    # otherwise send only the increments since the last synced snapshot
    zfs send -R -I "$latest_synced_snapname" "$1@$2" | zfs receive -F "BACKUP/$1"
fi
#zfs list -t snapshot -o name | grep "^BACKUP/$1@zfs-auto-snap_frequent" | xargs -n 1 zfs destroy
exit 0