Differences between revisions 1 and 2
Revision 1 from 2017-10-13 08:25:40
Size: 2857
Author: anonymous
Comment:
Revision 2 from 2017-10-13 08:33:10
Size: 2953
Author: anonymous
Comment:

ZFS On Debian 9

  • add "contrib" to stretch sources in /etc/apt/sources.list, then:

aptitude install linux-headers-$(uname -r) zfs-dkms zfsutils-linux libkmod-dev
modprobe zfs
aptitude install gdisk
sgdisk --zap-all /dev/disk/by-id/...    ## remove existing partition table
sgdisk -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/...     ## create GPT entry (EF02 = BIOS boot partition)
## never use /dev/sd* for pool creation; use /dev/disk/by-id/* instead
zpool create -o ashift=12 pool1 mirror -R /mnt/zfs /dev/disk/by-id/... /dev/disk/by-id/...
zpool status
zpool list
zfs set compression=lz4 pool1    ## enable compression for this pool
zfs get compressratio pool1
zfs create pool1/dataset1    ## create new filesystem in pool which is mounted on the fly
  • The use of ashift=12 is recommended here because many drives today have 4KiB (or larger) physical sectors, even though they present 512B logical sectors. Also, a future replacement drive may have 4KiB physical sectors (in which case ashift=12 is desirable) or 4KiB logical sectors (in which case ashift=12 is required).
  • "mirror" is for raid1
  • -R /mnt/zfs will automount it there, control via /etc/default/zfs
  • qemu AIO should be used to maximize IOPS when using files for guest storage.
  • https://wiki.ubuntuusers.de/ZFS_on_Linux/
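Since ashift is the base-2 logarithm of the block size, ashift=12 means 2^12 = 4096-byte blocks. A quick sanity check before pool creation, to see what sector sizes your drives actually report (device names are whatever lsblk finds on your system):

```shell
# ashift=12 -> 2^12 = 4096-byte blocks
echo $((1 << 12))    # 4096
# compare a drive's physical vs. logical sector size (no root needed)
lsblk -o NAME,PHY-SEC,LOG-SEC
```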

Using Snapshots

  • use snapshots often and regularly

zfs set snapdir=visible pool1/dataset1
zfs snap pool1/dataset1@first-snapshot ## create first snapshot
zfs snap pool1/dataset1@second-snapshot ## create second snapshot
zfs rollback pool1/dataset1@first-snapshot ## roll back to first snapshot
zfs destroy pool1/dataset1@second-snapshot ## delete second snapshot
zfs clone pool1/dataset1@first-snapshot pool1/firstclone ## create clone
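Descriptive names like "first-snapshot" get ambiguous quickly; a common convention (an assumption here, not something the page mandates) is to date-stamp snapshot names so rollback targets stay unambiguous:

```shell
# build a date-stamped snapshot name; the scheme is just a convention, not required by ZFS
snapname="pool1/dataset1@$(date +%F)"   # e.g. pool1/dataset1@2017-10-13
echo "$snapname"
# zfs snap "$snapname"                  # creating it needs a real pool (not run here)
```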
  • automatic snapshots via cronjob

aptitude install zfs-auto-snapshot
zfs-auto-snapshot --quiet --syslog --label=daily --keep=31 pool1/dataset1 ## make daily snapshots, keep for 31 days
zfs-auto-snapshot --quiet --syslog --label=monthly --keep=12 pool1/dataset1
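On Debian the zfs-auto-snapshot package normally installs its own entries under /etc/cron.hourly, /etc/cron.daily and friends; if you would rather run the two calls above from an explicit cron file, a sketch of an /etc/cron.d entry (file name, binary path and times are assumptions — verify the path with command -v zfs-auto-snapshot):

```
# /etc/cron.d/zfs-snapshots -- path and schedule are examples, not shipped by the package
0 2 * * *   root /usr/sbin/zfs-auto-snapshot --quiet --syslog --label=daily --keep=31 pool1/dataset1
0 3 1 * *   root /usr/sbin/zfs-auto-snapshot --quiet --syslog --label=monthly --keep=12 pool1/dataset1
```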

Transfer ZFS Snapshot Over Network

zfs send mypool/testarea@first-snapshot | gzip > /mnt/backup/snapshot.img.gz ## save snapshot as image
zfs send mypool/testarea@first-snapshot | ssh host "zfs receive remotepool/blabla" ## send snapshot to remote host
zfs send -p -R ... ## transmit settings like compression
zfs send -i mypool/testarea@first-snapshot mypool/testarea@second-snapshot ## send just the incremental changes between the two snapshots
## speedup receive
zfs send -i mypool/testarea@first-snapshot mypool/testarea@second-snapshot | ssh host "mbuffer -s 128k -m 1G | zfs receive -F tank/pool"
## speedup send & receive:
# Start the receiver first. This listens on port 9090, has a 1GB buffer,
# and uses 128kb chunks (same as zfs):
mbuffer -s 128k -m 1G -I 9090 | zfs receive data/filesystem
# Now we send the data, also sending it through mbuffer:
zfs send -i data/filesystem@1 data/filesystem@2 | mbuffer -s 128k -m 1G -O 10.0.0.1:9090
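The gzip image saved above can be restored by reversing the pipe (pool1/restored is a hypothetical target dataset). Since the image is an ordinary gzip stream, its integrity can be checked on any machine, without ZFS:

```shell
# restoring the saved image would be the reverse pipe (needs ZFS and the image):
# gunzip -c /mnt/backup/snapshot.img.gz | zfs receive pool1/restored
# the image is a plain gzip stream, so gzip -t verifies it anywhere:
printf 'zfs-stream-placeholder' | gzip > /tmp/snapshot-test.gz
gzip -t /tmp/snapshot-test.gz && echo OK    # prints OK
```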

Creative Commons License
This page is licensed under a Creative Commons Attribution-ShareAlike 2.5 License.