===================================================
FREEBSD ZFS DISASTER RECOVERY by o1
===================================================

===================================================
overview
---------------------------------------------------
This disaster recovery procedure restores a FreeBSD system onto a new vm or new hardware
to the state of the latest zfs snapshot.

Take snapshots using cron every week, day, hour, etc. and name them accordingly.
Save the snapshots to .capR & .capI files on a storage device, using nfs or your
favorite method to transfer the files.

The disaster recovery process restores the first .capR stream and then applies each
.capI stream up to the latest one.

This example uses snapshots taken every hour of every day, so there will be snapshots
labelled 01_00 through 31_23. The steps are laid out individually for clarity, but you
will probably want to create a cron script to automate the backup process (a sketch of
such a script follows the backup sections below).

The storage device will be remote storage mounted on /export/backup.

NOTE: This procedure can also be adapted to migrate a FreeBSD system from hardware onto
      new hardware or to a vm, or vice versa.
NOTE: This procedure can also be adapted to migrate a zfs file system onto a different
      zpool structure.

===================================================
backup first snapshot to a .capR file
---------------------------------------------------
# at the beginning of the month send the first snapshot to a .capR file upon creation
# NOTE perhaps the .capR can be created on the 31st for a 60 day cycle instead of 30
# zroot@day_hour > zroot_day.capR

zfs snapshot -r zroot@01_00
zfs send -R zroot@01_00 > /export/backup/condo_zroot_01.capR

zfs snapshot -r tank@01_00
zfs send -R tank@01_00 > /export/backup/condo_tank_01.capR

===================================================
backup subsequent snapshots to .capI files
---------------------------------------------------
# every hour send subsequent snapshots to .capI files upon creation
# snapshots on the 31st will not be destroyed for 60 days
# NOTE we will use zfs send from @01_00 thru "now" to include intermediaries in the
#      zroot_day.capI to reduce the amount of zfs recv required later when recovering
#
# zroot@day_hour > zroot_day.capI

zfs snapshot -r zroot@01_01
zfs send -R -I zroot@01_00 zroot@01_01 > /export/backup/condo_zroot_01.capI

zfs snapshot -r tank@01_01
zfs send -R -I tank@01_00 tank@01_01 > /export/backup/condo_tank_01.capI

...

zfs snapshot -r zroot@31_23
zfs send -R -I zroot@31_00 zroot@31_23 > /export/backup/condo_zroot_31.capI

zfs snapshot -r tank@31_23
zfs send -R -I tank@31_00 tank@31_23 > /export/backup/condo_tank_31.capI
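===================================================
example hourly backup script (sketch)
---------------------------------------------------
# a minimal /bin/sh sketch of the cron job mentioned in the overview, assuming
# the pool names (zroot, tank), the "condo" file prefix and the /export/backup
# mount used in the examples above; the hour-00 handling on days other than 01
# and the retention policy are left to match your own scheme

#!/bin/sh
PATH="/sbin:/bin:/usr/sbin:/usr/bin"

PREFIX="condo"
BACKUP="/export/backup"
DD=$(date +%d)          # day of month, 01-31
HH=$(date +%H)          # hour, 00-23

for POOL in zroot tank; do
    # take this hour's recursive snapshot
    zfs snapshot -r "${POOL}@${DD}_${HH}"

    if [ "${DD}" = "01" ] && [ "${HH}" = "00" ]; then
        # start of the cycle: full replication stream to a .capR
        zfs send -R "${POOL}@01_00" > "${BACKUP}/${PREFIX}_${POOL}_01.capR"
    elif [ "${HH}" != "00" ]; then
        # rest of the day: resend the day's incremental stream from @dd_00,
        # including intermediaries, overwriting the day's .capI each hour
        zfs send -R -I "${POOL}@${DD}_00" "${POOL}@${DD}_${HH}" \
            > "${BACKUP}/${PREFIX}_${POOL}_${DD}.capI"
    fi
    # NOTE hour 00 of the other days only takes the snapshot here; how that
    #      snapshot reaches the backup depends on your scheme and is not shown
done

# run it hourly from cron, e.g. in /etc/crontab (the script path is just an example):
# 0 * * * * root /usr/local/sbin/zfs_backup.sh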
===================================================
recovery
---------------------------------------------------
# rebuild a new vm or hardware server

# option 1: install freebsd from the installer, then "zfs destroy -r zroot" and skip to "restore zroot"
            - allows you to use the installer to prepare the disks for zfs
# option 2: boot from the latest freebsd installer cd, select [LIVE CD], log in as root and continue to "prepare zroot"
            - requires you to manually prepare the disks for zfs

===================================================
prepare zroot
---------------------------------------------------
# check disk device names
camcontrol devlist
ls /dev

# create the GPT partition tables
gpart create -s GPT da0
gpart create -s GPT da1
gpart create -s GPT da2

# create efi partition
gpart add -t efi -l efi_s0 -b 40 -a 4K -s 256M da0
gpart add -t efi -l efi_s1 -b 40 -a 4K -s 256M da1
gpart add -t efi -l efi_s2 -b 40 -a 4K -s 256M da2

# create freebsd-boot partition
gpart add -t freebsd-boot -l freebsd-boot_s0 -a 4K -s 512K da0
gpart add -t freebsd-boot -l freebsd-boot_s1 -a 4K -s 512K da1
gpart add -t freebsd-boot -l freebsd-boot_s2 -a 4K -s 512K da2

# create freebsd-swap partition
gpart add -t freebsd-swap -l freebsd-swap_s0 -a 4K -s 4G da0
gpart add -t freebsd-swap -l freebsd-swap_s1 -a 4K -s 4G da1
gpart add -t freebsd-swap -l freebsd-swap_s2 -a 4K -s 4G da2

# create freebsd-zfs partition
gpart add -t freebsd-zfs -l freebsd-zfs_s0 -a 4K da0
gpart add -t freebsd-zfs -l freebsd-zfs_s1 -a 4K da1
gpart add -t freebsd-zfs -l freebsd-zfs_s2 -a 4K da2

# write the protective mbr and gptzfsboot to the freebsd-boot partition (index 2 in the layout above)
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 da0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 da1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 da2

# create zpool for zroot
# NOTE check the freebsd-zfs partition number if you created both freebsd-boot and efi
#      partitions (p4 in the layout above)
mount -t tmpfs tmpfs /mnt
mkdir /mnt/zroot
zpool create \
  -O compression=zstd -O checksum=sha512 -O atime=on \
  -o compatibility=openzfs-2.0-freebsd -o autoexpand=off \
  -o autoreplace=on -o failmode=continue -o listsnaps=off \
  -m none -R /mnt/zroot \
  zroot mirror da0p4 da1p4 da2p4

===================================================
restore zroot
---------------------------------------------------
# option 1: prepare the network to access the .capR/.capI files for zfs recv
ifconfig em1 inet <ip-address> netmask <netmask> up
mkdir /mnt/backup
mount -t nfs <nfs-server>:/export/backup /mnt/backup
ls -laR /mnt/backup

# option 2: use your favorite method to access the .capR/.capI files for zfs recv

# restore zroot from the .capR first and then each .capI up to the most recent one
# therefore, if the most recent .capI is 13...
zfs recv -F -u zroot < /mnt/backup/condo_zroot_01.capR
zfs recv -F -u zroot < /mnt/backup/condo_zroot_01.capI
zfs recv -F -u zroot < /mnt/backup/condo_zroot_02.capI
...
zfs recv -F -u zroot < /mnt/backup/condo_zroot_12.capI
zfs recv -F -u zroot < /mnt/backup/condo_zroot_13.capI
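===================================================
example restore loop (sketch)
---------------------------------------------------
# the recv sequence above can be scripted; a minimal /bin/sh sketch, assuming
# the file names used in this document and that 13 is the most recent .capI
# (set LAST to whatever your newest day is); the same loop restores tank
# further below by changing POOL

POOL="zroot"
PREFIX="condo"
BACKUP="/mnt/backup"
LAST=13

# full replication stream first
zfs recv -F -u "${POOL}" < "${BACKUP}/${PREFIX}_${POOL}_01.capR"

# then each day's incremental stream, in order
DAY=1
while [ "${DAY}" -le "${LAST}" ]; do
    DD=$(printf "%02d" "${DAY}")
    zfs recv -F -u "${POOL}" < "${BACKUP}/${PREFIX}_${POOL}_${DD}.capI"
    DAY=$((DAY + 1))
done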
===================================================
make it bootable
---------------------------------------------------
# set bootfs
zpool set bootfs=zroot/ROOT/default zroot

# if the original server did not use mirrored swap you really should add it later...
# https://www.genunix.com/o1/freebsd_mirror_swap.txt

# if the original server used mirrored swap
# NOTE: check the freebsd-swap partition number if you created both freebsd-boot and efi
#       partitions (p3 in the layout above)
gmirror load
gmirror label -v -b prefer -F swap da0p3 da1p3 da2p3
gmirror status

# restore the efi boot loader if needed
# NOTE: check the efi partition number if you created both freebsd-boot and efi
#       partitions (p1 in the layout above)
newfs_msdos /dev/da0p1
newfs_msdos /dev/da1p1
newfs_msdos /dev/da2p1
mkdir /mnt/uefi_0
mkdir /mnt/uefi_1
mkdir /mnt/uefi_2
mount_msdosfs /dev/da0p1 /mnt/uefi_0
mount_msdosfs /dev/da1p1 /mnt/uefi_1
mount_msdosfs /dev/da2p1 /mnt/uefi_2
mkdir -p /mnt/uefi_0/EFI/BOOT
mkdir -p /mnt/uefi_1/EFI/BOOT
mkdir -p /mnt/uefi_2/EFI/BOOT
zfs mount zroot/ROOT/default
ls /mnt/zroot/boot/loader.efi
cp /mnt/zroot/boot/loader.efi /mnt/uefi_0/EFI/BOOT/BOOTX64.EFI
cp /mnt/zroot/boot/loader.efi /mnt/uefi_1/EFI/BOOT/BOOTX64.EFI
cp /mnt/zroot/boot/loader.efi /mnt/uefi_2/EFI/BOOT/BOOTX64.EFI

===================================================
prepare tank
---------------------------------------------------
# create tank
zpool create \
  -O compression=zstd -O checksum=sha512 -O atime=on \
  -o compatibility=openzfs-2.0-freebsd -o autoexpand=off \
  -o autoreplace=on -o failmode=continue -o listsnaps=off \
  -m none \
  tank raidz1 da3 da4 da5 da6

===================================================
restore tank
---------------------------------------------------
# restore tank from the .capR first and then each .capI up to the latest one
# therefore, if the most recent .capI is 13...
zfs recv -F -u tank < /mnt/backup/condo_tank_01.capR
zfs recv -F -u tank < /mnt/backup/condo_tank_01.capI
zfs recv -F -u tank < /mnt/backup/condo_tank_02.capI
...
zfs recv -F -u tank < /mnt/backup/condo_tank_12.capI
zfs recv -F -u tank < /mnt/backup/condo_tank_13.capI

===================================================
cleanup
---------------------------------------------------
# first reboot
shutdown -r now

# turn zpool compatibility off
zpool set compatibility=off zroot
zpool set compatibility=off tank

# upgrade the zpools
zpool upgrade -a

# final reboot
shutdown -r now

===================================================
:0)
===================================================
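===================================================
post-recovery check (sketch)
---------------------------------------------------
# optional sanity check after the final reboot; these are standard zfs/geom
# commands, not specific to this procedure

zpool status -x                       # expect "all pools are healthy"
zpool get bootfs zroot                # expect zroot/ROOT/default
zpool get compatibility zroot tank    # expect "off" after the cleanup step
zfs list -r zroot tank                # confirm the restored datasets are present
gmirror status                        # if you set up mirrored swap above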