
Author Topic: HOW TO: 2.4.0 ZFS Install, RAM Disk, Hot Spare, Snapshot, Resilver Root Drive


Offline TS_b

Is it possible to restore a config from a UFS-based system to a ZFS-based one?

I'd like to switch to ZFS once 2.4.0 is released, which I know will require a reinstall, but I've been having a hard time finding whether restoring my old config would cause issues or whether it would be better to do a manual config from scratch.  Does anybody have any information on doing that?


To answer your question in the words of the almighty OP  ;)-


EDIT: I don't recommend setting up a second zpool, as it can cause issues with booting. If you want to send snapshots to a separate device, try a UFS filesystem on it. People smarter than myself can probably get around this; if anyone has a solution, please share and I'll add it here!
 To use UFS:
After partitioning the drive follow the instructions here:
https://www.freebsd.org/doc/handbook/disks-adding.html
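If it helps, here is a minimal sketch of creating and mounting the UFS filesystem once the partition exists; the device names (da1/da1p1) and mount point are only placeholders for this example, and the handbook page above has the full details:
Code: [Select]
# placeholder device names; substitute your actual disk/partition
newfs -U /dev/da1p1                 # create a UFS filesystem with soft updates
mkdir -p /mnt/sshot                 # mount point used by the snapshot command below
mount /dev/da1p1 /mnt/sshot
# to have it mounted at boot, add a line like this to /etc/fstab:
# /dev/da1p1   /mnt/sshot   ufs   rw   2   2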

To send your snapshot to a UFS partition, modify this for your pool name and mount point, then copy and paste:
Code: [Select]
zfs snapshot -r yourpoolname@`date "+%d.%b.%y.%H00"` && zfs send -Rv yourpoolname@`date "+%d.%b.%y.%H00"` | gzip > /mnt/sshot/sshot`date "+%d.%b.%y.%H00."`gz && zfs destroy -r yourpoolname@`date "+%d.%b.%y.%H00"` && zfs list -r -t snapshot -o name,creation && du -hs /mnt/sshot/sshot`date "+%d.%b.%y.%H00."`gz
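The same one-liner written out as a small script may be easier to read and adapt; this is just the command above split into steps, with the same placeholder pool name and mount point:
Code: [Select]
#!/bin/sh
set -e                              # stop on the first error, like the && chain above
POOL=yourpoolname                   # placeholder: your pool name
STAMP=`date "+%d.%b.%y.%H00"`

zfs snapshot -r ${POOL}@${STAMP}                                    # recursive snapshot of the pool
zfs send -Rv ${POOL}@${STAMP} | gzip > /mnt/sshot/sshot${STAMP}.gz  # stream it to a compressed file on the UFS partition
zfs destroy -r ${POOL}@${STAMP}                                     # drop the snapshot once it has been sent
zfs list -r -t snapshot -o name,creation                            # confirm no snapshots are left behind
du -hs /mnt/sshot/sshot${STAMP}.gz                                  # show the size of the backup file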

I would imagine that if you can restore a snapshot from UFS to ZFS, then you can restore from the config. The config file is just an .xml file full of your system configuration settings; the underlying FS shouldn't matter.

Offline stilez

If you are smarter than me, I'm betting you could automate this with a script; I would think something running frequently in cron, along the lines of:
Code: [Select]
check if pool is degraded
if no, exit
if yes, check if resilver complete
if no, exit
if yes, detach baddisk

If anyone does write such a script, please share! ;)
Added to feature requests, see https://redmine.pfsense.org/issues/7812
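To get the ball rolling, here is a rough, untested sketch of what that cron job could look like; it just greps `zpool status` output, and the pool and failed-device names are placeholders you would need to fill in (or detect) yourself:
Code: [Select]
#!/bin/sh
# run periodically from cron; does nothing unless the pool is degraded and the resilver has finished
POOL=zroot
BADDISK=ada1p3      # placeholder: the failed device to detach once the hot spare has resilvered

# healthy pool: nothing to do
zpool status ${POOL} | grep -q DEGRADED || exit 0

# resilver still running: wait for a later run
zpool status ${POOL} | grep -q "resilver in progress" && exit 0

# degraded, resilver finished: detach the failed disk
zpool detach ${POOL} ${BADDISK}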

Offline madmaxed

First of all GREAT post.  Thanks pfBasic.

I've been using a 6 disk ZFS raidz2 array on my FreeNAS server for a couple of years.

I just wanted to point out that ZFS can do more than a two-disk mirror; the number of disks in a mirror is technically nearly unlimited.  But for pfSense I think a ZFS three-disk mirror is another option: less setup, fewer disks, and it still offers two-drive failure protection.

Just wanted to throw that out there for home users looking for ZFS with only 3 disks and dual failure redundancy.
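For anyone who wants to try it, the syntax is just the normal mirror command with a third device, and an existing two-way mirror can also be grown by attaching one more disk; the pool and partition names below are only illustrative:
Code: [Select]
# create a pool as a three-way mirror (any two of the three disks can fail)
zpool create -f tank mirror ada0p3 ada1p3 ada2p3

# or attach a third disk to an existing two-way mirror that already contains ada0p3
zpool attach tank ada0p3 ada2p3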

Offline beedix

Appreciate this post. 

I'm using 2.4RC and have a mirrored boot drive setup with ZFS.

I was wanting to partition a new SSD (ada1) with ZFS for general file system use, specifically mounting it at /var/squid/cache.  What are the steps for partitioning the disk with ZFS so that it can be mounted into the existing file system structure?

Offline beedix

I probably should have researched a bit more before asking, but man I love ZFS.  Here is how I set up my new drive.
Code: [Select]
gpart create -s gpt ada1                        # new GPT partition table on the spare SSD
gpart add -b 2048 -t freebsd-zfs -l gpt2 ada1   # single ZFS partition, aligned at 2048 sectors, GPT label "gpt2"
zpool create -f zdata /dev/gpt/gpt2             # single-disk pool on that partition
zfs set checksum=on zdata
zfs set compression=lz4 zdata                   # cheap compression for cache objects
zfs set atime=off zdata                         # no access-time updates on cache reads
zfs set recordsize=64K zdata
zfs set primarycache=metadata zdata             # keep only metadata in ARC; squid does its own object caching
zfs set secondarycache=none zdata
zfs set logbias=latency zdata
zfs create -o mountpoint=/var/squid/cache zdata/cache   # dataset mounted where squid expects its cache dir

chown -R squid:squid /var/squid/cache
chmod -R 0750 /var/squid/cache


There are specific ARC and ZIL caching features which I didn't set up that could be a benefit for squid, but as best I can tell, they wouldn't work out well in my situation.  Here is a link from squid regarding ZFS:
https://wiki.squid-cache.org/SquidFaq/InstallingSquid#Is_it_okay_to_use_ZFS_on_Squid.3F
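For what it's worth, a quick way to confirm the dataset above came up with the intended properties and mount point:
Code: [Select]
zfs get compression,recordsize,primarycache,atime zdata/cache   # properties set above (inherited from zdata)
zfs list zdata/cache                                            # confirms it is mounted at /var/squid/cache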

Offline kevindd992002

I'm using a PC Engines APU2C4 for my pfSense box. I just upgraded to 2.4 and read about ZFS. I'm using a single 16GB SSD and I want to use ZFS. Which of the steps in the OP should I follow? I read through them and they're targeted at systems with multiple flash drives; I'm not really sure which ones apply to a single-disk setup.

Also, can I back up the config file that I have now, reinstall pfSense with ZFS, and just restore that same config file without any adverse effects?

Offline sdf_iain

In short, if you didn't already have a reason to use ECC, then ZFS on pfSense shouldn't change your mind. But if you want to be convinced otherwise just ask the same question on the FreeNAS forums and I'm sure you'll be flamed for acknowledging that such a thing as non-ECC exists.

The point of ECC RAM on a ZFS-based fileserver is simple.  ZFS provides checksumming of all files at rest (i.e. on disk), and ECC provides the same protection for data in motion.  It isn't that a pool could be lost without ECC; it's actually much more sinister.  Data that seems fine, data with valid checksums that passes every scrub, could have been corrupted in memory before it was ever written ("bit rot") and, in extreme cases, be unreadable.  Everything looks fine, but nothing is!

pfSense is in a different boat.  A firewall absolutely shouldn't be storing any critical or irreplaceable data, so 100% corruption prevention isn't necessary.  99% corruption prevention (or whatever the odds work out to given the relatively tiny memory footprint of a firewall) is more than sufficient, and ECC isn't at all necessary (it is nice to have).

TL;DR: Just go download config.xml, enable copies=2, and set up '/sbin/zpool scrub zroot' to run periodically via cron.
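As a concrete example of that last part, a weekly scrub entry along these lines should do; the schedule is arbitrary, and on pfSense the Cron package exposes the same fields in the GUI:
Code: [Select]
# /etc/crontab style: minute hour mday month wday user command
0   3   *   *   0   root   /sbin/zpool scrub zroot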

Offline kevindd992002

Can anybody help me with my question?

Offline sdf_iain

Can anybody help me with my question?

Yes, back up config.xml and reinstall from scratch.  The underlying file system will not affect anything except (possibly) a few system tunables that you probably wouldn't have set.

You should be fine, but as with any change: allow for extra downtime in case things don’t go as planned/expected.
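If you'd rather grab the config from the shell than through Diagnostics > Backup & Restore in the GUI, something like this works too; the destination host is just a placeholder:
Code: [Select]
scp /conf/config.xml you@backup-host:~/pfsense-config-before-zfs.xml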

Offline kevindd992002

Can anybody help me with my question?

Yes, backup config.xml and reinstall from scratch.  The underlying file system will not affect anything except (possibly) a few system tunables that you probably wouldn’t have set.

You should be fine, but as with any change: allow for extra downtime in case things don’t go as planned/expected.

Yes, I get that. But which guide should I follow to set up the ZFS filesystem? The guide here is more for a multi-disk setup.

Offline sdf_iain

I let the installer do everything (it was mostly self-explanatory).  Once everything was installed and it offered me the option to go to a command prompt and make final changes, I did.  I ran this:
Code: [Select]
zfs set copies=2 zroot
That sets the zroot pool's datasets to keep two copies of every file, allowing a regular scrub to not only find corrupted files but also fix them (using the second copy).

Other than that, I installed the Cron package and set it to do a regular (weekly) scrub of zroot.  The pool is so small that the scrub runs quickly.
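And to see how the last scrub went (and whether copies=2 had anything to repair), a plain status check is enough:
Code: [Select]
zpool status -v zroot    # shows the result of the most recent scrub and any errors found or repaired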

Offline kpa

I let the installer do everything (it was mostly self explanatory).  Once everything was installed and it offered me the option to go to a command prompt and make final changes I did.  I ran this:
Code: [Select]
zfs set copies=2 zroot
That sets the default zpool to make two copies of files and allow a regular scrub to not only find corrupted files, but also fix them (using the second copy).

Other than that I installed cron and set it to do a regular (weekly) scrub of zroot.  It's so small that the scrub will run quickly.

Second copies are not made retroactively; only new and changed files get stored with two copies after you set copies=2.

Offline kevindd992002

I let the installer do everything (it was mostly self explanatory).  Once everything was installed and it offered me the option to go to a command prompt and make final changes I did.  I ran this:
Code: [Select]
zfs set copies=2 zroot
That sets the default zpool to make two copies of files and allow a regular scrub to not only find corrupted files, but also fix them (using the second copy).

Other than that I installed cron and set it to do a regular (weekly) scrub of zroot.  It's so small that the scrub will run quickly.

Second copies are not made retroactively, only new files and changed files get stored with two copies after you set copies=2.

But that's basically the whole process of installing with ZFS on a single SSD, correct?

Offline Grimson

Second copies are not made retroactively, only new files and changed files get stored with two copies after you set copies=2.

You can do a:
Code: [Select]
pkg upgrade -f

after setting copies to "2". This is clunky and will still not get all files, but a good chunk of them.

Offline sdf_iain

Second copies are not made retroactively, only new files and changed files get stored with two copies after you set copies=2.

You can do a:
Code: [Select]
pkg upgrade -f

after setting copies to "2". This is clunky and will still not get all files, but a good chunk of them.

kevindd992002, that is the process.

I might be mistaken, but updating the file should cause ZFS to rewrite it.  The fastest/easiest way to update all of the files would be
Code: [Select]
find / -exec touch {} \;
On a fresh install, that should not take long at all.  And before first boot it won't really change any timestamps by much either.  The right answer would be to change the ZFS defaults, but I didn't go that far into the installer.