
Author Topic: Pfsense 2.4 ZFS File System  (Read 10440 times)


Offline chrcoluk

  • Sr. Member
  • ****
  • Posts: 407
  • Karma: +22/-50
Re: Pfsense 2.4 ZFS File System
« Reply #15 on: January 31, 2017, 04:02:44 pm »
The size of the flash storage is also important for durability, and 4 gig of it is a very small amount.

e.g. my pfSense unit is doing 5+ gig of writes a day to my SSD (yes, I am going to investigate why it's so high); if I was using those USB sticks, that would be a full erase cycle every single day.

Also, do USB devices have robust wear-levelling tech? That requires a decent controller. If you're willing to use USB ports for the primary storage, then it's probably better to get a couple of SSDs and connect them via a USB-to-SATA adaptor.

I also concur on the memory usage for ZFS.

On my pfSense unit the ZFS ARC is only using 438 MB of RAM.
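
For reference, a minimal way to check (and optionally cap) the ARC from the shell on FreeBSD; the 512M value below is just an example, not a recommendation:
Code:
# Show the current ARC size in bytes:
sysctl kstat.zfs.misc.arcstats.size
# Optionally cap the ARC at boot via /boot/loader.conf:
echo 'vfs.zfs.arc_max="512M"' >> /boot/loader.conf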
« Last Edit: January 31, 2017, 04:30:59 pm by chrcoluk »
pfSense 2.4
Qotom Q355G4 or Braswell N3150 with Jetway mini pcie 2x intel i350 lan - 4 gig Kingston 1333 C11 DDR3L
 - 60 gig kingston ssdnow ssd - ISP Sky UK

Offline pfBasic

  • Hero Member
  • *****
  • Posts: 1021
  • Karma: +139/-22
Re: Pfsense 2.4 ZFS File System
« Reply #16 on: January 31, 2017, 04:45:47 pm »
Quote from: chrcoluk on January 31, 2017, 04:02:44 pm
The size of the flash storage is also important for durability, and 4 gig of it is a very small amount.

e.g. my pfSense unit is doing 5+ gig of writes a day to my SSD (yes, I am going to investigate why it's so high); if I was using those USB sticks, that would be a full erase cycle every single day.

Also, do USB devices have robust wear-levelling tech? That requires a decent controller.

Well, if you were to use SLC you'd have somewhere in the neighborhood of 30,000 program/erase cycles, compared to about 500 for TLC, which is what a normal USB drive probably uses. https://media.kingston.com/pdfs/MKF_283.1_Flash_Memory_Guide_EN.pdf (CTRL+F "SLC")

So if those numbers are to be believed, one SLC drive will last as long as 60 TLC drives, while an SLC drive costs ~12x more than a TLC drive (at $60/4GB SLC vs. $5/4GB TLC). Obviously these numbers are ballpark and you can pay a lot more or a lot less for either option, but you get the point. There could be a case to be made for SLC drives, but probably not for many people.
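
To make that endurance math concrete, a back-of-the-envelope sketch using the 5 GB/day figure from above on a 4 GB stick (cycle counts assumed from the Kingston guide, ballpark only):
Code:
# Estimated lifetime in days = cycles * capacity_GB / daily_writes_GB
echo "TLC (~500 cycles):   $((500 * 4 / 5)) days, roughly 1 year"
echo "SLC (~30000 cycles): $((30000 * 4 / 5)) days, roughly 65 years"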

Your average person would probably get years of use, and way more capacity, out of 2 or 4 16GB SanDisk Cruzer Fits.
https://www.amazon.com/dp/B005FYNSZA/?tag=ozlp-20
At 2 for $18 or 4 for $36, set up in mirrors, you have either 16 or 32GB of storage with either 1 or 2 redundant drives.

Writes will be very slow at 0.475 or 0.2375 MB/s for 4k writes and about 5x as fast for sequential.
Reads will be way better at 9.14 or 18.28 MB/s for 4k reads and about 4.8x faster for sequential.
http://usb.userbenchmark.com/SpeedTest/2402/SanDisk-Cruzer-Fit

These numbers are based on a slow USB 2.0 drive; obviously, if you get better drives you'll get better performance.
Mirrors will write at 50% of aggregate performance with 2 drives, 25% with 4.
Reads will be at 200% with 2 drives and 400% with 4. In theory, at least.
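
For what it's worth, a minimal sketch of how such mirror pools could be created by hand on FreeBSD (the pfSense installer normally does this for you; the pool and device names here are hypothetical):
Code:
# Two sticks as one two-way mirror (1 redundant drive):
zpool create boot mirror da0 da1
# Four sticks as a stripe of two mirrors (2 redundant drives, one per pair):
zpool create boot mirror da0 da1 mirror da2 da3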

Ultimately I doubt much of this matters, since pfSense is usually just writing logs to the boot drive and doesn't typically reboot often in most scenarios.
I do know that FreeNAS, also based on FreeBSD, commonly recommends the SanDisk Cruzer USB 2.0 drives as ZFS boot drives.

I'm just wondering whether the same setup will also work well on pfSense, or whether for some reason it won't.

Two redundant drives in ZFS, low power draw, and a <$40 buy-in sounds great for a system you'll set up somewhere else and probably never physically see again.

Offline chrcoluk

  • Sr. Member
  • ****
  • Posts: 407
  • Karma: +22/-50
Re: Pfsense 2.4 ZFS File System
« Reply #17 on: January 31, 2017, 05:15:23 pm »
By the way I have found the cause of the writes.

7.2 megabytes are written every minute in /var/db/rrd to update graphing data; that's about 430 MB an hour. ZFS will reduce the impact, though, with compression.
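
For anyone who wants to watch this kind of write load on a ZFS install, something like the following works (zroot is a hypothetical pool name here, adjust to your layout):
Code:
# Per-device throughput, sampled every 60 seconds:
zpool iostat -v zroot 60
# See whether compression is enabled and how well it's doing:
zfs get compression,compressratio zroot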

If comparing to the SSDs I advised, note that many consumer SSDs are MLC rather than TLC based.

Erase-cycle efficiency plummets if there is no decent wear levelling in place.

For the price of those USB sticks one can get a 60 gig MLC drive, so I think that's a better comparison.

The SLC USB sticks should be quite fast though. :) I own some fast USB sticks; I suspect they are at least MLC flash, and that is how the speed increase was achieved over my cheaper USB sticks, which are almost certainly TLC.
« Last Edit: January 31, 2017, 05:24:47 pm by chrcoluk »
pfSense 2.4
Qotom Q355G4 or Braswell N3150 with Jetway mini pcie 2x intel i350 lan - 4 gig Kingston 1333 C11 DDR3L
 - 60 gig kingston ssdnow ssd - ISP Sky UK

Offline Gentle Joe

  • Jr. Member
  • **
  • Posts: 44
  • Karma: +2/-0
Re: Pfsense 2.4 ZFS File System
« Reply #18 on: February 07, 2017, 10:48:30 pm »
Going from a pre-2.4 version to 2.4, does using ZFS require a fresh install, or can the file system be converted during the 2.3.3-to-2.4 upgrade process?

Offline kpa

  • Hero Member
  • *****
  • Posts: 1230
  • Karma: +138/-6
Re: Pfsense 2.4 ZFS File System
« Reply #19 on: February 08, 2017, 06:12:55 am »
Quote from: Gentle Joe on February 07, 2017, 10:48:30 pm
Going from a pre-2.4 version to 2.4, does using ZFS require a fresh install, or can the file system be converted during the 2.3.3-to-2.4 upgrade process?

There are no UFS-to-ZFS conversion tools that I know of, at least for FreeBSD, so you will very likely have to do a clean install and restore the config.

Offline chrcoluk

  • Sr. Member
  • ****
  • Posts: 407
  • Karma: +22/-50
Re: Pfsense 2.4 ZFS File System
« Reply #20 on: February 08, 2017, 12:44:47 pm »
Quote from: Gentle Joe on February 07, 2017, 10:48:30 pm
Going from a pre-2.4 version to 2.4, does using ZFS require a fresh install, or can the file system be converted during the 2.3.3-to-2.4 upgrade process?

You would need a second storage device, and it would be a manual process; there is no automated tool.

So the process would be something like this (a rough sketch follows the list):

Connect the new storage.
Load the ZFS kernel module.
Configure ZFS on the new storage, remembering to also install the bootloader, enable ZFS in loader.conf, modify fstab, etc.
Migrate the data to the new storage.
Boot off the new storage.

Done.
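
Roughly, as a FreeBSD sketch of those steps (assuming the new disk appears as ada1 and using a hypothetical pool name zroot; details vary per system):
Code:
kldload zfs                                     # load the ZFS kernel module
gpart create -s gpt ada1                        # partition the new disk
gpart add -t freebsd-boot -s 512k ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
gpart add -t freebsd-zfs ada1
zpool create -m / -o altroot=/mnt zroot ada1p2  # new root, staged under /mnt
zpool set bootfs=zroot zroot                    # tell the loader where root lives
dump -0Laf - / | (cd /mnt && restore -rf -)     # copy the live UFS root across
echo 'zfs_load="YES"' >> /mnt/boot/loader.conf  # enable ZFS at boot
echo 'vfs.root.mountfrom="zfs:zroot"' >> /mnt/boot/loader.conf
sed -i '' 's|^/dev|#/dev|' /mnt/etc/fstab       # old UFS mounts no longer apply
# ...then reboot from the new disk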

It's probably easier to just reinstall pfSense, given that the backup and restore feature makes it a whole lot quicker.
pfSense 2.4
Qotom Q355G4 or Braswell N3150 with Jetway mini pcie 2x intel i350 lan - 4 gig Kingston 1333 C11 DDR3L
 - 60 gig kingston ssdnow ssd - ISP Sky UK

Offline Jailer

  • Sr. Member
  • ****
  • Posts: 413
  • Karma: +54/-2
Re: Pfsense 2.4 ZFS File System
« Reply #21 on: February 08, 2017, 01:57:14 pm »
I'm wondering if ZFS on a flash drive will produce results similar to those seen when FreeNAS switched to 9.3 and ZFS for the boot drive. Scrubs show any errors, and they uncovered just how inherently unreliable USB flash drives really are.

I would think the sensible move would be to invest in a cheap SSD for a boot drive if you want to run ZFS on 2.4 and have a reliable system. But it's just a guess at this point.

Offline Gentle Joe

  • Jr. Member
  • **
  • Posts: 44
  • Karma: +2/-0
Re: Pfsense 2.4 ZFS File System
« Reply #22 on: February 08, 2017, 02:08:50 pm »
Quote from: kpa on February 08, 2017, 06:12:55 am
Quote from: Gentle Joe on February 07, 2017, 10:48:30 pm
Going from a pre-2.4 version to 2.4, does using ZFS require a fresh install, or can the file system be converted during the 2.3.3-to-2.4 upgrade process?

There are no UFS-to-ZFS conversion tools that I know of, at least for FreeBSD, so you will very likely have to do a clean install and restore the config.

Thanks very much.

I will perform a backup, then install on another fresh drive. A good excuse to use the small 60GB SSD I have.

Offline pvoigt

  • Full Member
  • ***
  • Posts: 251
  • Karma: +1/-0
Re: Pfsense 2.4 ZFS File System
« Reply #23 on: February 08, 2017, 02:46:18 pm »
Quote from: Jailer on February 08, 2017, 01:57:14 pm
I'm wondering if ZFS on a flash drive will produce results similar to those seen when FreeNAS switched to 9.3 and ZFS for the boot drive. Scrubs show any errors, and they uncovered just how inherently unreliable USB flash drives really are.

I would think the sensible move would be to invest in a cheap SSD for a boot drive if you want to run ZFS on 2.4 and have a reliable system. But it's just a guess at this point.

I have been using FreeNAS for about 3 years now and remember the switch to ZFS. I use a moderately priced 32 GB mSATA SSD for the root file system. I perform regular scrubbing both on the ZFS root file system and on my RAID1-based zpool for my data. Scrub has not detected any errors on any of my zpools so far.
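
For anyone new to ZFS, scrubbing is a one-liner (zroot being a hypothetical pool name):
Code:
zpool scrub zroot       # start a scrub in the background
zpool status -v zroot   # check progress and any errors found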

My pfSense installation is currently hosted on the same mSATA and I will try ZFS as soon as 2.4 is released.

Offline Harvy66

  • Hero Member
  • *****
  • Posts: 2320
  • Karma: +213/-12
Re: Pfsense 2.4 ZFS File System
« Reply #24 on: February 14, 2017, 12:31:01 pm »
Except in write-amplification cases, even TLC SSDs are so durable against writes that they're about the same as a mechanical HD. The only difference is the SSD is about 10x faster, and lets you kill it 10x faster. Even companies like Google have started to move away from SLC, because write volume is the least common cause of their failures. They've gone so far as to say SLC drives are worse for their workloads, where data is rarely changed once written: the density gains from TLC allow fewer drives, which reduces the number of failures per unit of storage.

Offline pfBasic

  • Hero Member
  • *****
  • Posts: 1021
  • Karma: +139/-22
Re: Pfsense 2.4 ZFS File System
« Reply #25 on: February 18, 2017, 04:37:57 pm »
I just ordered five 8GB SanDisk Cruzer Blades for $30. I'm planning on installing the latest 2.4 Beta on four of them in raidz2 and using the fifth as a spare for when one fails. I'm doing this to get off the 640GB HDD that came with my eBay system, as using it wastes power (I utilize less than 4GB), and also because I'm curious how durable consumer USB drives will be on pfSense with ZFS. I've read about a lot of FreeNAS users getting years out of single consumer-grade USB drives; if that translates to pfSense, then raidz2 flash drives could be a great solution for low-cost boxes you don't ever want to touch again.

On my system I use pfBlockerNG with DNSBL, Suricata, 4 OpenVPN clients, and one OpenVPN server.

https://smile.amazon.com/gp/product/B00E80LONK/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1

I'm interested in your recommendations to get the most out of this:

1. RAM disk: do you recommend using one or not? There is no UPS on this system. I have 8GB RAM, which I see cap out at 50-60% max when doing stuff with Suricata, though it's almost always around 30%. If you do recommend one, I was thinking 500MB for each and backing up data every 6 hours?

2. What swap size do you recommend? My current system has 16GB of swap and is currently using a little under 500MB. Obviously I'm not going to use 16GB; what would you go with here?

And finally, I have a question about how the disks appear in pfSense. I've attached two screenshots from my VM running the latest 2.4 Beta, installed on 4x4GB virtual drives.
Both df -H and the webConfigurator show 4 different fields for my zpool.

/ is using 7% of the 6.6GB available.

/tmp, /var, and /zroot are all showing 0% of either 6.6GB (in df -H) or 6.1GB (in the webConfigurator).

So 6.6GB available in the pool makes sense to me for 4x4GB in raidz2, but why the difference between 6.6GB in df -H and 6.1GB in the webConfigurator?
6.1GB in the webConfigurator makes more sense to me, since / is already using about 500MB of the 6.6GB.
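
One way to see where the two numbers come from is to compare the raw pool view with the dataset view, since raidz parity and ZFS overhead make them differ (the pool name here is hypothetical):
Code:
zpool list -o name,size,alloc,free pfsense   # raw pool space, parity included
zfs list -r -o name,used,avail pfsense       # usable space after redundancy
df -H /                                      # per-filesystem view, as mounted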
« Last Edit: February 18, 2017, 04:42:52 pm by pfBasic »

Offline pfBasic

  • Hero Member
  • *****
  • Posts: 1021
  • Karma: +139/-22
Re: Pfsense 2.4 ZFS File System
« Reply #26 on: February 20, 2017, 08:37:01 pm »
I went ahead and installed the latest 2.4.0 BETA today in a raidz2 with 4x8GB Sandisk Cruzer Blades.

I opted to use RAM disks: 1.7GB for /var and 750MB for /tmp.

I didn't use any swap at all.

RRD data backs up every 24 hours, logs every 24 hours, and DHCP leases never.

Currently everything is working very well, pfBNG, Suricata, OpenVPN.

On the status monitor RAM appears to be holding steady at about 35% (2.8GB).

/var is at 31% right now
/tmp 0%

zpool is at 5% (650MB) used.

The fifth drive is added as a hot spare with autoreplace=on.
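
(For reference, attaching a spare like that by hand is just two commands, using this pool's name from the status output below:)
Code:
zpool add pfsense spare da4        # add da4 as a hot spare
zpool set autoreplace=on pfsense   # swap it in automatically on a fault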

Power usage is down ~7W (replaced a HDD).

Code:
zpool status pfsense
  pool: pfsense
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 21 01:11:15 2017
config:

        NAME        STATE     READ WRITE CKSUM
        pfsense     ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            da2p2   ONLINE       0     0     0
            da3p2   ONLINE       0     0     0
            da0p2   ONLINE       0     0     0
            da1p2   ONLINE       0     0     0
        spares
          da4       AVAIL
Code:
zpool get all pfsense
NAME     PROPERTY                       VALUE                          SOURCE
pfsense  size                           28.8G                          -
pfsense  capacity                       5%                             -
pfsense  altroot                        -                              default
pfsense  health                         ONLINE                         -
pfsense  guid                           9366339498345966656            default
pfsense  version                        -                              default
pfsense  bootfs                         pfsense/ROOT/default           local
pfsense  delegation                     on                             default
pfsense  autoreplace                    on                             local
pfsense  cachefile                      -                              default
pfsense  failmode                       wait                           default
pfsense  listsnapshots                  off                            default
pfsense  autoexpand                     off                            default
pfsense  dedupditto                     0                              default
pfsense  dedupratio                     1.00x                          -
pfsense  free                           27.2G                          -
pfsense  allocated                      1.53G                          -
pfsense  readonly                       off                            -
pfsense  comment                        -                              default
pfsense  expandsize                     -                              -
pfsense  freeing                        0                              default
pfsense  fragmentation                  4%                             -
pfsense  leaked                         0                              default
pfsense  feature@async_destroy          enabled                        local
pfsense  feature@empty_bpobj            active                         local
pfsense  feature@lz4_compress           active                         local
pfsense  feature@multi_vdev_crash_dump  enabled                        local
pfsense  feature@spacemap_histogram     active                         local
pfsense  feature@enabled_txg            active                         local
pfsense  feature@hole_birth             active                         local
pfsense  feature@extensible_dataset     enabled                        local
pfsense  feature@embedded_data          active                         local
pfsense  feature@bookmarks              enabled                        local
pfsense  feature@filesystem_limits      enabled                        local
pfsense  feature@large_blocks           enabled                        local
pfsense  feature@sha512                 enabled                        local
pfsense  feature@skein                  enabled                        local



I'd still be very interested in hearing your educated opinions on these settings. /tmp seems to be way too big, and /var is also too big if it isn't going to grow, but I don't know.
I sized /var and /tmp by running du -hs on both on my old install right before I reinstalled; they were at about 1.6GB and 600MB respectively, so I aimed a little higher to be safe.

Using swap on a system with more than enough RAM installed, and with thumb drives as storage, didn't seem like a good idea to me. My normal install had hardly anything in swap, but I don't know how often it's written to.

All is well as of the latest update to this post. Monthly scrubs, plus an occasional scrub after a power outage.
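
For anyone wanting to automate that, stock FreeBSD's periodic(8) has knobs for it (these live in /etc/defaults/periodic.conf; the 30-day threshold is just an example):
Code:
# /etc/periodic.conf
daily_scrub_zfs_enable="YES"
daily_scrub_zfs_default_threshold="30"   # days between scrubs per pool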
« Last Edit: June 14, 2017, 01:57:21 pm by pfBasic »

Offline rpht

  • Newbie
  • *
  • Posts: 2
  • Karma: +0/-0
Re: Pfsense 2.4 ZFS File System
« Reply #27 on: February 21, 2017, 12:33:54 pm »
I have an SG-2440 with a 128GB mSATA SSD and have been trying to install 2.4 with the ZFS file system. I selected Auto ZFS with a non-redundant stripe. It will not proceed, saying not enough drives are selected. How would I get ZFS to install?

Offline pfBasic

  • Hero Member
  • *****
  • Posts: 1021
  • Karma: +139/-22
Re: Pfsense 2.4 ZFS File System
« Reply #28 on: February 21, 2017, 12:42:33 pm »
After you select stripe, it should take you to a screen listing your disks. You have to select your disk (press spacebar while it is highlighted); an asterisk will appear between the brackets "[ * ]" for that disk. Then press Enter on OK to proceed. If you just press Enter without selecting a disk, you are trying to install onto 0 disks when there is a 1-disk minimum. :)

Offline rpht

  • Newbie
  • *
  • Posts: 2
  • Karma: +0/-0
Re: Pfsense 2.4 ZFS File System
« Reply #29 on: February 21, 2017, 01:01:55 pm »
Thanks, I figured it would be something simple.