Anyone using freebsd ZFS for large storage servers?
oscar.hodgson at gmail.com
Fri Jun 1 15:48:50 UTC 2012
Thank you for bringing up this topic.
We've got significant experience with both UFS and ZFS (as delivered
by Solaris). Our internal testing has shown that UFS provides
significantly better throughput, roughly 20% to 30% higher in general,
and 50% higher in some specific use cases.
My internal customer prefers ZFS's 'end-to-end' reliability guarantee
to the higher throughput of UFS.
(What's really interesting, though, is that he's a long-time
Linux fan at heart ... and we are now discussing FreeBSD due to its
ZFS support. A Linux hardware RAID solution is not presently on the table.)
ZFS really is entirely different from UFS. In 2005, Bonwick put out a
presentation roughly titled "ZFS: The Last Word in File Systems".
I've used those materials on a number of occasions to help explain the
difference between ZFS and prior generation file systems.
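To make the "entirely different" point concrete, here is a minimal sketch of the kind of pool the original poster's 48TB JBOD boxes might use. The pool name "tank" and the device names da0-da5 are assumptions for illustration, not details from this thread:

```sh
# Hypothetical sketch: one raidz2 vdev over six JBOD disks gives
# two-disk redundancy plus end-to-end block checksums.
# (Pool name "tank" and devices da0..da5 are assumed, not from the thread.)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Checksumming is on by default; zpool status shows the vdev layout
# and per-device checksum error counters.
zpool status tank
```

There is no volume manager / filesystem split here: the pool, redundancy, and filesystem are one integrated stack, which is exactly what the Bonwick presentation contrasts with prior-generation designs.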
On Fri, Jun 1, 2012 at 4:59 AM, Wojciech Puchar
<wojtek at wojtek.tensor.gdynia.pl> wrote:
>> 48TB each, roughly. There would be a couple of units. The pizza
>> boxes would be used for computational tasks, and nominally would have
>> 8 cores and 96G+ RAM.
>> Obvious questions are hardware compatibility and stability. I've set
>> up small FreeBSD 9 machines with ZFS roots and simple mirrors for
>> other tasks here, and those have been successful so far.
>> Observations would be appreciated.
> Your idea of using disks in JBOD style (no "hardware" RAID) is good, but the
> idea of using ZFS is bad.
> I would recommend that you do some real performance testing of ZFS on any
> config under real load (a workload that doesn't fit in cache, with many
> different things being done by many users/programs) and compare it to a
> PROPERLY done UFS config on the same hardware (with the help of
> gmirror/gstripe). If ZFS comes out ahead, you certainly didn't configure
> the latter case (UFS, gmirror, gstripe) properly :)
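For readers unfamiliar with the GEOM tools mentioned above, a "properly done" UFS setup along those lines might look like the following sketch. Device names and labels are assumptions; the quoted author gives no specific layout:

```sh
# Hypothetical sketch of a UFS-on-gmirror/gstripe layout (RAID 10 style):
# mirror pairs for redundancy, striped together for throughput.
# Device names da0..da3 and the labels m0/m1/st0 are assumed.
gmirror label -v m0 da0 da1
gmirror label -v m1 da2 da3
gstripe label -v st0 mirror/m0 mirror/m1

# UFS2 with soft updates (-U) on the striped mirror, then mount it.
newfs -U /dev/stripe/st0
mount /dev/stripe/st0 /data
```

These commands need root and real disks; run them only on scratch hardware, as labeling destroys the last sector's existing contents.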
> In spite of the large-scale hype and promotion of this free software (which
> by itself should be a red alert for you), I strongly recommend staying away
> from it, and definitely do not use it if you will not have regular backups
> of all your data; in case of failures (yes, they do happen) you will have
> no chance to repair it.
> There is NO fsck_zfs! And ZFS is promoted as if it "doesn't need" one.
> Assuming that a filesystem doesn't need an offline filesystem check utility
> because it "never crashes" is funny.
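For context on the fsck point: ZFS's stated answer is not an offline checker but an online scrub, which re-reads every allocated block, verifies its checksum, and repairs it from redundancy where possible. A minimal sketch (the pool name "tank" is an assumption):

```sh
# Kick off a full-pool integrity pass; the pool stays online throughout.
zpool scrub tank

# -v lists any files with unrecoverable errors, plus scrub progress.
zpool status -v tank
```

Whether that is an adequate substitute for fsck when the on-disk metadata itself is corrupted is exactly the disagreement in this thread.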
> On the other hand, I have never heard of a UFS filesystem failure that was
> not the result of a physical disk failure and that caused serious damage.
> In the worst case, some files or one or a few subdirectories landed in
> lost+found, and some recently (minutes at most) completed work was gone.
> If you still want to use it, do not forget that it uses many times more CPU
> power than UFS to handle the filesystem, leaving less for the computation
> you want to do.
> As for memory, you may limit its memory (ab)use by adding the proper
> statements to loader.conf, but it still uses an enormous amount of it.
> With 96GB that may not be a problem for you, or it may; it depends on how
> much memory you need for computation.
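The loader.conf statements alluded to above are typically an ARC cap. The values below are illustrative assumptions for a 96GB FreeBSD 9 box, not a recommendation from the thread:

```
# /boot/loader.conf -- illustrative values only.
# Cap the ZFS ARC so the filesystem cache cannot crowd out
# the memory the computational jobs need.
vfs.zfs.arc_max="16G"

# On FreeBSD 9-era systems, the kernel memory arena was
# sometimes tuned alongside the ARC cap.
vm.kmem_size="24G"
```

These are boot-time tunables; they take effect on the next reboot.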
> If you need help properly configuring large storage with UFS and the
> gmirror/gstripe tools, feel free to ask.
More information about the freebsd-questions mailing list