ZFS info WAS: new backup server file system options

Arthur Chance freebsd at qeng-ho.org
Fri Dec 21 17:28:56 UTC 2012


On 12/21/12 14:06, Paul Kraus wrote:
> On Dec 21, 2012, at 7:49 AM, yudi v wrote:
>
>> I am building a new FreeBSD fileserver to use for backups, and will be using a
>> 2-disk RAID mirror in an HP MicroServer N40L.
>> I have gone through some of the documentation and would like to know which
>> file systems to choose.
>>
>> According to the docs, UFS is suggested for the system partitions, but
>> someone on the FreeBSD IRC channel suggested using ZFS for the rootfs as
>> well.
>>
>> Are there any disadvantages to using ZFS for the whole system, rather than
>> going with UFS for the system files and ZFS for the user data?
>
> 	First a disclaimer: I have been working with Solaris since 1995 and have
> managed lots of data under ZFS; I have only been working with FreeBSD for
> about the past 6 months.
>
> 	UFS is clearly very stable and solid, but to get redundancy you need to
> use a separate "volume manager".

Slight correction here - you don't need a volume manager (as I understand 
the term); you'd use the GEOM subsystem, specifically gmirror in this 
case. See "man gmirror" for details.
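For anyone taking that route, a minimal gmirror setup might look like the 
following sketch (gm0 and the device names ada1/ada2 are placeholders; 
check your actual devices before labeling anything):

```shell
# Load the GEOM mirror module and create a mirror labeled gm0 from two
# whole disks (ada1/ada2 are placeholders -- verify your device names).
gmirror load
gmirror label -v gm0 /dev/ada1 /dev/ada2

# Load the module at boot so /dev/mirror/gm0 exists before mounting.
echo 'geom_mirror_load="YES"' >> /boot/loader.conf

# Create a UFS filesystem on the mirror and verify both disks are active.
newfs -U /dev/mirror/gm0
gmirror status
```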

> 	ZFS is a completely different way of thinking about managing storage (not
> just a filesystem). I prefer ZFS for a number of reasons:
>
> 1) End to end data integrity through checksums. With the advent of 1 TB plus
> drives, the uncorrectable error rate (typically 10^-14 or 10^-15) means that
> over the life of any drive you *are* now likely to run into uncorrectable
> errors. This means that traditional volume managers (which rely on the drive
> reporting bad reads and writes) cannot detect these errors, and bad data will
> be returned to the application.
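ZFS makes those otherwise-silent errors visible: a scrub reads every 
allocated block and verifies it against its checksum, repairing from the 
mirror copy where it can. A sketch, assuming a pool named tank:

```shell
# Read and verify every allocated block in the pool; with a mirror,
# blocks that fail their checksum are rewritten from the good copy.
zpool scrub tank

# The CKSUM column reports per-device checksum failures; -v lists any
# files with unrecoverable errors.
zpool status -v tank
```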
>
> 2) Simplicity of management. Since the volume management and filesystem
> layers have been combined, you don't have to manage each separately.
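As a sketch of what that combination looks like in practice (the pool name 
and device names are placeholders):

```shell
# One command handles both layers: it builds the mirrored "volume" and
# creates and mounts the top-level filesystem (devices are placeholders).
zpool create tank mirror /dev/ada1 /dev/ada2

# Child filesystems are one-liners, created and mounted immediately.
zfs create tank/backups
zfs create tank/home
```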
>
> 3) Flexibility of storage. Once you build a zpool, the filesystems that
> reside on it share the storage of the entire zpool. This means you don't
> have to decide how much space to commit to a given filesystem at creation.
> It also means that all the filesystems residing in that one zpool share the
> performance of all the drives in that zpool.
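For example, space can be partitioned after the fact with properties 
rather than up front with fixed partitions (the dataset names are 
placeholders):

```shell
# No sizes were chosen at creation; every dataset draws on the pool's
# shared free space. Limits and guarantees are just properties:
zfs set quota=200G tank/backups      # backups may not exceed 200 GB
zfs set reservation=50G tank/home    # home is guaranteed at least 50 GB
zfs list -o name,used,avail,quota    # AVAIL reflects the shared pool space
```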
>
> 4) Specific to booting off of ZFS: if you move drives around (as I tend to
> do in at least one of my lab systems) the bootloader can still find the
> root filesystem under ZFS, as it refers to it by ZFS device name, not
> physical drive device name. Yes, you can tell the bootloader where to find
> root if you move it, but ZFS does that automatically.
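On FreeBSD this works through the pool's bootfs property, which names the 
root dataset rather than a physical device (the zroot names below follow 
a common convention and are assumptions):

```shell
# The loader looks up the pool's bootfs property, so root is found by
# dataset name, not by which port the disk happens to be plugged into.
zpool set bootfs=zroot zroot
zpool get bootfs zroot
```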
>
> 5) Zero performance penalty snapshots. The only cost of a snapshot is the
> space necessary to hold the data. I have managed systems with over 100,000
> snapshots.
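The basic snapshot workflow, with placeholder names:

```shell
# Taking a snapshot is instantaneous and initially consumes no space;
# it only grows as the live data diverges from it.
zfs snapshot tank/backups@2012-12-21

# Old file versions are browsable read-only under .zfs/snapshot, and the
# whole filesystem can be rolled back if needed.
ls /tank/backups/.zfs/snapshot/2012-12-21/
zfs rollback tank/backups@2012-12-21
```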
>
> 	I am running two production systems, one lab system, and a bunch of VBox
> VMs, all with ZFS. The only issue I have seen is one I have also seen under
> Solaris with ZFS. Certain kinds of hardware layer faults will cause the ZFS
> management tools (the zpool and zfs commands) to hang waiting on a blocking
> I/O that will never return. The data continues to be available; you just
> can't manage the ZFS infrastructure until the device issues are cleared.
> For example, if you remove a USB drive that hosts a mounted ZFS filesystem,
> then any attempt to manage that ZFS device will hang (zpool export -f
> <zpool name> hangs until a reboot).
>
> 	Previously I had been running (at home) a fileserver under OpenSolaris
> using ZFS, and it saved my data when I had multiple drive failures. At a
> certain client we had a 45 TB configuration built on top of 120 750 GB
> drives. We had multiple redundancy and could survive a complete failure of
> 2 of the 5 disk enclosures (yes, we tested this in pre-production).
>
> 	There are a number of good writeups on how to set up a FreeBSD system to
> boot off of ZFS. I like this one the best:
> http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE , but I do the
> zpool/zfs configuration slightly differently (based on some hard learned
> lessons on Solaris). I am writing up my configuration (and why I do it this
> way), but it is not ready yet.
>
> 	Make sure you look at all the information here: http://wiki.freebsd.org/ZFS ,
> keeping in mind that lots of it was written before FreeBSD 9. I would NOT
> use ZFS, especially for booting, prior to release 9 of FreeBSD. Some of the
> reason for this is the bugs that were fixed in zpool version 28 (included
> in release 9).

I would agree with all that. My current system uses UFS filesystems for 
the base install and ZFS with a raidz zpool for everything else, but 
that's only because I started using ZFS in 8.0-RELEASE, when it was just 
out of experimental status, and I didn't want to risk having an unbootable 
system. (That last paragraph suggests I was wise in that decision.) The 
next machine I'm specing out now will be pure ZFS so I get the boot 
environment stuff.
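For what it's worth, the boot environment workflow is roughly this (using 
the sysutils/beadm port; the environment name is a placeholder):

```shell
# Clone the running root dataset into a new boot environment, then make
# it the default for the next boot; the old one remains as a fallback.
beadm create pre-upgrade
beadm activate pre-upgrade
beadm list
```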



More information about the freebsd-questions mailing list