gmirror/gstripe or ZFS?
matthew at FreeBSD.org
Thu Dec 15 15:05:36 UTC 2016
On 2016/12/15 14:10, Mario Lobo wrote:
> I'll be building an area on an existing server that runs 10.3-STABLE
> (will upgrade to 11-STABLE), which is going to be used basically as a
> work/storage area for graphic design files (lots and lots of image
> editing, etc ...) that are extremely critical for the company and need
> to be up and ready all the time.
> A backup system is already in place and running.
> The OS runs off of its own ufs formatted drive and I acquired 4x 4Tb
> drives (sata), which I plan to gmirror 1&2/3&4, stripe the two mirrors
> into an 8Tb volume, and share it via samba. Network is Gbit.
> It comes to mind doing the same thing through ZFS. I've never used it
> before, unlike gmirror/gstripe, which I have used.
> Given what this volume is going to be used for, in terms of
> performance/reliability/sharing, which one is best?
> I have replaced defective drives in gmirror many times without any
> problem. Is that just as easy with ZFS?
> Is sharing a dataset through samba as straightforward as sharing a
> UFS directory?
> I am reading as much as I can about ZFS but most of what I found is
> mainly technical implementation, not so much about how the user is
> experiencing it compared to other options.
If your data is at all important to you and you aren't constrained by
running on tiny little devices with very limited system resources, then
it's a no-brainer: use ZFS.
Creating a ZFS pool striped over two mirrored vdevs is not particularly
difficult and gives a result roughly equivalent to RAID10:
zpool create -m /somewhere tank mirror ada0p3 ada1p3 mirror ada2p3 ada3p3
will create a new zpool called 'tank' and mount it at /somewhere.
There are a number of properties to fiddle with for tuning purposes, and
you'll want to create a hierarchy of ZFS datasets under tank to suit your
purposes, but otherwise that's about it.
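As a rough sketch of what that dataset hierarchy might look like (the
dataset names here are illustrative, not anything you must use):

```shell
# Create a hierarchy of datasets under the pool; each dataset can
# carry its own properties and be shared or snapshotted separately.
zfs create tank/design
zfs create tank/design/projects
zfs create tank/design/archive

# Properties set on a parent are inherited by children unless overridden.
zfs set compression=lz4 tank/design
zfs set atime=off tank/design

# Show the resulting tree and mountpoints.
zfs list -r tank
```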
Replacing drives in a ZFS pool is about as hard as replacing them in a
gmirror / gstripe setup. Swap out the physical device, create an
appropriate partitioning scheme on the new disk if needed[*], then run
'zpool replace tank device-name' and wait for the pool to resilver.
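A sketch of that workflow, assuming a hypothetical failed disk ada2 in
the pool created above:

```shell
# Identify the faulted device.
zpool status tank

# After physically swapping the disk, copy the partition layout from a
# surviving mirror member (only needed if the pool sits on gpart
# partitions rather than raw disks):
gpart backup ada3 | gpart restore -F ada2

# Rebuild onto the new device, then watch the resilver progress.
zpool replace tank ada2p3
zpool status tank
```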
There are only two commands you need to achieve some familiarity with in
order to manage a ZFS setup -- zfs(8) and zpool(8). Don't be put off by
the length of the man pages: generally it's pretty obvious what
subcommand you need and you can just jump to that point in the manual to
find your answers.
[*] The installer will create a zpool by using gpart partitions, so it
can also add bootcode and a swap area to each disk. If you're not going
to be booting off this pool and you have swap supplied elsewhere, then
all that is unnecessary. You can just tell ZFS to use the raw disk devices.
Problems you may run into:
* Not having enough RAM -- ZFS eats RAM like there's no tomorrow.
That's because of the aggressive caching it employs: many IO requests
will be served out of RAM rather than having to go all the way to disk.
Sprinkling RAM liberally into your server will help performance.
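You can keep an eye on how much RAM the ARC is actually using; on
FreeBSD a quick sketch via sysctl(8):

```shell
# Current ARC size and its configured ceiling, in bytes.
sysctl kstat.zfs.misc.arcstats.size
sysctl vfs.zfs.arc_max

# top(1) also shows an ARC summary line once ZFS is in use.
```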
* Do turn on compression, and use the lz4 algorithm. Compression is a
win in general due to reducing the size of IO requests, which gains more
than you lose in the extra work to compress and decompress the data.
lz4 is preferred because it gives pretty good compression for
compressible data, but can detect and bail out early for incompressible
data, like many image formats (JPG, PNG, GIF) -- in which case the data
is simply stored without compression at the ZFS level.
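Turning it on is a single property, set once at the top of the tree:

```shell
# Enable lz4 compression on the pool; child datasets inherit it.
zfs set compression=lz4 tank

# Later, see how much space it is actually saving.
zfs get compressratio tank
```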
* Don't enable deduplication. It sounds really attractive, but for
almost all cases it leads to vastly increased memory requirements,
performance slowing to a near crawl, wailing, and gnashing of teeth. If
you have to ask, then you *don't* want it.
* ZFS does a lot more processing than most filesystems -- calculating
all of those checksums, and doing all those copy-on-writes takes its
toll. It's the price you pay for being confident your data is
uncorrupted, but it does mean ZFS is harder on the system than many
other FSes. For a modern server, the extra processing cost is generally
not a problem, and swallowed in the time it takes to access the spinning
rust. It will hurt you if your IO characteristics are a lot of small
reads / writes randomly scattered around your storage, typical of e.g.
an RDBMS.
* You can add a 'SLOG' (Separate LOG) device to improve performance --
this is typically a fast SSD. It doesn't have to be particularly big: all
it does is hold the ZFS Intent Log (ZIL), absorbing bursts of synchronous
writes on faster hardware. For the read side there is a separate 'cache'
(L2ARC) device, which extends the in-RAM ARC onto an SSD. Either can be
added on the fly without any interruption of service, so I'd recommend
starting without and only adding one if it seems you need it.
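If you do later decide you need them, adding and removing such devices
is live; a sketch with hypothetical SSD device names nvd0 and nvd1:

```shell
# Add a separate log (SLOG) device for synchronous writes.
zpool add tank log nvd0

# Add an L2ARC cache device for reads.
zpool add tank cache nvd1

# Both can be removed again if they turn out not to help.
zpool remove tank nvd0 nvd1
```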
* Having both UFS and ZFS on the same machine. This is not
insurmountably bad, but the different memory requirements of the two
filesystems can lead to performance trouble. It depends on what your
server load levels are like. If it's lightly loaded, then no problem.
More information about the freebsd-questions mailing list