to gmirror or to ZFS

aurfalien aurfalien at gmail.com
Fri Jul 19 18:25:33 UTC 2013


On Jul 16, 2013, at 11:42 AM, Warren Block wrote:

> On Tue, 16 Jul 2013, aurfalien wrote:
>> On Jul 16, 2013, at 2:41 AM, Shane Ambler wrote:
>>> 
>>> I doubt that you would save any RAM by having the OS on a non-ZFS
>>> drive; since you will already be using ZFS, chances are that non-ZFS
>>> drives would only increase RAM usage by adding a second cache. ZFS uses
>>> its own cache system and isn't going to share its cache with other
>>> system-managed drives. I'm not actually certain whether the system cache
>>> still sits above the ZFS cache or not; I think I read it bypasses the
>>> traditional drive cache.
>>> 
>>> For the ZFS cache you can set the maximum usage by adjusting
>>> vfs.zfs.arc_max. That is a system-wide setting and isn't going to
>>> increase if you have two zpools.
>>> 
>>> Tip: set the arc_max value - by default ZFS will use all physical RAM
>>> for cache, so set it to be sure you have enough RAM left for any
>>> services you want running.
>>> 
>>> Have you considered using one or both SSD drives with zfs? They can be
>>> added as cache or log devices to help performance.
>>> See man zpool under Intent Log and Cache Devices.
>> 
>> This is a very interesting point.
>> 
>> In terms of SSDs for cache, I was planning on using a pair of Samsung Pro 512GB SSDs for this purpose (which I haven't bought yet).
>> 
>> But I tire of buying stuff, so I have a pair of 40GB Intel SSDs for use as sys disks and several Intel 160GB SSDs lying around that I can combine with the existing 256GB SSDs for a cache.
>> 
>> Then use my 36x3TB for the beasty NAS.
> 
> Agreed that 256G mirrored SSDs are kind of wasted as system drives.  The 40G mirror sounds ideal.
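
(For the archive: adding SSDs as cache/log devices later would look roughly
like the sketch below. The pool name "tank" and the da2-da5 device names are
placeholders, not the actual layout of this box.)

# add two SSDs as an L2ARC read cache (placeholders: tank, da2, da3)
zpool add tank cache da2 da3

# add two more SSDs as a mirrored SLOG (ZFS intent log)
zpool add tank log mirror da4 da5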


Update:

I went with ZFS, as I didn't want to complicate the toolset needed to support this server.  Although gmirror is not hard to figure out, I wanted consistency across the system.

So I now have 9.1-RELEASE booted from a mirrored ZFS system disk.
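
For reference, Shane's arc_max tip goes in /boot/loader.conf; a minimal
sketch follows (the 24G value is only an illustrative placeholder to be
sized for the box's RAM and services, not necessarily what this machine
uses):

# /boot/loader.conf
# cap the ZFS ARC so running services keep some RAM (example value only)
vfs.zfs.arc_max="24G"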

The drives do support TRIM, but I am unsure how this plays with ZFS.  I used the standard partition scheme of:

root at kronos:/root # gpart show
=>      34  78165293  da0  GPT  (37G)
        34       128    1  freebsd-boot  (64k)
       162         6       - free -  (3.0k)
       168   8388608    2  freebsd-swap  (4.0G)
   8388776  69776544    3  freebsd-zfs  (33G)
  78165320         7       - free -  (3.5k)

=>      34  78165293  da1  GPT  (37G)
        34       128    1  freebsd-boot  (64k)
       162         6       - free -  (3.0k)
       168   8388608    2  freebsd-swap  (4.0G)
   8388776  69776544    3  freebsd-zfs  (33G)
  78165320         7       - free -  (3.5k)
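
For anyone wanting to reproduce that layout, it can be built roughly as
sketched below (shown for da0 only; da1 gets the same treatment). The pool
name "zroot" is just the conventional placeholder, and the full root-on-ZFS
procedure has a few more steps (bootfs property, loader.conf entries,
datasets) that are omitted here:

# GPT layout matching the gpart output above: boot / swap / zfs
gpart create -s gpt da0
gpart add -t freebsd-boot -s 64k da0
gpart add -t freebsd-swap -s 4g -a 4k da0
gpart add -t freebsd-zfs -a 4k da0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0

# ...repeat for da1, then mirror the two freebsd-zfs partitions
zpool create zroot mirror da0p3 da1p3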

At any rate, thank you for the replies; I very much appreciate it.

Especially since I am building a rather large, production-worthy NAS without knowing a lick of FreeBSD.

The reasons for going with FreeBSD are twofold:

ZFS stability; it seems a better marriage than ZoL (ZFS on Linux).
It correctly provides NFS pre-op attributes (mtime) in the write reply; Linux does not.

While it's a steep learning curve, the two points above require the use of FreeBSD or something similar.

- aurf

