Problems with ZFS file servers
healer at rpi.edu
Mon Sep 28 02:51:21 UTC 2015
My plan, which won't be implemented before 6/9/16, is to set "-tso4 -tso
-txcsum -rxcsum -vlanhwtso -lro" on all the NICs, and, in loader.conf,
vfs.zfs.arc_max="75% of physical mem"
Any other options I may have overlooked? I've got 9 months before I get
an outage window. Academia demands no downtime not caused by acts of
God or facilities shutdowns.
These machines have no purpose in life other than NFS, occasional rsync,
and sometimes Samba 3.
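The plan above could be sketched roughly as follows. This is only an illustration: the interface name "igb0" and the 16 GiB memory figure are placeholders, and on the actual machine physmem would come from `sysctl -n hw.physmem` rather than being hard-coded.

```shell
#!/bin/sh
# Sketch only: compute 75% of physical RAM for vfs.zfs.arc_max.
# On FreeBSD this would be: physmem=$(sysctl -n hw.physmem)
physmem=17179869184                # placeholder: 16 GiB in bytes
arc_max=$((physmem * 3 / 4))       # 75% of physical memory

# Line to put in /boot/loader.conf:
echo "vfs.zfs.arc_max=\"$arc_max\""

# Line to put in /etc/rc.conf (interface name "igb0" is a placeholder):
echo 'ifconfig_igb0="-tso4 -tso -txcsum -rxcsum -vlanhwtso -lro"'
```

Persisting the flags in rc.conf rather than running ifconfig by hand keeps them across the (eventual) reboot that the loader.conf change requires anyway.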
Biocomputation and Bioinformatics Constellation
healer at rpi.edu
On 9/26/2015 7:04 AM, Fabian Keil wrote:
> kpneal at pobox.com wrote:
>> On Fri, Sep 25, 2015 at 09:24:45PM +0100, Matthew Seaman wrote:
>>> On 25/09/2015 20:42, Bob Healey wrote:
>>>> I've got another machine acting up. This machine is a Sun X2250 that
>>>> was originally running Solaris until this summer, when the owner dropped
>>>> the support contract. A zpool export, reinstall to FreeBSD 10.1, and
>>>> zpool import and it was back in business. Most of the requested info
>>>> can be found at http://origami.phys.rpi.edu/~healer/lepton. I am
>>>> working on getting cacti installed. I am currently trying to rsync a
>>>> workstation to this pool so I can reload the OS. If I use
>>>> --bwlimit=10240 or lower, I have no issues, but if I don't rsync freezes
>>>> on me on the client side.
>>> I've found, through bitter experience, that you need to apply some
>>> tunings to ZFS machines, and quite possibly some kernel patches too.
>>> When you're pumping wads of data into a ZFS machine at high speed, it is
>>> all too easy to get it to lock up.
>>> First up, the default setting where ZFS grabs all but 1GB of available
>>> RAM for use by the ARC is nuts. You need to chop that down and give the
>>> rest of the OS a fair share of RAM to play with by setting
>>> vfs.zfs.arc_max in /boot/loader.conf. What you set it to depends on the
>>> application mix on your server, but somewhere around 50% of available
>>> RAM seems reasonable to me. Reboot to enable that, obviously.
>> My personal experience:
>> I have swap space configured. My box has 8GB of memory, and when I save
>> large mailboxes with mutt I see up to 4GB of swap space used.
>> It may be the case that adding swap space will eliminate the need to
>> limit the ARC manually in /boot/loader.conf.
> Without the ARC patches or manual tuning, the swap use could be the result
> of the ARC failing to adapt to the memory pressure, in which case even
> active processes that otherwise wouldn't have to may start paging.
> In that situation, adding more swap space may degrade performance even
> further as it allows the ARC to hold on to the memory even longer.
> Thus I wouldn't recommend it without analysing the cause of the swap
> use first.
More information about the freebsd-questions mailing list