[patch] giant-less quotas for UFS
Nicolas.Kowalski at imag.fr
Mon Apr 10 18:19:45 UTC 2006
Eric Anderson <anderson at centtech.com> writes:
> Nicolas KOWALSKI wrote:
>> Eric Anderson <anderson at centtech.com> writes:
>>> Nicolas KOWALSKI wrote:
>>>> Yes, this is exactly what is happening. To add some precision, some
>>>> students here use calculation applications that allocate a lot of
>>>> disk space, usually more than their allowed home quotas; when by
>>>> mistake they launch these apps in their home directories, instead of
>>>> their dedicated workstation space, it brings the server to its knees
>>>> on the NFS client side.
>>> When you say 'to its knees' - what do you mean exactly? How many
>>> clients do you have, how much memory is on the server, and how many
>>> nfsd threads are you using? What kind of load average do you see
>>> during this (on the server)?
>> Sorry for the imprecision.
>> The server is a Dual-Xeon 2.8Ghz, 2GB of RAM, using SCSI3 Ultra320
>> 76GB disks and controller. It is accessed by NFS from ~100 Unix
>> (Linux, Solaris) clients, and by Samba from ~15 Windows XP. The
>> network connection is GB ethernet.
>> During slowdowns, it is only from an NFS client's point of view that
>> the server does not respond. For example, a simple 'ls' in my home
>> directory is normally almost immediate, but during a slowdown it can
>> take up to 2 minutes.
>> On the server, the load average goes to 0.5, compared to a usual
>> maximum of 0.15-0.20. top shows the nfsd processes in the "biowr"
>> state, but nothing is really written, because the quota system blocks
>> any further writes by a user exceeding her/his quota.
> In this case (which is what I suspected), try bumping up your nfsd
> threads to 128. I set mine very high (I have around 1000 clients),
> and I can say there aren't really ill-effects besides a bit of memory
> usage (which you have plenty of). I suspect increasing the threads
> will neutralize this problem for you.
Thanks for your suggestion.
However, I am apparently not able to change the default. I stopped the
nfsd master process (kill -USR1, as described in the manpage), then ran:
pave# nfsd -t -u -n 128
nfsd: nfsd count 128; reset to 4
What am I forgetting here?
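For reference, the way the thread count is normally made persistent on FreeBSD is via rc.conf rather than by invoking nfsd directly; a minimal sketch (the thread count 128 is the value suggested above, and the exact rc.d script name may vary by FreeBSD version):

```shell
# Sketch, assuming a stock FreeBSD install: set the nfsd flags in
# /etc/rc.conf so they survive a reboot. -n sets the number of
# server threads; -t/-u serve TCP and UDP clients.
printf '%s\n' \
    'nfs_server_enable="YES"' \
    'nfs_server_flags="-t -u -n 128"' >> /etc/rc.conf

# Restart the NFS server so the new flags take effect:
/etc/rc.d/nfsd restart
```

If the running nfsd binary caps the count and prints a "reset to" message as above, the limit is compiled into that nfsd version, and rc.conf alone will not raise it.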