NFS server bottlenecks

Nikolay Denev ndenev at gmail.com
Sat Oct 20 18:58:10 UTC 2012


On Oct 20, 2012, at 4:00 PM, Nikolay Denev <ndenev at gmail.com> wrote:

> 
> On Oct 20, 2012, at 3:11 PM, Ivan Voras <ivoras at freebsd.org> wrote:
> 
>> On 20 October 2012 13:42, Nikolay Denev <ndenev at gmail.com> wrote:
>> 
>>> Here are the results from testing both patches: http://home.totalterror.net/freebsd/nfstest/results.html
>>> Both tests ran for about 14 hours (a bit longer than needed, but I wanted to compare different ZFS recordsize settings),
>>> and each was started after a fresh reboot.
>>> The only noticeable difference seems to be many more context switches with Ivan's patch.
>> 
>> Thank you very much for your extensive testing!
>> 
>> I don't know how to interpret the rise in context switches; as this is
>> kernel code, I'd expect no context switches. I hope someone else can
>> explain.
>> 
>> But, you have also shown that my patch doesn't do any better than
>> Rick's even on a fairly large configuration, so I don't think there's
>> value in adding the extra complexity, and Rick knows NFS much better
>> than I do.
>> 
>> But there are a few other things I'm interested in: why does your
>> load average spike almost into the twenties, and why, with 24 drives
>> in RAID-10, do you only push about 600 Mbit/s through the 10 Gbit/s
>> Ethernet? Have you tested your drive setup locally (AESNI shouldn't
>> be a bottleneck; you should be able to encrypt well into the
>> GByte/s range) and the network?
>> 
>> If you have the time, could you repeat the tests with a recent
>> Samba server and a CIFS mount on the client side? This is probably not
>> important, but I'm just curious how it would perform on your
>> machine.
> 
> The first local iozone run has finished; I'll paste just the result here, along with the same test over NFS for comparison:
> (This is iozone doing 8 KiB I/O operations on a ZFS dataset with recordsize=8k)
> 
> NFS (throughput in kBytes/s):
>              KB  reclen   write  rewrite    read   reread  random read  random write
>        33554432       8    4973     5522    2930     2906         2908          3886
> 
> Local (throughput in kBytes/s):
>              KB  reclen   write  rewrite    read   reread  random read  random write
>        33554432       8   34740    41390  135442   142534        24992         12493
> 
> 
> P.S.: I forgot to mention that the network uses a 9K MTU (jumbo frames).
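
For reference, the quoted run above (8 KiB I/O against a dataset with recordsize=8k) could be reproduced with roughly the following commands; the dataset name, file path and exact iozone flags here are illustrative, not necessarily the ones used to produce these numbers:

    # Match the ZFS recordsize to the 8 KiB I/O size (dataset name is an example).
    zfs set recordsize=8k tank/nfstest

    # 32 GiB test file, 8 KiB records; -i 0/1/2 selects write/rewrite,
    # read/reread and random read/write -- the same tests shown in the table above.
    iozone -s 32g -r 8k -i 0 -i 1 -i 2 -f /tank/nfstest/iozone.tmp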


Here are the full results of the test on the local filesystem:

http://home.totalterror.net/freebsd/nfstest/local_fs/

I'm now running the same test on an NFS mount over the loopback interface on the NFS server machine.
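
The loopback setup is roughly as follows (the dataset name, mount point and mount options are placeholders, not necessarily the exact ones in use):

    # Export the dataset over NFS and mount it back on the same machine via localhost.
    zfs set sharenfs=on tank/nfstest
    mkdir -p /mnt/nfsloop
    mount_nfs -o nfsv3,tcp,rsize=65536,wsize=65536 localhost:/tank/nfstest /mnt/nfsloop

    # Then run the same iozone test against the loopback mount.
    iozone -s 32g -r 8k -i 0 -i 1 -i 2 -f /mnt/nfsloop/iozone.tmp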


