NFS Performance issue against NetApp

Rick Macklem rmacklem at uoguelph.ca
Thu May 2 22:11:58 UTC 2013


Marc G. Fournier wrote:
> On 2013-05-02, at 13:52 , "Lawrence K. Chen, P.Eng." <lkchen at ksu.edu>
> wrote:
> 
> > Yeah, I didn't have any problems with FreeBSD 9.0 on G7, the boss
> > didn't like the lack of passthru and having to configure a bunch of
> > raid 0 luns for each disk with the SmartArray P410i...so he was
> > going through everything putting in the LSI SAS 2008s, and decided
> > while he was at it to switch to all Intel EXPI9402PT cards....it
> > might be because of the G7's that are doing SmartOS. He swapped out
> > all the memory….
> 
> I tried Intel vs Broadcom, and didn't notice any difference … New NFS
> is slower than Old NFS, but that's just a difference of a 5m start up
> vs a 4m start up … even OpenBSD is faster by ~25% "out of the box" …
> 
> The thing is, I'm not convinced it is a NFS related issue … there are
> *so* many other variables involved … it could be something with the
> network stack … it could be something with the scheduler … it could be
> … hell, it could be like the guy states in that blog posting
> (http://antibsd.wordpress.com/) and be the compiler changes …
> 
> I found this in my searches that talks about how much CPU on the
> NetAPP side is used when using a FreeBSD client over Linux:
> 
> http://www.makingitscale.com/2012/freebsd-linux-nfs-and-the-attribute-cache.html
> 
A little off topic, but this guy reports the client as doing Access RPCs.
There is a sysctl called vfs.nfs.prime_access_cache. If you set that to 0,
the client will use Getattr RPCs instead of Access RPCs.

This was put in specifically for NetApp Filers, since their server implementation
of Access results in much higher overhead than Getattr.
(An Access reply includes attributes as well as the access information, so it
 can be used to prime both caches; that makes it sensible to do Access instead
 of Getattr when the server overheads are about the same for both.)
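For anyone wanting to try this, a minimal sketch of flipping the sysctl on a FreeBSD client (assuming the vfs.nfs.prime_access_cache sysctl described above is present on your release):

```shell
# Check the current value (1 = prime the access cache via Access RPCs).
sysctl vfs.nfs.prime_access_cache

# Set it to 0 so the client issues Getattr RPCs instead of Access RPCs,
# which Rick notes is cheaper for NetApp Filers to service.
sysctl vfs.nfs.prime_access_cache=0
```

To make the setting survive a reboot, add `vfs.nfs.prime_access_cache=0` to /etc/sysctl.conf.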

rick

> My big question is why is Linux so much less aggressive than FreeBSD
> in this guy's tests? Is the Linux implementation "skipping" something
> in its processing? Are we doing something that is "optional", but
> for completeness we've implemented it, while they've chosen to leave
> it out?
> 
> There has to be something to explain such dramatic differences … :(
> 
> 
> >
> > Joked that he was replacing everything except for the case....
> >
> > ----- Original Message -----
> >>
> >> Am 24.04.2013 um 23:29 schrieb "Lawrence K. Chen, P.Eng."
> >> <lkchen at ksu.edu>:
> >>
> >>> Hmmm, I guess all our Gen8's have been for the new vCloud project.
> >>> But, a few months ago boss had gone to putting LSI SAS 2008 and
> >>> Intel EXPI9402PT cards into our other Proliants (DL380 G7's and
> >>> DL180 G6's). Currently the only in production FreeBSD server
> >>> (9.1) is on a DL180 G6. I was working on a DL380 G7, but I lost
> >>> that hardware to a different project.
> >>>
> >>
> >>
> >> G6 and G7 is no problem. At least DL360 + DL380, which we use
> >> (almost) exclusively.
> >> The onboard-NICs are supposed to be swappable for something else -
> >> but there aren't any useful modules yet (a 10G module is
> >> available).
> >>
> >>
> >>
> >>
> 
> _______________________________________________
> freebsd-fs at freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe at freebsd.org"