Performance difference between UFS and ZFS with NFS

aurfalien aurfalien at gmail.com
Tue Nov 19 18:12:47 UTC 2013


On Nov 19, 2013, at 5:12 AM, Rick Macklem wrote:

> Eric Browning wrote:
>> Some background:
>> - Two identical servers, dual AMD Opteron 6220s, 16 cores total @ 3 GHz
>> - 64 GB RAM in each server
>> - Four Intel DC S3700 800 GB SSDs for primary storage in each server
>> - FreeBSD 9-STABLE as of 902503
>> - ZFS v28, later updated to feature flags (v29?)
>> - LSI 9200-8i controller
>> - Intel I350-T4 NIC (only one port currently in use); using all four
>>   ports in LACP overtaxed the server's NFS queue, from what we found,
>>   making the server basically unusable.
>> 
>> There is definitely something going on between NFS and ZFS when used
>> as a file server (random workload) for Mac home directories.  The two
>> do not play well together at all; they pretty much drag down these
>> beefy servers and cause 20-30 second delays just to list a directory
>> on Mac OS X 10.7 and 10.8 clients, although throughput seems fast
>> when copying files.
>> 
>> This server's NFS was sitting north of 700% CPU (7+ cores) all day
>> long when using ZFS v28 raidz1. I have also tried a stripe,
>> compression on/off, sync enabled/disabled, and no dedup, with 56 GB
>> of RAM dedicated to the ARC. I've tried 100% stock settings in
>> loader.conf as well as some recommended tuning from various sources
>> on the FreeBSD lists and other sites, including the FreeBSD Handbook.
>> 
>> This is my mountpoint creation:
>> zfs create -o mountpoint=/users -o sharenfs=on \
>>   -o casesensitivity=insensitive -o aclmode=passthrough \
>>   -o compression=lz4 -o atime=off -o aclinherit=passthrough tank/users
>> 
>> This last weekend I switched one of these servers over to a UFS RAID
>> 0 setup. NFS now only eats about 36% of one core during the initial
>> login phase of 150-ish users over about 10 minutes, sits at 1-3%
>> during normal usage, and directories all list instantly even when
>> drilling down 10 or so levels into a client's home files. The same
>> NFS config is still active on both server and clients.
>> 
>> Right now I'm going to have to abandon ZFS until it works with NFS. I
>> don't want to get into a finger-pointing game; I'd just like to help
>> get this fixed. I have one old i386 server I can try things out on if
>> that helps, and it's already on 9-STABLE and ZFS v28.
>> 
> Btw, in previous discussions with Eric on this, he provided nfsstat
> output that seemed to indicate most of his RPC load from the Macs
> was Access and Getattr RPCs.
> 
> I suspect the way ZFS handles VOP_ACCESSX() and VOP_GETATTR() is a
> significant part of this issue. I know nothing about ZFS, but I believe
> it does always have ACLs enabled and presumably needs to check the
> ACL for each VOP_ACCESSX().
> 
> Hopefully someone familiar with how ZFS handles VOP_ACCESSX() and
> VOP_GETATTR() can look at these?

Indeed.  However, couldn't one simply disable ACL handling via:

zfs set aclinherit=discard pool/dataset
zfs set aclmode=discard pool/dataset

Eric, would you mind setting these and seeing if it helps?
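
For what it's worth, here is a rough, untested sketch of how the state could be sanity-checked before and after the change (the dataset name tank/users is taken from Eric's zfs create line, and the file path below is just a placeholder):

zfs get aclmode,aclinherit tank/users
getfacl /users/someuser/somefile

If getfacl still shows ACEs beyond the trivial owner@/group@/everyone@ entries afterwards, keep in mind that the discard settings only affect chmods and newly created files; existing ACLs on existing files stay as they are.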

Mid to late this week I'll be running a rather large render-farm test from our Mac fleet against ZFS.

Will reply to this thread with the outcome when I'm done.  Should be interesting.
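
In case it helps, this is roughly how I intend to capture the server-side RPC mix during that run, to see whether Access and Getattr dominate the way Rick described (untested sketch; flags as in FreeBSD 9's nfsstat, where the new NFS server's counters sit under the -e view, and the /tmp file names are arbitrary):

nfsstat -e -s > /tmp/nfs.before    # snapshot of cumulative server-side counters
# ... run the render workload ...
nfsstat -e -s > /tmp/nfs.after
diff /tmp/nfs.before /tmp/nfs.after    # then compare the Getattr/Access counts by hand

Watching top -SH alongside should show how much of that load lands on the nfsd threads, which is presumably where Eric's 700% figure came from.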

- aurf
 
> 
> rick
> 
>> Thanks,
>> --
>> Eric Browning
>> Systems Administrator
>> 801-984-7623
>> 
>> Skaggs Catholic Center
>> Juan Diego Catholic High School
>> Saint John the Baptist Middle
>> Saint John the Baptist Elementary
> _______________________________________________
> freebsd-fs at freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe at freebsd.org"


