Status of NFS4.1 FS_RECLAIM in FreeBSD 10.1?

Rick Macklem rmacklem at uoguelph.ca
Thu May 21 12:20:47 UTC 2015


Mahmoud Al-Qudsi wrote:
> On May 20, 2015, at 8:57 PM, Rick Macklem <rmacklem at uoguelph.ca>
> wrote:
> > Only the global RECLAIM_COMPLETE is implemented. I'll be honest
> > that
> > I don't even really understand what the "single fs
> > reclaim_complete"
> > semantics are and, as such, it isn't implemented.
> 
> Thanks for verifying that.
> 
> > I think it is meant to be used when a file system is migrated from
> > one server to another (transferring the locks to the new server) or
> > something like that.
> > Migration/replication isn't supported. Maybe someday if I figure
> > out
> > what the RFC expects the server to do for this case.
> 
> I wasn’t clear on whether this was lock reclaiming or block
> reclaiming. Thanks.
> 
> >> I can mount and use NFSv3 shares just fine with ESXi from this
> >> same
> >> server, and
> >> can mount the same shares as NFSv4 from other clients (e.g. OS X)
> >> as
> >> well.
> >> 
> > This is NFSv4.1 specific, so NFSv4.0 should work, I think. Or just
> > use NFSv3.
> > 
> > rick
> 
> For some reason, ESXi doesn’t do NFS v4.0, only v3 or v4.1.
> 
> I am using NFS v3 for now, but unless I’m mistaken, since FreeBSD
> supports neither “nohide” nor “crossmnt”, there is no way for a
> single export (and mount) to cross ZFS filesystem boundaries.
> 
> I am using ZFS snapshots to manage virtual machine images, each
> machine
> has its own ZFS filesystem so I can snapshot and rollback
> individually. But
> this means that under NFSv3 (so far as I can tell), each “folder”
> (ZFS fs)
> must be mounted separately on the ESXi host. I can get around
> exporting
> them each individually with the -alldirs parameter, but client-side,
> there does
> not seem to be a way of traversing ZFS filesystem mounts without
> explicitly
> mounting each and every one - a maintenance nightmare if there ever
> was one.
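A minimal sketch of the per-VM dataset layout and per-dataset exports
described above; the pool name, dataset names, and client address are
placeholders, not taken from this thread:

```shell
# Placeholder names throughout -- adjust pool, datasets, and client address.
# One ZFS dataset per VM, so each can be snapshotted and rolled back
# independently of the others:
zfs create tank/vm/guest01
zfs snapshot tank/vm/guest01@clean
zfs rollback tank/vm/guest01@clean

# /etc/exports needs one line per dataset, because each ZFS dataset is a
# separate file system to mountd(8); -alldirs lets the client mount any
# subdirectory of the export:
#   /tank/vm/guest01 -alldirs -maproot=root 10.0.0.10
#   /tank/vm/guest02 -alldirs -maproot=root 10.0.0.10
```

Setting the `sharenfs` property on the parent dataset exports each child
dataset as well, but each one is still a separate mount on the client.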
> 
> The only thing I can think of would be unions for the top-level
> directory, but I’m very, very leery of the nullfs/unionfs modules,
> as they’ve been a source of system instability for us in the past
> (deadlocks, undetected lock inversions, etc.). That, and I’d really
> rather have a maintenance nightmare than a hack.
> 
> Would you have any other suggestions?
> 
Well, if you are just doing an NFSv4.1 mount, you could capture
packets during the failed mount attempt with tcpdump and then
email me the raw packet capture; I can take a look at it.
(tcpdump doesn't decode NFS packets well, but wireshark will accept
 a raw packet capture.) Something like:
# tcpdump -s 0 -w <file>.pcap host <nfs-client>
should work.
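As a hypothetical variation (assuming tshark is installed; the interface
name and hostname below are placeholders), the same capture can be checked
locally before mailing it, since wireshark's NFS dissector will show the
operations in each compound:

```shell
# Capture during the failed mount attempt, then list the NFS operations
# that the wireshark/tshark dissector decodes from the raw capture:
tcpdump -s 0 -i em0 -w mount-fail.pcap host esxi-host
tshark -r mount-fail.pcap -Y nfs
```

An EXCHANGE_ID followed by a RECLAIM_COMPLETE should be visible near the
start of the mount if the client gets that far.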

When I read RFC-5661 around page #567, it seems clear that the
client should use RECLAIM_COMPLETE with the fs arg false after
acquiring a new clientid, which is what a fresh mount would normally do.
(If the packet capture shows an EXCHANGE_ID followed by a RECLAIM_COMPLETE
 with the fs arg true, I think ESXi is broken, but I can send you a patch
 that will just ignore the "true" so that it works.)
I think the "true" case is only used when a file system has been "moved"
by a server cluster, indicated to the client via an NFS4ERR_MOVED error
when it is accessed at the old server, but the wording in RFC-5661 isn't
very clear.

rick

> Thanks,
> 
> Mahmoud
> 
> _______________________________________________
> freebsd-stable at freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to
> "freebsd-stable-unsubscribe at freebsd.org"
