[Bug 260011] Unresponsive NFS mount on AWS EFS

From: <bugzilla-noreply_at_freebsd.org>
Date: Thu, 25 Nov 2021 16:41:04 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=260011

--- Comment #5 from Rick Macklem <rmacklem@FreeBSD.org> ---
Mount options are "negotiated" with the NFS server and are
also constrained by other tunables in the system.
For example, to increase rsize/wsize to 128K, you must
set vfs.maxbcachebuf=131072 in /boot/loader.conf.
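For example, a 128K mount request might look like this
(untested sketch; the EFS DNS name and mount point are
placeholders, and EFS wants NFSv4.1):

# mount -t nfs -o nfsv4,minorversion=1,rsize=131072,wsize=131072 \
    fs-XXXXXXXX.efs.us-east-1.amazonaws.com:/ /mnt/efs

Without the loader.conf tunable above, the client will quietly
negotiate the sizes back down to the current vfs.maxbcachebuf.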

To increase rsize/wsize to 1Mbyte, you must
set vfs.maxbcachebuf=1048576 in /boot/loader.conf
and set kern.ipc.maxsockbuf=4737024 (or larger)
in /etc/sysctl.conf (example below).
--> This assumes you have at least 4Gbytes of RAM on the
    system.  The further you move away from defaults,
    the less widely tested your configuration is.
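Putting the 1Mbyte case together (again a sketch with
placeholder names; the loader.conf setting only takes effect
after a reboot):

/boot/loader.conf:
    vfs.maxbcachebuf=1048576

/etc/sysctl.conf:
    kern.ipc.maxsockbuf=4737024

# sysctl kern.ipc.maxsockbuf=4737024   (to apply without a reboot)
# mount -t nfs -o nfsv4,minorversion=1,rsize=1048576,wsize=1048576 \
    fs-XXXXXXXX.efs.us-east-1.amazonaws.com:/ /mnt/efs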
Also, in the case of rsize/wsize, the system will use the
largest size that is "negotiable" given the other tuning.
The use of the rsize/wsize options is mainly to reduce
the size below the maximum negotiable.
--> From my limited testing, sizes above 256K do not
    perform better, but what works best for EFS?
    I have no idea.
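To see what was actually negotiated, "nfsstat -m" prints the
options in effect for each NFS mount, including the rsize/wsize
the client ended up using:

# nfsstat -m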

If a server restarts, clients should recover.  If a client
is hung as you describe, whether due to an unresponsive server,
a broken server (one that generates bogus replies, or no replies,
to certain RPCs), or a client bug, then:
# umount -N <mnt_path>
is your best bet at getting rid of the mount.
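For example, if the wedged mount is at /mnt/efs (a hypothetical
path), as root:

# umount -N /mnt/efs

Unlike a plain forced unmount, -N is meant to get rid of an NFS
mount even when the server never responds.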
