RFC: NFS server handling of negative f_bavail?
Rick Macklem
rmacklem at uoguelph.ca
Mon May 2 20:58:16 UTC 2011
I just ran a little test: I ran an FFS volume on a
FreeBSD-current server out of space, so that it showed a
negative Avail count, and then mounted it on a Solaris 10
client. Here is the df output for the server and the client.
FreeBSD server (nfsv4-newlap):
Filesystem  1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad4s3a   2026030  671492  1192456    36%    /
devfs               1       1        0   100%    /dev
/dev/ad4s3e   4697030 4544054  -222786   105%    /sub1
/dev/ad4s3d   5077038  641462  4029414    14%    /usr
and for the Solaris 10 client:
Filesystem                      kbytes     used              avail capacity  Mounted on
/dev/dsk/c0d0s0                3870110  2790938            1040471    73%    /
/devices                             0        0                  0     0%    /devices
ctfs                                 0        0                  0     0%    /system/contract
proc                                 0        0                  0     0%    /proc
mnttab                               0        0                  0     0%    /etc/mnttab
swap                            975736      624             975112     1%    /etc/svc/volatile
objfs                                0        0                  0     0%    /system/object
/usr/lib/libc/libc_hwcap1.so.1 3870110  2790938            1040471    73%    /lib/libc.so.1
fd                                   0        0                  0     0%    /dev/fd
swap                            975112        0             975112     0%    /tmp
swap                            975140       28             975112     1%    /var/run
/dev/dsk/c0d0s7                5608190  4118091            1434018    75%    /export/home
nfsv4-newlap:/sub1             4697030  4544054  18014398509259198     1%    /mnt
You can see that the Solaris 10 client thinks there is an
enormous amount of space available: the server's negative
f_bavail goes out on the wire as an unsigned quantity, so the
client sees a huge positive value instead of "out of space".
I think sending the field as 0 over the wire would provide
better interoperability.
rick