(in)appropriate uses for MAXBSIZE

Andriy Gapon avg at freebsd.org
Fri Apr 9 14:40:38 UTC 2010


on 09/04/2010 16:53 Rick Macklem said the following:
> 
> 
> On Fri, 9 Apr 2010, Andriy Gapon wrote:
> 
>>
>> Nowadays several questions could be asked about MAXBSIZE.
>> - Will we have to consider increasing MAXBSIZE?  Given ever-increasing
>>   media sizes, typical filesystem sizes, typical file sizes (all that
>>   multimedia) and even media sector sizes.
> 
> I would certainly like to see a larger MAXBSIZE for NFS. Solaris 10
> currently uses 128K as a default I/O size and allows up to 1MB. Using
> larger I/O sizes for NFS is a simpler way to increase the bulk data
> transfer rate than adding more buffers and more aggressive
> read-ahead/write-behind.
> 
> I had assumed that MAXBSIZE is the largest block size handled by the
> buffer cache?  I have no idea what effect just increasing it has, but
> had planned on experimenting with it someday.
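
For reference, all the relevant constants live in sys/param.h; the stock
values are roughly as follows (quoted from memory, so double-check your tree):

/* sys/param.h (approximate stock values) */
#define MAXBSIZE        65536           /* largest buffer cache block size */
#define BKVASIZE        16384           /* nominal KVA reserved per buffer */
#define DFLTPHYS        (64 * 1024)     /* default max raw I/O transfer size */
#define MAXPHYS         (128 * 1024)    /* max raw I/O transfer size */

MAXBSIZE is indeed the largest block getblk() will accept (it panics on
anything bigger), while MAXPHYS caps the size of a single physical transfer.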

I have lightly tested this under qemu.
I used my avgfs :), modified to issue bread() calls of 4*MAXBSIZE each.
I removed the size > MAXBSIZE check in getblk() (see the parallel thread
"panic: getblk: size(%d) > MAXBSIZE(%d)").
And I bumped MAXPHYS to 1MB.
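
The modified read path was shaped roughly like the sketch below (this is not
the actual avgfs code, just the general idea):

/*
 * Sketch only: one bread() per 4*MAXBSIZE chunk, which is exactly what
 * normally trips the "getblk: size(%d) > MAXBSIZE(%d)" panic.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bio.h>
#include <sys/buf.h>
#include <sys/ucred.h>
#include <sys/uio.h>
#include <sys/vnode.h>

#define AVGFS_BSIZE     (4 * MAXBSIZE)  /* 256KB with the stock 64KB MAXBSIZE */

static int
avgfs_read_chunk(struct vnode *vp, daddr_t lbn, struct uio *uio)
{
        struct buf *bp;
        int error;

        error = bread(vp, lbn, AVGFS_BSIZE, NOCRED, &bp);
        if (error != 0) {
                brelse(bp);
                return (error);
        }
        /* uiomove() stops at uio_resid, so a short final read is fine. */
        error = uiomove(bp->b_data, AVGFS_BSIZE, uio);
        brelse(bp);
        return (error);
}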

Some results.
I got no panics, the data was read correctly, and the system remained stable,
which is good.
But I observed the reading process (dd bs=1m on avgfs) spending a lot of time
sleeping on needsbuffer in getnewbuf().  The needsbuffer value was
VFS_BIO_NEED_ANY.
Apparently there was some shortage of free buffers.
Perhaps some limits/counts were incorrectly auto-tuned.
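
To illustrate the kind of mismatch I suspect (back-of-the-envelope only; the
real auto-tuning happens in bufinit() and is more involved): buffer space is
budgeted at roughly BKVASIZE per buffer, so every 4*MAXBSIZE block eats
sixteen buffers' worth of it:

/*
 * Back-of-the-envelope illustration; nbuf below is an assumed value,
 * not taken from a running kernel.
 */
#include <stdio.h>

#define BKVASIZE        16384           /* stock value from sys/param.h */
#define MAXBSIZE        65536           /* stock value from sys/param.h */

int
main(void)
{
        long nbuf = 4096;                       /* assumed for the example */
        long maxbufspace = nbuf * BKVASIZE;     /* roughly how bufinit() sizes it */
        long blksize = 4L * MAXBSIZE;           /* the bread() size in my test */

        printf("maxbufspace            = %ld KB\n", maxbufspace / 1024);
        printf("bufspace per big block = %ld x BKVASIZE\n", blksize / BKVASIZE);
        printf("big blocks that fit    = %ld (vs. %ld buffer headers)\n",
            maxbufspace / blksize, nbuf);
        return (0);
}

If something like that is what is going on, then bumping MAXBSIZE without
revisiting BKVASIZE and the bufspace tuning will always leave the cache short.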

Also, I later tried doubling the MAXBSIZE value and got exactly the same
results: no panics, no corruption.

But, of course, this doesn't mean that there are no problems, especially in
the hardware drivers.
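
The sort of thing I would expect to bite is, for example, a scatter/gather
table sized from MAXPHYS at compile time (a hypothetical pattern, not taken
from any particular driver):

#include <sys/param.h>
#include <machine/bus.h>

/* Grows silently when MAXPHYS is bumped; code that hardcodes the old
 * segment count elsewhere would overflow instead. */
#define XX_MAXSEGS      ((MAXPHYS / PAGE_SIZE) + 1)

struct xx_sglist {
        bus_dma_segment_t       segs[XX_MAXSEGS];
        int                     nsegs;
};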

-- 
Andriy Gapon

