maxphys and block sizes on slices

Chris chrcoluk at
Mon Feb 25 01:32:32 UTC 2008


I have a server that primarily handles large files; not massive
files, but files 15 MB+ in size, and very few smaller files.

So I decided to use the following options to newfs:

-f 4096 -b 32768
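For reference, the full command was along these lines (the device name here is just a placeholder, not my actual disk):

```shell
# 4 kB fragments (-f), 32 kB blocks (-b); device name is hypothetical
newfs -f 4096 -b 32768 /dev/da0s1d
```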

Eventually I realised this was a bad decision, especially when I
noticed vfs.bufdefragcnt growing.

In addition, I have noticed that all the servers using the default
settings do 128 kB per transfer and appear to use whatever MAXPHYS is
set to, whilst the ones with the custom newfs options are locked to
64 kB per transfer even if DFLTPHYS and MAXPHYS are increased.  I did
increase BKVASIZE to 32768 to stop bufdefragcnt growing, though.  My
lesson is learned: on new servers I set up I will keep the default
block sizes unless someone has experience of better settings.  For now
I want to make the best of the settings I have in place.
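In case it helps anyone reading, BKVASIZE is a compile-time kernel option rather than a sysctl, so the change I made was roughly this (kernel config file name is hypothetical):

```shell
# In the kernel config file, e.g. /usr/src/sys/i386/conf/MYKERNEL:
#   options BKVASIZE=32768
# then rebuild/install the kernel, reboot, and watch the counter
# to confirm it has stopped growing:
sysctl vfs.bufdefragcnt
```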

1 - Is the 64 kB per transfer not adjustable, and is it a penalty for
choosing the large block size?  It is nearly always pinned at 64 kB
with hundreds of transfers per second.

2 - Is there a way to adjust the block sizes without wiping the data?

3 - How big an impact does a growing vfs.bufdefragcnt have on
performance?  Since I fixed it I have noticed no difference.

4 - Is there anything generally recommended to set for a server that
handles large files, but not many of them?

5 - What are the recommended newfs values for large files, the
defaults?  And does the 1/8th rule have to apply for fragment size vs
block size?
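For comparison, my understanding is that the current newfs defaults are 16 kB blocks with 2 kB fragments, which keeps the usual 8:1 block-to-fragment ratio; spelled out explicitly it would look like this (device name hypothetical):

```shell
# Defaults made explicit: 16 kB block, 2 kB fragment, 8:1 ratio
newfs -b 16384 -f 2048 /dev/da0s1d
```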

6 - Finally, I have read that vfs.hirunningspace boosts write speeds
by buffering more, but that it can be detrimental to read speeds.  Is
this true?
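If it matters, I know the knob can be inspected and changed at runtime; the value below is only an illustration, not something I am recommending:

```shell
sysctl vfs.hirunningspace              # show the current value
sysctl vfs.hirunningspace=1048576      # hypothetical 1 MB setting
```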


More information about the freebsd-performance mailing list