svn commit: r201658 - head/sbin/geom/class/stripe

Alexander Motin mav at
Wed Jan 6 18:40:07 UTC 2010

Ivan Voras wrote:
> 2010/1/6 Alexander Motin <mav at>:
>> Author: mav
>> Date: Wed Jan  6 17:12:18 2010
>> New Revision: 201658
>> URL:
>> Log:
>>  Increase default block size from 4K to 64K. It was reduced 6 years ago,
>>  when trees were big and FAST mode was enabled by default.
>>  Such a small block size doesn't benefit linear I/O operations in FAST mode
>>  and significantly slows them down in ECONOMIC (default) mode. For
>>  single-stream random I/O such a small block doesn't give much benefit
>>  either, as access time there is usually bigger than transfer time. At the
>>  same time it requires all heads to seek together for every single request,
>>  reducing performance under parallel load.
> I think there was one more reason - though I'm not sure if it is still
> valid because of your current and future work - the MAXPHYS
> limitation. If MAXPHYS is 128k, with 64k stripes data would only be
> read from a maximum of 2 drives. With 4k stripes it would have been read
> from 128/4=32 drives, though I agree 4k is too low in any case
> nowadays. I usually choose 16k or 32k for my setups.
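
For illustration, the stripe-width arithmetic above can be sketched as follows (the function name and figures are mine, not GEOM code):

```python
import math

# Hypothetical sketch: how many member disks one sequential request
# touches, given the stripe size and the maximum request size (MAXPHYS).
def drives_touched(request_size, stripe_size, ndisks):
    spanned = math.ceil(request_size / stripe_size)  # stripe blocks spanned
    return min(spanned, ndisks)  # cannot touch more disks than exist

# MAXPHYS = 128K: a 64K stripe splits one request across at most 2 disks;
# a 4K stripe spreads it across up to 128/4 = 32 disks (capped by ndisks).
print(drives_touched(128 * 1024, 64 * 1024, 8))   # -> 2
print(drives_touched(128 * 1024, 4 * 1024, 8))    # -> 8 (capped)
print(drives_touched(128 * 1024, 4 * 1024, 32))   # -> 32
```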

While you are right about the MAXPHYS influence, and I hope we can raise
it before too long, IMHO it is the file system's business to manage deep
enough read-ahead/write-back to keep all drives busy, independently of
the MAXPHYS value. With a small MAXPHYS the FS should just generate more
requests in advance. Except for some RAID3/5/6 cases, where short writes
are ineffective, the MAXPHYS value should only affect processing overhead.
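
As a back-of-the-envelope sketch of that point (my own illustration, not GEOM or file system code): the number of MAXPHYS-sized sequential requests the FS must keep in flight to give every member disk work grows as the stripe gets larger relative to MAXPHYS.

```python
import math

# Hypothetical estimate: concurrent MAXPHYS-sized sequential requests
# needed so that every disk of an ndisks-wide stripe has work queued.
def requests_in_flight(ndisks, stripe_size, maxphys):
    disks_per_request = max(1, maxphys // stripe_size)
    return math.ceil(ndisks / disks_per_request)

# 8 disks, 64K stripes: each 128K request covers 2 disks -> 4 in flight.
print(requests_in_flight(8, 64 * 1024, 128 * 1024))  # -> 4
# Same array, 4K stripes: one 128K request already spans all 8 disks.
print(requests_in_flight(8, 4 * 1024, 128 * 1024))   # -> 1
```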

I've chosen 64K as the level where most modern HDDs/SSDs get close to
their maximum performance, where splitting large requests with the
present MAXPHYS is still possible, where splitting medium requests does
not reduce performance much under parallel load, and where the interrupt
rate is not too high. It is definitely a question of personal tuning,
but I think it is a reasonable default.

Alexander Motin

More information about the svn-src-all mailing list