What is gstripe? -- benchmarks

bashr bashr at comcast.net
Sat Feb 3 18:28:07 UTC 2007


Fluffles wrote:
> 
> Overall I suggest using a stripesize of 128KB or bigger. This way you can
> be sure that a single I/O request (maximum of 128KB on FreeBSD; MAXPHYS)
> will 'fit' into one stripe block and thus can be handled by one disk in
> the RAID array. If you use a 64KB stripesize and you read 65KB or 100KB,
> two physical disks must be used to handle the request; this will degrade
> performance.
> 
> Misalignment, often caused by using default partitioning, can also
> degrade performance. To counteract this, use manual disklabeling with
> the same offset (or multiple) as the stripesize, use Dangerously
> Dedicated mode, or simply select a stripesize of 256KB or even bigger.
> 

Thank you, that is very helpful.  Here is what happened:
A gmirror volume, gm0s2, consisted of two gstripe volumes, st0s1 and 
st1s1. The stripe st0s1 consisted of ad6s3 and ad8s3 while st1s1 
consisted of ad2s3 and ad4s3.  Disk ad2 is UDMA133.  The others are 
SATA150.  Both stripes were configured with a stripe size of 4k and an 
offset of 16.
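
For reference, a layout like this is built roughly as follows.  This is 
a sketch from memory rather than the exact commands I used:

   # stripe two disks each at 4k, then mirror a slice of each stripe
   gstripe label -s 4096 st0 /dev/ad6s3 /dev/ad8s3
   gstripe label -s 4096 st1 /dev/ad2s3 /dev/ad4s3
   gmirror label gm0 /dev/stripe/st0s1 /dev/stripe/st1s1

(st0s1 and st1s1 here are slices created on the stripe devices before 
mirroring.)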

I took st1s1 out of the gmirror and reconfigured it with an offset and 
stripesize of 131072.  Then I ran iozone for file and record sizes up to 
32M to compare st0s1 and st1s1.  The numbers below are the ratios of 
throughput with the 128k stripesize to throughput with the 4k stripesize.
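
The rebuild and test sequence was roughly the following; the iozone 
options and mount point are approximate:

   # drop st1s1 from the mirror and relabel the stripe at 128k
   gmirror remove gm0 /dev/stripe/st1s1
   gstripe stop st1
   gstripe label -s 131072 st1 /dev/ad2s3 /dev/ad4s3
   # (the 131072 offset was set when slicing the new stripe device)
   # iozone auto mode, file and record sizes capped at 32M
   iozone -a -g 32M -f /mnt/st1/testfile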

Writing 64k files with 4k records: 1.08
Writing 16M files with 4k records: 3.22
Writing 32M files with 16M records: 3.72

Reading 64k files with 4k records: 1.02
Reading 16M files with 4k records: 1.03
Reading 32M files with 16M records: 1.01

Then I reconfigured st0s1 with 128k offset and stripesize as well, and 
put both gstripe volumes in a gmirror.  The numbers below are the ratios 
of throughput for the mirrored gstripe volumes to throughput for a single 
unmirrored volume -- both with 128k offset and stripesize.
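
Re-creating the mirror was just the reverse, again approximately:

   gstripe stop st0
   gstripe label -s 131072 st0 /dev/ad6s3 /dev/ad8s3
   gmirror label gm0 /dev/stripe/st0s1 /dev/stripe/st1s1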

Writing 64k files with 4k records: 0.98
Writing 16M files with 4k records: 0.55
Writing 32M files with 16M records: 0.50

Reading 64k files with 4k records: 0.97
Reading 16M files with 4k records: 1.01
Reading 32M files with 16M records: 1.07


