gstripe performance scaling with many disks
Ivan Voras
ivoras at fer.hr
Thu Dec 28 10:25:22 PST 2006
Vasil Dimov wrote:
> Can someone explain this?
> The tendency is for performance to drop when increasing the number of disks
> in a stripe, but there are some local peaks/extrema when using 8, 11
> and 16 disks.
I'll take a shot at this: since maximum kernel reads are still limited
to 128 KB (MAXPHYS), adding more drives makes each individual request
shorter. I.e. with one drive, it gets 128 KB requests; with two, each
gets 64 KB; with 16, each gets 8 KB. So per-request & kernel latency
becomes visible.
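(A quick sketch of the arithmetic above; the 128 KB ceiling is the only
number taken from the post, the rest is just division:)

```python
# Illustrative only: how a fixed 128 KB kernel I/O ceiling gets divided
# across the members of a stripe as the disk count grows.
MAX_IO = 128 * 1024  # maximum size of a single kernel read, in bytes

for ndisks in (1, 2, 4, 8, 16):
    per_disk = MAX_IO // ndisks  # bytes each drive sees per request
    print(f"{ndisks:2d} disk(s) -> {per_disk // 1024:3d} KB per drive")
```

With 16 drives each request shrinks to 8 KB, so the fixed per-request
overhead starts to dominate the transfer time.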
AFAIK there's an unofficial (still?) GEOM_CACHE class which tries to get
around this by requesting & caching 128 KB from each drive. Search the
lists; it's mentioned somewhere.
>
> Yes, I have read
> http://lists.freebsd.org/pipermail/freebsd-geom/2006-November/001705.html
>
> kern.geom.stripe.fast is set to 1.
While you're playing with this, you could set vfs.read_max to 32 or
higher and see if it helps.
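(For reference, a runtime sysctl change looks like this; the value 32 is
the one suggested above, not a tested recommendation:)

```shell
# Raise the read-ahead limit and check the new value.
sysctl vfs.read_max=32
sysctl vfs.read_max
```

To make it persistent, the same setting can go in /etc/sysctl.conf.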
More information about the freebsd-geom mailing list