GEOM profiling - how to?
Ivan Voras
ivoras at freebsd.org
Sat Nov 27 16:04:58 UTC 2010
On 11/26/10 23:29, Lev Serebryakov wrote:
> Hello, Freebsd-geom.
>
> I'm doing some simple benchmarking of geom_raid5 in preparation for
> putting it into ports, and I'm noticing some strange results.
>
> It is an array of 5 disks, stripe size 128k. All disks are SATA2 disks on
> ICH10R, AHCI driver (8.1-STABLE).
>
> Reading from the device itself (dd with bs=512K) gives exactly the speed
> of one HDD. gstat shows 100% load on the RAID geom and 1/5 of this speed
> (and 18-22% load) on each of the disk GEOMs.
This "100% load of RAID geom" is an approximation of disk load, not CPU
load. I don't know how graid5 module works but if it's like most GEOM
modules, you will probably need to use a very small stripe size,
basically 128 / number_of_disks so that one request can span multiple
drives. In your case, try 32 KiB stripe size or 16 KiB stripe size.
> Reading a big file from the FS (dd with bs=512k, FS block size 32K,
> vfs.read_max=32) gives about twice the speed, and every disk GEOM is
> loaded 38-42%. CPU time is about 8% system, 0.5% interrupt, so the CPU
> is not a bottleneck.
With a big read-ahead (by the way, try larger read_max values, like 128) you
get parallelism at the drive hardware level, not in GEOM; that is why it works.
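For example (the file path is just a placeholder):

  # raise the cluster read-ahead window (the value is in blocks), then retest
  sysctl vfs.read_max=128
  dd if=/data/bigfile of=/dev/null bs=512k

A larger read-ahead means the filesystem issues bigger sequential requests,
so more of the member disks get to work at the same time.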
> How could I profile I/O and GEOM?
There is no single answer to this question; basically, you can use gstat to
observe the performance of every GEOM device individually, and use "top" and
similar tools to observe CPU usage. If you turn on GEOM logging, your logs
will be swamped by a huge number of messages, which you could, in theory,
write a tool to analyze.
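For example, something like this in two terminals while the dd is running
(the gstat filter regex is just an example and needs a gstat recent enough
to support -f):

  # per-provider I/O statistics, refreshed twice a second, limited to the
  # array and its member disks
  gstat -I 500ms -f 'raid5|ada'

  # CPU usage including kernel threads (g_up/g_down and the graid5 worker)
  top -SH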