geom_mirror performance issues

Pawel Jakub Dawidek pjd at FreeBSD.org
Sun Nov 28 12:44:59 PST 2004


On Sun, Nov 28, 2004 at 01:14:08PM +0100, Tomas Zvala wrote:
+> Hello,
+> 	I've been playing with geom_mirror for a while now and a few
+> issues came to mind.
+> 	a) I have two identical drives (Seagate 120GB SATA 8MB Cache
+> 7.2krpm) that are able to sequentially read at 58MB/s at the same time
+> (about 115MB/s throughput). But when I have them in geom_mirror I get
+> 30MB/s at best. That's about 60MB/s for the mirror (about half the
+> potential). The throughput is almost the same for both 'split' and
+> 'load' balancing algorithms, although with the load algorithm it seems
+> that all the reading is being done from just one drive.

Think about how a mirror works. When you do a sequential read you get
something like this:

	# dd if=/dev/mirror/foo of=/dev/null bs=128k

	disk0		disk1
	(offset)	(offset)
	0		128k
	256k		384k

Now, try writing a program which reads every second block from the disk
(a sketch follows below). You will get no more than your 30MB/s. This is
not a stripe: the time spent moving the head from offset 128k (after
reading the first 128kB) to 256k costs about the same as reading that
data.
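
Here is a minimal sketch of such a program (my own illustration, not a
tested tool; the device path is whatever raw disk you want to measure):

/*
 * skipread.c: read every second 128kB block from a raw device,
 * mimicking what a single mirror component sees during a sequential
 * read of the whole mirror.  Build: cc -o skipread skipread.c
 */
#include <err.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define	BLKSIZE	(128 * 1024)

int
main(int argc, char *argv[])
{
	char *buf;
	off_t offset;
	ssize_t done;
	int fd;

	if (argc != 2)
		errx(1, "usage: skipread /dev/<disk>");
	if ((fd = open(argv[1], O_RDONLY)) == -1)
		err(1, "open(%s)", argv[1]);
	if ((buf = malloc(BLKSIZE)) == NULL)
		err(1, "malloc");
	/* Read the blocks at 0, 256k, 512k, ... - skip every second one. */
	for (offset = 0; ; offset += 2 * BLKSIZE) {
		done = pread(fd, buf, BLKSIZE, offset);
		if (done <= 0)
			break;
	}
	free(buf);
	close(fd);
	return (0);
}

Time it against a plain dd of the same raw disk; the skipping version
should land right around the ~30MB/s you measured.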

You should try /usr/src/tools/tools/raidtest/, which does random I/Os.

+> 	b) Pretty often I can see in gstat that both drives are doing the
+> same things (the same number of transactions and the same throughput),
+> but one of them has a significantly higher load (i.e. one 50% and the
+> other one 95%). How is disk load calculated and why does this happen?

Are you using the round-robin algorithm? I can't reproduce it here; I see
~50% busy on both components.

+> 	c) When I use the 'split' load balancing algorithm, 128kB requests
+> are split into two 64kB requests, making twice as many transactions on
+> the disks. Is it possible to lure fbsd into allowing 256kB requests that
+> will get split into two 128kB requests?

You can try changing MAXPHYS in param.h and recompiling your kernel, but
I have no idea whether this will "just work".
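
For reference, the stock definitions live in sys/sys/param.h and look
like this in the 5.x-era sources (verify against your own tree):

/* sys/sys/param.h -- stock values; check your own tree before editing */
#define	DFLTPHYS	(64 * 1024)	/* default max raw I/O transfer size */
#define	MAXPHYS		(128 * 1024)	/* max raw I/O transfer size */

Bumping MAXPHYS to (256 * 1024) and rebuilding the kernel is the change
in question; split would then issue two 128kB reads per request, but as
said, no promises that everything above and below GEOM copes with it.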

+> 	d) When I use the round-robin algorithm the performance halves (I
+> get about 20MB/s raw throughput). Why is this? I would expect the
+> round-robin algorithm to be the most effective one for reading, as
+> every drive gets exactly half the load.

Repeat your tests with random I/Os.
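
Back-of-the-envelope, assuming that skipping over a 128kB block costs
about as much head time as reading one (an assumption, not a
measurement):

	per-disk rate ~= 58MB/s * 128k / (128k + 128k) ~= 29MB/s

which matches the ~30MB/s per component you saw in (a). Sequential reads
simply cannot show round-robin's benefit; random I/Os can.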

+> 	e) My last question again goes with the 'load' balancing. How
+> often is the switch between drives done? When I set my load balancing
+> to 'load' I get 100% load on one drive and 0% or at most 5% on the
+> other one. Is this intentional? Seems like a bug to me.

Again, try with random reading/writing.
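
When you rerun the comparison, switching the balance algorithm on a live
mirror is a one-liner (assuming your mirror is named foo, as in the dd
example above):

	# gmirror configure -b load foo
	# gmirror configure -b round-robin foo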

-- 
Pawel Jakub Dawidek                       http://www.FreeBSD.org
pjd at FreeBSD.org                           http://garage.freebsd.pl
FreeBSD committer                         Am I Evil? Yes, I Am!