gmirror

Dominic Marks dom at goodforbusiness.co.uk
Sat May 14 03:29:13 PDT 2005


On Saturday 14 May 2005 10:22, vohand at gmail.com wrote:
> Hardware: SATA RAID adapter with SiliconImage 3114 chip. 2 SATA HDD.
> I did gmirror.

<snip>

> 	a) Is this a gmirror feature?
> 	b) Is this a hardware feature (SiliconImage 3114 chip)?

I don't think it is related to your hardware. I have a Dell PowerEdge SC1425
which is exhibiting the same behaviour from gmirror, and my disc controller is
an onboard Intel ICH5.

System: FreeBSD 5.4-STABLE @ Sun Apr 10 14:07:46 UTC 2005

atapci1: <Intel ICH5 SATA150 controller> port 0xccc0-0xcccf,0xccd8-0xccdb,
 0xcce0-0xcce7,0xccf0-0xccf3,0xccf8-0xccff irq 18 at device 31.2 on pci0
ata2: channel #0 on atapci1
ata3: channel #1 on atapci1

mail# dmesg | grep ad4
ad4: 76293MB <ST380013AS/8.12> [155009/16/63] at ata2-master SATA150

mail# diskinfo -t ad4
<snip>
Seek times:
        Full stroke:      250 iter in   5.606627 sec =   22.427 msec
        Half stroke:      250 iter in   4.382610 sec =   17.530 msec
        Quarter stroke:   500 iter in   6.969860 sec =   13.940 msec
        Short forward:    400 iter in   1.940076 sec =    4.850 msec
        Short backward:   400 iter in   2.349238 sec =    5.873 msec
        Seq outer:       2048 iter in   0.228509 sec =    0.112 msec
        Seq inner:       2048 iter in   0.237599 sec =    0.116 msec
Transfer rates:
        outside:       102400 kbytes in   1.755089 sec =    58345 kbytes/sec
        middle:        102400 kbytes in   2.106003 sec =    48623 kbytes/sec
        inside:        102400 kbytes in   3.496732 sec =    29284 kbytes/sec

Pretty reasonable results.

mail# dmesg | grep ad6
ad6: 76319MB <ST380817AS/3.42> [155061/16/63] at ata3-master SATA150

The second drive achieves slightly better seek times but slightly lower
throughput on the inner tracks. Still, the variation between ad4 and ad6 is
very small, as I would expect for two virtually identical drives.

mail# diskinfo -t ad6
<snip>
Seek times:
        Full stroke:      250 iter in   4.936300 sec =   19.745 msec
        Half stroke:      250 iter in   3.675749 sec =   14.703 msec
        Quarter stroke:   500 iter in   5.970199 sec =   11.940 msec
        Short forward:    400 iter in   1.937453 sec =    4.844 msec
        Short backward:   400 iter in   2.347955 sec =    5.870 msec
        Seq outer:       2048 iter in   0.211831 sec =    0.103 msec
        Seq inner:       2048 iter in   0.218748 sec =    0.107 msec
Transfer rates:
        outside:       102400 kbytes in   1.754634 sec =    58360 kbytes/sec
        middle:        102400 kbytes in   2.107054 sec =    48599 kbytes/sec
        inside:        102400 kbytes in   3.522545 sec =    29070 kbytes/sec

Now the same test against the gmirror:

mail# diskinfo -t /dev/mirror/gmirror0
<snip>
Seek times:
        Full stroke:      250 iter in   1.347611 sec =    5.390 msec
        Half stroke:      250 iter in   1.335664 sec =    5.343 msec
        Quarter stroke:   500 iter in   2.653382 sec =    5.307 msec
        Short forward:    400 iter in   2.254421 sec =    5.636 msec
        Short backward:   400 iter in   2.057330 sec =    5.143 msec
        Seq outer:       2048 iter in   0.265052 sec =    0.129 msec
        Seq inner:       2048 iter in   0.274519 sec =    0.134 msec
Transfer rates:
        outside:       102400 kbytes in   2.774400 sec =    36909 kbytes/sec
        middle:        102400 kbytes in   3.138420 sec =    32628 kbytes/sec
        inside:        102400 kbytes in   4.498140 sec =    22765 kbytes/sec

The seek times are way down, which is great, and makes sense given the
round-robin strategy on the mirror, but my peak transfer rate has dropped
by more than a third too (58345 -> 36909 kbytes/sec on the outer tracks).

I don't mind this too much as in my application low seek times are worth
more than high transfer rates, but it is still puzzling to me to see such a
remarkable drop in throughput.
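For what it's worth, gmirror is not locked to round-robin: gmirror(8)
documents several balance algorithms (round-robin, load, prefer, split), and
the active one can be changed on a live mirror with the configure subcommand.
A sketch of what I may try next, assuming the mirror is named gmirror0 and
that the "split" algorithm is available in this gmirror version; the slice
size of 2048 bytes is just an illustrative value, not a recommendation:

# Show the current balance algorithm (look for the "Balance:" field).
gmirror list gmirror0

# Switch to "split": sequential reads larger than the slice size are
# divided across both members, which may help sequential throughput.
gmirror configure -b split -s 2048 gmirror0

# Alternatively, "load" directs each request to the least-busy member.
gmirror configure -b load gmirror0

# Re-run the benchmark against the mirror to compare.
diskinfo -t /dev/mirror/gmirror0

These take effect immediately and need no rebuild, so it is cheap to compare
diskinfo numbers across algorithms on the same box.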

Thanks very much for any insight,
-- 
Dominic
GoodforBusiness.co.uk
I.T. Services for SMEs in the UK.
