3-4MB/s vinum performance with two Promise PDC20268 UDMA100 controllers

Bjorn Eikeland bjorn at eikeland.info
Thu Jan 15 15:43:28 PST 2004


On Thu, 15 Jan 2004 09:43:01 -0200, jason <jason at ec.rr.com> wrote:

> Bjorn Eikeland wrote:
>
>> Hi
>>
>> I had four 160G IDE drives in raid5 on a single controller and it
>> worked just fine; I never benchmarked it since it was faster than the
>> network anyway. But after adding a second controller card and two more
>> drives, the new array has terrible write performance.
>>
>> I've tried various stripe and block sizes in desperation, but that didn't
>> help. Then I tried assigning the same irq to both controller cards in
>> case it was interrupts causing the slowdown, so I set both pci slots
>> to use irq 3 in the bios (freebsd wants irq 3 for a non-existent sio1
>> port - so I figure that one will be 'free'?), but despite my setting in
>> the bios the cards still show up in dmesg with irq 21 and 22.
>>
>> So I looked through the handbook and tried setting the irq in
>> /boot/device.hints, both as hint.atapci.x.irq="3" and
>> hint.ata.x.irq="3", but this didn't work either.
>>
>> The problem is the same in freebsd 5.1 and 5.2 (output below is from
>> 5.2):
>>
>> home# dmesg | grep atapci
>> atapci0: <Promise PDC20268 UDMA100 controller> port 
>> 0xb000-0xb00f,0xb400-0xb403,0xb800-0xb807,0xd000-0xd003,0xd400-0xd407 
>> mem 0xf9000000-0xf9003fff irq 21 at device 9.0 on pci1
>> atapci0: [MPSAFE]
>> ata2: at 0xd400 on atapci0
>> ata3: at 0xb800 on atapci0
>> atapci1: <Promise PDC20268 UDMA100 controller> port 
>> 0x9400-0x940f,0x9800-0x9803,0xa000-0xa007,0xa400-0xa403,0xa800-0xa807 
>> mem 0xf8800000-0xf8803fff irq 22 at device 10.0 on pci1
>> atapci1: [MPSAFE]
>> ata4: at 0xa800 on atapci1
>> ata5: at 0xa000 on atapci1
>> atapci2: <Intel ICH2 UDMA100 controller> port 0x8800-0x880f at device 
>> 31.1 on pci0
>> ata0: at 0x1f0 irq 14 on atapci2
>> ata1: at 0x170 irq 15 on atapci2
>>
>> Any thoughts anyone?
>>
>> Bjorn
>>
> Don't assign the same irq to 2 devices, that's a conflict!  You may not
> have used win95, but you don't want to do that.  I know raid 5 is slower
> than raid 1, but I don't remember any numbers.  Also, the more complex
> you make the system the slower it will go, hence raid 5 being slower than
> raid 1.  Also, pci is a shared bus, meaning one device talks at a time, so
> maybe, just maybe, if your chipset has a pci bridge (because you have like
> 8 slots, or the maker was real kind), you could try card 1 in, say, slot 2,
> and card 2 in slot 6?  You may be able to get simultaneous writes and
> reads that way.  The best option is PCI-X, or a controller that supports 8
> drives on its own.  Also try adjusting the pci latency; do a google on it,
> I hear 95-128 clocks is good.  I just noticed the built-in ide is on pci0
> while the rest are on pci1, so there's a bridge you can use.  If you can
> get it set up, have 2 drives connected to the onboard ide and the rest to
> your cards.
> Jason

Thank you for your thoughts Jason! (Your penny is in the mail ;)

About the irq thing: I think I read (while reading up on bridges) that
PCI interrupts are level triggered (as opposed to edge triggered), and thus
two NICs sharing an interrupt and asserting it at the same time would only
cause one context switch - so I figured it was worth a try.
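
For the record, the hint lines I tried in /boot/device.hints looked roughly
like this (the unit numbers here are just guesses at which units the Promise
cards come up as, and I'm not even sure irq hints are honoured for PCI
devices at all):

  # what I tried - didn't work
  hint.atapci.0.irq="3"
  hint.atapci.1.irq="3"
  # and, alternatively, per ata channel:
  hint.ata.2.irq="3"
  hint.ata.3.irq="3"
  hint.ata.4.irq="3"
  hint.ata.5.irq="3"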

As for the pci bus etc., all three pci slots are on pci1 - and the secondary
onboard ide channel gives me read and write errors (the same type as with a
bad udma100 cable - but I'm sure it's the controller, as the cable and drives
work fine elsewhere). I've found some info on pci latency but will try it
tomorrow, as the box is "headless" - I just made some reference measurements
tonight.
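
The rough plan is to check the latency timer with pciconf before fiddling
with the bios setting - something like this (the selector here is a guess;
I'd confirm it against pciconf -l first):

  pciconf -lv               # list devices, find the selectors for the Promise cards
  pciconf -r pci1:9:0 0x0c  # read the dword at 0x0c; the latency timer is the byte at offset 0x0d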

But I actually remembered that the previous setup wasn't raid5 - the first
four drives were striped, since it was a temporary arrangement while
waiting for the 2nd controller. However, I do have a linux machine at home
with 4 of the same drives (all on the onboard UDMA100 controller) running
raid5, and it does perform better (I can't do any measurements now, but it
does accept data at about 60Mbps over an smb share, and I think the client
maxed out at that).

I've done some measurements on the drives with different setups - the
results were quite long, so I've posted them on a web page instead:
http://www.eikeland.info/bjorn/archive/040117vinumperf1.txt (The test was dd
count=10000 bs=65536 if=/dev/zero of=/dev/vinum/test.)
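
In other words, the write test was the dd line above; the read direction was
run the same way, roughly like this (the read line is my reconstruction from
memory):

  # write test
  dd if=/dev/zero of=/dev/vinum/test bs=65536 count=10000
  # read test, same amount of data in the other direction
  dd if=/dev/vinum/test of=/dev/null bs=65536 count=10000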

Briefly summarized: any raid5 (with 3, 4 or 6 drives) writes at ~4M/s and
reads at 27M, 29M and 32M/s respectively. A single drive reads and writes at
~40M/s. Raid0 (4 and 6 drives) writes at ~50M/s and reads at 50M and 62M/s
respectively. Tests were done to/from /dev/zero with a 625M or 512M "test
file".
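
For reference, the raid5 volumes in these tests were created from vinum
config files along these lines (the drive device names and the 512k stripe
size are just an example - the actual device names and stripe sizes varied
between tests):

  drive d0 device /dev/ad4h
  drive d1 device /dev/ad5h
  drive d2 device /dev/ad6h
  drive d3 device /dev/ad7h
  volume test
    plex org raid5 512k
      sd length 0 drive d0
      sd length 0 drive d1
      sd length 0 drive d2
      sd length 0 drive d3

fed to "vinum create <configfile>", after which the volume shows up as
/dev/vinum/test.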

Is this really the write performance I can expect from a raid5 array such
as this? I knew it wouldn't be blazingly fast at writing, but I was quite
sure it would at least accept what the 100Mbit network had to offer.
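
(As I understand it, each small raid5 write turns into reading the old data
block and old parity, then writing both back - roughly four disk operations
per logical write - so something like

  single-drive write speed / 4 = ~40M/s / 4 = ~10M/s

would be a rough ceiling unless vinum manages full-stripe writes. Even by
that reasoning 4M/s seems low, though, and 100Mbit ethernet is only about
12M/s anyway.)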

(Should I maybe move this over to freebsd-performance?)

-Bjorn

