Disappointing performance of ciss RAID 0+1?

Mark Kirkwood markir at paradise.net.nz
Thu Nov 9 02:02:31 UTC 2006


Pete French wrote:
> I recently overhauled my RAID array - I now have 4 drives arranged
> as RAID 0+1, all 15K 147GB Fujitsus, and split across two
> buses, which are actively terminated to give U160 speeds (and I have
> verified this). The card is a 5304 (128MB cache) in a PCI-X slot.
> 
> This replaces a set of six 7200 rpm drives in RAID 5 which were running at
> 40MB/s due to non-LVD termination. I would expect to see a large speed
> increase, wouldn't I? But it remains about the same - around 45MB/s
> for reading a large file (3 gig or so) and half that for copying said
> file. These are 'real world' tests in the sense that I use the drive for
> building large ISO images and copying them around - I really don't care what
> benchmarks say, it's the speed of these two operations that I want to make
> fast.
> 
> I've tried all the possible stripe sizes (128k gives the best performance)
> but I still only get the above speeds. Just one of the 15k drives on its
> own performs better than this! I would expect the RAID 0 to give me at
> least some speedup, or in the worst case be the same, surely?
> 
> Booting up Windows and running some tests gives me far better performance,
> however, so I am wondering if there is some driver issue here. Has anyone
> else seen the same kind of results? I am running the latest stable for
> amd64 and the machine has twin Opteron 242s with a gig of RAM each. Surely
> it can do better than this?
> 

You might be able to speed up reads by raising the vfs.read_max 
sysctl (try 16 or 32).
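
For reference, vfs.read_max controls the cluster read-ahead limit, so it
mainly helps large sequential reads like the ones described above.
Something along these lines should let you experiment (32 is just a
starting point, not a tuned recommendation):

    # check the current read-ahead setting
    sysctl vfs.read_max
    # raise it for the running system
    sysctl vfs.read_max=32

If it helps, add vfs.read_max=32 to /etc/sysctl.conf so the setting
survives a reboot.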


cheers

Mark
