mps(4) driver (LSI 6Gb SAS) committed to stable/8

Jeremy Chadwick freebsd at jdc.parodius.com
Fri Feb 18 23:13:09 UTC 2011


On Sat, Feb 19, 2011 at 02:05:33AM +0300, Dmitry Morozovsky wrote:
> On Fri, 18 Feb 2011, Kenneth D. Merry wrote:
> 
> KDM> > KDM> I just merged the mps(4) driver to stable/8, for those of you with LSI 6Gb
> KDM> > KDM> SAS hardware.
> KDM> > 
> KDM> > [snip]
> KDM> > 
> KDM> > Again, thank you very much, Ken.  I'm planning to stress test this on an
> KDM> > 846 case filled with 12 WD RE4 disks (so far), organized as raidz2, and
> KDM> > will post the results.
> KDM> > 
> KDM> > Any hints on particularly I/O-stressing patterns?  Off the top of my head,
> KDM> > I'm planning multiple parallel -j'ed builds, parallel tars, *SQL benchmarks
> KDM> > -- what else would you suggest?
> KDM> 
> KDM> The best stress test I have found has been to just do a single sequential
> KDM> write stream with ZFS.  i.e.:
> KDM> 
> KDM> cd /path/to/zfs/pool
> KDM> dd if=/dev/zero of=foo bs=1M
> KDM> 
> KDM> Just let it run for a long period of time and see what happens.
> 
> Well, given that I'm planning to have ZFSv28 in place, wouldn't /dev/random
> be more appropriate?

No -- /dev/urandom maybe, but not /dev/random.  /dev/urandom will also
induce significantly higher CPU load than /dev/zero will.  Don't forget
that ZFS is a processor-centric (read: no offloading) system.
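
If you want to see the difference for yourself, something like the following
(a rough sketch; the file names and count are only illustrative) shows the CPU
cost of the two data sources:

  cd /path/to/zfs/pool
  # write the same amount of data from each source and compare the
  # user/sys times reported by time(1)
  time dd if=/dev/zero of=zero.tmp bs=1M count=8192
  time dd if=/dev/urandom of=urandom.tmp bs=1M count=8192
  rm zero.tmp urandom.tmp

The /dev/urandom run will typically burn far more system time just generating
the data, which is exactly the overhead you don't want polluting a
disk/controller stress test.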

I tend to try different block sizes (starting at bs=8k and working up to
bs=256k) for sequential benchmarks.  The "sweet spot" I've found on most
disks is 64k.  Otherwise, use benchmarks/bonnie++ from the ports tree.
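
Something along these lines (again just a sketch; the count and file names
are arbitrary) makes it easy to sweep block sizes and compare the throughput
dd reports at the end of each run:

  cd /path/to/zfs/pool
  for bs in 8k 16k 32k 64k 128k 256k; do
      echo "== bs=${bs} =="
      # dd prints bytes/sec when it finishes; note that with a fixed
      # count the total amount written grows with the block size
      dd if=/dev/zero of=seq.${bs} bs=${bs} count=100000
      rm seq.${bs}
  done

For bonnie++, the usual approach is to point it at the pool with -d
/path/to/zfs/pool and pick a -s size larger than RAM so caching doesn't
skew the numbers.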

-- 
| Jeremy Chadwick                                   jdc at parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.               PGP 4BD6C0CB |
