ZFS on FreeBSD hardware model for 48 or 96 SATA3 paths...

Jason Usher jusher71 at yahoo.com
Mon Sep 19 19:11:42 UTC 2011



--- On Mon, 9/19/11, Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:


> > Hmmm... I understand this, but is there not any data
> > that might transfer from multiple magnetic disks
> > simultaneously, at 6 Gb/s each, that could periodically max
> > out the card bandwidth ?  As in, all drives in a 12-drive
> > array performing an operation on their built-in cache
> > simultaneously ?
> 
> The best way to deal with this is careful zfs pool
> design, so that disks that can be expected to perform related
> operations (e.g. in the same vdev) are split across
> interface cards and I/O channels. This also helps with
> reliability.


Understood.

But again, can't that all be dismissed completely by a one drive / one path build?  And since that adds no extra cost per drive or per card ... only per motherboard ... it seems an easy cost to swallow, even if the case where it ever matters is a rare edge case.
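For what it's worth, here is a rough back-of-envelope of that saturation case, as a small Python sketch.  The uplink figures (a PCIe 2.0 x8 HBA) and the 8b/10b overhead factor are assumptions for illustration, not measurements from any particular card:

# Can 12 SATA3 drives bursting from their on-board caches
# exceed one HBA's PCIe uplink?  All numbers below are
# illustrative assumptions.

SATA3_GBPS = 6.0            # SATA3 link rate, gigabits/s per drive
ENCODING_OVERHEAD = 0.8     # 8b/10b encoding leaves ~80% usable
DRIVES_PER_CARD = 12

PCIE2_LANE_GBPS = 5.0       # PCIe 2.0 raw rate, gigabits/s per lane
PCIE_LANES_PER_CARD = 8     # assuming a typical x8 HBA

drive_burst = SATA3_GBPS * ENCODING_OVERHEAD * DRIVES_PER_CARD
uplink = PCIE2_LANE_GBPS * ENCODING_OVERHEAD * PCIE_LANES_PER_CARD

print("12-drive cache burst: %.0f Gb/s usable" % drive_burst)
print("x8 PCIe 2.0 uplink:   %.0f Gb/s usable" % uplink)
# -> roughly 58 vs 32 Gb/s: a simultaneous cache burst could
#    oversubscribe the card, though sustained platter speeds
#    (~1-1.5 Gb/s per drive) stay well under the uplink.

So a simultaneous cache burst could oversubscribe a single x8 card, while sustained platter throughput stays comfortably under it.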

Presuming I can *find* a 112+ lane mobo, I assume the cost would be at worst double that of a mobo with fewer PCIe lanes ($800ish instead of $400ish)...
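One plausible way to arrive at a 112-lane figure, assuming 8-port HBAs in x8 slots and roughly 16 lanes reserved for everything else (both numbers are guesses for illustration, not a spec for any particular build):

# Rough PCIe lane budget for a "one drive, one uncontended path" build.
DRIVES = 96
PORTS_PER_HBA = 8
LANES_PER_HBA = 8           # x8 slot per HBA
OTHER_LANES = 16            # NICs, slog/L2ARC devices, etc. (guess)

hbas = -(-DRIVES // PORTS_PER_HBA)      # ceiling division -> 12 HBAs
lanes_for_storage = hbas * LANES_PER_HBA
total_lanes = lanes_for_storage + OTHER_LANES

print("%d HBAs, %d lanes for storage, %d lanes total"
      % (hbas, lanes_for_storage, total_lanes))
# -> 12 HBAs, 96 lanes for storage, 112 lanes in total.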

