ZFS on FreeBSD hardware model for 48 or 96 SATA3 paths...

Jason Usher jusher71 at yahoo.com
Mon Sep 19 19:00:16 UTC 2011



--- On Sat, 9/17/11, Daniel Kalchev <daniel at digsys.bg> wrote:

> There is not a single magnetic drive on the market that can
> saturate SATA2 (300 MB/s) yet. Most can't match even SATA1
> (150 MB/s). You don't need that much dedicated bandwidth for
> drives.
> If you intend to have 48/96 SSDs, then that is another
> story, but then I am doubtful a "PC" architecture can handle
> that much data either.


Hmmm... I understand this, but is there not any data that might transfer from multiple magnetic disks simultaneously, at 6 Gbps, that could periodically max out the card bandwidth?  As in, all drives in a 12-drive array performing an operation against their built-in cache at the same time?

I know the spinning disks themselves can't do it, but there is 64 MB of cache on each drive, and that cache can run at the full 6 Gbps ... does this never happen?

Further, the cards I use will be the same regardless - the number of PCIe lanes is just a different motherboard choice at the front end, and it only adds a marginal extra cost (assuming there _IS_ a 112+ lane mobo around) ... so why not?
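
(For the record, here is the back-of-the-envelope burst math I am working from, assuming 12 SATA3 drives at the 600 MB/s link rate behind a single 8-lane PCIe 2.0 HBA - the drive and lane counts are just example figures, not our final layout:)

  # burst math: 12 drives draining on-disk cache vs. one HBA's PCIe uplink
  echo $(( 12 * 600 ))   # aggregate SATA3 burst: 7200 MB/s
  echo $((  8 * 500 ))   # 8-lane PCIe 2.0 uplink: ~4000 MB/s

So a simultaneous cache burst could in theory exceed one card's uplink, though only for the ~100 ms it would take to drain 12 x 64 MB of cache.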


> Memory is much more expensive than SSDs for L2ARC and if
> your workload permits it (lots of repeated small reads), a
> larger L2ARC will help a lot. It will also help if you have
> a huge zpool or if you enable dedup etc. Just populate as much
> RAM as the server can handle and then add L2ARC
> (read-optimized).


That's interesting (the part about dedup being assisted by L2ARC) ... what about snapshots?  If we keep 14 or 21 snapshots, which component does that stress, and what structures would speed it up?
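
(For anyone who wants to poke at this themselves, a minimal sketch of the FreeBSD commands I would start with - the pool name "tank" and cache device "ada4" are placeholders, not our real layout:)

  # add a read-optimized SSD as an L2ARC cache device
  zpool add tank cache ada4

  # watch ARC hit/miss counters while the workload runs
  sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

  # inspect the dedup table footprint - the DDT must fit in ARC/L2ARC to stay fast
  zdb -DD tank

  # count the snapshots currently held on the pool
  zfs list -t snapshot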

Thanks a lot.

