chaining JBOD chassis to server ... why am I scared ? (ZFS)

Jason Usher jusher71 at yahoo.com
Tue Jul 10 18:57:38 UTC 2012


The de facto configuration the smart folks are using for ZFS seems to be:

- 16/24/36 drive supermicro chassis
- LSI 9211-8i internal cards
- ZFS and probably raidz2 or raidz3 vdevs
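
For concreteness, I assume the ZFS side of that ends up looking roughly like the following -- pool name, device names and vdev widths are all made up, this is just a sketch:

  # hypothetical: two six-disk raidz2 vdevs on the internal chassis
  zpool create tank \
      raidz2 da0 da1 da2 da3 da4 da5 \
      raidz2 da6 da7 da8 da9 da10 da11
  zpool status tank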

Ok, fine.  But then I see some even smarter folks attaching a 48-drive 4U JBOD chassis to this configuration, probably using a different LSI card with external SAS connectors.
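
If I have it right, once that external cable is in, the extra 48 drives just show up as more da(4) devices on the FreeBSD side, i.e. they should all appear in:

  camcontrol devlist     # lists every SCSI/SAS disk the HBAs can see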

So ... 84 drives accessible to ZFS on one system.  In terms of space and money efficiency, it sounds really great - fewer systems to manage, etc.

But this scares me ...

- two different power sources - so the "head unit" can lose power independently of the JBOD device ... how well does that turn out ?

- external cabling - has anyone just yanked that external SAS cable a few times, and what does that look like ?

- If you have a single SLOG, or a single L2ARC device, where do you put it ?  And then what happens if "the other half" of the system detaches from the half that the SLOG/L2ARC is in ?  (see the sketch after this list)

- ... any number of other weird things ?
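
For reference, the SLOG / L2ARC I'm asking about would be attached with something like the following -- device names are hypothetical, and as I understand it the cache (and, in v28, log) devices can also be removed again:

  zpool add tank log da36      # dedicated SLOG (separate ZIL device)
  zpool add tank cache da37    # L2ARC device
  zpool remove tank da37       # cache/log devices are removable in v28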


Just how well does ZFS v28 deal with these kinds of situations, and do I have a good reason to be awfully shy about doing this ?
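
FWIW, the only knob I'm aware of that speaks to this is the pool's failmode property (wait / continue / panic), which I assume is what decides what happens when the JBOD half goes away:

  zpool get failmode tank
  zpool set failmode=continue tank   # return errors rather than block if the pool fails
  zpool status -x                    # after reattaching, see what ZFS thinks of the pool
  zpool clear tank                   # clear the errors once the devices are back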



