chaining JBOD chassis to server ... why am I scared ? (ZFS)

Kevin Day toasty at dragondata.com
Tue Jul 10 19:48:58 UTC 2012


On Jul 10, 2012, at 1:57 PM, Jason Usher <jusher71 at yahoo.com> wrote:

> The de facto configuration the smart folks are using for ZFS seems to be:
> 
> - 16/24/36 drive supermicro chassis
> - LSI 9211-8i internal cards
> - ZFS, probably with raidz2 or raidz3 vdevs (sketched below)
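> 
> A minimal sketch of what such a pool might look like, assuming hypothetical daN device names and a pool called "tank":
> 
>     # two 8-disk raidz2 vdevs; raidz3 would add a third parity disk per vdev
>     zpool create tank \
>         raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
>         raidz2 da8 da9 da10 da11 da12 da13 da14 da15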
> 
> Ok, fine.  But then I see some even smarter folks attaching the 48-drive 4U JBOD chassis to this configuration, probably using a different LSI card with external SAS connectors.
> 
> So ... 84 drives accessible to ZFS on one system.  In terms of space and money efficiency, it sounds really great - fewer systems to manage, etc.
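> 
> On FreeBSD you can at least sanity-check that the OS sees all of them once the JBOD is cabled up; a minimal sketch:
> 
>     camcontrol devlist    # should list all 84 disks (daN), internal and JBOD alike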
> 
> But this scares me ...
> 
> - two different power sources - so the "head unit" can lose power independently of the JBOD ... how well does that turn out ?
> 
> - external cabling - has anyone just yanked that external SAS cable a few times, and what does that look like ?
> 
> - If you have a single SLOG or a single L2ARC device, where do you put it ?  And what happens if "the other half" of the system detaches from the half holding the SLOG/L2ARC ?  (See the sketch after this list.)
> 
> - ... any number of other weird things ?
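> 
> For the SLOG/L2ARC question above, both attach to the pool as dedicated vdevs; a minimal sketch, assuming hypothetical GPT labels:
> 
>     zpool add tank log gpt/slog0      # dedicated ZIL device (SLOG)
>     zpool add tank cache gpt/l2arc0   # L2ARC read cache
> 
> Losing the L2ARC device is harmless, since it is only a cache. Losing the SLOG is survivable on pool version 28 (the pool can still be imported without it); you risk only the last few seconds of synchronous writes, and only if the loss coincides with a crash.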
> 
> 
> Just how well does ZFS v28 deal with these kinds of situations, and do I have a good reason to be awfully shy about doing this ?
> 


We do this for ftpmirror.your.org (which is ftp3.us.freebsd.org & others). It has an LSI 9280 with 3 external chassis attached, each holding 24 3TB drives. Before putting it into use, we experimented with pulling power and data cables at random while the system was under load. Nothing we did was any worse than the whole system just losing power. The only difference was that in some cases losing all the storage would hang the server until it was power cycled, but again… no worse than if everything had lost power.

If something goes bad, things are pretty likely to go down no matter the physical topology. There was no crazy data loss or anything, if that's what you're worried about.
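
A rough sketch of that kind of pull-the-cable test, with a hypothetical pool name ("tank") and purely illustrative paths:

    # keep a steady write load on the pool
    dd if=/dev/zero of=/tank/burnin bs=1m count=100000 &
    # ...physically pull a SAS or power cable on one chassis...
    zpool status -x         # affected vdevs show up as UNAVAIL/DEGRADED
    # after reconnecting the chassis:
    zpool clear tank        # clear error counters and reopen the devices
    zpool scrub tank        # confirm nothing on disk was silently damaged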

-- Kevin


