quantifying zpool performance with number of vdevs

Steven Hartland <killing at multiplay.co.uk>
Fri Jan 29 21:28:30 UTC 2016


Always a good read is:
http://blog.delphix.com/matt/2014/06/06/zfs-stripe-width/

On 29/01/2016 18:06, Graham Allan wrote:
> In many of the storage systems I've built to date I've been slightly 
> conservative (?) in wanting to keep any one pool confined to a single 
> JBOD chassis. In doing this I've generally been using the Supermicro 
> 45-drive chassis with pools made of 4x (8+2) raidz2 vdevs, the other 
> slots being kept for spares, ZIL and L2ARC.
>
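
For reference, a single-chassis layout like that would look roughly as
follows (pool and device names are placeholders; the 40 data disks are
assumed to be da0-da39):

  # One pool per 45-bay chassis: 4 x raidz2 of 10 disks (8 data + 2 parity),
  # remaining bays kept for spares, SLOG and L2ARC.
  zpool create tank \
      raidz2 da0  da1  da2  da3  da4  da5  da6  da7  da8  da9  \
      raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19 \
      raidz2 da20 da21 da22 da23 da24 da25 da26 da27 da28 da29 \
      raidz2 da30 da31 da32 da33 da34 da35 da36 da37 da38 da39
  zpool add tank spare da40 da41
  zpool add tank log mirror ada0 ada1    # SLOG devices
  zpool add tank cache ada2              # L2ARC device
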
> Now I have several servers with 3-4 such chassis, and reliability has 
> also been good enough that I'd feel more comfortable about spanning 
> chassis if there were a worthwhile performance benefit.
>
> Obviously, theory says that IOPS should scale with the number of vdevs, 
> but it would be nice to try to quantify that.
>
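
The back-of-the-envelope version: a raidz2 vdev delivers roughly the small
random-read IOPS of a single member disk, so pool IOPS is approximately
(number of vdevs) x (per-disk IOPS). Assuming ~150 IOPS for a 7200rpm SAS
drive, 4 vdevs works out to ~600 random-read IOPS, 8 vdevs to ~1200 and
12 vdevs to ~1800, while sequential bandwidth scales more with the number
of data disks than with the number of vdevs. Those are only estimates,
hence the value of measuring.
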
> Getting relevant data out of iozone seems problematic on machines with 
> 128GB+ RAM - it's hard to blow out the ARC.
>
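
One way around the ARC is to make the benchmark's working set several times
larger than RAM, so the cache can only ever hold a fraction of it. As a
sketch only (fio, the path and the sizes here are examples, not a
recommendation):

  # ~512GB working set of 8k random reads on a 128GB-RAM box
  fio --name=randread --directory=/tank/bench \
      --rw=randread --bs=8k --ioengine=posixaio \
      --size=32g --numjobs=16 --iodepth=8 \
      --runtime=600 --time_based --group_reporting

Capping the ARC via the vfs.zfs.arc_max loader tunable for the duration of
testing is another option.
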
> I do seem to get more valid-looking results if I set 
> "zfs set primarycache=metadata" on my test dataset - that should mostly 
> stop the ARC from caching file data (which seems to be borne out by 
> arcstat output, though there could still be L2ARC effects).
>
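
If leftover L2ARC effects are the worry, the secondarycache property can be
pinned down in the same way for the test dataset, e.g. (dataset name is just
an example):

  zfs set primarycache=metadata tank/bench   # ARC caches metadata only
  zfs set secondarycache=none tank/bench     # nothing is fed to the L2ARC
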
> I wonder if anyone has any thoughts on this, and also on the 
> benefits/risks of moving from 40-drive to 80- or 120-drive pools.
>
> Graham


