Extending your zfs pool with multiple devices

Don Lewis truckman at FreeBSD.org
Fri Sep 3 07:43:42 UTC 2010

On  2 Sep, Jeremy Chadwick wrote:
> On Thu, Sep 02, 2010 at 04:56:04PM -0400, Zaphod Beeblebrox wrote:
>> [regarding getting more disks in a machine]

>> An inexpensive option is SATA port replicators.  Think SATA switch or
>> hub.  1:4 is common and cheap.
>> I have a motherboard with an Intel ICH10 chipset, which commonly
>> provides 6 ports.  This chipset is happy to drive port replicators,
>> meaning you can put 24 drives on this motherboard.
>> ...
>> With 1.5T disks, I find that the 1:4 multipliers have only a small
>> effect on speed.  The four drives I have on the multiplier sit at 100%
>> saturation a little more than the drives connected directly.
>> Essentially you have 3 gigabit for 4 drives instead of 3 gigabit for 1
>> drive.
> 1:4 SATA replicators impose a bottleneck: the attached disks share the
> bandwidth of the single upstream link, as you stated.
> Diagram:
> ICH10
>   |||___ (SATA300) Port 0, Disk 0
>   ||____ (SATA300) Port 1, Disk 1
>   |_____ (SATA300) Port 2, eSATA Replicator
>                            ||||________ (SATA300) Port 0, Disk 2
>                            |||_________ (SATA300) Port 1, Disk 3
>                            ||__________ (SATA300) Port 2, Disk 4
>                            |___________ (SATA300) Port 3, Disk 5
> If Disks 2 through 5 are decent disks (pushing 100MB/sec), essentially
> you have 100*4 = 400MB/sec worth of bandwidth being shoved across a
> 300MB/sec link.  That's making the assumption the disks attached are
> magnetic and not SSD, and not taking into consideration protocol
> overhead.
> Given the evolutionary rate of hard disks and SSDs, replicators are (in
> my opinion) not a viable solution mid- or long-term.
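The arithmetic above can be put into a quick back-of-the-envelope sketch; the 100 MB/s per-disk and 300 MB/s link figures are the estimates from the post, not measurements:

```python
# Back-of-the-envelope: four disks behind a 1:4 port replicator all share
# one SATA300 uplink.  Per-disk and link rates are the post's estimates.
disks = 4
per_disk_mb_s = 100   # sustained rate of one decent magnetic disk (estimate)
uplink_mb_s = 300     # SATA300 link speed, ignoring protocol overhead

demand_mb_s = disks * per_disk_mb_s
# When all disks stream at once, each is capped by its fair share of the link.
effective_mb_s = min(per_disk_mb_s, uplink_mb_s / disks)

print(f"{demand_mb_s} MB/s of demand over a {uplink_mb_s} MB/s link")
print(f"each disk gets roughly {effective_mb_s:.0f} MB/s under full load")
```

So under a full sequential load each drive sees about three quarters of what it could do on a dedicated port, which matches the "saturated a little more" observation above.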

> A better choice is a SATA multilane HBA, which are usually PCIe-based
> with a single connector on the back of the HBA which splits out to
> multiple disks (usually 4, but sometimes more).
> An ideal choice is one of the Areca ARC-1300 series SAS-based PCIe x4
> multilane adapters, which provide SATA300 to each individual disk and use
> PCIe x4 (which can handle about 1GByte/sec in each direction, so 2GByte/sec
> total)...
> http://www.areca.com.tw/products/sasnoneraid.htm
> ...but there doesn't appear to be driver support for FreeBSD for this
> series of controller (arcmsr(4) doesn't mention the ARC-1300 series).  I
> also don't know what Areca means on their site when they say
> "BSD/FreeBSD (will be available with 6Gb/s Host Adapter"), given that
> none of the ARC-1300 series cards are SATA600.
> If people are more focused on total number of devices (disks) that are
> available, then they should probably be looking at dropping a pretty
> penny on a low-end filer.  Otherwise, consider replacing the actual hard
> disks themselves with drives of a higher capacity.
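The PCIe x4 figure quoted above follows from per-lane rates; a minimal sketch, assuming a first-generation link (2.5 GT/s with 8b/10b encoding, which is where the usual 250 MB/s-per-lane number comes from):

```python
# PCIe 1.x: 2.5 GT/s per lane, 8b/10b encoding -> 250 MB/s of payload
# bandwidth per lane, per direction.
lanes = 4
mb_per_lane_one_way = 2500 * 8 // 10 // 8  # 2.5 Gbit/s * 0.8, in MB/s = 250

one_way = lanes * mb_per_lane_one_way
print(f"x{lanes}: {one_way} MB/s per direction, {2 * one_way} MB/s total")
```

That gives roughly 1 GByte/sec each way for an x4 slot, consistent with the figure in the post, and comfortably more than four SATA300 disks can demand.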

[raises hand]

Here's what I've got on my mythtv box (running Fedora ... sorry):

Filesystem            Size  
/dev/sda4             439G  
/dev/sdb1             1.9T  
/dev/sdc1             1.9T  
/dev/sdd1             1.9T  
/dev/sde1             1.9T  
/dev/sdf1             1.4T  
/dev/sdg1             1.4T  
/dev/sdh1             932G  
/dev/sdi1             932G  
/dev/sdj1             1.4T  
/dev/sdk1             1.9T  
/dev/sdl1             932G  
/dev/sdm1             1.9T  
/dev/sdn1             932G  
/dev/sdo1             699G  
/dev/sdp1             1.4T  
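Summing those df sizes gives a rough picture of the box (a sketch; the values are copied from the listing above, treating 1T as 1024G):

```python
# Rough total of the filesystem sizes listed above (df -h style strings).
sizes = ["439G", "1.9T", "1.9T", "1.9T", "1.9T", "1.4T", "1.4T", "932G",
         "932G", "1.4T", "1.9T", "932G", "1.9T", "932G", "699G", "1.4T"]

def to_gib(size: str) -> float:
    """Convert a df-style size like '1.9T' or '932G' to GiB."""
    unit = {"G": 1, "T": 1024}[size[-1]]
    return float(size[:-1]) * unit

total_gib = sum(to_gib(s) for s in sizes)
print(f"about {total_gib / 1024:.1f} TiB across {len(sizes)} filesystems")
```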

I'm currently upgrading the older drives as I run out of space, and I'm
really hoping that drives well beyond 2TB arrive soon.  The motherboard is
full-size ATX with six onboard SATA ports, all of which are in use.  The
only x16 PCIe slot is occupied by a graphics card, and all but one of
the x1 PCIe slots are in use.  One of the x1 PCIe slots has a Silicon
Image two-port eSATA controller, which connects to two external
enclosures with 1:4 and 1:5 port replicators.  At the moment there are
also three external USB drives.  This weekend's project is to install a
new 2TB drive and do some consolidation.

Fortunately the bandwidth requirements aren't too high ...
