Extending your zfs pool with multiple devices
zbeeble at gmail.com
Fri Sep 3 04:39:03 UTC 2010
On Fri, Sep 3, 2010 at 12:08 AM, Jeremy Chadwick
<freebsd at jdc.parodius.com> wrote:
> On Thu, Sep 02, 2010 at 04:56:04PM -0400, Zaphod Beeblebrox wrote:
>> With 1.5T disks, I find that the 4-to-1 multipliers have a small
>> effect on speed. The 4 drives I have on the multiplier are saturated
>> at 100% a little bit more than the drives directly connected.
>> Essentially you have 3 gigabit for 4 drives instead of 3 gigabit for 1.
> 1:4 SATA replicators impose a bottleneck on the overall bandwidth
> available between the replicator and the disks attached, as you stated.
> |||___ (SATA300) Port 0, Disk 0
> ||____ (SATA300) Port 1, Disk 1
> |_____ (SATA300) Port 2, eSATA Replicator
>        ||||________ (SATA300) Port 0, Disk 2
>        |||_________ (SATA300) Port 1, Disk 3
>        ||__________ (SATA300) Port 2, Disk 4
>        |___________ (SATA300) Port 3, Disk 5
> If Disks 2 through 5 are decent disks (pushing 100MB/sec), essentially
> you have 100*4 = 400MB/sec worth of bandwidth being shoved across a
> 300MB/sec link. That's making the assumption that the disks attached are
> magnetic and not SSD, and not taking into consideration protocol overhead.
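The arithmetic above can be sketched in a few lines (a hypothetical Python sketch using the 100 MB/s per-disk and 300 MB/s link figures from the post; it ignores protocol overhead, as the post does):

```python
# Per-disk throughput when several disks share one SATA300 link.
DISK_MBPS = 100  # a decent magnetic disk, per the post
LINK_MBPS = 300  # SATA300 link between controller and replicator

def per_disk_throughput(n_disks, disk_mbps=DISK_MBPS, link_mbps=LINK_MBPS):
    """Effective per-disk rate when n_disks sit behind one shared link."""
    demand = n_disks * disk_mbps
    if demand <= link_mbps:
        return float(disk_mbps)      # link is not the bottleneck
    return link_mbps / n_disks       # link bandwidth split evenly

# 4 disks want 400 MB/s but the link caps them at 300 MB/s total:
print(per_disk_throughput(4))   # 75.0 MB/s each instead of 100
```

So each of the four disks behind the replicator runs at roughly three quarters of its native speed under a sequential full-bore load.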
> A better choice is a SATA multilane HBA; these are usually PCIe-based,
> with a single connector on the back of the HBA which splits out to
> multiple disks (usually 4, but sometimes more).
That's just connector-foo. The cards are still very expensive.
Many ZFS loads don't saturate disks ... or don't saturate them
consistently. I just built several systems with one port per disk ---
and those cards tended towards $100/port. 1:4 replicators are less
than $10/port, and the six-port motherboards don't seem to add any cost
(4 or 6 SATA ports seem to be standard now).
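The cost gap is easy to make concrete (a sketch using the per-port prices quoted above; the 12-disk count is just an illustrative example):

```python
# Rough cost comparison using the per-port figures from the post.
HBA_PER_PORT = 100          # dedicated one-port-per-disk HBA, ~$100/port
REPLICATOR_PER_PORT = 10    # 1:4 port replicator, <$10/port

def attach_cost(n_disks, per_port):
    """Total cost of attaching n_disks at a given per-port price."""
    return n_disks * per_port

print(attach_cost(12, HBA_PER_PORT))         # 1200
print(attach_cost(12, REPLICATOR_PER_PORT))  # 120
```

An order of magnitude per port, before counting the motherboard's free SATA ports.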
My point is: if you're building a database server and speed is all you
care about, then one port per disk makes sense. If you are building a
pile of disk and you're on a budget, port replicators are a good choice.
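For the budget case, ZFS itself doesn't care which ports the disks hang off; a pool can mix motherboard and replicator disks freely. A sketch, assuming hypothetical FreeBSD device names ada0 through ada9:

```shell
# Hypothetical layout: ada0/ada1 on motherboard ports, ada2-ada5 behind
# the 1:4 replicator. ZFS treats them all the same.
zpool create tank raidz ada0 ada1 ada2 ada3 ada4 ada5

# Later, extend the pool by adding another raidz vdev of four more disks:
zpool add tank raidz ada6 ada7 ada8 ada9
```

Note that `zpool add` grows the pool by striping across a new vdev; it does not widen the existing raidz.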
More information about the freebsd-stable mailing list