Multi node storage, ZFS
fjwcash at gmail.com
Wed Mar 24 17:14:16 UTC 2010
On Wed, Mar 24, 2010 at 10:04 AM, Michael Loftis <mloftis at wgops.com> wrote:
> --On Wednesday, March 24, 2010 9:20 AM -0700 Freddie Cash <
> fjwcash at gmail.com> wrote:
>> On Wed, Mar 24, 2010 at 8:47 AM, Michal <michal at ionic.co.uk> wrote:
>>> I wrote a really long e-mail, but realised I could ask this question far
>>> more easily; if it doesn't make sense, the original e-mail is below.
>>> Can I use ZFS to create a multi-node storage area? Multiple HDDs in
>>> multiple servers to create one target of, for example, //officestorage,
>>> allowing me to expand the storage space when needed, with clients able
>>> to retrieve data (like RAID0, but over devices rather than HDDs)?
>>> Here is an example I found which is where I'm getting some ideas from
>> Horribly, horribly, horribly complex. But, then, that's the Linux world.
>> Server 1: bunch of disks exported via iSCSI
>> Server 2: bunch of disks exported via iSCSI
>> Server 3: bunch of disks exported via iSCSI
>> "SAN" box: uses all those iSCSI exports to create a ZFS pool
>> Use 1 iSCSI export from each server to create a raidz vdev. Or multiple
>> mirror vdevs. When you need more storage, just add another server full of
>> disks, export them via iSCSI to the "SAN" box, and expand the ZFS pool.
>> And, if you need fail-over, on your "SAN" box, you can use HAST at the
>> lower layers (currently only available in 9-CURRENT) to mirror the
>> storage across two systems, and use CARP to provide a single IP for the
>> two boxes.
> If you were to do something like this, I'd make sure to have a fast local
> ZIL (log) device on the head node. That would reduce latency for writes;
> you might also do the same for reads. Then your bulk storage comes from the
> iSCSI boxes.
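On the head node, the layout described above might look roughly like the following sketch (pool and device names are hypothetical; once the iSCSI initiator attaches the exports from each storage server, they appear as ordinary local disks such as da0, da1, ...):

```shell
# One raidz vdev built from one iSCSI export per storage server,
# so an entire server can go down without losing the pool:
zpool create officestorage raidz da0 da1 da2

# Later, to grow the pool: bring up another server's exports and
# add a second raidz vdev, striped with the first:
zpool add officestorage raidz da3 da4 da5
```

The key design point is one disk per server in each vdev: that way the redundancy of the vdev maps onto whole-server failures, not just individual disk failures.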
Yes, that would be helpful (mirrored slogs, at least until we get slog removal support).
As would an L2ARC (cache) device in the head node.
As well as lots and lots and lots of RAM.
And Ethernet NICs as fast as you can get between the head node and the storage servers.
And, and, and, and ... :)
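For an existing pool, the slog and L2ARC devices mentioned above can be attached after the fact; a rough sketch, assuming the local SSDs in the head node show up as ada1, ada2, and ada3 (hypothetical names):

```shell
# Mirrored slog (separate ZFS intent log) on two local SSDs --
# mirrored because a log vdev cannot yet be removed once added:
zpool add officestorage log mirror ada1 ada2

# L2ARC read cache on another local SSD; cache devices need no
# redundancy, since their contents can be re-read from the pool:
zpool add officestorage cache ada3
```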
fjwcash at gmail.com
More information about the freebsd-stable mailing list