RAID + ZFS performance.

Sean sean at ttys0.net
Sat Oct 30 18:35:29 UTC 2010


> I thought maybe because the existing pool is kind of r/w saturated
> it should be better to create a new independent pool for the new
> drives. In that way the heavy activity would not "spread" to the
> new drives.

You're trying to be smarter than ZFS. It's a common syndrome, usually
brought about by years of experience dealing with "dumb" filesystems.
If you create a new independent pool, then you're guaranteeing that
your current r/w-saturated pool will stay that way unless you manually
migrate data off of it. If you add the new storage to that pool
instead, you're giving ZFS additional resources it can then manage for
you.

> Now you presented me with a third option. So you think I should skip
> creating a new hardware-raid mirror and instead use two single drives
> and add these as a mirror to the existing pool?

If you're going to keep the hardware RAID, then setting up a new
two-drive hardware RAID mirror and striping da1 with da0 via ZFS is a
viable option. It's just another spin on the RAID 10 idea.
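
In ZFS terms that just means adding the second hardware mirror as
another top-level vdev. A minimal sketch, again assuming a pool named
"tank" with the new hardware mirror showing up as da1:

    # da0 already backs the pool; adding da1 as a second top-level vdev
    # makes ZFS stripe across the two hardware mirrors (RAID 10-style).
    zpool add tank da1

Keep in mind that a top-level data vdev can't be removed once added,
so double-check the device name before running it.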

> How will zfs handle hotswap of these drives?

ZFS doesn't know about your individual drives, because you've put them
behind the hardware RAID; it only sees the logical devices the
controller exposes. If you set up the second hardware RAID mirror as a
striped device in the pool and you then lose both drives within a
single hardware RAID mirror set, you'll be in the drink. But that's
the case with any RAID 10 scenario.
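
From the ZFS side, a hotswap is invisible: the controller rebuilds the
mirror behind the logical device, and ZFS just needs that device to
stay online. A minimal sketch, again assuming a pool named "tank":

    # -x reports only unhealthy pools; while the controller rebuilds
    # behind da1, ZFS should still consider the pool healthy.
    zpool status -x tank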

> I've seen a few crashes due to ata-detach in other systems.

That's not a ZFS issue; it's a driver/support issue with the controller.

-Sean

