Raid + zfs performance.

Peter Ankerstål peter at pean.org
Sat Oct 30 18:38:07 UTC 2010


On 30 Oct 2010, at 20.09, Sean wrote:

>> I thought that because the existing pool is already fairly r/w
>> saturated, it would be better to create a new independent pool for
>> the new drives. That way the heavy activity would not "spread" to
>> the new drives.
> 
> You're trying to be smarter than ZFS. It's a common syndrome, usually
> brought on by years of experience dealing with "dumb" filesystems. If
> you create a new independent pool, then you are guaranteeing that your
> current r/w-saturated pool will stay that way, unless you manually
> migrate data off of it. If you add storage to that pool instead, you
> are giving the pool additional resources that ZFS can then manage.
> 
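For example, assuming the pool is named tank and the new drives show up
as da2 and da3, growing the pool is a single command:

    # Add a new mirror vdev to the existing pool; ZFS then spreads
    # new writes across the old and the new vdev on its own.
    zpool add tank mirror da2 da3
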
>> Now you have presented me with a third option. So you think I should
>> skip creating a new hardware-raid mirror and instead use two single
>> drives, adding them as a mirror to the existing pool?
> 
> If you're going to keep the hardware raid, then setting up a second
> two-drive hardware mirror and striping da1 with da0 via ZFS is a
> viable option. It's just another spin on the RAID 10 idea.
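
Just to spell that out, assuming the pool is named tank and the new
hardware mirror shows up as da1:

    # da1 is already mirrored by the controller; adding it as a plain
    # vdev stripes it with the existing da0, giving RAID 10 overall.
    zpool add tank da1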

Ok. I think I'll go with this option for this machine. In the future I
would probably use a small SSD for booting and then let ZFS handle all
the RAID.
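For that all-ZFS setup the mirroring would be done by ZFS itself, along
the lines of (device names just an example):

    # Give ZFS the raw disks, so it can checksum and heal each drive.
    zpool create tank mirror ada1 ada2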

> 
>> How will zfs handle hot-swap of these drives?
> 
> ZFS doesn't know about your individual drives, because they sit behind
> the hardware RAID. If you set up the second hardware-raid mirror as a
> striped device in the pool and you then lose both drives within a
> single hardware mirror set, you'll be in the drink. But that's the
> case with any RAID 10 scenario.
> 
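A quick way to see this limitation (pool name assumed):

    # zpool status lists only the logical devices the controller
    # exposes (da0, da1); the physical disks behind them are invisible
    # to ZFS, so a failing member drive never shows up here.
    zpool status tank
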
>> I've seen a few crashes due to ata-detach in other systems.
> 
> That's not a ZFS issue, that's a driver/support issue with the controller.
> 
> -Sean
> 


