ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

Damien Fleuriot ml at my.gd
Thu Jan 6 13:11:50 UTC 2011


I see, so no dedicated ZIL device in the end?

I could make a 15 GB slice for the OS running UFS (I don't want to risk
losing the OS while manipulating ZFS, such as during upgrades), and a
25 GB+ slice for L2ARC, depending on the disk.

I can't afford a *dedicated* drive for the cache, though; there isn't
enough room in the machine.
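
For what it's worth, a minimal sketch of that slicing on FreeBSD,
assuming the SSD appears as ada0, the pool is named tank, and GPT
labels are used (all names hypothetical):

  # carve the SSD: 15 GB UFS slice for the OS, 25 GB for L2ARC
  gpart create -s gpt ada0
  gpart add -t freebsd-ufs -s 15G -l os ada0
  gpart add -t freebsd-zfs -s 25G -l l2arc ada0

  # attach the L2ARC partition to the pool as a cache vdev
  zpool add tank cache gpt/l2arc

A nice property of cache vdevs is that losing one never endangers the
pool, so sharing the OS disk for L2ARC is a low-risk arrangement.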


On 1/6/11 12:26 PM, Chris Forgeron wrote:
> You know, these days I'm not as happy with SSDs for ZIL. I may blog about some of the speed results I've been getting over the last 6-12 months of running them with ZFS. I think people should be using hardware RAM drives. You can get an old Gigabyte i-RAM drive with 4 GB of memory for the cost of a 60 GB SSD, and it will trounce the SSD for speed.
> 
> I'd use your SSD for L2ARC (cache). 
> 
> 
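
For comparison, attaching a dedicated log (ZIL) device versus a cache
device is a one-line operation either way; a sketch, again assuming a
pool named tank and hypothetical GPT labels:

  zpool add tank log gpt/slog     # dedicated ZIL, e.g. a RAM drive
  zpool add tank cache gpt/l2arc  # L2ARC read cache

The asymmetry matters: a failed cache device is harmless, while on
older pool versions (before log device removal support) losing an
unmirrored slog could leave the pool unimportable, which is one reason
battery-backed RAM drives were attractive as log devices.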
> -----Original Message-----
> From: Damien Fleuriot [mailto:ml at my.gd] 
> Sent: Thursday, January 06, 2011 5:20 AM
> To: Artem Belevich
> Cc: Chris Forgeron; freebsd-stable at freebsd.org
> Subject: Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks
> 
> You both make good points, thanks for the feedback :)
> 
> I am more concerned about data protection than performance, so I suppose raidz2 is the best choice I have with such a small-scale setup.
> 
> Now the question that remains is whether to use parts of the OS's SSD for ZIL, for cache, or for both.
> 
> ---
> Fleuriot Damien
> 
> On 5 Jan 2011, at 23:12, Artem Belevich <fbsdlist at src.cx> wrote:
> 
>> On Wed, Jan 5, 2011 at 1:55 PM, Damien Fleuriot <ml at my.gd> wrote:
>>> Well actually...
>>>
>>> raidz2:
>>> - 7x 1.5 TB = 10.5 TB raw
>>> - 2 parity drives, so 5x 1.5 TB = 7.5 TB usable
>>>
>>> raidz1:
>>> - 3x 1.5 TB = 4.5 TB raw
>>> - 4x 1.5 TB = 6 TB raw, 10.5 TB raw total
>>> - 1 parity drive in each of the two separate raidz1 arrays, so 7.5 TB usable
>>>
>>> So really, both cases give 2 parity drives and the same usable storage...
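
The two layouts, written as pool creation commands; a minimal sketch
assuming seven disks da0 through da6 and a pool named tank (names
hypothetical):

  # option 1: a single 7-disk raidz2 vdev (survives any two failures)
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6

  # option 2: two raidz1 vdevs of 3 and 4 disks (one failure per vdev)
  zpool create tank raidz1 da0 da1 da2 raidz1 da3 da4 da5 da6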
>>
>> In the second case you get better performance, but you lose some
>> data protection. It's still raidz1, and you can't guarantee survival
>> in all cases of two drives failing: if two drives fail in the same
>> vdev, your entire pool is gone. Granted, it's better than a
>> single-vdev raidz1, but it's *not* as good as raidz2.
>>
>> --Artem
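
To put numbers on Artem's point, the two-disk failure arithmetic for
seven disks works out as follows:

  C(7,2) = 21 possible two-disk failures
  raidz2:       survives 21 of 21
  split raidz1: survives 3*4 = 12 of 21; the C(3,2) + C(4,2) = 9
                same-vdev combinations lose the pool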

