Quick ZFS mirroring question for non-mirrored pool
Kaya Saman
SamanKaya at netscape.net
Sun May 16 01:29:42 UTC 2010
Many thanks, guys, for providing so much valuable input! I really
appreciate all your advice and knowledge.
Please excuse my naivety, but regarding the statement below:
On 05/16/2010 03:51 AM, Bob Friesenhahn wrote:
> As long as the pool is not the boot pool, zfs makes such testing quite
> easy.
I was under the impression that one needed a UFS2 filesystem in order to
be able to boot FreeBSD, as that is the only FS available during
install... unlike Solaris 10/OpenSolaris, which creates a ZFS
filesystem upon install.
The plan I originally conceived was to use a 40GB solid state disk as
the / (root) filesystem, comprising all descending file systems
(e.g. /usr, /proc, /lib, etc.) on UFS2, and then use ZFS for the
storage portion of my server on 2TB Western Digital RE4 enterprise
SATA drives.
Since it's a simple home-based server and not a massive enterprise-grade
environment, performance is not too much of an issue. However, system
backups are, and without funding for a spare system or a DAS or SAN
solution, the only real option I have is a mirrored (RAID1-esque) setup,
so that if one or even both of the primary drives go offline, at least
I still have all my data backed up and available.
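A setup along those lines could be sketched roughly as below. This is a
hypothetical sketch, not the poster's actual commands: the device names
(ada1-ada4) and the pool name "tank" are assumptions, and it presumes four
data drives arranged as striped mirrors (the RAID-10-style layout discussed
earlier in the thread).

```shell
# Hypothetical sketch: UFS2 root stays on the SSD (e.g. ada0);
# the 2TB RE4 drives form a pool of striped mirrors.
# Device names and pool name are assumptions -- substitute your own.
zpool create tank \
    mirror /dev/ada1 /dev/ada2 \
    mirror /dev/ada3 /dev/ada4

# Verify the resulting layout and redundancy:
zpool status tank
```

With this layout, each mirror pair can survive the loss of one member,
and ZFS load-shares reads and writes across the two vdevs.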
Regards,
Kaya
On Sat, May 15, 2010 at 07:51:17PM -0500, Bob Friesenhahn wrote:
> On Sat, 15 May 2010, Jeremy Chadwick wrote:
>
>> What you have here is the equivalent of RAID-10. It might be more
>> helpful to look at the above as a "stripe of mirrors".
>>
>> In this situation, you might be better off with raidz1 (RAID-5 in
>> concept). You should get better actual I/O performance due to ZFS
>> distributing the I/O workload across 4 disks rather than 2. At least
>> that's how I understand it.
>
> That would be a reasonable assumption, but actual evidence suggests
> otherwise. For sequential I/O, mirrors and raidz1 seem to offer
> roughly similar performance, except that mirrors win for reads and
> raidz1 often wins for writes. The mirror configuration definitely
> wins as soon as there are many seeks or multi-user activity.
>
> The reason why mirrors still do well for sequential I/O is that
> there is still load-sharing across the vdevs (smart "striping"), but
> in full 128K blocks, whereas the raidz1 config needs to break the
> 128K blocks into smaller blocks which are striped across the disks
> in the vdev. Breaking the data into smaller chunks for raidz
> multiplies the disk IOPS required. Disk seeks are slow.
>
> The main reason to choose raidz1 is better space efficiency, but
> mirrors offer more performance.
>
> For an interesting set of results, see the results summary of "Bob's
> method" at "http://www.nedharvey.com/".
>
> The only way to be sure for your own system is to create various
> pool configurations and test with something which represents your
> expected work load. As long as the pool is not the boot pool, zfs
> makes such testing quite easy.
>
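The kind of throwaway testing Bob describes can be done cheaply with
file-backed vdevs before committing real disks. The sketch below is
illustrative only: the file paths, sizes, and pool names are assumptions,
and file vdevs are suitable for benchmarking layouts, never for real data.

```shell
# Hypothetical sketch: compare a stripe of mirrors vs. raidz1 using
# sparse file vdevs. Paths, sizes, and pool names are assumptions.
truncate -s 1G /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4

# A stripe of two mirrors:
zpool create testmirror mirror /tmp/d1 /tmp/d2 mirror /tmp/d3 /tmp/d4
# ... run a representative workload against /testmirror ...
zpool destroy testmirror

# The same four vdevs as raidz1:
zpool create testraidz raidz1 /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4
# ... run the same workload against /testraidz ...
zpool destroy testraidz

rm /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4
```

Running the same workload against each test pool gives a like-for-like
comparison on the hardware you actually have.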
Thanks, Bob. You're absolutely right.
I'd never seen those data results before, nor had I read the material
below until now; quite interesting and educational.
http://blogs.sun.com/roch/entry/when_to_and_not_to
--
| Jeremy Chadwick                                jdc at parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.            PGP: 4BD6C0CB   |
More information about the freebsd-fs mailing list