RAID + ZFS performance.
Jeremy Chadwick
freebsd at jdc.parodius.com
Sat Oct 30 20:11:49 UTC 2010
On Sat, Oct 30, 2010 at 09:13:41PM +0200, Peter Ankerstål wrote:
> >
> >
> >> Now you presented me with a third option. So you think I should
> >> skip creating a new hardware-RAID mirror and instead use two single
> >> drives and add them as a mirror to the existing pool?
> >
> > If you're going to keep the hardware RAID, then setting up a new
> > hardware RAID of two drives and striping da1 with da0 via ZFS is a
> > viable option. It's just another spin on the RAID 10 idea.
> >
> Sorry to ask again, but I'm still not sure which you think is the
> best solution: adding the two new drives as a ZFS mirror, like
>
> pool
>   da0
>   mirror
>     da1
>     da2
>
> or making a hardware mirror (da1) and adding that one:
>
> pool
>   da0
>   da1
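For reference, and assuming your pool really is named "pool" and the
new disks show up as da1/da2 (adjust device names to match your
system), the two layouts would be created roughly like this; untested,
just to make sure we're talking about the same thing:

  # Option 1: hand both new disks to ZFS as a mirror vdev.  Note that
  # zpool will likely want -f here, since the pool would then mix a
  # plain vdev (da0) with a mirror vdev.
  zpool add pool mirror da1 da2

  # Option 2: build the mirror on the RAID controller first, then give
  # the resulting single device to ZFS as another plain vdev.
  zpool add pool da1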
The answer is "it depends", and I can't authoritatively act as your
system administrator, since I have no familiarity with what your
systems are doing. That's your job. :-) You'd need to disclose exactly:
- What hardware RAID controller you're using and all of its
capabilities, including whether it has a cache and a BBU;
- Full details of the workload on the machine and what the majority of
its I/O consists of;
- What exact OS you're running (uname -a, please) and how much physical
RAM the system has.
If you really want to answer your own question, I would recommend at
least performing benchmarks (bonnie++ might suffice) with both setups.
And don't forget that if you use ZFS you'll need to perform some
minor loader.conf tuning, and expect to adjust values depending on
workload/environment.
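As a rough 8.x-era starting point for that tuning (the numbers here
are illustrative only, not a recommendation for your particular
machine), /boot/loader.conf tends to look something like:

  # Cap the ARC so it doesn't starve the rest of the system of memory.
  vfs.zfs.arc_max="2048M"
  # Give the kernel enough address space for ZFS; recent RELENG_8 on
  # amd64 auto-sizes this, so it may not be needed at all.
  vm.kmem_size="4096M"
  vm.kmem_size_max="4096M"

Re-run your benchmarks after each adjustment.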
> And by the way, you guys seem ZFS-shifty.
Language barrier detected! :-) "ZFS-shifty" could mean either "you're
ZFS advocates (fans of ZFS who recommend it over anything else)" or
"you're timid about or afraid of ZFS". I think you meant the first
one, but I'm not certain.
If so: believe it or not, I'm not much of a FreeBSD ZFS advocate. There
are issues that keep appearing on the mailing lists (-stable and -fs),
and each incident has to be handled individually. There are definitely
stability issues (we just experienced one ourselves which was major[1];
it's been fixed in RELENG_8 since mid-October) which are still getting
hammered out.
My logic is this, and it's just one man's opinion:
- If you need absolute stability, don't have the time or desire to
tinker with new technology, or run 100% mission-critical services,
stick with UFS + softupdates.
- If filesystem administrative simplicity is needed over everything
else, ZFS is an excellent choice.
- If you want ZFS and need absolute rock-solid performance, stability,
and It Should Just Work(tm), run Solaris 10 or OpenSolaris.
- If you're going to use ZFS on FreeBSD, you need to run RELENG_8, and
should almost certainly be running amd64 with at least 4GB of RAM.
> Do you have any ideas about my other problem i posted to the list?
> (http://lists.freebsd.org/pipermail/freebsd-fs/2010-October/009922.html)
Nope, I don't. I don't use ZFS send/recv or the snapshot capability.
I do keep seeing problems reported with both of these on the lists,
but again, they have to be handled on a per-case basis.
[1]: http://lists.freebsd.org/pipermail/freebsd-fs/2010-October/thread.html#9687
("Locked up processes after upgrade to ZFS v15")
--
| Jeremy Chadwick jdc at parodius.com |
| Parodius Networking http://www.parodius.com/ |
| UNIX Systems Administrator Mountain View, CA, USA |
| Making life hard for others since 1977. PGP: 4BD6C0CB |