Using HDDs for ZFS: 'desktop' vs 'raid / enterprise' -edition drives?
Steve Bertrand
steve at ibctech.ca
Sat Jan 3 00:25:34 UTC 2009
dick hoogendijk wrote:
> On Fri, 2 Jan 2009 15:17:30 -0500
> stan <stanb at panix.com> wrote:
>
>> On Fri, Jan 02, 2009 at 05:48:27PM +0100, Wojciech Puchar wrote:
>>>>> think twice before doing.
>>>> Could you elaborate please ?
>>> ZFS still doesn't work as described ...
>> Is that comment FreeBSD specific, or aimed at ZFS in general?
>
> Mind you, ZFS on FreeBSD is not the same as on OpenSolaris 2008.11,
> Nevada, or even Solaris 10. On those platforms ZFS generally does what
> it is supposed to do, other than that it's still a developing FS.
> On *BSD-related systems that is not always the case. Do a good read-up.
I had problems with ZFS about a year ago (or so).
Since then, for me, ZFS has been quite reliable:
amanda# zpool list
NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
storage  1.82T  1.21T   623G    66%  ONLINE  -
amanda# zpool status
        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad0     ONLINE       0     0     0
            ad2     ONLINE       0     0     0
            ad4     ONLINE       2     0     0
            ad6     ONLINE       0     1     0
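For what it's worth, when a pool shows non-zero READ/WRITE counters like ad4 and ad6 above, the usual drill is to scrub the pool so ZFS re-verifies every block against its checksums, then clear the counters and watch whether they come back. A rough sketch (pool name `storage` taken from the output above; run as root):

```shell
# Kick off a full scrub; ZFS reads and checksum-verifies all data,
# repairing from raidz parity where it can.
zpool scrub storage

# Check progress / results of the scrub.
zpool status storage

# Once the scrub finishes clean, reset the error counters so any
# new errors stand out on the next 'zpool status'.
zpool clear storage
```

If the counters keep climbing after a clear, that points at the drive or cabling rather than ZFS itself.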
...with four drives as such (I'd call them 'resi' or 'home-user' quality):
ad2: 476940MB <WDC WD5000AAKS-00A7B0 01.03B01> at ata1-master SATA300
This machine, which runs the AMANDA backup archiver and backs up ~8
FreeBSD servers at about 120 Mbps every night, is:
amanda# uname -a
FreeBSD amanda.x 7.0-STABLE FreeBSD 7.0-STABLE #0: Thu Jul 17 15:24:40
UTC 2008 steve at x:/usr/obj/usr/src/sys/GENERIC i386
I've pushed the machine to 686 Mbps of network traffic at 225 kpps,
including FreeBSD SCP and Windows NetBIOS clients while running iperf
against other boxen, and was still able to read from and write to the
storage.
Instead of this one-liner crap 'don't do it' information to the users of
this list, let's begin explaining *why* it's not working, and start
providing coherent solutions as to how the OP can work around the issue,
huh?
Steve