ZFS in production on 64 bit
Tonix (Antonio Nati)
tonix at interazioni.it
Wed Jul 8 08:22:20 UTC 2009
Freddie Cash wrote:
> On Tue, Jul 7, 2009 at 9:31 AM, Dennis Yusupoff <dyr at homelink.ru> wrote:
>
>
>>> If there's anything missing from there that you would like to know, just
>>> ask. :)
>>>
>> At first, I would like to say thanks for your detailed "success-story"
>> report. It was great!
>> So, now a question. ;)
>> Have you had any HDD failures, and if so, how did you repair the
>> filesystem and so on?
>>
>>
>
>
> We've had one drive fail so far, which is how we discovered that our initial
> pool setup was horribly, horribly, horribly misconfigured. We originally
> used a single raidz2 vdev across all 24 hard drives. NOT RECOMMENDED!!! Our
> throughput was horrible (it took almost 8 hours to complete a backup run of
> fewer than 80 servers). We spent over a week trying to get the new drive to
> resilver, but it just thrashed the drives.
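>
> The drive swap itself is only a couple of commands; it was the pool
> layout, not the procedure, that made the resilver crawl. Pool and device
> names here are made up for illustration:
>
>     # rebuild onto the new disk that took the failed one's place
>     zpool replace storage da5
>     # watch resilver progress and per-device error counters
>     zpool status storage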
>
> Then I found a bunch of articles online describing how the raidz
> implementation works (each raidz vdev is limited to the IOps of a single
> drive), and recommending that one not use more than 8 or 9 drives in a
> raidz vdev. We built the secondary server using the 3-raidz-vdev layout,
> and copied over as much data as we could (we lost 3 months of daily
> backups and saved 2 months). Then we rebuilt the primary server using the
> same 3-raidz-vdev layout, and copied the data back.
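>
> To make the layouts concrete (device names are illustrative, and I'm
> assuming raidz2 for the new vdevs as well), the two setups look roughly
> like this when created:
>
>     # what we had: one huge raidz2 vdev across all 24 disks -- don't do this
>     #   zpool create storage raidz2 da0 da1 da2 ... da23
>
>     # what we use now: three 8-disk vdevs in the same pool
>     zpool create storage \
>         raidz2 da0  da1  da2  da3  da4  da5  da6  da7  \
>         raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
>         raidz2 da16 da17 da18 da19 da20 da21 da22 da23
>
> The pool then stripes across the three vdevs, so random IOps scale with
> the number of vdevs instead of being pinned to a single drive's worth.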
>
> Since then, we haven't had any other harddrive issues.
>
> And, we now run a "zpool scrub" every weekend to check for filesystem
> inconsistencies, bad checksums, bad data, and so on. So far, no issues
> found.
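>
> The scrub is a one-liner that we run from cron ("storage" is just a
> placeholder pool name here):
>
>     # walk every block in the pool and verify checksums, repairing from
>     # parity where needed
>     zpool scrub storage
>     # afterwards, look for checksum errors and repaired data
>     zpool status -v storage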
>
>> Why do you use software RAID, not hardware?
>>
>>
>
> For the flexibility, and all the integrity features of ZFS. The pooled
> storage concept is just so much nicer/easier to work with than hardware RAID
> arrays, separate LUNs, separate volume managers, separate partitions, etc.
>
> Need more storage? Just add another raidz vdev to the pool. Instantly have
> more storage space, and performance increases as well (the pool stripes
> across all the vdevs by default). Don't have any more drive bays? Then
> just replace the drives in the raidz vdev with larger ones. All the space
> becomes available to the pool. And *all* the filesystems use that pool, so
> they all get access to the extra space (no reformatting, no repartitioning,
> no offline expansion required).
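>
> Both cases are a single command each (device names hypothetical):
>
>     # grow the pool by adding another 8-disk raidz vdev
>     zpool add storage raidz2 da24 da25 da26 da27 da28 da29 da30 da31
>
>     # or grow in place: swap each disk in a vdev for a larger one,
>     # letting the resilver finish between swaps; the extra capacity
>     # appears once the last disk in the vdev has been replaced
>     zpool replace storage da0 da32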
>
> Add in the snapshots feature, which actually works without slowing down the
> system (unlike UFS) or requiring "wasted"/pre-allocated space (unlike LVM),
> and it's hard to go back to hardware RAID. :)
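>
> Snapshots are equally cheap to take (filesystem name is just an example):
>
>     # snapshot a backup filesystem; takes effectively no time or space
>     zfs snapshot storage/backups@2009-07-07
>     # list existing snapshots
>     zfs list -t snapshot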
>
> Of course, we do still use hardware RAID controllers, for the disk
> management and alerting features, the onboard cache, the fast buses
> (PCI-X/PCIe), multi-lane cabling, hot-plug support, etc.; we just don't use
> the actual RAID features.
>
> All of our Linux servers still use hardware RAID (5 and 10), with LVM on
> top, and XFS on top of that. But it's just not as nice of a storage stack
> to work with. :)
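>
> For comparison, a rough sketch of that Linux stack (all names here are
> hypothetical) -- several steps where ZFS needs one:
>
>     # carve a logical volume out of the hardware RAID device, put XFS on it
>     pvcreate /dev/sda
>     vgcreate vg0 /dev/sda
>     lvcreate -L 500G -n backups vg0
>     mkfs.xfs /dev/vg0/backups
>     # growing it later: extend the LV, then grow the mounted filesystem
>     lvextend -L +200G /dev/vg0/backups
>     xfs_growfs /backups    # pass the filesystem's mount point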
>
>
Is there any plan to make ZFS clustered (I mean, using iSCSI disks)?
Is there anything special to do to make it work with heartbeat?
Tonino
--
------------------------------------------------------------
Inter at zioni Interazioni di Antonio Nati
http://www.interazioni.it tonix at interazioni.it
------------------------------------------------------------