HAST with broken HDD

George Kontostanos gkontos.mail at gmail.com
Wed Oct 1 21:33:56 UTC 2014


On Wed, Oct 1, 2014 at 6:51 PM, Matt Churchyard <matt.churchyard at userve.net>
wrote:

> On Wed, Oct 1, 2014 at 4:52 PM, InterNetX - Juergen Gotteswinter <
> jg at internetx.com> wrote:
>
> > On 01.10.2014 at 15:49, George Kontostanos wrote:
> > > On Wed, Oct 1, 2014 at 4:29 PM, InterNetX - Juergen Gotteswinter
> > > <jg at internetx.com> wrote:
> > >
> > >     On 01.10.2014 at 15:06, George Kontostanos wrote:
> > >     >
> > >     >
> > >     > On Wed, Oct 1, 2014 at 3:49 PM, InterNetX - Juergen Gotteswinter
> > >     > <jg at internetx.com> wrote:
> > >     >
> > >     >     On 01.10.2014 at 14:28, George Kontostanos wrote:
> > >     >     >
> > >     >     > On Wed, Oct 1, 2014 at 1:55 PM, InterNetX - Juergen Gotteswinter
> > >     >     > <jg at internetx.com> wrote:
> > >     >     >
> > >     >     >     On 01.10.2014 at 10:54, JF-Bogaerts wrote:
> > >     >     >     >    Hello,
> > >     >     >     >    I'm preparing an HA NAS solution using HAST.
> > >     >     >     >    I'm wondering what will happen if one of the disks
> > >     >     >     >    of the primary node fails or becomes erratic.
> > >     >     >     >
> > >     >     >     >    Thx,
> > >     >     >     >    Jean-François Bogaerts
> > >     >     >
> > >     >     >     Nothing. If you are using ZFS on top of HAST, ZFS won't
> > >     >     >     even take notice of the disk failure.
> > >     >     >
> > >     >     >     As long as the write operation was successful on one of
> > >     >     >     the two nodes, HAST doesn't notify the layers on top
> > >     >     >     about I/O errors.
> > >     >     >
> > >     >     >     Interesting concept; it took me some time to deal with
> > >     >     >     this.
> > >     >     >
> > >     >     >
> > >     >     > Are you saying that the pool will appear to be optimal even
> > >     >     > with a bad drive?
> > >     >     >
> > >     >     >
> > >     >
> > >     >     https://forums.freebsd.org/viewtopic.php?&t=24786
> > >     >
> > >     >
> > >     >
> > >     > It appears that this is actually the case. And it is very
> > >     > disturbing, meaning that a drive failure goes unnoticed. In my
> > >     > case I completely removed the second disk on the primary node and
> > >     > a zpool status showed absolutely no problem. Scrubbing the pool
> > >     > began resilvering, which indicates that there is actually
> > >     > something wrong!
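
(For reference, this is roughly the test I ran; the pool name below is a
placeholder, not the actual one from my setup:)

    # on the primary node, after physically removing the second disk
    zpool status tank       # still reports ONLINE, no errors
    zpool scrub tank        # only a scrub surfaces the damage
    zpool status tank       # now shows resilvering / checksum errors
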
> > >
> > >
> > >     Right. Let's go further and think about how ZFS works regarding
> > >     direct hardware / disk access. There's a layer in between which
> > >     always says "hey, everything is fine." No more need for pool
> > >     scrubbing, since hastd won't tell you if anything is wrong :D
> > >
> > >
> > > Correct, ZFS needs direct access, and any layer in between might end
> > > up in disaster!
> > >
> > > Which means that, practically, HAST should only be used in UFS
> > > environments backed by a hardware controller. In that case, HAST
> > > will again not notice anything (unless you lose the controller), but
> > > at least you will know that you need to replace a disk by
> > > monitoring the controller status.
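
(To be concrete about the controller monitoring I mean, something along these
lines; mfiutil is just an example for mfi(4) controllers, the exact utility
depends on your hardware:)

    # UFS on HAST on hardware RAID: HAST stays quiet, so watch the controller
    mfiutil show volumes     # volume state, e.g. OPTIMAL vs DEGRADED
    mfiutil show drives      # per-disk state, to spot a failed member
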
> > >
> >
> > IMHO this should be included at least as a notice/warning in the hastd
> > manpage; AFAIK there's no real warning about such problems with the
> > hastd/ZFS combo, but lots of howtos out there describe exactly
> > such setups.
> >
> > Yes, it should. I actually wrote a guide like that when HAST was in
> > its early stages. I never tested it for flaws, though. This thread
> > started ringing some bells!
>
>
>
> > Sad, since the comparable piece on Linux, DRBD, handles I/O
> > errors fine. The upper layers get notified as they should be, IMHO.
> >
> > My next lab environment will be to try a similar DRBD setup, although
> > some tests we performed last year with ZFS on Linux were not that
> > promising.
>
> From what I can see HAST is working, at least in concept, as it should.
>
> If you install any filesystem on top of a RAID mirror, either disk can
> fail and the filesystem above should just continue on as if nothing
> happened. It's up to the RAID layer to notify you of the problem.
>
> HAST is basically "RAID1-over-network", so if a disk fails, it should just
> handle read/writes using the other disk, and the filesystem on top, be it
> UFS/ZFS/whatever, should just carry on as normal (which is what has been
> observed). Of course, HAST (or the OS) should notify you of the disk error
> though (probably through devd) so you can do something about it. Maybe it
> already exists, but HAST should be able to provide overall status
> information and raise events just like ZFS or any RAID subsystem would. You
> also of course shouldn't get scrub errors and corruption like that seen in
> the original post either just because one half of the HAST mirror has gone.
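
Agreed. Until HAST raises events itself, the closest thing I can think of is
polling hastctl from cron; a rough sketch below, and note that the "degraded"
string is from memory, so check it against your own hastctl output:

    #!/bin/sh
    # Naive HAST health check, meant to run from cron on every node.
    # Mails root when hastctl reports a resource as degraded.
    status=$(hastctl status 2>&1) || exit 1
    echo "$status" | grep -qi degraded && \
        echo "$status" | mail -s "HAST degraded on $(hostname)" root
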
>
> Personally I've not been brave enough to use HAST yet. It seems to me like
> there are too many possibilities for situations where things can go wrong.
> One of these that has been discussed on the forums is that a ZFS scrub will
> only read data from the local disk. You could happily run a service from
> the master server for years, scrubbing regularly, never knowing that your
> data may be corrupt on the second HAST node. One 'solution' mentioned for
> this would be to regularly switch the master/slave nodes, running scrubs on
> each one while they are master.
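
For what it's worth, that switch-and-scrub routine would look roughly like
this; "shared" and "tank" are placeholder resource/pool names, and the
CARP/service failover steps are left out:

    # on the current master: hand the HAST resource over
    zpool export tank
    hastctl role secondary shared
    # then on the other node: take over, import the pool and scrub it there
    hastctl role primary shared
    zpool import tank
    zpool scrub tank
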
>
> --
> Matt
>

I believe that HAST began as FreeBSD's answer to DRBD. Given that FreeBSD had
(and still has) the competitive advantage of ZFS, maybe it should have been
designed with that in mind. ZFS is not meant to work behind an intermediate
layer; it needs direct access to the disks. So the mere fact that we are
using a technology that inserts another layer is already a show stopper.
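
To put it another way, the pool only ever sees the hast providers, never the
real disks (device names below are only an example):

    # what the HAST howtos describe: ZFS on top of the hast providers
    zpool create tank mirror /dev/hast/disk0 /dev/hast/disk1
    # what ZFS really wants: the raw disks themselves
    zpool create tank mirror /dev/ada1 /dev/ada2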

There are, of course, other ways to achieve redundancy and avoid going over
the network. But in some cases, where you need two storage systems located in
two different DCs, HAST might have been a good choice.

Of course, as you mentioned before, in order for this to work we would need
HAST to monitor the health of every resource. That could come from devd, from
another HAST daemon, or from a combination of the two. The administrator
should be able to easily get warnings about faulty components. We would also
need to use a fence device instead of relying on VRRP.
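
For the fencing part, I imagine something like the sketch below: power the
old master off out-of-band before promoting, instead of trusting CARP/VRRP
alone. The BMC address and credentials are made up, of course:

    #!/bin/sh
    # Promote-with-fencing sketch: make sure the old master is really down
    # (out-of-band power off) before taking over the resource and the pool.
    PEER_BMC=10.0.0.12
    ipmitool -I lanplus -H "$PEER_BMC" -U admin -P secret chassis power off || exit 1
    hastctl role primary shared
    zpool import -f tank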

Anyway, that's food for thought :)

-- 
George Kontostanos
---

