Any success stories for HAST + ZFS?

Freddie Cash fjwcash at gmail.com
Fri Apr 1 14:18:03 UTC 2011


On Fri, Apr 1, 2011 at 4:22 AM, Pete French <petefrench at ingresso.co.uk> wrote:
>> The other 5% of the time, the hastd crashes occurred either when
>> importing the ZFS pool, or when running multiple parallel rsyncs to
>> the pool.  hastd was always shown as the last running process in the
>> on-screen backtrace.
>
> This is what I am seeing - did you manage to reproduce this with the patch,
> or does it fix the issue for you? I am doing more tests now, with only a
> single hast device, to see whether it is stable. I am OK running without
> mirroring across hast devices for now, but wouldn't like to do so long term!

I have not been able to crash or hang the box since applying Mikolaj's patch.

I've tried the following (a rough sketch of the equivalent commands follows
the list):
  - destroy pool
  - create pool
  - destroy hast providers
  - create hast providers
  - switch from master to slave via hastctl using "role secondary all"
  - switch from slave to master via hastctl using "role primary all"
  - switch roles via hast-carp-switch, which switches one provider per second
  - import/export pool
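
For reference, the sequence looks roughly like this. This is only a sketch:
the resource names "disk0"/"disk1" and the pool name "tank" are placeholders,
and the resources are assumed to already be defined in /etc/hast.conf on
both nodes:

  # initialise HAST metadata and bring the providers up as primary
  hastctl create disk0
  hastctl create disk1
  hastctl role primary all

  # build the pool on the /dev/hast/* devices
  zpool create tank mirror /dev/hast/disk0 /dev/hast/disk1

  # failover: export, demote this node, then promote and import on the peer
  zpool export tank
  hastctl role secondary all
  # ...then on the peer:
  hastctl role primary all
  zpool import tank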

I've been running 6 parallel rsyncs for the past 48 hours, sustaining a
consistent 200 Mbps of transfer throughput, with just under 2 TB of deduped
data in the pool, without any lockups.
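
The rsync invocations themselves are nothing exotic; each of the six is
along these lines (the source and destination paths here are hypothetical):

  # archive mode, preserve hard links, mirror deletions, run in background
  rsync -aH --delete /backup/src1/ /tank/dst1/ &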

So far, so good.
-- 
Freddie Cash
fjwcash at gmail.com

