redundant storage

Julien Cigar julien.cigar at gmail.com
Fri Jun 3 15:01:27 UTC 2016


On Fri, Jun 03, 2016 at 09:34:24AM -0500, Valeri Galtsev wrote:
> 
> On Fri, June 3, 2016 6:50 am, Julien Cigar wrote:
> > On Fri, Jun 03, 2016 at 11:47:46AM +0100, Steve O'Hara-Smith wrote:
> >> On Fri, 3 Jun 2016 12:14:46 +0200
> >> Julien Cigar <julien.cigar at gmail.com> wrote:
> >>
> >> > On Fri, Jun 03, 2016 at 10:41:38AM +0100, Steve O'Hara-Smith wrote:
> >> > > 	Hi,
> >> > >
> >> > > 	Just one change - don't use RAID1 use ZFS mirrors. ZFS does
> >> > > better RAID than any hardware controller.
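
    For reference, a minimal sketch of the ZFS mirror setup being suggested,
    assuming two bare disks ada0 and ada1 and a pool named "tank" (all names
    hypothetical):

        # create a two-way mirror; ZFS checksums every block and can
        # self-heal a bad copy from the good side on read or during a scrub
        zpool create tank mirror ada0 ada1
        zpool status tank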
> >> >
> >> > right.. I must admit that I haven't looked at ZFS yet (I'm still using
> >> > UFS + gmirror), but this will be the opportunity to do so..!
> >> >
> >> > Does ZFS play well with HAST?
> >>
> >> 	Never tried it, but it should work well enough; ZFS sits on top of
> >> GEOM providers, so it should be possible to use the pool on the primary.
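
    A minimal sketch of that layering, assuming a single HAST resource named
    disk0 (hypothetical), which shows up as /dev/hast/disk0 on whichever node
    holds the primary role:

        # run on the current HAST primary only -- the /dev/hast/* devices
        # only exist on the node that holds the primary role
        hastctl role primary disk0
        zpool create tank /dev/hast/disk0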
> >>
> >> 	One concern would be that since all reads come from local storage,
> >> the secondary machine never gets scrubbed and silent corruption never
> >> gets detected on the secondary. A periodic (say weekly) switch-over and
> >> scrub takes care of this concern. Silent corruption is rare, but the
> >> bigger the pool and the longer it's used, the more likely it is to
> >> happen eventually. Detection and repair of this is one of ZFS's
> >> advantages over hardware RAID, so it's good not to defeat it.
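
    FreeBSD's periodic(8) framework can drive the regular scrub; a sketch for
    /etc/periodic.conf, assuming the pool is called tank (the switch-over
    between nodes would still be a separate hastctl/CARP step):

        # scrub pools at most every 7 days, from the daily periodic run
        daily_scrub_zfs_enable="YES"
        daily_scrub_zfs_pools="tank"
        daily_scrub_zfs_default_threshold="7"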
> >
> > Thanks, I'll read a bit about ZFS this weekend..!
> >
> > My ultimate goal would be that the HAST storage survives a hard reboot/
> > unplugged network cable/... during a heavy I/O write, and that the
> > switch between the two nodes is transparent to the clients, without any
> > data loss of course ... feasible or utopian? Needless to say, what I
> > want to avoid at all costs is the storage becoming corrupted and
> > unrecoverable..!
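
    The failover itself is scriptable, even if full transparency depends on
    the client protocol (NFS clients will typically just block and retry
    during a short switch). A rough sketch of what the new master might run
    when CARP promotes it, assuming a HAST resource disk0 and a pool tank
    (both hypothetical):

        #!/bin/sh
        # promote this node: take the HAST primary role, then bring
        # the pool and its consumers up
        hastctl role primary disk0
        zpool import -f tank        # -f: pool was last active on the peer
        service nfsd onerestart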
> 
> Sounds pretty much like a distributed file system solution. I tried one
> (moosefs) which I gave up on, and after I asked (on this list) for advice
> about other options, the next candidate that emerged for me was glusterfs,
> which I haven't had a chance to set up yet. You may want to search this
> list's archives; the experts there gave me really good advice.

sorry, but I avoid distributed FSes like the plague :)

> 
> Valeri
> 
> >
> >>
> >> 	Drive failures on the primary will wind up causing both the primary
> >> and the secondary to be rewritten when the drive is replaced - this
> >> could probably be avoided by switching primaries and letting HAST deal
> >> with the replacement.
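
    A sketch of that role-switch approach, assuming the failed disk backs a
    HAST resource named disk0 (hypothetical): demote the node with the bad
    drive so HAST resynchronises it from the peer, instead of ZFS resilvering
    through HAST:

        # on the node whose drive failed:
        hastctl role init disk0        # take the resource offline locally
        # ...physically replace the disk...
        hastctl create disk0           # write fresh HAST metadata on the new disk
        hastctl role secondary disk0   # resync pulls the data from the peer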
> >>
> >> 	Another very minor issue would be that any corrective rewrites (for
> >> detected corruption) will happen on both copies but that's harmless and
> >> there really should be *very* few of these.
> >>
> >> 	One final concern, but it's purely HAST and not really ZFS: writing
> >> a large file flat out will likely saturate your LAN, with half the
> >> capacity going to copying the data for HAST. A private backend link
> >> between the two boxes would be a good idea (or 10 gigabit ethernet).
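
    The dedicated link can be expressed directly in hast.conf(5); a sketch,
    assuming hypothetical hosts hostA and hostB joined by a point-to-point
    192.168.100.0/24 backend network:

        resource disk0 {
                on hostA {
                        local /dev/ada0
                        remote 192.168.100.2    # hostB's backend address
                }
                on hostB {
                        local /dev/ada0
                        remote 192.168.100.1    # hostA's backend address
                }
        }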
> >
> > yep, that's what I had in mind..! one NIC for the replication between
> > the two HAST nodes, and one (CARP) NIC through which clients access the
> > storage..
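
    A minimal CARP sketch for the client-facing NIC, assuming interface em0,
    VHID 1, and a shared address of 192.168.1.100 (all hypothetical; advskew
    would be set higher on the backup node):

        # /boot/loader.conf
        carp_load="YES"

        # /etc/rc.conf
        ifconfig_em0="inet 192.168.1.2/24"
        ifconfig_em0_alias0="inet vhid 1 advskew 0 pass s3cret alias 192.168.1.100/32"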
> >
> >>
> >> > > On Fri, 3 Jun 2016 10:38:43 +0200
> >> > > Julien Cigar <julien.cigar at gmail.com> wrote:
> >> > >
> >> > > > Hello,
> >> > > >
> >> > > > I'm looking for a low-cost redundant HA storage solution for our
> >> > > > (small) team here (~30 people). It will be used to store files
> >> > > > generated by some webapps, to provide a redundant dovecot (imap)
> >> > > > server, etc.
> >> > > >
> >> > > > For the hardware I have to go with HP (no choice), so I planned
> >> > > > to buy 2 x HP ProLiant DL320e Gen8 v2 E3-1241v3 (768645-421) with
> >> > > > 4 x WD Hard Drive Re SATA 4TB 3.5in 6Gb/s 7200rpm 64MB Buffer
> >> > > > (WD4000FYYZ) in a RAID1 config (the machine has a SmartArray P222
> >> > > > controller, which is apparently supported by the ciss driver)
> >> > > >
> >> > > > On the FreeBSD side I plan to use HAST with CARP, and the volumes
> >> > > > will be exported through NFS4.
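
    The NFSv4 piece is a few rc.conf knobs plus an exports(5) file; a sketch,
    assuming the pool mounts at /tank and the clients live on 192.168.1.0/24:

        # /etc/rc.conf
        nfs_server_enable="YES"
        nfsv4_server_enable="YES"
        nfsuserd_enable="YES"

        # /etc/exports
        V4: /tank -network 192.168.1.0/24
        /tank -alldirs -network 192.168.1.0/24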
> >> > > >
> >> > > > Any comments on this setup (or other recommendations) ? :)
> >> > > >
> >> > > > Thanks!
> >> > > > Julien
> >> > > >
> >> > >
> >> > >
> >> > > --
> >> > > Steve O'Hara-Smith <steve at sohara.org>
> >> > >
> >> >
> >>
> >>
> >> --
> >> Steve O'Hara-Smith                          |   Directable Mirror Arrays
> >> C:>WIN                                      | A better way to focus the sun
> >> The computer obeys and wins.                |    licences available see
> >> You lose and Bill collects.                 |    http://www.sohara.org/
> >>
> >
> > --
> > Julien Cigar
> > Belgian Biodiversity Platform (http://www.biodiversity.be)
> > PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 A204 23C0
> > No trees were killed in the creation of this message.
> > However, many electrons were terribly inconvenienced.
> >
> 
> 
> ++++++++++++++++++++++++++++++++++++++++
> Valeri Galtsev
> Sr System Administrator
> Department of Astronomy and Astrophysics
> Kavli Institute for Cosmological Physics
> University of Chicago
> Phone: 773-702-4247
> ++++++++++++++++++++++++++++++++++++++++

-- 
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.