HAST + ZFS + NFS + CARP

Julien Cigar julien at perdition.city
Mon Jul 4 19:31:40 UTC 2016


On Mon, Jul 04, 2016 at 11:56:57AM -0700, Jordan Hubbard wrote:
> 
> > On Jul 4, 2016, at 11:36 AM, Julien Cigar <julien at perdition.city> wrote:
> > 
> > I think the discussion evolved a bit since I started this thread, the
> > original purpose was to build a low-cost redundant storage for a small
> > infrastructure, no more no less.
> > 
> > The context is the following: I work in a small company, partially
> > financed by public funds, we started small, evolved a bit to a point
> > that some redundancy is required for $services. 
> > Unfortunately I'm alone to take care of the infrastructure (and it's 
> > only 50% of my time) and we don't have that much money :( 
> 
> Sure, I get that part also, but let’s put the entire conversation into context:
> 
> 1. You’re looking for a solution to provide some redundant storage in a very specific scenario.
> 
> 2. We’re talking on a public mailing list with a bunch of folks, so the conversation is also naturally going to go from the specific to the general - e.g. “Is there anything of broader applicability to be learned / used here?”  I’m speaking more to the larger audience who is probably wondering if there’s a more general solution here using the same “moving parts”.

Of course! It has been an interesting discussion, I've learned some
things, and it's always enjoyable to get different points of view.

> 
> To get specific again, I am not sure I would do what you are contemplating given your circumstances since it’s not the cheapest / simplest solution.  The cheapest / simplest solution would be to create 2 small ZFS servers and simply do zfs snapshot replication between them at periodic intervals, so you have a backup copy of the data for maximum safety as well as a physically separate server in case one goes down hard.  Disk storage is the cheap part now, particularly if you have data redundancy and can therefore use inexpensive disks, and ZFS replication is certainly “good enough” for disaster recovery.  As others have said, adding additional layers will only increase the overall fragility of the solution, and “fragile” is kind of the last thing you need when you’re frantically trying to deal with a server that has gone down for what could be any number of reasons.
> 
> I, for example, use a pair of FreeNAS Minis at home to store all my media and they work fine at minimal cost.  I use one as the primary server that talks to all of the VMWare / Plex / iTunes server applications (and serves as a backup device for all my iDevices) and it replicates the entire pool to another secondary server that can be pushed into service as the primary if the first one loses a power supply / catches fire / loses more than 1 drive at a time / etc.  Since I have a backup, I can also just use RAIDZ1 for the 4x4Tb drive configuration on the primary and get a good storage / redundancy ratio (I can lose a single drive without data loss but am also not wasting a lot of storage on parity).

You're right, I'll definitely reconsider the zfs send / zfs receive
approach.
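
For anyone following along, a minimal sketch of what that periodic
snapshot replication could look like, run from cron on the primary.
The pool/dataset name "tank/data" and the host "backuphost" are
hypothetical placeholders; this assumes key-based ssh between the two
boxes and is a starting point, not a hardened solution (no locking,
no snapshot pruning):

```shell
#!/bin/sh
# Hypothetical periodic ZFS replication sketch (primary -> backup).
# DATASET and REMOTE are placeholder names; adjust for your setup.

DATASET="tank/data"
REMOTE="backuphost"
NOW="$(date +%Y%m%d-%H%M)"

# Take a new replication snapshot on the primary.
zfs snapshot "${DATASET}@repl-${NOW}"

# Find the previous replication snapshot, if one exists.
PREV="$(zfs list -H -t snapshot -o name -s creation "${DATASET}" \
        | grep '@repl-' | tail -2 | head -1)"

if [ -n "${PREV}" ] && [ "${PREV}" != "${DATASET}@repl-${NOW}" ]; then
    # Incremental send: only the changes since the last snapshot.
    zfs send -i "${PREV}" "${DATASET}@repl-${NOW}" \
        | ssh "${REMOTE}" zfs receive -F "${DATASET}"
else
    # First run: full send of the dataset.
    zfs send "${DATASET}@repl-${NOW}" \
        | ssh "${REMOTE}" zfs receive -F "${DATASET}"
fi
```

Something like this in /etc/crontab every 15 minutes gives a bounded
data-loss window with none of the HAST/CARP moving parts.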

> 
> Just my two cents.  There are a lot of different ways to do this, and like all things involving computers (especially PCs), the simplest way is usually the best.
> 

Thanks!

Julien

> - Jordan
> 

-- 
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.

