ZFS on FreeBSD hardware model for 48 or 96 sata3 paths...

Jason Usher jusher71 at yahoo.com
Tue Sep 20 19:25:46 UTC 2011


Hi Julian,

--- On Mon, 9/19/11, Julian Elischer <julian at freebsd.org> wrote:


> jason, you still haven't said what
> the reason for all this is..  speed, capacity, both or
> some other reason..
> (or if you did, I missed it).


I did, but I replied badly and it didn't thread right - sorry.

The use case is very simple - nothing interesting at all - just a big local fileserver that will get hit by a lot of long-running rsync and sftp jobs, as well as some simple but intensive local housekeeping jobs (file culling with 'find', legacy hardlink "snapshots", and other things that could be done in better ways, but won't be).

In fact, the only interesting aspect of the whole operation is that there are a few hundred million inodes in use and the average file size is between 150 and 200 KB.
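To make the housekeeping concrete, the jobs look roughly like this - the paths, dates and 90-day retention below are made up for illustration, and the hardlink "snapshot" shown is just the usual rsync --link-dest trick, not necessarily exactly what we run:

  #!/bin/sh
  # illustrative housekeeping sketch - paths and retention are hypothetical
  SRC=/tank/data
  SNAPDIR=/tank/snapshots

  # legacy hardlink "snapshot": files unchanged since yesterday's tree are
  # hardlinked rather than copied, so each daily tree is cheap on space
  rsync -a --link-dest="$SNAPDIR/$(date -v-1d +%Y-%m-%d)" \
      "$SRC/" "$SNAPDIR/$(date +%Y-%m-%d)/"

  # file culling: walk the tree and delete files not modified in 90 days
  find "$SRC" -type f -mtime +90 -delete

With a few hundred million inodes, both the find walk and the rsync tree scan end up being metadata-heavy rather than bandwidth-heavy.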

So why am I going on about PCIe paths and dedicated drive paths, etc.?  No reason - I just thought it was a simple and cheap optimization that would let me never worry about a certain class of problems - admittedly, problems I might never run into.  I'm not going to double the cost of 48 drives to get this, nor the cost of 6 adaptor cards, but I *would* be willing to double the cost of a single motherboard to get it.

But now I see it's not that practical, and probably doesn't exist.  The latest, greatest 32-lane PCIe 2.0 motherboards tend to have just four slots, or other such complications.

So, if there isn't a better suggestion, I think I will economize a bit and get the Supermicro X8DTH-6F ... 8 core / 192 GB / 7 x8 slots ... or the X8DAH+-F ... 8 core / 288 GB / 2 x16, 4 x8, 1 x4 slots.
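As a rough sanity check on slot bandwidth for either board - the per-lane and per-drive numbers below are ballpark assumptions on my part, not measurements:

  #!/bin/sh
  # back-of-the-envelope: aggregate drive throughput vs. aggregate HBA slot bandwidth
  drives=48       # SATA3 drives
  per_drive=150   # MB/s sustained per drive (assumed)
  hbas=6          # adaptor cards, 8 drives each
  lanes=8         # PCIe 2.0 lanes per HBA slot
  per_lane=500    # MB/s per PCIe 2.0 lane, per direction (assumed)

  echo "drives:    $((drives * per_drive)) MB/s aggregate"      # 7200 MB/s
  echo "HBA slots: $((hbas * lanes * per_lane)) MB/s aggregate" # 24000 MB/s

So even with every drive streaming flat out, six x8 slots have headroom to spare; the concern was only ever whether the lanes behind those slots are actually dedicated.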

The other questions regarding the ZIL/L2ARC and so on have, I think, been answered - many thanks for all of the good suggestions and warnings.

