HPC and ZFS.

Peter Ankerstål peter at pean.org
Mon Feb 6 18:04:41 UTC 2012



On 6 Feb 2012, at 17:49, Michael Aronsen wrote:

> Hi,
> 
> On Feb 6, 2012, at 17:22, Jeremy Chadwick wrote:
>> - What single motherboard supports up to 192GB of RAM
> 
> Get an HP DL580 or DL585 - they support up to 2TB and 1TB of RAM respectively.
> 
>> - How you plan on getting roughly 410 hard disks (or 422 assuming
>>  an additional 12 SSDs) hooked up to a single machine
> 
> Use LSI SAS92xx controllers with 4 external x4 ports, and SuperMicro SC847E26-RJBOD1 disk shelves (45 bays each).
> Each disk shelf needs 2 ports on the LSI controller, which means you get 2 shelves, i.e. 90 disks, per LSI card.
> The DL580/585s have 11 PCIe slots, so you'd end up with 990 disks per server using this setup.
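> 
> A quick sanity check of that arithmetic, as a sketch in Python (the
> 45-bay figure is the published bay count for the SC847E26-RJBOD1; the
> other numbers are from above):
> 
>     bays_per_shelf = 45     # SC847E26-RJBOD1 drive bays
>     hba_ports = 4           # external x4 ports per LSI SAS92xx card
>     ports_per_shelf = 2     # each shelf consumes 2 HBA ports
>     pcie_slots = 11         # DL580/585 expansion slots
> 
>     shelves_per_hba = hba_ports // ports_per_shelf    # 2 shelves per card
>     disks_per_hba = shelves_per_hba * bays_per_shelf  # 90 disks per card
>     print(pcie_slots * disks_per_hba)                 # 990 disks per server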
> 
>> 
> 
> We have NetApps at our university for home storage, but I would struggle to recommend them for HPC storage.
> 
> A dedicated HPC filesystem such as Lustre or FhGFS (http://www.fhgfs.com/cms/) will almost certainly give you better performance, as they're purpose-built for this.
> 
> We use FhGFS in a rather small setup (44 TB usable space and ~200 HPC nodes), but they do have installations with 700 TB+.
> The setup consists of 2 metadata nodes and 4 storage nodes, all SuperMicro servers with 24 WD VelociRaptor 600 GB 10K RPM disks each.
> This setup gives us 4.8 GB/s write and 4.3 GB/s read speeds, all for a lot less than a comparable NetApp solution (we paid around €30,000).
> It now has support for mirroring at a per-folder level for resilience.
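> 
> For a rough feel of the per-node numbers implied above (a sketch; only
> the stated node/disk counts and aggregate speeds go in, the rest is
> simple division):
> 
>     storage_nodes = 4
>     disks_per_node = 24
>     write_gb_s, read_gb_s = 4.8, 4.3   # measured aggregate throughput
> 
>     print(write_gb_s / storage_nodes)  # 1.2 GB/s write per storage node
>     print(read_gb_s / storage_nodes)   # ~1.08 GB/s read per storage node
>     print(write_gb_s / (storage_nodes * disks_per_node))  # 0.05 GB/s per disk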
> 
> Currently it only runs on Linux, but I'm considering a FreeBSD port to get ZFS for volume management, and now that OFED is in FreeBSD 9, InfiniBand is possible.
> 
> I'd highly recommend a parallel filesystem; unfortunately, few if any are available on FreeBSD at this time.
> 
Thanks for the input. We actually had a visit from NetApp and Whamcloud recently, and they were pitching a NetApp + Whamcloud (Lustre) installation.


