Very large swap

Tim Daneliuk tundra at
Fri Oct 14 17:24:39 UTC 2011

On 10/14/2011 11:43 AM, Nikos Vassiliadis wrote:
> On 10/14/2011 8:08 AM, Dennis Glatting wrote:
>> This is kind of a stupid question, but at a minimum I thought it would
>> be interesting to know.
>> What are the limitations in terms of swap devices under RELENG_8 (or 9)?
>> A single swap dev appears to be limited to 32GB (there are truncation
>> messages on boot). I am looking at a possible need of 2-20TB (probably
>> more) with as much main memory as is affordable.
> The limit is raised to 256GB in HEAD and RELENG_8.
>> I am working with large data sets and there are various ways of solving
>> the problem sets but simply letting the processors swap as they work
>> through a given problem is a possible technique.
> I would advise against this technique. Possibly, it's easier to design
> your program to use smaller amounts of memory and avoid swapping.
> After all, designing your program to use big amounts of swapped-out
> memory *and* perform in a timely manner can be very challenging.
> Nikos

Well ... I dunno how much large-dataset processing you've done, but
it's not that simple.  Ordinarily, with modern machines and
architectures, you're right.  In fact, you NEVER want to swap;
instead, throw memory at the problem.

But when you get into really big datasets, it's a different story.
You probably will not find a mobo with 20TB memory capacity :)
So ... you have to do something with disk.  You generally get
two choices: memory-mapped files or swap.  It's been some years
since I considered either seriously, but they do have tradeoffs.
MM files give the programmer very fine-grained control over exactly
what might get pushed out to disk, at the cost of user-space context
switching.  Swap gets managed by the kernel, which is about as
efficient as disk I/O is going to get, but it means that what gets
moved on and off disk, and how, is invisible to the application.

What a lot of big-data shops are moving to is SSD for such operations.
SSD is VERY fast and can be RAIDed to overcome the tendency of at least
the early SSD products to, um ... blow up.

As always, scale is hard, and giant data problems are Really Hard (tm).
That's why people like IBM, Sun/Oracle, and Teradata make lots of money
building giant iron farms.

'Just my 2^1 cents worth ...

Tim Daneliuk     tundra at
PGP Key:

More information about the freebsd-questions mailing list