One or Four?

Paul Mather paul at gromit.dlib.vt.edu
Tue Feb 21 01:08:23 UTC 2012


On Sat, 18 Feb 2012 08:39:53, Matthew Seaman <m.seaman at infracaninophile.co.uk> wrote:

> On 17/02/2012 22:17, Chuck Swiger wrote:
>> On Feb 17, 2012, at 2:05 PM, Robison, Dave wrote:
>>>> We'd like a show of hands to see if folks prefer the "old" style
>>>> default with 4 partitions and swap, or the newer iteration with 1
>>>> partition and swap.
> 
>> For a user/desktop machine, I prefer one root partition.  For other
>> roles like a server, I prefer multiple partitions which have been
>> sized for the intended usage.
> 
> I thought the installer switched to the one-partition style based on
> disk size?  Whatever.  Personally I much prefer using one big partition,
> even for servers -- this applies to /, /usr, /usr/local, /var --
> standard OS level bits, and not to application specific bits like
> partitions dedicated to RDBMS data areas (particularly if the
> application needs to write a lot of data).  Having /tmp on a separate
> memory backed filesystem is important though: if sshd can't create its
> socket there, then you won't be able to login remotely and fix things.
> 
> The reasoning is simple: running out of space in any partition requires
> expensive sys-admin intervention to fix.  The root partition has
> historically been a particular problem in this regard.   Even if it is
> just log files filling up /var -- sure you can just remove some files,
> but why would you keep the logs in the first place if they weren't
> important?  Splitting space up into many small pieces means each piece
> has limited headroom in which to expand.  Having effectively one common
> chunk of free space makes that scenario much less likely[*].
> 
> Yes, in principle you can fill up the entire disk like this.  However,
> firstly, on FreeBSD that doesn't actually tend to kill the server
> entirely, unless the workload is write-heavy (but see the caveat above
> about application specific partitions) and the system will generally
> carry on perfectly happily if you can get rid of some files and create
> space.  [Note: this is not true of most OSes -- FreeBSD is particularly
> good in this regard.]  Secondly, typical server grade hardware will have
> something like 80--120GB for system drives nowadays.  FreeBSD + a
> selection of server applications takes under 5GB.   Even allowing for a
> pretty large load of application data, you're going to have tens of GB
> of free space there.  Generally your monitoring is going to flag that
> the disk is filling up well before the space does run out.  Yes, I know
> there are disaster scenarios where the disk fills up in minutes; you're
> screwed whatever partitioning scheme you use in those cases, just a few
> seconds slower than in the multiple partitions case.


I'm coming into this thread part way through, so maybe this has been pointed out already, but, if so, I didn't see it.

It seems from reading this thread that the focus has been on the running-out-of-space aspect.  Using multiple partitions has value beyond that: it can afford extra protection and help enhance security and even performance, because separate partitions can have different mount options.  (Even in the Linux world they recognise this: the NSA hardening tips for RHEL 5 [http://www.nsa.gov/ia/_files/os/redhat/rhel5-pamphlet-i731.pdf] suggest putting areas with user-writeable directories on separately-mounted file systems and using mount options to limit user access appropriately.)  Options like noexec and nosuid may help improve security; options like noatime and async may help improve performance.
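As a concrete sketch of what I mean (device names and sizes here are purely illustrative, not a recommendation), an /etc/fstab along these lines applies different mount options per partition:

```
# Illustrative /etc/fstab fragment -- device names are hypothetical.
# Device        Mountpoint   FStype  Options               Dump  Pass
/dev/ada0p2     /            ufs     rw                    1     1
/dev/ada0p4     /var         ufs     rw,noatime            2     2
/dev/ada0p5     /tmp         ufs     rw,noexec,nosuid      2     2
/dev/ada0p6     /usr         ufs     rw                    2     2
```

With one big root partition you get a single options set for everything; split out, /tmp can forbid suid binaries and execution entirely while /var skips atime updates.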

Using multiple partitions is very helpful if you are backing up using dump.  It can also help segregate areas of high file system churn, e.g., /usr/ports, /usr/obj, /usr/src, etc.  I like to keep these on separate file systems so I can treat them differently from system areas I consider to be more stable and valuable.
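For instance, because dump operates on whole filesystems, separate partitions let you back up each one on its own schedule (the paths and dump levels below are just examples):

```
# Hypothetical example: dump(8) works per filesystem, so /var can be
# backed up on its own schedule.  -L snapshots the live UFS filesystem,
# -u records the dump in /etc/dumpdates, -a auto-sizes the output.
dump -0 -L -a -u -f /backup/var.dump.0 /var     # weekly full (level 0)
dump -1 -L -a -u -f /backup/var.dump.1 /var     # daily incremental (level 1)
```

High-churn areas like /usr/obj need not be dumped at all, since they can be regenerated.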


> [*] Mostly I prefer ZFS nowadays, which renders this whole argument
> moot, as having one common pool of free space is exactly how ZFS works.


I almost always use ZFS-only installs these days, for exactly the reasons you mention.  You get the best of both worlds: pooled storage (meaning not having to agonize over partition sizes) and fine-grained control over file sets (meaning being able to tune attributes to enhance security and performance).
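To illustrate that fine-grained control (the pool and dataset names here are hypothetical), per-dataset ZFS properties give you the same kind of tuning that mount options give separate partitions, while all datasets draw on one pool of free space:

```
# Hypothetical examples -- pool/dataset names are illustrative.
zfs create -o setuid=off -o exec=off zroot/tmp    # security: no suid or exec in /tmp
zfs create -o atime=off zroot/usr/obj             # performance: skip atime updates
zfs set quota=10g zroot/var/log                   # cap runaway log growth
```

There is no need to size anything up front; a quota can be added or changed later if one dataset threatens to crowd out the rest.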

Cheers,

Paul.



More information about the freebsd-questions mailing list