Large File System?
Chad Leigh -- Shire.Net LLC
chad at shire.net
Tue Aug 8 19:07:24 UTC 2006
On Aug 8, 2006, at 1:01 PM, Freminlins wrote:
> Yes, I had all that. It is of absolutely no use in the event of an
> unclean shutdown (on FreeBSD). If the file system itself is dirty, it
> will need to be fsck'd. The bigger the file system, the longer it
> takes (generally). That is what journalling saves you.
>
> To give you some indication of what this means in real life, I'll
> refer (again, sorry) to a power outage we suffered in our colo. This
> is FreeBSD on modern hardware:
>
> Jul 23 17:52:05 weeble kernel: WARNING: /var was not properly dismounted
> Jul 23 17:55:52 weeble fsck: /dev/aacd0s1f: 1352 files, 956469 used,
> 13988364 free (1484 frags, 1748360 blocks, 0.0% fragmentation)
>
> I've snipped out the logs in between. But that's nearly 4 minutes for
> it to get itself sorted out. That file system has only 1.9GB of data.
> Our Solaris boxes came up straight away.
Right now, if avoiding fsck is really important for your data store,
get an OpenSolaris system and put ZFS on it. You'll never fsck again,
since the on-disk state is (they claim) ALWAYS coherent. Or wait for
ZFS to show up on FreeBSD.
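As a rough sketch of how little administration that involves (pool,
dataset, and disk device names below are hypothetical; check your own
hardware with format(1M)), creating and using a ZFS pool on
OpenSolaris is a couple of commands, and there is no fsck step
anywhere:

```shell
# Create a mirrored pool from two hypothetical disks.
zpool create tank mirror c1t0d0 c1t1d0

# Datasets are created and mounted in one step, no newfs/fstab needed.
zfs create tank/data

# ZFS validates its own on-disk state; "zpool status" reports health,
# and "zpool scrub" checks checksums online instead of an offline fsck.
zpool status tank
```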
For these reasons, among others, I am implementing a Solaris server
with 1.7TB on ZFS and sharing it to a bunch of FreeBSD machines over
NFS, on dedicated gigabit with jumbo frames, on interfaces separate
from the standard default interface. (My main reason was to not have
storage tied to an individual worker server.)
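For what it's worth, the setup above boils down to something like the
following sketch. All interface names, hostnames, and the dataset path
are hypothetical, and whether ifconfig accepts an mtu setting on
Solaris depends on the NIC driver:

```shell
# --- Solaris/ZFS server side ---
# Export the dataset over NFS via the ZFS sharenfs property.
zfs set sharenfs=on tank/data

# Jumbo frames on the dedicated storage interface (driver permitting).
ifconfig e1000g1 mtu 9000

# --- FreeBSD client side ---
# Matching MTU on the client's dedicated storage interface.
ifconfig em1 mtu 9000

# Mount over TCP with larger read/write sizes to benefit from jumbo frames.
mount -t nfs -o tcp,rsize=32768,wsize=32768 solbox:/tank/data /mnt/data
```

Both MTUs have to match end to end (including any switch in between),
or you get silent fragmentation instead of a speedup.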
Chad
---
Chad Leigh -- Shire.Net LLC
Your Web App and Email hosting provider
chad at shire.net
More information about the freebsd-questions mailing list