michaelnottebrock at gmx.net
Sun Nov 28 18:48:46 PST 2004
I recently had a filesystem go bad on me in such a way that it was recognized
as way bigger than it actually was, causing fsck to fail while trying to allocate
an equally astronomical amount of memory (and my machine already had 1 Gig of
mem + 2 Gig swap available).
I just newfs'd and I'm now in the process of restoring data. However, I
googled a bit on this, and it seems that this kind of fs corruption occurs
quite often, in particular after power failures.
Is there really no way that fsck could be made smarter about dealing with
seemingly huge filesystems? Also, what kind of memory would be required to
fsck a _real_ 11TB filesystem?
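As a rough, hedged back-of-envelope sketch (not taken from the fsck_ffs source): if fsck keeps on the order of one bit of block-usage state per fragment plus a few bytes of per-inode state (state flag and link count), an 11TB filesystem with default-ish UFS parameters would already need several gigabytes. The fragment size, inode density, and per-inode byte count below are all assumptions for illustration:

```python
def fsck_memory_estimate(fs_bytes, frag_size=2048,
                         bytes_per_inode=8192, per_inode_bytes=4):
    """Very rough estimate of fsck working-set size in bytes.

    Assumptions (hypothetical, for illustration only):
      - 1 bit of block-usage bitmap per fragment
      - one inode per `bytes_per_inode` of filesystem space
      - `per_inode_bytes` of state (flags + link count) per inode
    """
    frags = fs_bytes // frag_size
    inodes = fs_bytes // bytes_per_inode
    block_map = frags // 8          # bitmap: 1 bit per fragment
    inode_state = inodes * per_inode_bytes
    return block_map + inode_state

eleven_tb = 11 * 2**40
print(fsck_memory_estimate(eleven_tb) // 2**20, "MiB")  # ~6336 MiB
```

Under these (guessed) assumptions the answer comes out around 6 GiB, i.e. well beyond 1 Gig of RAM plus 2 Gig of swap, and the per-inode state dominates the bitmap. If these assumptions are anywhere near reality, that would explain why a filesystem whose metadata claims an astronomical size makes fsck try an equally astronomical allocation.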
,_, | Michael Nottebrock | lofi at freebsd.org
(/^ ^\) | FreeBSD - The Power to Serve | http://www.freebsd.org
\u/ | K Desktop Environment on FreeBSD | http://freebsd.kde.org