filesystem full error with inumber
Sven Willenberger
sven at dmv.com
Wed Jul 26 17:05:36 UTC 2006
Feargal Reilly presumably uttered the following on 07/24/06 11:48:
> On Mon, 24 Jul 2006 17:14:27 +0200 (CEST)
> Oliver Fromme <olli at lurza.secnetix.de> wrote:
>
>> Nobody else has answered so far, so I try to give it a shot ...
>>
>> The "filesystem full" error can happen in three cases:
>> 1. The file system is running out of data space.
>> 2. The file system is running out of inodes.
>> 3. The file system is running out of non-fragmented blocks.
>>
>> The third case can only happen on extremely fragmented
>> file systems, which is quite rare, but maybe it's a
>> possible cause of your problem.
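A quick way to tell which of the three cases applies; the mount point
and device names below are only examples:

  # df -h /var                  # case 1: free data space
  # df -i /var                  # case 2: free inodes (ifree/%iused columns)
  # dumpfs /dev/da0s1d | head   # case 3: superblock summary of free
                                # blocks vs. free fragments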
>
> I rebooted that server, and df then reported that disk at 108%,
> so it appears that df was reporting incorrect figures prior to
> the reboot. Having cleaned up, it appears by my best
> calculations to be showing correct figures now.
>
>> > kern.maxfiles: 20000
>> > kern.openfiles: 3582
>>
>> Those have nothing to do with "filesystem full".
>>
>
> Yeah, that's what I figured.
>
>> > Looking again at dumpfs, it appears to say that this is
>> > formatted with a block size of 8K, and a fragment size of
>> > 2K, but tuning(7) says: [...]
>> > Reading this makes me think that when this server was
>> > installed, the block size was dropped from the 16K default
>> > to 8K for performance reasons, but the fragment size was
>> > not modified accordingly.
>> >
>> > Would this be the root of my problem?
>>
>> I think a bsize/fsize ratio of 4/1 _should_ work, but it's
>> not widely used, so there might be bugs hidden somewhere.
>>
>
> Such as df not reporting the actual data usage, which is now my
> best working theory. I don't know what df bases its figures on;
> perhaps it slowly got out of sync or, more likely, got things
> wrong once the disk filled up.
>
> I'll monitor it to see if this happens again, but hopefully
> won't keep that configuration around for too much longer anyway.
>
> Thanks,
> -fr.
>
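For reference, the 8:1 default that tuning(7) describes versus the 4:1
layout discussed above would be created roughly as follows; the device
name is only an example, and newfs of course destroys any existing data:

  # dumpfs /dev/da0s1e | grep -E 'bsize|fsize'   # inspect current values
  # newfs -b 16384 -f 2048 /dev/da0s1e           # 8:1, the tuning(7) default
  # newfs -b  8192 -f 2048 /dev/da0s1e           # 4:1, the ratio on this server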
One of my machines, recently upgraded to 6.1 (6.1-RELEASE-p3), is also
exhibiting this problem of df reporting incorrect usage figures. Note
the negative "Used" numbers below:
> df -h
Filesystem      Size     Used    Avail Capacity  Mounted on
/dev/da0s1a     496M      63M     393M      14%  /
devfs           1.0K     1.0K       0B     100%  /dev
/dev/da0s1e     989M    -132M     1.0G     -14%  /tmp
/dev/da0s1f      15G     478M      14G       3%  /usr
/dev/da0s1d      15G    -1.0G      14G      -8%  /var
/dev/md0        496M     228K     456M       0%  /var/spool/MIMEDefang
devfs           1.0K     1.0K       0B     100%  /var/named/dev
Sven
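df takes its figures from the cached usage summary in the superblock
(via statfs(2)) rather than walking the files; Used is total blocks
minus free blocks, so a stale summary claiming more free blocks than
the filesystem actually has yields exactly these negative numbers.
Open-but-unlinked files aside, comparing du against df exposes a stale
summary, and a forced fsck rebuilds it. A sketch, using names from the
listing above (filesystems like /var or /tmp normally have to be
checked from single-user mode):

  # du -sxh /tmp          # actual space consumed by files under /tmp
  # df -h /tmp            # the superblock summary that df reports
  # umount /tmp           # if the two disagree badly:
  # fsck -f /dev/da0s1e   # force a check; rebuilds the usage summary
  # mount /tmp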