disk fragmentation, <0%?

Freminlins freminlins at gmail.com
Mon Aug 15 13:39:06 GMT 2005


On 8/15/05, Jerry McAllister <jerrymc at clunix.cl.msu.edu> wrote:

> As someone mentioned, there is a FAQ on this.   You should read it.
> 
> It is going negative because you have used more than the nominal
> capacity of the slice.   The nominal capacity is the total space
> minus the reserved proportion (usually 8%) that is held out.
> Root is able to write to that space and you have done something
> that got root to write beyond the nominal space.
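
For what it's worth, the reserve Jerry means is UFS's minfree: 8% by
default, tunable with tunefs -m. Roughly, the Avail that df prints is
the free count minus that reserve, so it goes negative as soon as root
writes past the line. A rough model of that relationship (the 8% and
the rounding are my assumptions, not the kernel's exact arithmetic):

/* Rough model of df's Avail column: free blocks minus the minfree
 * reserve (8% by default; see tunefs -m).  Not the kernel's exact
 * code, just the shape of it. */
#include <inttypes.h>
#include <stdio.h>

static int64_t
avail_blocks(int64_t dblocks, int64_t bfree, int minfree_pct)
{
    int64_t reserved = dblocks * minfree_pct / 100;
    return bfree - reserved;    /* < 0 once root eats into the reserve */
}

int
main(void)
{
    /* 1M-block fs, 8% reserve: root has written 10000 blocks past
     * the nominal capacity, so only 70000 truly free blocks remain. */
    printf("%jd\n", (intmax_t)avail_blocks(1000000, 70000, 8));
    return 0;
}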

I'm not sure you are right in this case. I think you need to re-read
the post. I've quoted the relevant part here:
 
> > Filesystem     Size    Used   Avail Capacity  Mounted on
> > /dev/ar0s1e    248M   -278K    228M    -0%    /tmp

Looking at how the columns line up, I have to say that I too have
never seen this behaviour. As an experiment I over-filled a file
system; here are the results:

Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/ad0s1f    965M    895M   -7.4M   101%    /tmp

Note that Avail is negative but Capacity is not. So that makes three
of us in this thread who have not seen negative capacity on UFS.
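
The 101% is consistent with Capacity being Used / (Used + Avail):
895 / (895 - 7.4) is about 1.008, which rounds to 101%. Here's a
little program that recomputes the column straight from statfs(2);
the struct fields are the real 5.x struct statfs members, but the
formula is just my reading of how df behaves, not gospel:

/* capfs.c - recompute df(1)'s Used/Avail/Capacity from statfs(2).
 * A sketch: the formula and rounding here are approximate. */
#include <sys/param.h>
#include <sys/mount.h>
#include <inttypes.h>
#include <stdio.h>

int
main(int argc, char **argv)
{
    struct statfs sf;

    if (argc != 2 || statfs(argv[1], &sf) != 0) {
        fprintf(stderr, "usage: capfs /mountpoint\n");
        return 1;
    }

    /* f_bavail is what non-root users may still allocate: the free
     * count minus the minfree reserve.  It is a signed field and
     * goes negative once root writes into the reserve. */
    int64_t used  = (int64_t)(sf.f_blocks - sf.f_bfree);
    int64_t avail = sf.f_bavail;
    int64_t nominal = used + avail;   /* blocks visible to users */

    double cap = nominal != 0 ? 100.0 * used / nominal : 0.0;
    printf("used %jdK avail %jdK capacity %.0f%%\n",
        (intmax_t)(used * (int64_t)sf.f_bsize / 1024),
        (intmax_t)(avail * (int64_t)sf.f_bsize / 1024), cap);
    return 0;
}

By that arithmetic, the only way to get the -0% in the original post
is for Used itself to be negative, i.e. f_bfree > f_blocks, which
looks more like a statfs/UFS accounting bug than root dipping into
the reserve.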

I have seen negative capacity when running an old version of FreeBSD
with a very large NFS mount (not enough bits in statfs if I remember
correctly).
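
That overflow is easy to demonstrate. This assumes the pre-5.x
struct statfs kept block counts in 32-bit signed fields, which is my
recollection rather than something I've re-checked:

/* Toy demo of the old-statfs theory: a block count that needs more
 * than 31 bits wraps negative when stuffed into a 32-bit signed
 * field.  (3TB at 1K/block, i.e. ~3.2e9 blocks.) */
#include <inttypes.h>
#include <stdio.h>

int
main(void)
{
    int64_t blocks = 3LL * 1024 * 1024 * 1024;  /* ~3.2e9 blocks */
    int32_t old_field = (int32_t)blocks;        /* old field width */

    printf("real: %jd  truncated: %d\n", (intmax_t)blocks, old_field);
    /* On two's-complement machines this prints a negative number,
     * which df then renders as negative sizes/capacity. */
    return 0;
}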

> ////jerry

Frem.

