No more free space after upgrading to 10.1 and zpool upgrade

Adam Nowacki nowakpl at platinum.linux.pl
Tue Nov 18 17:49:00 UTC 2014


On 2014-11-18 06:44, Emil Mikulic wrote:
> On Sun, Nov 16, 2014 at 04:10:28PM +0100, Olivier Cochard-Labbé wrote:
>> On Sun, Nov 16, 2014 at 9:01 AM, Dylan Leigh <fbsd at dylanleigh.net> wrote:
>>
>>>
>>> Could you provide some other details about the pool structure/config,
>>> including the output of "zpool status"?
>>>
>>>
>> It's a raidz1 pool built with 5 SATA 2TB drives, and there are 5 ZFS
>> volumes without advanced features (no compression, no snapshots, no
>> dedup, etc.).
>> Because it's a raidz1 pool, I know that the FREE space reported by
>> "zpool list" includes redundancy overhead and is bigger than the AVAIL
>> space reported by "zfs list".
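>>
>> For instance, some rough numbers (my sketch, taking a "2TB" drive as
>> 2 * 10^12 raw bytes and ignoring metadata and rounding): "zpool list"
>> counts all 5 drives, while "zfs list" only ever sees the ~4/5 that is
>> left after raidz1 parity:
>>
>>   # echo "5 * 2 * 10^12" | bc           # raw size, the "zpool list" view
>>   10000000000000
>>   # echo "5 * 2 * 10^12 * 4 / 5" | bc   # usable size, the "zfs list" view
>>   8000000000000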
>>
>> I've moved about 100GB (one hundred gigabytes) of files, and after this
>> step there were only 2GB (two gigabytes) of free space left. How is
>> that possible?
> 
> I had the same problem. Very old pool:
> 
> History for 'jupiter':
> 2010-01-20.20:46:00 zpool create jupiter raidz /dev/ad10 /dev/ad12 /dev/ad14
> 
> I upgraded FreeBSD 8.3 to 9.0, which I think went fine, but when I
> upgraded to 10.1, I had 0B AVAIL according to "zfs list" and df(1),
> even though there was free space according to "zpool list":
> 
> # zpool list -p jupiter
> NAME              SIZE          ALLOC          FREE  FRAG  EXPANDSZ  CAP  DEDUP  HEALTH  ALTROOT
> jupiter  4466765987840  4330587288576  136178699264   30%         -   96  1.00x  ONLINE  -
> 
> # zfs list -p jupiter
> NAME                            USED        AVAIL          REFER  MOUNTPOINT
> jupiter                2884237136220            0          46376  /jupiter
> 
> Deleting files, snapshots, and child filesystems didn't help; AVAIL stayed at
> zero bytes... until I deleted enough:
> 
> NAME              SIZE          ALLOC          FREE  FRAG  EXPANDSZ  CAP  DEDUP  HEALTH  ALTROOT
> jupiter  4466765987840  4320649953280  146116034560   30%         -   96  1.00x  ONLINE  -
> 
> NAME              USED       AVAIL  REFER  MOUNTPOINT
> jupiter  2877618732010  4350460950  46376  /jupiter
> 
> Apparently, the above happened somewhere between 96.0% and 96.9% used.
> 
> Any ideas what happened here? It's almost like 100+GB of free space is somehow
> reserved by the system (and I don't mean "zfs set reservation"; those are all
> "none").

This commit is to blame:
http://svnweb.freebsd.org/base?view=revision&revision=268455

It reserves 3.125% (1/32) of the pool's space as "slop" space so that
administrative operations, such as deleting files, can still succeed
when the pool is otherwise full.
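
As a rough check against the numbers above (my arithmetic; internally
the reservation is computed from deflated space, so this is only
approximate), 1/32 of Emil's raw pool SIZE is:

  # echo "4466765987840 / 32" | bc
  139586437120

That's ~139.6GB, which falls right between the ~136GB FREE reported
while AVAIL was 0B and the ~146GB FREE once AVAIL came back. It also
matches the cliff Emil saw: 100% - 3.125% = 96.875%, i.e. between his
96.0% and 96.9% bounds. If I read the commit right, the divisor comes
from spa_slop_shift (default 5, hence 1/32); I believe FreeBSD exposes
it as the vfs.zfs.spa_slop_shift sysctl.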


