No more free space after upgrading to 10.1 and zpool upgrade

Xin Li delphij at delphij.net
Tue Nov 18 19:00:39 UTC 2014


On 11/18/14 09:29, Adam Nowacki wrote:
> On 2014-11-18 06:44, Emil Mikulic wrote:
>> On Sun, Nov 16, 2014 at 04:10:28PM +0100, Olivier Cochard-Labbé
>> wrote:
>>> On Sun, Nov 16, 2014 at 9:01 AM, Dylan Leigh <fbsd at
>>> dylanleigh.net> wrote:
>>> 
>>>> 
>>>> Could you provide some other details about the pool
>>>> structure/config, including the output of "zpool status"?
>>>> 
>>>> 
>>> It's a raidz1 pool built with 5 SATA 2TB drives, and there are
>>> 5 zvolumes without advanced features (no compression, no
>>> snapshots, no dedup, etc.). Because it's a raidz1 pool, I
>>> know that the FREE space reported by "zpool list" includes
>>> redundancy overhead and is bigger than the AVAIL space
>>> reported by "zfs list".
>>> 
>>> I've moved about 100GB (one hundred gigabytes) of files, and
>>> after this step there was only 2GB (two gigabytes) of free
>>> space left. How is this possible?
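
As a rough rule of thumb for a 5-disk raidz1, about one fifth of every
stripe goes to parity, so the AVAIL that "zfs list" reports should be
roughly

    AVAIL  ~=  FREE * 4/5  -  (metadata and reserved space)

with FREE taken from "zpool list"; the exact figures differ a bit
because of per-dataset overhead.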
>> 
>> I had the same problem. Very old pool:
>> 
>> History for 'jupiter': 2010-01-20.20:46:00 zpool create jupiter
>> raidz /dev/ad10 /dev/ad12 /dev/ad14
>> 
>> I upgraded FreeBSD 8.3 to 9.0, which I think went fine, but when
>> I upgraded to 10.1, I had 0B AVAIL according to "zfs list" and
>> df(1), even though there was free space according to "zpool
>> list"
>> 
>> # zpool list -p jupiter
>> NAME         SIZE           ALLOC          FREE          FRAG  EXPANDSZ  CAP  DEDUP  HEALTH  ALTROOT
>> jupiter      4466765987840  4330587288576  136178699264  30%   -         96   1.00x  ONLINE  -
>> 
>> # zfs list -p jupiter
>> NAME         USED           AVAIL  REFER  MOUNTPOINT
>> jupiter      2884237136220  0      46376  /jupiter
>> 
>> Deleting files, snapshots, and child filesystems didn't help,
>> AVAIL stayed at zero bytes... until I deleted enough:
>> 
>> NAME         SIZE           ALLOC          FREE          FRAG  EXPANDSZ  CAP  DEDUP  HEALTH  ALTROOT
>> jupiter      4466765987840  4320649953280  146116034560  30%   -         96   1.00x  ONLINE  -
>> 
>> NAME         USED           AVAIL       REFER  MOUNTPOINT
>> jupiter      2877618732010  4350460950  46376  /jupiter
>> 
>> Apparently, the above happened somewhere between 96.0% and 96.9%
>> used.
>> 
>> Any ideas what happened here? It's almost like 100+GB of free
>> space is somehow reserved by the system (and I don't mean "zfs
>> set reservation"; those are all "none").
> 
> This commit is to blame: 
> http://svnweb.freebsd.org/base?view=revision&revision=268455
> 
> 3.125% of disk space is reserved.
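
Back-of-the-envelope, assuming the reservation is exactly 1/32 of the
pool SIZE, the numbers above line up:

    4466765987840 / 32 = 139586437120      (~139.6 GB reserved)

In the first listing FREE was 136178699264, already below the
reservation, which is why "zfs list" showed 0 AVAIL.  After deleting
data FREE grew to 146116034560, and subtracting the reservation and
the 3-disk raidz1 parity gives roughly

    (146116034560 - 139586437120) * 2/3 ~= 4353064960

which is close to the 4350460950 AVAIL reported.  It also matches the
96.0%-96.9% observation: 100% - 3.125% = 96.875%.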

Note that the reserved space is so that one can always delete files,
etc. to get the pool back to a usable state.

I've added a new tunable/sysctl in r274674, but note that tuning it is
not recommended: if too much space gets used, the pool can become
permanently read-only, and one would then have to dump the data and
recreate the pool.
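
For reference, the knob in question is the slop space shift; on a
system with r274674 it should show up like this (default shown, if I
remember the numbers correctly):

    # sysctl vfs.zfs.spa_slop_shift
    vfs.zfs.spa_slop_shift: 5

where 5 means 1/2^5 = 1/32 = 3.125% of the pool is kept in reserve.
Raising the shift shrinks the reservation, which is exactly the risky
direction described above.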

Cheers,
- -- 
Xin LI <delphij at delphij.net>    https://www.delphij.net/
FreeBSD - The Power to Serve!           Live free or die

