No more free space after upgrading to 10.1 and zpool upgrade

Emil Mikulic emikulic at
Wed Nov 19 01:36:18 UTC 2014

On Tue, Nov 18, 2014 at 11:00:36AM -0800, Xin Li wrote:
> On 11/18/14 09:29, Adam Nowacki wrote:
> > This commit is to blame: 
> >
> > 
> > 3.125% of disk space is reserved.

This is the sort of thing I suspected, but I didn't spot this commit.

> Note that the reserved space is so that one can always delete files,
> etc. to get the pool back to a usable state.

What about the "truncate -s0" trick? That doesn't work reliably?
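For anyone unfamiliar with it, the trick is to shrink an existing file in place rather than unlink it, since truncation can release data blocks in situations where rm itself fails with ENOSPC. A minimal sketch, demonstrated on a throwaway temp file rather than a real full pool (paths and sizes are made up):

```shell
# Create a small scratch file, then truncate it to zero bytes in place.
# On a nearly-full ZFS pool this is the "truncate -s0" escape hatch.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1024 count=64 2>/dev/null  # make it 64 KiB
truncate -s0 "$f"                                     # the trick
wc -c < "$f" | tr -d ' '                              # prints 0
rm -f "$f"
```

Whether this still works reliably once the new reservation kicks in is exactly the open question above.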

> I've added a new tunable/sysctl in r274674, but note that tuning is
> not recommended


Can you give us an example of how (and when) to tune the sysctl?
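Pending an authoritative answer, here is a sketch of what tuning presumably looks like, assuming the tunable from r274674 is vfs.zfs.spa_slop_shift and the reserved fraction is 1/2^shift (which is consistent with the 3.125% = 1/32 figure quoted above; the sysctl name and semantics are my assumption, not verified against the commit):

```shell
# Assumed knobs (not verified against r274674 itself):
#   sysctl vfs.zfs.spa_slop_shift      -> inspect current value (default 5 -> 1/32)
#   sysctl vfs.zfs.spa_slop_shift=6    -> halve the reservation
# Reserved percentage implied by a few shift values:
for shift in 5 6 7; do
  awk -v s="$shift" 'BEGIN { printf "shift=%d -> %.4f%%\n", s, 100 / 2^s }'
done
```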

Regarding r268455, this is kind of a gotcha for people who are running their
pools close to full - should this be mentioned in UPDATING or in the release
notes?
I understand that ZFS needs free space to be able to free more space, but 3% of
a large pool is a lot of bytes.
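To put a rough number on it, a back-of-the-envelope figure (the 20 TB pool size is an arbitrary example, and 1/32 is the fraction implied by the 3.125% quoted above):

```shell
# 1/32 of a hypothetical 20 TB pool, expressed in GB.
awk 'BEGIN { printf "%.0f GB reserved\n", 20 * 1024 / 32 }'  # prints "640 GB reserved"
```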

More information about the freebsd-fs mailing list