No more free space after upgrading to 10.1 and zpool upgrade
killing at multiplay.co.uk
Wed Nov 19 04:12:49 UTC 2014
On 19/11/2014 02:34, Xin Li wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA512
> On 11/18/14 17:36, Emil Mikulic wrote:
>> On Tue, Nov 18, 2014 at 11:00:36AM -0800, Xin Li wrote:
>>> On 11/18/14 09:29, Adam Nowacki wrote:
>>>> This commit is to blame:
>>>> 3.125% of disk space is reserved.
>> This is the sort of thing I suspected, but I didn't spot this
>>> Note that the reserved space is so that one can always delete
>>> files, etc. to get the pool back to a usable state.
>> What about the "truncate -s0" trick? That doesn't work reliably?
>>> I've added a new tunable/sysctl in r274674, but note that tuning
>>> it is not recommended.
>> Can you give us an example of how (and when) to tune the sysctl?
> sysctl vfs.zfs.spa_slop_shift=6 would tune down the reserved space to
> 1/(2^6) (=1.5625%).
> Personally I would never tune it. At this level of space usage your
> pool is already running with degraded performance, by the way. Don't
> do that.
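The relationship between the sysctl value and the reserved fraction can be sketched numerically. This is an illustrative helper, not part of ZFS; it only assumes the formula quoted above, reserved fraction = 1/(2^spa_slop_shift):

```python
# Sketch: how vfs.zfs.spa_slop_shift maps to reserved ("slop") space.
# Assumption (from this thread): reserved fraction = 1 / 2**spa_slop_shift.

def slop_fraction(spa_slop_shift: int) -> float:
    """Fraction of pool capacity ZFS keeps in reserve."""
    return 1.0 / (1 << spa_slop_shift)

for shift in (5, 6, 7):
    print(f"spa_slop_shift={shift}: {slop_fraction(shift) * 100:.4f}% reserved")
# shift=5 is the default discussed here (3.125%); shift=6 halves it (1.5625%).
```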
>> Regarding r268455, this is kind of a gotcha for people who are
>> running their pools close to full - should this be mentioned in
>> UPDATING or in the release notes?
>> I understand that ZFS needs free space to be able to free more
>> space, but 3% of a large pool is a lot of bytes.
> Well, if you look at UFS, the reservation ratio is about 7.5% (8/108).
> File systems need free space to do allocation efficiently; even with
> mostly static contents, performance would suffer, because at high
> levels of space usage the file system would spend more time looking
> for free space, and the resulting allocation is likely to be more
> fragmented. For ZFS, this means many essential operations like
> resilvering would be much slower, which is a real threat to data
> safety.
The new space map code should help with that, and a fixed 3.125% is a
large portion of a decent-sized pool.
On our event cache box, for example, that's 256GB, which feels like a
silly amount to reserve.
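A quick back-of-the-envelope check of that figure (a sketch; the implied pool size is inferred from the numbers in this thread, not stated anywhere in it):

```python
# Sketch: infer the pool size behind the quoted 256GB reserve at a
# fixed 3.125% (1/32) reservation, i.e. spa_slop_shift=5.
# The resulting pool size is an inference, not a figure from the thread.

reserved_gb = 256
reserve_fraction = 1 / 32  # 3.125%

pool_gb = reserved_gb / reserve_fraction
print(f"Implied pool size: {pool_gb:.0f} GB (~{pool_gb / 1024:.0f} TB)")
# -> Implied pool size: 8192 GB (~8 TB)
```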
Does anyone have any stats that back up the need for this amount of
free space on large pool arrays, specifically with spacemaps enabled?