zpool list show nonsense on raidz pools, at least it looks like it for me
Alan Somers
asomers at freebsd.org
Wed Apr 12 19:20:39 UTC 2017
On Wed, Apr 12, 2017 at 12:01 PM, Eugene M. Zheganin <emz at norma.perm.ru> wrote:
> Hi,
>
>
> This isn't my first letter where I fail to understand the space usage
> reported by the zfs utilities; in previous ones I was more or less convinced
> that I was just reading it wrong, but not this time, I guess. See for
> yourself:
>
>
> [emz@san01:~]> zpool list data
> NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
> data 17,4T 7,72T 9,66T - 46% 44% 1.00x ONLINE -
>
>
> Here, as I understand it, zpool says that less than half of the pool is
> used. As far as I know, this gets very complicated when it comes to raidz
> pools. Let's see:
>
>
> [emz@san01:~]> zfs list -t all data
> NAME USED AVAIL REFER MOUNTPOINT
> data 13,3T 186G 27,2K /data
>
>
> So, if we don't investigate further, it looks like only 186G is free.
> Spoiler: this is the real amount of free space, because I've just managed to
> free 160 gigs of data, and I really know I was short on space when sending a
> 30 Gb dataset, because zfs was saying "Not enough free space". So, let's
> investigate further:
>
>
> [emz@san01:~]> zfs list -t all | more
> NAME USED AVAIL REFER MOUNTPOINT
> data 13,3T 186G 27,2K /data
> data/esx 5,23T 186G 27,2K /data/esx
...
> data/esx/boot-esx26 8,25G 194G 12,8K -
> data/esx/shared 5,02T 2,59T 2,61T -
> data/reference 6,74T 4,17T 2,73T -
> data/reference@ver7_214 127M - 2,73T -
> data/reference@ver2_739 12,8M - 2,73T -
> data/reference@ver2_740 5,80M - 2,73T -
> data/reference@ver2_741 4,55M - 2,73T -
> data/reference@ver2_742 993K - 2,73T -
> data/reference-ver2_739-worker100 1,64G 186G 2,73T -
...
>
>
> Things are getting really complicated now.
>
> What I don't understand is:
>
> - why does the amount of free space change from dataset to dataset? I mean,
> they all share the same free space pool and all have the same
> refreservation=none, but AVAIL differs. For the workerX datasets it differs
> slightly, but for the large zvols, like esx/shared or reference, it differs
> a lot!
>
> - why are the esx/shared and reference datasets shown as if they can be
> enlarged? I mean, I really don't have THAT much free space.
>
>
> Here are their properties:
>
>
> [emz@san01:~]> zfs get all data/esx/shared
> NAME PROPERTY VALUE SOURCE
...
> data/esx/shared refreservation 5,02T local
...
> [emz@san01:~]> zfs get all data/reference
> NAME PROPERTY VALUE SOURCE
...
> data/reference refreservation 3,98T local
...
>
>
> Could someone please explain why they show roughly half of the total pool
> space as AVAIL? I think this is directly related to the fact that zpool
> list shows only 44% of the total pool space as used. I use this value to
> monitor pool space usage, and it looks like I'm totally failing with this.
>
Some of your datasets have refreservations. That's why.
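To see how a refreservation inflates a dataset's AVAIL, here is a rough back-of-the-envelope check in Python, using the figures quoted above. It assumes a simplified rule (a dataset's AVAIL is roughly the pool's free space plus the still-unused part of its own refreservation), not the exact ZFS accounting:

```python
# Sketch: why data/esx/shared can show 2,59T AVAIL while the pool root
# shows only 186G free.  Numbers are taken from the listings above;
# the formula is a simplification of ZFS's space accounting.
T = 1024**4  # bytes per tebibyte
G = 1024**3  # bytes per gibibyte

pool_free      = 186 * G    # AVAIL on the pool root dataset
refreservation = 5.02 * T   # local refreservation on data/esx/shared
referenced     = 2.61 * T   # REFER of data/esx/shared

# The part of the reservation not yet backed by referenced data is still
# promised to this dataset, so it is added to its AVAIL.
unused_resv = refreservation - referenced
avail = pool_free + unused_resv
print(round(avail / T, 2))  # -> 2.59, matching the zfs list output
```

The `usedbyrefreservation` property (e.g. `zfs list -o name,used,usedbyrefreservation`) shows how much of each dataset's USED is this promised-but-unwritten space.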
>
> I also don't understand why a zvol of size 3.97T really uses 6.74T of
> space. I found an article explaining that the volblocksize and the sector
> size have something to do with this, and that this happens when the device
> block size is 4k and volblocksize is the default, thus 8k. My disks' sector
> size is 512 native, so this is really not the case. I also have an equal
> number of disks in each vdev, and they are 5:
>
The AVAIL reported by zpool list doesn't account for RAIDZ overhead
(or maybe it assumes optimum alignment; I can't remember). But the
USED reported by "zfs list" does account for RAIDZ overhead.
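The raidz overhead can be sketched numerically. The Python below is a simplified model of raidz allocation, not the exact allocator: one parity sector per (ndisks - nparity) data sectors, with every allocation rounded up to a multiple of (nparity + 1) sectors; the 5-disk raidz1 layout matches the pool in this thread:

```python
# Simplified raidz space model (assumes raidz1 across 5 disks, as above).
def raidz_alloc_sectors(block, sector, ndisks, nparity):
    data = -(-block // sector)                # ceil: data sectors per block
    stripes = -(-data // (ndisks - nparity))  # stripe rows needed
    total = data + stripes * nparity          # data plus parity sectors
    pad = -total % (nparity + 1)              # round up to (p+1)-sector unit
    return total + pad

# 512-byte sectors (ashift=9), default 8K volblocksize: 25% overhead here.
print(raidz_alloc_sectors(8192, 512, 5, 1))   # -> 20 sectors for 16 of data
# 4K sectors (ashift=12): the case the article describes, 2x the space.
print(raidz_alloc_sectors(8192, 4096, 5, 1))  # -> 4 sectors for 2 of data
```

So on this layout the parity overhead alone is ~25%; the rest of the 3.97T-vs-6.74T gap comes from snapshots, which are charged to the zvol's USED.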
-alan
More information about the freebsd-fs mailing list