zpool list shows nonsense on raidz pools, or at least it looks that way to me

Eric A. Borisch eborisch at gmail.com
Wed Apr 12 19:30:09 UTC 2017


OK. There's a lot going on here.

A few notes:

1) Look at the output of 'zfs list -ro space data' ... it does a nice job
of showing what is actually using space.
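For reference, here is roughly what that output should look like for this
volume, reconstructed from the 'zfs get' values quoted below (so treat the
exact column layout as approximate):

  # zfs list -ro space data/reference
  NAME            AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
  data/reference  4,17T  6,74T     21,6G   2,73T          3,98T          0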

2) Volumes with refreservation *and* snapshots take up *at least*
refreservation size + usedbysnapshots size, and frequently much more. This
is because you have a contract to allow the user to re-write (change) the
full refreservation size without running out of space, while still
retaining any data currently pointed to by a snapshot (this *is not* the
same as usedbysnapshots). If you want to 'thin provision', set
refreservation=none, but be aware of what you are doing and the potential
for problems if you start actually filling up all the volumes.
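If you do decide to thin provision, it's a one-liner (shown here against
your data/reference volume; adjust the name to taste):

  # zfs set refreservation=none data/reference

'used' should then drop by the usedbyrefreservation amount, but nothing
will stop a full volume plus its snapshots from running the pool out of
space later.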

You can see all of this refreservation accounting in your listing:

data/reference  used                  6,74T  -
data/reference  available             4,17T  -
data/reference  referenced            2,73T  -
data/reference  volsize               3,97T  local
data/reference  refreservation        3,98T  local
data/reference  usedbysnapshots       21,6G  -
data/reference  usedbydataset         2,73T  -
data/reference  usedbychildren        0      -
data/reference  usedbyrefreservation  3,98T  -
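As a sanity check, the per-category numbers add up to 'used':

  usedbysnapshots + usedbydataset + usedbychildren + usedbyrefreservation
    = 21,6G + 2,73T + 0 + 3,98T
    ≈ 6,74T  (the 'used' value above)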

Note especially the usedbydataset and usedbyrefreservation lines. I'm
guessing you have a recent snapshot, such that ZFS guarantees its existence
(and the 2.73TB it references) AS WELL AS the ability to rewrite the whole
4TB volume without running out of space. All of these refreservations are
what is consuming your available space from the zfs (not zpool) perspective.

The 'available' property here is your volsize plus the available space from
'zfs list data', i.e., how much you could grow this volume to.
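Working backwards from your numbers (so take the last term as inferred
rather than measured):

  available = volsize + pool-level available
      4,17T ≈ 3,97T + ~0,2T

which suggests 'zfs list data' is showing you roughly 200G available.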

3) The zpool listing (and its space available) is 'ignorant' of
reservations; it is a statement of how much data is currently active
(written to and still referenced by active datasets/volumes/snapshots) on
the drives, and how much space on the drives is free to write over.
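If you want to see the two views side by side:

  # zpool list data
  # zfs list -ro space data

The zpool numbers are raw drive space (on raidz, that includes parity; see
point 4), while the zfs numbers are usable space with reservations charged
against AVAIL.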

4) You can get into other overheads with different raid-z levels and pool
widths, but that's not an issue here.

Hope that helps,
  - Eric

