Re: ZFS missing space

From: Frank Leonhardt <freebsd-doc_at_fjl.co.uk>
Date: Wed, 12 Feb 2025 22:26:40 UTC
On 12/02/2025 21:48, Frank Leonhardt wrote:
> On 12/02/2025 14:14, mike tancsa wrote:
>> On 2/12/2025 5:55 AM, Frank Leonhardt wrote:
>>> I've noticed space "go missing" on ZFS before, but not conclusively. 
>>> This time it's happened on a brand new setup I'm doing some testing on.
>>> As this is a "clean" system I can't figure out where the discrepancy 
>>> could possibly be coming from. This is beyond the slop value.
>>>
>>> Any ideas anyone?
>>>
>> What do
>>
>> zfs list -t snapshot -sused -r zr
>> zpool get all
>>
>> show ?
>>
>> does
>> fstat
>> show any open files at the time ?
>>
>>     ---Mike
>
> There are no snapshots. zfs list -t all would have shown them and 
> anything else.
>
> I see where you're going with fstat, but it's been restarted (several 
> times) and there's nothing hanging. But when it rebooted itself it was 
> creating a dataset of about the same size as the black hole.
>
> And yes, I did try a scrub :-)
>
> Since then I've tested all the drives by reading every block and I'm 
> running memory soak tests. It's ECC RAM anyway.
>
> -----
>
> root@zfs2:/ #  zpool list
> NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> zr    7.27T  10.1G  7.26T        -         -     0%     0%  1.00x  ONLINE  -
>
> ------
>
> root@zfs2:/ # zpool status
>   pool: zr
>  state: ONLINE
>   scan: scrub repaired 0B in 00:00:36 with 0 errors on Wed Feb 12 10:36:17 2025
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         zr          ONLINE       0     0     0
>           raidz1-0  ONLINE       0     0     0
>             da0p3   ONLINE       0     0     0
>             da1p3   ONLINE       0     0     0
>             da2p3   ONLINE       0     0     0
>             da3p3   ONLINE       0     0     0
>
> errors: No known data errors
>
> -------
>
> root@zfs2:/ # zfs list
> NAME              USED  AVAIL  REFER  MOUNTPOINT
> zr               7.33G  5.15T   140K  /zr
> zr/ROOT          5.12G  5.15T   140K  none
> zr/ROOT/default  5.12G  5.15T  5.12G  /
> zr/data           140K  5.15T   140K  /data
> zr/home           343K  5.15T   140K  /home
> zr/home/fjl       203K  5.15T   203K  /home/fjl
> zr/tmp            140K  5.15T   140K  /tmp
> zr/usr           2.19G  5.15T   140K  /usr
> zr/usr/ports     1.16G  5.15T  1.16G  /usr/ports
> zr/usr/src       1.03G  5.15T  1.03G  /usr/src
> zr/var           1.12M  5.15T   140K  /var
> zr/var/audit      140K  5.15T   140K  /var/audit
> zr/var/crash      140K  5.15T   140K  /var/crash
> zr/var/log        407K  5.15T   407K  /var/log
> zr/var/mail       180K  5.15T   180K  /var/mail
> zr/var/tmp        140K  5.15T   140K  /var/tmp
>
> -----------
>
> zfs list and zpool list agree on what's used, just not what's free.
>
> It's like there's a hidden broken dataset that's allocated space but 
> isn't listed.
>
> Thanks, Frank.
>
Okay - I had missed something important. It's zpool list that's wrong 
here, because having that much free space on a RAIDZ1 of 4x2TB drives 
would be impossible. As I had several drive failures during installation 
(I replaced three!), I decided to reinstall.
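
For what it's worth, a back-of-the-envelope check, on the assumption that 
zpool list counts raw space across all four disks (parity included) while 
zfs list counts usable space after one disk's worth of parity:

4 x 2TB drives = 4 x ~1.82TiB = ~7.27TiB raw    (zpool list's SIZE)
3 x ~1.82TiB   = ~5.46TiB usable on RAIDZ1, less slop and
                 metadata overhead, so near the 5.15T below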


# zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zr    7.27T   951M  7.26T        -         -     0%     0%  1.00x  ONLINE  -
# zfs list zr
NAME   USED  AVAIL  REFER  MOUNTPOINT
zr     691M  5.15T   140K  /zr

I'd say both are wrong. There certainly isn't that much free space in the 
zpool, unless it's taking a guess about block compression. Likewise, 
there's a bit more real space than the 5.15T suggested by zfs list.
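
To rule out rounding in the human-readable output, the -p (parsable) flag 
on both tools prints exact byte counts. Something along these lines (a 
sketch of the standard commands, not output from this box):

# zpool list -p zr
# zpool get -p size,allocated,free zr
# zfs list -p -o name,used,available,referenced zr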

df gives similar results to zfs list. So why the discrepancy? Has it 
always been there? Er, no. Here's FreeBSD 10:

NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zroot   920G   543G   377G    59%  1.00x  ONLINE  -
fjl@fjl3:~ % zfs list zroot
NAME    USED  AVAIL  REFER  MOUNTPOINT
zroot   543G   363G   144K  none

Allowing for the slop, these add up!
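
Spelled out: 543G (ALLOC) + 377G (FREE) = 920G, exactly the pool SIZE; 
and 543G (USED) + 363G (AVAIL) = 906G, about 14G short of SIZE, which is 
the slop.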

FreeBSD 12:

root@bs2:~ # zfs list zr
NAME   USED  AVAIL  REFER  MOUNTPOINT
zr    1.29T   474G  3.79M  /zr
root@bs2:~ # zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zr    1.81T  1.29T   532G        -         -    18%    71%  1.00x  ONLINE  -
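
That one adds up too, allowing for slop: 1.29T (ALLOC) + 532G (FREE) 
comes to roughly 1.81T, the pool SIZE; and 1.29T (USED) + 474G (AVAIL) 
comes to roughly 1.75T, about 58G short of SIZE - in slop territory, if 
the default reservation of 1/32 of the pool (here ~58G) is what I think 
it is.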

Something's going on here - if anyone has any bright ideas, I'm all ears!

Thanks, Frank.

P.S. Now to figure out why the damn thing's rebooting :-(