Re: Cannot find out what uses space in ZFS dataset
Date: Thu, 25 Sep 2025 17:13:44 UTC
On 9/25/25 18:18, Frank Leonhardt wrote:

> Could you post the output of this again, now you've deleted the snapshots?
>
> zfs list -t all -o name,mountpoint,canmount,refer,used,usedbychildren,usedbydataset,usedbyrefreservation,usedbysnapshots

Here it is:

> NAME                                          MOUNTPOINT  CANMOUNT  REFER  USED   USEDCHILD  USEDDS  USEDREFRESERV  USEDSNAP
> ...
> zroot/ROOT/default                            /           noauto    62.1G  62.2G  0B         62.1G   0B             43.8M
> zroot/ROOT/default@auto_zroot-20250924090000  -           -         62.1G  2.01M  -          -       -              -
> zroot/ROOT/default@auto_zroot-20250924190000  -           -         62.1G  792K   -          -       -              -
> zroot/ROOT/default@auto_zroot-20250924200000  -           -         62.1G  1.28M  -          -       -              -
> zroot/ROOT/default@auto_zroot-20250924210000  -           -         62.1G  920K   -          -       -              -
> zroot/ROOT/default@auto_zroot-20250924220000  -           -         62.1G  2.26M  -          -       -              -
> zroot/ROOT/default@auto_zroot-20250924230000  -           -         62.1G  1.89M  -          -       -              -
> zroot/ROOT/default@auto_zroot-20250925000000  -           -         62.1G  1.40M  -          -       -              -
> zroot/ROOT/default@auto_zroot-20250925010000  -           -         62.1G  1.45M  -          -       -              -
> zroot/ROOT/default@auto_zroot-20250925020000  -           -         62.1G  1.71M  -          -       -              -
> zroot/ROOT/default@auto_zroot-20250925030000  -           -         62.1G  2.15M  -          -       -              -
> zroot/ROOT/default@auto_zroot-20250925040000  -           -         62.1G  864K   -          -       -              -
> zroot/ROOT/default@auto_zroot-20250925050000  -           -         62.1G  1.51M  -          -       -              -
> zroot/ROOT/default@auto_zroot-20250925060000  -           -         62.1G  780K   -          -       -              -
> zroot/ROOT/default@auto_zroot-20250925070000  -           -         62.1G  1.55M  -          -       -              -
> zroot/ROOT/default@auto_zroot-20250925080000  -           -         62.1G  1.08M  -          -       -              -
> zroot/ROOT/default@auto_zroot-20250925090000  -           -         62.1G  760K   -          -       -              -
> zroot/ROOT/default@auto_zroot-20250925100000  -           -         62.1G  1.37M  -          -       -              -
> zroot/ROOT/default@auto_zroot-20250925110000  -           -         62.1G  1.20M  -          -       -              -
> zroot/ROOT/default@auto_zroot-20250925120000  -           -         62.1G  1.95M  -          -       -              -
> zroot/ROOT/default@auto_zroot-20250925130000  -           -         62.1G  1.91M  -          -       -              -
> zroot/ROOT/default@auto_zroot-20250925140000  -           -         62.1G  1.02M  -          -       -              -
> zroot/ROOT/default@auto_zroot-20250925150000  -           -         62.1G  1.16M  -          -       -              -
> zroot/ROOT/default@auto_zroot-20250925160000  -           -         62.1G  1.15M  -          -       -              -
> zroot/ROOT/default@auto_zroot-20250925170000  -           -         62.1G  2.25M  -          -       -              -
> zroot/ROOT/default@auto_zroot-20250925180000  -           -         62.1G  1.91M  -          -       -              -
> ...

Of course new snapshots have been taken since I deleted them all. In any case, I also tried when there were no snapshots at all, and the first line was almost identical.

> My guess is there's confusion about where data in directories is stored.
> It's tempting to think that /usr/bin is stored in pool/usr, but this is
> not necessarily the case. Just because a dataset has a mountpoint, it
> doesn't mean it's mounted there, even if child datasets are mounted
> below it. If canmount isn't set to "yes", then the data apparently stored
> in a dataset is stored in its parent.

I don't think this is the case. In fact /usr isn't mounted (this is how the installer sets things up by default).

> And the way zfs does deletions, it won't show up immediately due to the
> asynchronous garbage collection.

Agreed, but I'd say a couple of months should be a reasonable time for garbage collection to actually delete a file.

> It's possible (but I think unlikely in this case) that space is being
> used by metadata.

We've actually already determined it's a deleted file that is still hanging around... or do you think this is not true?

> This normally happens if there's a lot of fragmentation (try zpool list -v)

> # zpool list -v
> NAME        SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP    DEDUP  HEALTH  ALTROOT
> ...
> zroot       3.56T  1.56T  2.01T  -        -         20%   43%    1.00x  ONLINE  -
>   mirror-0  3.56T  1.56T  2.01T  -        -         20%   43.6%  -      ONLINE
>     ada2p4  3.58T  -      -      -        -         -     -      -      ONLINE
>     ada3p4  3.58T  -      -      -        -         -     -      -      ONLINE

> You mentioned you were wanting to do a zfs send.
> Have you tried "zfs send zroot/ROOT@snap | wc -c"?

> # zfs snap zroot/ROOT/default@dump
> # zfs snap zroot/ROOT@dump
> # zfs send zroot/ROOT@dump | wc -c
> 47072
> # zfs send zroot/ROOT/default@dump | wc -c
> 104081355720

> Sparse files can also cause anomalies, but I guess you'd know about
> them.

I don't think I have any. I tried "find -x . -sparse", but as the man page says, "This might also match files that have been compressed by the filesystem", so it gives too many false positives.

> I have encountered leaks in zpools and they're tricky to find - I've
> sometimes given up where I've suspected faulty hardware.

SMART says everything is alright, and ZFS is not complaining about the disks.

> Final thing before I look at output - you don't have a reservation on
> anything, do you?

> # zfs get all zroot/ROOT/default | grep reserv
> zroot/ROOT/default  reservation           none  default
> zroot/ROOT/default  refreservation        none  default
> zroot/ROOT/default  usedbyrefreservation  0B    -

bye & Thanks
av.
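Two of the mechanisms discussed in this thread can be demonstrated without ZFS at all: the sparse-file heuristic compares a file's logical size with its allocated blocks (which is exactly why it also flags filesystem-compressed files), and a deleted file keeps its space for as long as some process holds a descriptor open on it. A minimal Python sketch of both (temporary file names only, nothing ZFS-specific):

```python
import os
import tempfile

def logical_vs_allocated(path):
    """Logical file size vs bytes actually allocated on disk.
    st_blocks is counted in 512-byte units (POSIX)."""
    st = os.stat(path)
    return st.st_size, st.st_blocks * 512

# (a) A file that is one big hole: large logical size, few or no blocks.
# A compressed file looks the same to this test, hence the false
# positives from "find -sparse".
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(1 << 20)            # 1 MiB hole, no data written
    hole = f.name
logical, allocated = logical_vs_allocated(hole)
print(logical, allocated)          # allocated is far below 1048576
os.unlink(hole)

# (b) An unlinked file survives while a descriptor stays open.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"\0" * (1 << 20))
tmp.flush()
os.unlink(tmp.name)                # no directory entry any more...
still_there = os.fstat(tmp.fileno()).st_size
print(still_there)                 # ...but the 1 MiB is still allocated
tmp.close()                        # only now can the space be reclaimed
```

If the missing space really is an unlinked-but-open file, fstat(1) on FreeBSD lists the files each running process holds open, which should turn up the holder; and for the asynchronous-deletion angle, the pool's read-only freeing property (zpool get freeing zroot) should report space still pending release from destroyed datasets and snapshots.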