Re: freebsd-update install no space left on device
- In reply to: Robert : "Re: freebsd-update install no space left on device"
Date: Fri, 01 Aug 2025 10:46:41 UTC
On 30/07/2025 02:47, Robert wrote:
> On 7/29/2025 8:26 PM, Karl Vogel wrote:
>>>> On Tue 29 Jul 2025 at 04:48:07 (-0400), Frank Leonhardt wrote:
>>> The mystery to me isn't the free space, it's that the total space has
>>> changed. I'm watching this in case someone can explain.
>> I've seen this when I have a diaper-load of snapshots laying around.
>>
>>> I don't quite trust df with zfs - "zfs list" may be more accurate.
>> Agreed.
>
> System is all good as far as I can tell. I agree as well, the `zfs
> list` shows what I expect to see...
>
> root@monitor1:~ # zfs list
> NAME                                           USED  AVAIL  REFER  MOUNTPOINT
> zroot                                         5.36G  1.91G    96K  /zroot
> zroot/ROOT                                    5.32G  1.91G    96K  none
> zroot/ROOT/14.2-RELEASE-p4_2025-07-27_191756     8K  1.91G  4.75G  /
> zroot/ROOT/14.3-RELEASE_2025-07-27_192345        8K  1.91G  4.83G  /
> zroot/ROOT/default                            5.32G  1.91G  1.89G  /
> zroot/home                                     252K  1.91G    96K  /home
> zroot/home/admin                               156K  1.91G   156K  /home/admin
> zroot/tmp                                      144K  1.91G   144K  /tmp
> zroot/usr                                      288K  1.91G    96K  /usr
> zroot/usr/ports                                 96K  1.91G    96K  /usr/ports
> zroot/usr/src                                   96K  1.91G    96K  /usr/src
> zroot/var                                     23.3M  1.91G    96K  /var
> zroot/var/audit                                 96K  1.91G    96K  /var/audit
> zroot/var/crash                                 96K  1.91G    96K  /var/crash
> zroot/var/log                                 22.7M  1.91G  22.7M  /var/log
> zroot/var/mail                                 212K  1.91G   212K  /var/mail
> zroot/var/tmp                                   96K  1.91G    96K  /var/tmp
>
> root@monitor1:~ # df -h
> Filesystem            Size    Used   Avail Capacity  Mounted on
> zroot/ROOT/default    3.8G    1.9G    1.9G    50%    /
> devfs                 1.0K      0B    1.0K     0%    /dev
> /dev/gpt/efiboot0     260M    1.3M    259M     1%    /boot/efi
> zroot/tmp             1.9G    144K    1.9G     0%    /tmp
> zroot/usr/ports       1.9G     96K    1.9G     0%    /usr/ports
> zroot/var/log         1.9G     23M    1.9G     1%    /var/log
> zroot/var/mail        1.9G    212K    1.9G     0%    /var/mail
> zroot/var/tmp         1.9G     96K    1.9G     0%    /var/tmp
> zroot/home            1.9G     96K    1.9G     0%    /home
> zroot                 1.9G     96K    1.9G     0%    /zroot
> zroot/var/audit       1.9G     96K    1.9G     0%    /var/audit
> zroot/home/admin      1.9G    156K    1.9G     0%    /home/admin
> zroot/usr/src         1.9G     96K    1.9G     0%    /usr/src
> zroot/var/crash       1.9G     96K    1.9G     0%    /var/crash
>
> Just 2 small snapshots...
>
> root@monitor1:~ # zfs list -t snapshot
> NAME                                       USED  AVAIL  REFER  MOUNTPOINT
> zroot/ROOT/default@2025-07-27-19:17:56-0  58.0M      -  4.75G  -
> zroot/ROOT/default@2025-07-27-19:23:45-0  2.53M      -  4.83G  -
>
> -- Robert

My guess is that the anomaly is simply a result of using df. Under normal
circumstances (UFS), df knows the size of the partition and how much of it
is used. When it's mapped onto ZFS datasets it simply doesn't know this:
unless a quota has been set, the maximum size of a dataset is its current
usage plus whatever space remains in the zpool, so the "total" df reports
moves as the pool fills and empties (see the arithmetic sketched below).
Even then it's a minimum, as it can't account for compression.

The "zpool list" command is probably more useful when looking at a zpool
overall.

As I've said, df is pretty much useless here. It's just mapped onto ZFS
for backward compatibility with scripts, and the free-space figure (its
raison d'être) is the least reliable output value!

Regards, Frank.

(Sorry Robert - sent off-list by mistake!)
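P.S. To put numbers on the df behaviour: for a ZFS dataset, df's "Size" is
simply the dataset's REFER plus the pool's AVAIL, and its "Used" is the
REFER. Robert's own output bears this out: zroot/ROOT/default refers to
1.89G with 1.91G available, and df duly reports Size 3.8G, Used 1.9G. A
minimal check, using the dataset name from above, would be to pull the
exact byte counts df is working from:

    root@monitor1:~ # zfs get -Hp referenced,available zroot/ROOT/default

Free a snapshot or boot environment anywhere in the pool and AVAIL rises,
so every dataset's df "Size" grows with it - which is exactly the changing
total that started this thread.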
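For the pool as a whole (a sketch, assuming the pool is named zroot as
above):

    root@monitor1:~ # zpool list zroot

zpool list reports SIZE/ALLOC/FREE against the raw vdev capacity, which
stays put as data comes and goes. Bear in mind it counts raw space before
mirror/RAID-Z overhead, so it won't match zfs list exactly; the AVAIL
column of zfs list is still the figure that actually limits writes.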
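And since snapshots were the first suspect, ZFS will itemise what they pin
per dataset:

    root@monitor1:~ # zfs get usedbysnapshots,usedbydataset,usedbychildren zroot/ROOT/default

usedbysnapshots is the space that would be freed by destroying all of the
dataset's snapshots; it can exceed the per-snapshot USED figures above,
since those only count blocks unique to each individual snapshot.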