Re: freebsd-update install no space left on device

From: atsac <adrian_at_atsac.au>
Date: Mon, 28 Jul 2025 01:16:43 UTC
Hi,

Have you checked 'bectl list'?

On a ZFS install, freebsd-update creates a Boot Environment of the previous
install before it upgrades, so the previous version can be booted on demand.
These Boot Environments can build up and consume a fair amount of disk space:

  ~> bectl list
  BE                                Active Mountpoint Space Created
  12.3-RELEASE_2021-12-15_122800    -      -          104M  2021-12-15 12:28
  13.0-RELEASE-p4_2021-12-15_135123 -      -          36.1M 2021-12-15 13:51
  13.1-RELEASE-p3_2022-11-16_135244 -      -          1.36G 2022-11-16 13:52
  13.1-RELEASE-p4_2022-12-13_192926 -      -          3.00G 2022-12-13 19:29
  13.1-RELEASE_2022-11-02_154039    -      -          1.61G 2022-11-02 15:40
  default                           NR     /          48.6G 2016-06-04 03:16
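
Each BE is just a dataset under zroot/ROOT, plus the snapshots it was
cloned from, so if you want to see where that space is actually sitting
(a sketch assuming the installer's default zroot layout shown in your df
output), something like:

  ~> # per-BE dataset usage
  ~> zfs list -r -o name,used,refer zroot/ROOT
  ~> # snapshots pinning space under the active BE
  ~> zfs list -t snapshot zroot/ROOT/default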

You can remove them with 'bectl destroy':

  ~> sudo bectl destroy 13.0-RELEASE-p4_2021-12-15_135123
  ~> sudo bectl destroy 13.1-RELEASE-p3_2022-11-16_135244

which releases the space consumed by each BE.
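
If a lot of these have piled up, here's a rough, untested sketch that
clears every inactive BE in one go (assuming the tab-separated output of
'bectl list -H', where the Active column is '-' for BEs that are neither
active now nor active on next reboot):

  ~> # destroy every BE whose Active column is '-'
  ~> bectl list -H | awk '$2 == "-" {print $1}' | xargs -n1 sudo bectl destroy

Do check what it is about to destroy before trusting it, and note that
'bectl destroy' leaves a BE's origin snapshot behind unless you add -o.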

Cheers, Adrian

> On 28 Jul 2025, at 9:34 am, Robert <robert@webtent.org> wrote:
> 
> I was doing an upgrade from 14.2 to 14.3 and started receiving a long list of 'no space left on device' errors after the reboot, while running install the second time. It did finish afterward; see below. I didn't expect this, as I had just done the same upgrade on what I thought was a mirror of this system and didn't have the issue. Here is what the system looked like afterward...
> 
>> root@monitor1:/usr/local/etc # df -h
>> Filesystem                         Size    Used   Avail Capacity  Mounted on
>> zroot/ROOT/default                 2.0G    1.9G     70M    97%   /
>> devfs                              1.0K      0B    1.0K     0%   /dev
>> /dev/gpt/efiboot0                  260M    1.3M    259M     1%   /boot/efi
>> zroot/tmp                           70M    144K     70M     0%   /tmp
>> zroot/usr/ports                     70M     96K     70M     0%   /usr/ports
>> zroot/var/tmp                       70M     96K     70M     0%   /var/tmp
>> zroot/var/audit                     70M     96K     70M     0%   /var/audit
>> zroot/var/crash                     70M     96K     70M     0%   /var/crash
>> zroot/var/log                       92M     22M     70M    24%   /var/log
>> zroot                               70M     96K     70M     0%   /zroot
>> zroot/home                          70M     96K     70M     0%   /home
>> zroot/var/mail                      70M    212K     70M     0%   /var/mail
>> zroot/home/admin                    70M    156K     70M     0%   /home/admin
>> zroot/usr/src                       70M     96K     70M     0%   /usr/src
> 
> But then I looked at last night's daily run output and found that zroot/ROOT/default was 5.7G, and yes, I double-checked to make sure I was looking at the correct daily run output email...
> 
>> Disk status:
>> Filesystem            Size    Used   Avail Capacity  Mounted on
>> zroot/ROOT/default    5.7G    4.2G    1.5G    73%    /
>> devfs                 1.0K      0B    1.0K     0%    /dev
>> /dev/gpt/efiboot0     260M    1.3M    259M     1%    /boot/efi
>> zroot/usr/ports       1.5G     96K    1.5G     0%    /usr/ports
>> zroot/var/audit       1.5G     96K    1.5G     0%    /var/audit
>> zroot/var/crash       1.5G     96K    1.5G     0%    /var/crash
>> zroot/var/log         1.5G     22M    1.5G     1%    /var/log
>> zroot/home            1.5G     96K    1.5G     0%    /home
>> zroot/tmp             1.5G    144K    1.5G     0%    /tmp
>> zroot/var/mail        1.5G    212K    1.5G     0%    /var/mail
>> zroot/usr/src         1.5G     96K    1.5G     0%    /usr/src
>> zroot                 1.5G     96K    1.5G     0%    /zroot
>> zroot/var/tmp         1.5G     96K    1.5G     0%    /var/tmp
>> zroot/home/admin      1.5G    156K    1.5G     0%    /home/admin
> 
> How on earth did the size of zroot/ROOT/default change? More importantly at this point, am I going to start seeing any system issues, and what is the best way to remedy this? The zpool is 7.5G...
> 
>> root@monitor1:/usr/local/etc # zpool list
>> NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP DEDUP    HEALTH  ALTROOT
>> zroot  7.50G  7.20G   310M        -         -    84%    95% 1.00x    ONLINE  -
> 
> This is a VM on a TrueNAS server. The mirror server on a separate host shows the same zpool with 89% CAP. These VMs are not that old; they were spun up on FreeBSD 14.1. Here is how the freebsd-update install ended...
> 
>> install: c34fc2d9bef9bef3a5244a4bcb085e4167b09135c0d5388e79129eaab937a3ca: No such file or directory
>> rm: c34fc2d9bef9bef3a5244a4bcb085e4167b09135c0d5388e79129eaab937a3ca: No such file or directory
>> /usr/sbin/freebsd-update: cannot create 436480fa5540b23a3d090c53bb19ab30b5290f09364ac9a844dd75c7324dbc87: No space left on device
>> install: 436480fa5540b23a3d090c53bb19ab30b5290f09364ac9a844dd75c7324dbc87: No such file or directory
>> rm: 436480fa5540b23a3d090c53bb19ab30b5290f09364ac9a844dd75c7324dbc87: No such file or directory
>> /usr/sbin/freebsd-update: cannot create b8b57186732467b24bcb1ced083cce181fb70ad4764feed7b6fec2eb3b8693a9: No space left on device
>> install: b8b57186732467b24bcb1ced083cce181fb70ad4764feed7b6fec2eb3b8693a9: No such file or directory
>> rm: b8b57186732467b24bcb1ced083cce181fb70ad4764feed7b6fec2eb3b8693a9: No such file or directory
>> /usr/sbin/freebsd-update: cannot create 4f6891535bc9cc281fb941adfc0d11240dd6dd782f0b973074a954dfa715a57d: No space left on device
>> install: 4f6891535bc9cc281fb941adfc0d11240dd6dd782f0b973074a954dfa715a57d: No such file or directory
>> rm: 4f6891535bc9cc281fb941adfc0d11240dd6dd782f0b973074a954dfa715a57d: No such file or directory
>> chflags: ///usr/lib/libl.a: No such file or directory
>> chflags: ///usr/lib/libln.a: No such file or directory
>> 
>> Restarting sshd after upgrade
>> Performing sanity check on sshd configuration.
>> Stopping sshd.
>> Waiting for PIDS: 853.
>> Performing sanity check on sshd configuration.
>> Starting sshd.
>>  done.
> 
> --
> Robert
> 