mdconfig on ZFS leaks disk space

Mickaël Maillot mickael.maillot at gmail.com
Sat Jun 26 16:29:43 UTC 2010


What is your svn rev? I ask because r208869 ("Fix freeing space after
deleting large files with holes"), dated Sun Jun  6 13:08:36 2010, might
be relevant here.
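
If you're not sure, the kernel ident and build date from uname are a
rough proxy, and if /usr/src is an svn checkout the exact revision can
be read directly (a sketch; assumes devel/subversion is installed):

$ uname -a              # kernel ident and build date
$ svnversion /usr/src   # working-copy revision of the source tree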


2010/6/26 Fabian Keil <freebsd-listen at fabiankeil.de>:
> Peter Jeremy <peterjeremy at acm.org> wrote:
>
>> I recently did a quick experiment to create an 8TB UFS filesystem
>> via mdconfig and after destroying the md and deleting the file,
>> the disk space used by the md was not returned - even after a
>> reboot.  Has anyone else seen this?
>>
>> I was using an 8.1-prerelease/amd64 system with everything on ZFS v14 and did:
>>
>> # truncate -s 8T /tmp/space
>> # mdconfig -a -t vnode -f /tmp/space
>> # newfs /dev/md0
>> /dev/md0: 8388608.0MB (17179869184 sectors) block size 16384, fragment size 2048
>>         using 45661 cylinder groups of 183.72MB, 11758 blks, 23552 inodes.
>>
>> This occupied ~450MB on /tmp, which uses lzjb compression.
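>>
>> (A quick way to compare the file's apparent size with what it actually
>> occupies on the compressed dataset -- just a sketch, and the dataset
>> name below is a guess:)
>>
>> # ls -lh /tmp/space                    # apparent size: 8.0T
>> # du -h /tmp/space                     # blocks actually allocated, ~450M here
>> # zfs get compression,compressratio tank/tmp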
>>
>> # fsck -t ufs /dev/md0
>> This needed ~550MB of VSZ and had ~530MB resident by the end.
>>
>> # mount /dev/md0 /mnt
>> # df -k /mnt
>> /dev/md0  8319620678  4 7654051020 0%  2 1075407868    0%   /mnt
>>
>> I then copied a random collection of files into /mnt, boosting the
>> size of /tmp/space to ~880MB.
>>
>> # umount /mnt
>> # fsck -t ufs /dev/md0
>> # mdconfig -d -u 0
>> # rm /tmp/space
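>>
>> (Concretely, the comparison described below was done along these
>> lines -- a sketch; mdconfig -l just confirms the unit is really gone:)
>>
>> # mdconfig -l
>> # df -k /tmp
>> # du -sk /tmp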
>>
>> At this point, 'df' on /tmp reported 881MB used whilst 'du' on /tmp
>> reported 1MB used.  lsof showed no references to the space.  Whilst
>> there were snapshots of /tmp, none had been taken since /tmp/space
>> was created.  I deleted them anyway to no effect.
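>>
>> (If anyone wants to see where ZFS itself accounts for the missing
>> space, the usedby* properties break it down -- a sketch, with a guessed
>> dataset name, and assuming the pool is new enough to have them:)
>>
>> # zfs get used,usedbydataset,usedbysnapshots tank/tmp
>> # zfs list -t snapshot -r tank/tmp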
>
> I can't reproduce this with Martin Matuska's ZFS v16 patch:
>
> fk at r500 /tank/sparse-file-test $df -h ./
> Filesystem               Size    Used   Avail Capacity  Mounted on
> tank/sparse-file-test     62G    932M     61G     1%    /tank/sparse-file-test
> fk at r500 /tank/sparse-file-test $sudo rm space
> fk at r500 /tank/sparse-file-test $df -h ./
> Filesystem               Size    Used   Avail Capacity  Mounted on
> tank/sparse-file-test     62G     96K     62G     0%    /tank/sparse-file-test
>
> The pool is still v14.
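>
> (Roughly what I did, in case someone wants to repeat it -- a sketch
> rather than a transcript, with my own dataset name:)
>
> $ sudo zfs create tank/sparse-file-test
> $ sudo truncate -s 8T /tank/sparse-file-test/space
> $ sudo mdconfig -a -t vnode -f /tank/sparse-file-test/space
> $ sudo newfs /dev/md0
> $ sudo mount /dev/md0 /mnt
>   ... copy some files into /mnt ...
> $ sudo umount /mnt
> $ sudo mdconfig -d -u 0
> $ sudo rm /tank/sparse-file-test/space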
>
> I thought I remembered reports on zfs-discuss@ about a known bug with
> leaked disk space after deleting sparse files that is supposedly
> fixed in later ZFS versions, but so far I have only found reports about
> a similar problem with sparse volumes, so maybe I'm mistaken.
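>
> (For what it's worth, the volume case would involve a thin-provisioned
> zvol rather than a sparse file on a dataset -- roughly, as a sketch:)
>
> $ sudo zfs create -s -V 8T tank/testvol    # -s makes the volume sparse
> $ sudo newfs /dev/zvol/tank/testvol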
>
> Fabian
>

