More on ZFS filesystem sizes.
brooks at freebsd.org
Wed Dec 17 15:17:13 PST 2008
On Wed, Dec 17, 2008 at 04:51:00PM -0500, Zaphod Beeblebrox wrote:
> So... I posted before about the widely different sizes reported by zfs list
> and du -h for my ports repository. Nobody explained this to any satisfying
> degree. I now have another quandary. I have ZFS on my laptop (two drives, mirrored)
> and I "zfs send" backups to my big array (6 drives, raid-Z1). The problem
> is that they don't match up:
> On the 6 drive array:
> vr2/backup/canoe/64/usr@20080307-1541  746M   -  4.82G  -
> vr2/backup/canoe/64/usr@20080309-1443  221M   -  4.79G  -
> vr2/backup/canoe/64/usr@20080319-1722  334M   -  4.97G  -
> vr2/backup/canoe/64/usr@20080329-0041  27.8M  -  5.24G  -
> vr2/backup/canoe/64/usr@20080402-2300  21.9M  -  5.27G  -
> vr2/backup/canoe/64/usr@20080416-0223  18.5M  -  5.29G  -
> vr2/backup/canoe/64/usr@20080417-0117  18.6M  -  5.29G  -
> On the 2 drive laptop:
> canoe/64/usr@20080307-1541  738M   -  4.76G  -
> canoe/64/usr@20080309-1443  217M   -  4.73G  -
> canoe/64/usr@20080319-1722  330M   -  4.90G  -
> canoe/64/usr@20080329-0041  26.7M  -  5.17G  -
> canoe/64/usr@20080402-2300  20.6M  -  5.20G  -
> canoe/64/usr@20080416-0223  17.5M  -  5.22G  -
> canoe/64/usr@20080417-0117  17.5M  -  5.22G  -
> ... note that the snapshot sizes differ by many megabytes ... and not
> seemingly any fixed amount, either.
Have you tried asking the ZFS developers? I'd tend to assume ZFS is
reporting the amount of space it thinks it's using, and that as long as
the numbers are close to what you expect, it's not likely to be a FreeBSD
issue. It may well be that a given piece of data takes different amounts
of space when stored on different pool types because it needs different
metadata.
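For what it's worth, a quick back-of-the-envelope check (plain Python,
using the REFER figures quoted above) suggests the raid-Z1 copies are
larger by a roughly constant fraction rather than a random amount, which
would fit a per-block overhead difference between the pool layouts:

```python
# REFER sizes (GiB) quoted above for the same snapshots on each pool.
raidz1 = [4.82, 4.79, 4.97, 5.24, 5.27, 5.29, 5.29]  # 6-drive raid-Z1 array
mirror = [4.76, 4.73, 4.90, 5.17, 5.20, 5.22, 5.22]  # 2-drive laptop mirror

# Relative size of the raid-Z1 copy versus the mirror copy, per snapshot.
for r, m in zip(raidz1, mirror):
    print(f"{(r - m) / m:.2%}")
```

Every ratio comes out between about 1.2% and 1.5%, so the discrepancy
scales with the data rather than being a fixed number of megabytes.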