zfs: df and zpool list report different sizes

Dan Nelson dnelson at allantgroup.com
Thu Apr 26 16:16:07 UTC 2007


In the last episode (Apr 26), Barry Pederson said:
>  Alexandre Biancalana wrote:
> > I updated one machine to -CURRENT (yesterday), and now I'm creating a zfs
> > filesystem using the following devices:
> > ad9: 305245MB <Seagate ST3320620AS 3.AAE> at ata4-slave SATA150
> > ad11: 305245MB <Seagate ST3320620AS 3.AAE> at ata5-slave SATA150
> > Next I created the pool:
> > # zpool create backup raidz ad9 ad11
> > # mount
> > /dev/ad8s1a on / (ufs, local)
> > devfs on /dev (devfs, local)
> > backup on /backup (zfs, local)
> > # df -h
> > Filesystem     Size    Used   Avail Capacity  Mounted on
> > /dev/ad8s1a     72G    2.2G     64G     3%    /
> > devfs          1.0K    1.0K      0B   100%    /dev
> > backup         293G      0B    293G     0%    /backup
> > # zpool list
> > NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
> > backup                  596G    222K    596G     0%  ONLINE     -
> > My doubt is: why do zpool list and df -h report different sizes?  Which of
> > them is correct and which should I trust?
> 
>  The zpool size is correct in that it totals up the raw capacity of the
>  pool's drives, but it doesn't tell you how much of that is taken up by
>  redundancy, so it's probably not a useful number to you.
> 
>  The "df -h" is also correct and probably more useful.  "zfs list"
>  should show a similar useful number.

That looks like bug 6308817 "discrepancy between zfs and zpool space
accounting".  "zpool list" is including the parity disk space when it
shouldn't.

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6308817

"zfs list" should give you the same info as "df -k".

Note that a 2-disk raidz is really an inefficient way of creating a
mirror, so the "workaround" in your case might just be to drop your
raidz vdev and replace it with a mirror.
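
Since "df -h" shows the pool is still empty, a minimal, untested sketch of
that workaround is simply to destroy the pool and recreate it as a mirror
(double-check there's nothing on it first; "zpool destroy" is not
reversible):

# zpool destroy backup
# zpool create backup mirror ad9 ad11
# zpool list
# df -h /backup

Usable capacity stays the same (one disk's worth), but a mirror avoids the
raidz parity computation and lets reads be serviced from either disk.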

-- 
	Dan Nelson
	dnelson at allantgroup.com

