ZFS and disk usage
Peter Maloney
peter.maloney at brockmann-consult.de
Fri Apr 13 13:32:32 UTC 2012
Please run this (my script, which I call zfsdf):

#!/bin/sh
zfs list -o name,used,referenced,usedbychildren,usedbydataset,usedbysnapshots,available,mountpoint,quota,reserv,refquota,refreserv \
    "$@" | sed -r "s/none/ -/g"
zpool list -o name,size,allocated,free,capacity
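As a side note, the sed step can be sanity-checked in isolation; it just rewrites "none" (unset quota/reservation values) as " -" so those columns read like the rest of the output. The sample line below is made up for illustration:

```shell
# Made-up sample line standing in for one row of "zfs list" output.
# Each "none" becomes " -"; note a dataset whose name contained "none"
# would be mangled too, so the script assumes no such names.
printf 'tank  1.2T  none  none  /tank\n' | sed -r "s/none/ -/g"
# prints: tank  1.2T   -   -  /tank
```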
It may give some small hints (similar to zfs get all <poolname>, but for
all datasets, showing refquota, space used by snapshots, etc.),
but...
I think it won't tell us enough, and the problem is 8.2-RELEASE. You
should definitely upgrade to 8-STABLE or 8.3-RC2 [with regression
testing of course, but not as much as you would need for 9.x]. [By the
way, I found that 8-STABLE from Sept 2011 would hang when renaming
snapshots of zvols, but 8-STABLE from Feb 2012 did not, so be extra
careful with zvols.]
On my 8-STABLE systems, the numbers make sense:
A semi-new system:
# zfs list
NAME   USED   AVAIL  REFER  MOUNTPOINT
tank  37.7T  10.2T  67.5K  /tank
# zpool list
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
tank  63.2T  49.0T  14.3T  77%  1.00x  ONLINE  -
# bc
scale=5
37.7/(10.2+37.7)
.78705
A year-old system (scripts create and destroy one recursive snapshot
every 20 minutes):
# zfs list
NAME   USED   AVAIL  REFER  MOUNTPOINT
tank  14.5T  17.5T  5.99G  /tank
# zpool list
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
tank  43.5T  19.4T  24.1T  44%  1.00x  ONLINE  -
# bc
scale=5
14.5/(14.5+17.5)
.45312
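Both bc sessions above compute the same sanity check, USED / (USED + AVAIL) from zfs list, which should roughly track the CAP column from zpool list. The same check can be done non-interactively; a sketch with awk, hard-coding the numbers from my two listings:

```shell
# USED / (USED + AVAIL) from "zfs list", which should roughly match
# the CAP column of "zpool list" (77% and 44% above).
awk 'BEGIN { printf "%.5f\n", 37.7 / (37.7 + 10.2) }'   # semi-new system
awk 'BEGIN { printf "%.5f\n", 14.5 / (14.5 + 17.5) }'   # year-old system
```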
On 04/13/2012 02:21 PM, Mark Schouten wrote:
> Hi,
>
> I'm having some issues with a FreeBSD box using ZFS to serve iSCSI to other boxes.
>
> [root@storage ~]# zpool list
> NAME      SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
> storage  1.77T  431G  1.34T  23%  ONLINE  -
>
> As you can see, the zpool is at only 23% of its capacity. However, if you get a list of filesystems with "zfs list", you see that there is only 138GB of free space left.
> [root@storage ~]# zfs list
> NAME               USED  AVAIL  REFER  MOUNTPOINT
> storage           1.60T   138G   431G  /storage
> storage/ZFS_FS_1    20G   158G    16K  -
> storage/ZFS_FS_2    20G   158G    16K  -
> storage/ZFS_FS_3   100G   238G    16K  -
> storage/ZFS_FS_4    20G   158G    16K  -
> storage/ZFS_FS_5     1G   139G    16K  -
> storage/ZFS_FS_6   400G   538G    16K  -
> storage/ZFS_FS_7    20G   158G    16K  -
> storage/ZFS_FS_8   400G   538G    16K  -
> storage/ZFS_FS_9    20G   158G    16K  -
> storage/ZFS_FS_10   20G   158G    16K  -
> storage/ZFS_FS_11   20G   158G    16K  -
> storage/ZFS_FS_12  150G   288G    16K  -
> storage/ZFS_FS_13   20G   158G    16K  -
>
> These are filesystems that are created with the following command:
> zfs create -V ${size}GB ${ZFS_ROOT}/${diskname}
>
> Now, it seems that zpool only counts the data that is actually written to disk, while zfs counts both the sum of the individual filesystems *and* the data actually written to disk. If I were to create a new filesystem of 138GB, the filesystem would be full, even though that's not the case.
>
> This seems weird, but I'm not sure if it's me doing something wrong or if it's a bug. Please enlighten me, thanks.
>
>
>
> Some more info:
> [root@storage ~]# uname -a
> FreeBSD storage.storage 8.2-RELEASE FreeBSD 8.2-RELEASE #0: Thu Feb 17 02:41:51 UTC 2011 root@mason.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64
>
> [root@storage ~]# zpool get all storage
> NAME     PROPERTY       VALUE                 SOURCE
> storage  size           1.77T                 -
> storage  used           431G                  -
> storage  available      1.34T                 -
> storage  capacity       23%                   -
> storage  altroot        -                     default
> storage  health         ONLINE                -
> storage  guid           10905194744545589060  default
> storage  version        15                    default
> storage  bootfs         -                     default
> storage  delegation     on                    default
> storage  autoreplace    off                   default
> storage  cachefile      -                     default
> storage  failmode       wait                  default
> storage  listsnapshots  off                   default
>
> [root@storage ~]# zfs get all storage
> NAME     PROPERTY              VALUE                  SOURCE
> storage  type                  filesystem             -
> storage  creation              Tue May 10 11:59 2011  -
> storage  used                  1.60T                  -
> storage  available             138G                   -
> storage  referenced            431G                   -
> storage  compressratio         1.00x                  -
> storage  mounted               yes                    -
> storage  quota                 none                   default
> storage  reservation           none                   default
> storage  recordsize            128K                   default
> storage  mountpoint            /storage               default
> storage  sharenfs              off                    default
> storage  checksum              on                     default
> storage  compression           off                    default
> storage  atime                 on                     default
> storage  devices               on                     default
> storage  exec                  on                     default
> storage  setuid                on                     default
> storage  readonly              off                    default
> storage  jailed                off                    default
> storage  snapdir               hidden                 default
> storage  aclmode               groupmask              default
> storage  aclinherit            restricted             default
> storage  canmount              on                     default
> storage  shareiscsi            off                    default
> storage  xattr                 off                    temporary
> storage  copies                1                      default
> storage  version               4                      -
> storage  utf8only              off                    -
> storage  normalization         none                   -
> storage  casesensitivity       sensitive              -
> storage  vscan                 off                    default
> storage  nbmand                off                    default
> storage  sharesmb              off                    default
> storage  refquota              none                   default
> storage  refreservation        none                   default
> storage  primarycache          all                    default
> storage  secondarycache        all                    default
> storage  usedbysnapshots       0                      -
> storage  usedbydataset         431G                   -
> storage  usedbychildren        1.18T                  -
> storage  usedbyrefreservation  0                      -
>
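For what it's worth, the arithmetic does appear to add up (my reading, not something Mark stated): a zvol created with "zfs create -V <size>" reserves its full size up front, so each ZFS_FS_* dataset charges its whole volsize against the pool's available space at the zfs level even though only 16K is actually referenced. Summing the sizes from Mark's listing reproduces his numbers:

```shell
# The thirteen zvol sizes from Mark's "zfs list" output, in GiB.
sizes="20 20 100 20 1 400 20 400 20 20 20 150 20"

# Their sum should match usedbychildren (1.18T), and adding the 431G
# actually referenced should match the reported used (1.60T).
echo "$sizes" | awk '{
    for (i = 1; i <= NF; i++) sum += $i
    printf "reservations: %dG (%.2fT)\n", sum, sum / 1024
    printf "reservations + referenced: %.2fT\n", (sum + 431) / 1024
}'
# prints:
# reservations: 1211G (1.18T)
# reservations + referenced: 1.60T
```

(If that is what is going on, sparse zvols, created with "zfs create -s -V ...", would skip the reservation, at the cost of being able to overcommit the pool.)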
--
--------------------------------------------
Peter Maloney
Brockmann Consult
Max-Planck-Str. 2
21502 Geesthacht
Germany
Tel: +49 4152 889 300
Fax: +49 4152 889 333
E-mail: peter.maloney at brockmann-consult.de
Internet: http://www.brockmann-consult.de
--------------------------------------------