ZFS (and quota)
johan at stromnet.se
Wed Sep 19 15:56:11 PDT 2007
I just installed FreeBSD-current on a box (actually upgraded 6.2 to -
current) to experiment a bit.
I was playing around with ZFS a bit and tried out the quota features.
While doing this I noticed that it doesn't seem like you get a "disk
full" notice the same way as you do on a "normal" (UFS) filesystem.
Instead of aborting the operation with "No space left on device", the
write just keeps going:
[root at devbox ~]# zpool create tank /dev/ad2
[root at devbox ~]# zpool list
NAME    SIZE    USED    AVAIL   CAP     HEALTH  ALTROOT
tank    37.2G   111K    37.2G   0%      ONLINE  -
[root at devbox /tank]# zfs create -V 10M tank/set3vol
[root at devbox /tank]# newfs /dev/zvol/tank/set3vol
/dev/zvol/tank/set3vol: 10.0MB (20480 sectors) block size 16384,
fragment size 2048
using 4 cylinder groups of 2.52MB, 161 blks, 384 inodes.
super-block backups (for fsck -b #) at:
160, 5312, 10464, 15616
[root at devbox /tank]# mount /dev/zvol/tank/set3vol set3vol/
[root at devbox /tank]# cd set3vol/
[root at devbox /tank/set3vol]# dd if=/dev/urandom of=test
/tank/set3vol: write failed, filesystem is full
dd: test: No space left on device
19169+0 records in
19168+0 records out
9814016 bytes transferred in 2.276896 secs (4310261 bytes/sec)
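So on UFS the failure is a hard one: dd stops and exits non-zero. A small sketch of how a script could rely on that (the write below succeeds, it just illustrates checking dd's exit status; /tmp/ddtest is a made-up scratch path):

```shell
# Check dd's exit status to tell a completed write from a hard
# failure such as ENOSPC ("No space left on device").
if dd if=/dev/zero of=/tmp/ddtest bs=1k count=4 2>/dev/null; then
    result="success"
else
    result="failure (e.g. ENOSPC)"
fi
echo "dd result: $result"
rm -f /tmp/ddtest
```

On the quota'd ZFS dataset below, no such non-zero exit ever seems to arrive while space remains under the quota.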
[root at devbox /tank]# zfs create tank/set2
[root at devbox /tank/set2]# zfs set quota=10M tank/set2
[root at devbox /tank/set2]# zfs get quota tank/set2
NAME       PROPERTY  VALUE  SOURCE
tank/set2  quota     10M    local
[root at devbox /tank/set2]# dd if=/dev/urandom of=test
18563+0 records in
18562+0 records out
9503744 bytes transferred in 199.564353 secs (47622 bytes/sec)
[root at devbox /tank/set2]# zfs list tank/set2
NAME       USED   AVAIL  REFER  MOUNTPOINT
tank/set2  9.15M  870K   9.15M  /tank/set2
No hard stop there; it just tries to write more and more and more..
Well, the quota is enforced fine, but shouldn't there be some harder
error? I'm not sure how regular UFS quotas work, though, since I've
never used them, but this seems like strange behaviour.
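In the meantime, one workaround is to watch the dataset's remaining space yourself. A sketch of the parsing (the $sample line stands in for real `zfs list -H -o name,used,avail tank/set2` output, tab-separated, so the pipeline can be shown without a live pool):

```shell
# Pull the AVAIL column out of scripted (-H) zfs list output;
# $sample is canned data imitating one tab-separated output line.
sample='tank/set2	9.15M	870K'
avail=$(printf '%s\n' "$sample" | awk '{print $3}')
echo "tank/set2 has $avail left before the quota"
```

With `-H`, zfs list omits headers and separates fields with tabs, which makes output like this safe to feed to awk or cut.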
Anyway, how "stable" is the ZFS support and -current / FreeBSD 7 in
general now? I'm about to get a new server, an 8-core Xeon thingy with
lots of disk, so I would probably benefit very much from running
FreeBSD 7 (much better multicore performance, if I've understood
correctly). Being able to use ZFS for some of my jails would rock too,
having individual quotas and all the other flexibility ZFS provides
(i.e. creating a new dataset for every jail and enforcing an individual
quota).. Would anyone dare to do this on a production machine yet? Is
anyone already doing so?
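The jail setup I have in mind would look something like this — a hedged sketch only; the pool name tank, the tank/jails hierarchy, the /jails mountpoints, and the quota values are all my own invented examples. The function echoes the command instead of running it, so the shape can be checked without a real pool (drop the echo to actually apply it):

```shell
# Build (and here, just print) the zfs create command for one jail's
# dataset with its own quota and mountpoint. Dataset layout is made up.
make_jail_dataset() {
    jail=$1; quota=$2
    echo zfs create -o quota="$quota" -o mountpoint=/jails/"$jail" tank/jails/"$jail"
}
cmd=$(make_jail_dataset www 10G)
echo "$cmd"
```

Each jail then gets its own dataset, so quota, compression, snapshots etc. can all be tuned per jail.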
Well, it can't be said too many times: keep up the good work! Thanks
to all the FreeBSD developers (and others too!) :)
johan at stromnet.se
More information about the freebsd-fs mailing list