Unable to delete files on ZFS volume
Mister Olli
mister.olli at googlemail.com
Sat Jun 20 19:49:54 UTC 2009
Hi,
> This is a known issue with write allocate file systems and snapshots.
Great, so it's not something completely unknown...
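(In case it helps anyone else who hits this: the workaround I've seen
suggested for copy-on-write filesystems is to free a file's blocks by
truncating it in place before removing it, along these lines -- untested
here so far:

: > /test/data1/20090620165743   # truncate in place, freeing the data blocks
rm /test/data1/20090620165743    # the remove itself then needs far less space

I can't say yet whether that actually works on this pool.)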
> I haven't seen this before on v13 without any snapshots.
Maybe I should mention that ZFS is running in a Xen domU with 786MB RAM
on i386 (I've already read that i386 can be troublesome).
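For reference, the usual ZFS-on-i386 tuning lives in /boot/loader.conf;
the values below are just the commonly quoted starting points, not
necessarily what this domU is running:

# /boot/loader.conf -- commonly suggested starting points for ZFS on i386
vm.kmem_size="512M"
vm.kmem_size_max="512M"
vfs.zfs.arc_max="128M"
vfs.zfs.prefetch_disable="1"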
> A few questions:
some, yeah ;-))
> - How many file systems?
I'm not sure how to count correctly, but the 'zfs list' output is
complete, with the filesystems
- test
- test/data1
- test/data2
nothing more
> - How old are the file systems?
According to 'zpool get all', not older than 48 hours.
> - How much churn has there been on the file system?
Not sure what you mean by 'churn' (there seems to be no German
translation that makes sense ;-))
> - Was this an upgraded v6 or created as v13?
It was not upgraded; the pool was created as v13 (see the 'zpool get all' output below).
> - How many files on test?
Quite a lot; I started with a bash loop that created 3 kB files
for half a day, then switched to randomized sizes.
- test/data1 has 57228 files
- test/data2 has 9024 files
(measured with 'ls -l /test/data2/data1 | cat -n | tail -n 10', minus 1 for the 'total' line)
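(A simpler way to count, avoiding the 'total' line and any
argument-length limits:

find /test/data2/data1 -type f | wc -l

gives the file count directly.)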
> ... as well as any other things that occur to you to characterize the
> file system.
All data on test/data1 was created with an endless bash loop, to test
whether the system crashes:

while true; do dd if=/dev/random of=/test/data1/$(date +%Y%m%d%H%M%S) bs=1k count=3; sleep 1; done

where 'count=3' was replaced by 'count=$RANDOM' after approx. 16 hours.
test/data2 is a copy of test/data1. The copy was started when data1
used 1.62GB and ran until all space in the pool was filled up, which
led to the remaining copy operations aborting with a 'No space left on
device' failure.
As the directory listing of test/data1 is too long for a shell command
line (sh/bash), I did the copy like this:
cp -r /test/data1 /test/data2
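(For anyone reproducing this: what breaks a 'cp /test/data1/* ...' is
the kernel's argument-length limit on exec, so a per-file alternative
would be, for example,

find /test/data1 -type f -exec cp {} /test/data2/ \;

but 'cp -r' was the simpler choice here.)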
That's pretty much everything I did.
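For anyone wanting to reproduce this, the whole sequence boils down to
roughly the following; the pool-creation line is my reconstruction,
with 'ad1' as a placeholder vdev:

zpool create test ad1      # placeholder vdev -- reconstruction, not the exact command
zfs create test/data1
zfs create test/data2
# fill data1 (count=3 for the first ~16 hours, then count=$RANDOM):
while true; do dd if=/dev/random of=/test/data1/$(date +%Y%m%d%H%M%S) bs=1k count=$RANDOM; sleep 1; done
# once data1 holds ~1.6GB, copy it until the pool runs full:
cp -r /test/data1 /test/data2
# afterwards any delete fails with 'No space left on device':
rm -rf /test/data2/data1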
Let me know if you need further details.
Regards,
---
Mr. Olli
>
> Cheers,
> Kip
>
>
> On Sat, Jun 20, 2009 at 12:26 PM, Mister Olli <mister.olli at googlemail.com> wrote:
> > Hi,
> >
> >> Do you have snapshots or run ZFS v6?
> > neither one nor the other. Here are my pool/ZFS details.
> >
> > [root@template-8_CURRENT /test/data2]# zpool get all test
> > NAME  PROPERTY       VALUE                SOURCE
> > test  size           2.98G                -
> > test  used           2.94G                -
> > test  available      47.9M                -
> > test  capacity       98%                  -
> > test  altroot        -                    default
> > test  health         ONLINE               -
> > test  guid           5305090209740383945  -
> > test  version        13                   default
> > test  bootfs         -                    default
> > test  delegation     on                   default
> > test  autoreplace    off                  default
> > test  cachefile      -                    default
> > test  failmode       wait                 default
> > test  listsnapshots  off                  default
> > [root@template-8_CURRENT /test/data2]# zfs get all test
> > NAME  PROPERTY              VALUE                  SOURCE
> > test  type                  filesystem             -
> > test  creation              Fri Jun 19 21:01 2009  -
> > test  used                  1.96G                  -
> > test  available             0                      -
> > test  referenced            26.6K                  -
> > test  compressratio         1.00x                  -
> > test  mounted               yes                    -
> > test  quota                 none                   default
> > test  reservation           none                   default
> > test  recordsize            128K                   default
> > test  mountpoint            /test                  default
> > test  sharenfs              off                    default
> > test  checksum              on                     default
> > test  compression           off                    default
> > test  atime                 on                     default
> > test  devices               on                     default
> > test  exec                  on                     default
> > test  setuid                on                     default
> > test  readonly              off                    default
> > test  jailed                off                    default
> > test  snapdir               hidden                 default
> > test  aclmode               groupmask              default
> > test  aclinherit            restricted             default
> > test  canmount              on                     default
> > test  shareiscsi            off                    default
> > test  xattr                 off                    temporary
> > test  copies                1                      default
> > test  version               3                      -
> > test  utf8only              off                    -
> > test  normalization         none                   -
> > test  casesensitivity       sensitive              -
> > test  vscan                 off                    default
> > test  nbmand                off                    default
> > test  sharesmb              off                    default
> > test  refquota              none                   default
> > test  refreservation        none                   default
> > test  primarycache          all                    default
> > test  secondarycache        all                    default
> > test  usedbysnapshots       0                      -
> > test  usedbydataset         26.6K                  -
> > test  usedbychildren        1.96G                  -
> > test  usedbyrefreservation  0                      -
> > [root@template-8_CURRENT /test/data2]# zfs list -t snapshot
> > no datasets available
> >
> >
> >
> >> Confirm that you've deleted your snapshots and are running pool v13.
> >>
> >> Future ZFS mail should be directed to freebsd-fs@
> > Sorry for that. fixed now ;-))
> >
> > Regards,
> > ---
> > Mr. Olli
> >
> >
> >>
> >>
> >> On Sat, Jun 20, 2009 at 10:36 AM, Mister Olli <mister.olli at googlemail.com> wrote:
> >> > Hi,
> >> >
> >> > after filling up a ZFS volume to the last byte, I'm unable to delete
> >> > files; every attempt fails with 'No space left on device'.
> >> >
> >> >
> >> >
> >> > [root@template-8_CURRENT /test/data2]# df -h
> >> > Filesystem     Size    Used   Avail Capacity  Mounted on
> >> > /dev/ad0s1a    8.7G    5.2G    2.8G    65%    /
> >> > devfs          1.0K    1.0K      0B   100%    /dev
> >> > test             0B      0B      0B   100%    /test
> >> > test/data1     1.6G    1.6G      0B   100%    /test/data1
> >> > test/data2     341M    341M      0B   100%    /test/data2
> >> > [root@template-8_CURRENT /test/data2]# zfs list
> >> > NAME         USED  AVAIL  REFER  MOUNTPOINT
> >> > test        1.96G      0  26.6K  /test
> >> > test/data1  1.62G      0  1.62G  /test/data1
> >> > test/data2   341M      0   341M  /test/data2
> >> > [root@template-8_CURRENT /test/data2]# ls -l data1 | tail -n 20   <-- there are quite a lot of files, so I truncated ;-))
> >> > -rw-r--r--  1 root  wheel      3072 Jun 20 17:13 20090620165743
> >> > -rw-r--r--  1 root  wheel   9771008 Jun 20 17:11 20090620165803
> >> > -rw-r--r--  1 root  wheel    624640 Jun 20 17:12 20090620165809
> >> > -rw-r--r--  1 root  wheel   1777664 Jun 20 17:14 20090620165810
> >> > -rw-r--r--  1 root  wheel   4059136 Jun 20 17:15 20090620165817
> >> > -rw-r--r--  1 root  wheel  23778304 Jun 20 17:13 20090620165925
> >> > -rw-r--r--  1 root  wheel  20318208 Jun 20 17:13 20090620165952
> >> > -rw-r--r--  1 root  wheel  28394496 Jun 20 17:10 20090620170013
> >> > -rw-r--r--  1 root  wheel  23698432 Jun 20 17:12 20090620170021
> >> > -rw-r--r--  1 root  wheel  26476544 Jun 20 17:19 20090620170100
> >> > -rw-r--r--  1 root  wheel  19904512 Jun 20 17:15 20090620170132
> >> > -rw-r--r--  1 root  wheel  23815168 Jun 20 17:14 20090620170142
> >> > -rw-r--r--  1 root  wheel   6683648 Jun 20 17:11 20090620170225
> >> > -rw-r--r--  1 root  wheel  19619840 Jun 20 17:11 20090620170322
> >> > -rw-r--r--  1 root  wheel  13902848 Jun 20 17:13 20090620170331
> >> > -rw-r--r--  1 root  wheel  28981248 Jun 20 17:13 20090620170346
> >> > -rw-r--r--  1 root  wheel  18287616 Jun 20 17:11 20090620170355
> >> > -rw-r--r--  1 root  wheel  16762880 Jun 20 17:16 20090620170405
> >> > -rw-r--r--  1 root  wheel  26966016 Jun 20 17:10 20090620170429
> >> > -rw-r--r--  1 root  wheel   5252096 Jun 20 17:14 20090620170502
> >> > [root@template-8_CURRENT /test/data2]# rm -rf data1
> >> > rm: data1/20090620141524: No space left on device
> >> > rm: data1/20090620025202: No space left on device
> >> > rm: data1/20090620014926: No space left on device
> >> > rm: data1/20090620075405: No space left on device
> >> > rm: data1/20090620155124: No space left on device
> >> > rm: data1/20090620105723: No space left on device
> >> > rm: data1/20090620170100: No space left on device
> >> > rm: data1/20090620040149: No space left on device
> >> > rm: data1/20090620002512: No space left on device
> >> > rm: data1/20090620052315: No space left on device
> >> > rm: data1/20090620083750: No space left on device
> >> > rm: data1/20090620063831: No space left on device
> >> > rm: data1/20090620155029: No space left on device
> >> > rm: data1/20090619234313: No space left on device
> >> > rm: data1/20090620115346: No space left on device
> >> > rm: data1/20090620075508: No space left on device
> >> > rm: data1/20090620145541: No space left on device
> >> > rm: data1/20090620093335: No space left on device
> >> > rm: data1/20090620101846: No space left on device
> >> > rm: data1/20090620132456: No space left on device
> >> > rm: data1/20090620040044: No space left on device
> >> > rm: data1/20090620091401: No space left on device
> >> > rm: data1/20090620162251: No space left on device
> >> > rm: data1/20090619220813: No space left on device
> >> > rm: data1/20090620010643: No space left on device
> >> > rm: data1/20090620052218: No space left on device
> >> >
> >> >
> >> >
> >> >
> >> >
> >> > Regards,
> >> > ---
> >> > Mr. Olli