ZFS Problem - full disk, can't recover space :(.
Jeremy Chadwick
freebsd at jdc.parodius.com
Sun Mar 27 09:41:23 UTC 2011
On Sun, Mar 27, 2011 at 10:31:05AM +0100, Dr Josef Karthauser wrote:
> On 27 Mar 2011, at 09:43, Jeremy Chadwick wrote:
> >>>>
> >>>> This is the problematic filesystem:
> >>>>
> >>>> void/j/legacy-alpha 56.6G 3.41G 56.6G /j/legacy-alpha
> >>>>
> >>>> No chance that an application is holding any data - I rebooted and came
> >>>> up in single-user mode to try to get this resolved, but no cookie.
> >>>
> >>> Are these filesystems using compression? Have any quota or reservation
> >>> filesystem settings set?
> >>>
> >>> "zfs get all" might help, but it'll be a lot of data. We don't mind.
> >>>
> >>
> >> Ok, here you are. ( http://www.josef-k.net/misc/zfsall.txt.bz2 )
> >>
> >> I suspect that the problem is the same as reported here:
> >> http://web.archiveorange.com/archive/v/Lmwutp4HZLFDEkQ1UlX5, namely that
> >> there was a bug in the handling of sparse files on ZFS. The file that
> >> triggered the problem is a Bayes database from SpamAssassin.
> >
> > That was going to be my next question, actually (yep really :-) ).
>
> So, I guess my next question is, would I be mad to apply the zpool version 28 patch to 8.2 and run with that? Or are sparse files so broken on zfs that I ought to find some ufs to run the bayesdb on?
There have been a lot of problem reports regarding the ZFS v28 patch on
RELENG_8 from what I've seen (the patch applies fine, but things then
break badly). I will also point out that the administrator of
cvsup9.freebsd.org just tried moving to that patch on RELENG_8 and broke
the server badly. I have the mails, but they're off-list/private and I
don't feel comfortable just dumping them here. My advice: if you care
about stability, don't run the v28 patch, period.
I'm curious about something -- we use RELENG_8 systems with a mirrored
zpool (kinda funny how I did it too, since the system only has 2 disks)
for /home. Our SpamAssassin configuration obviously writes to
$user/.spamassassin/bayes_* files. Yet we do not see the sparse file
problem that others are reporting.
$ df -k /home
Filesystem  1024-blocks      Used     Avail Capacity  Mounted on
data/home     239144704 107238740 131905963    45%    /home

$ zfs list data/home
NAME        USED  AVAIL  REFER  MOUNTPOINT
data/home   102G   126G   102G  /home
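(When a dataset's space refuses to come back, the per-dataset breakdown
of "used" is worth checking before blaming the sparse-file bug. A sketch
using the dataset name from earlier in the thread -- the properties are
standard ZFS ones; actual output will of course vary per system:)

```shell
# Break down where the dataset's "used" space actually lives:
# the dataset itself, its snapshots, children, or a refreservation.
zfs get -o name,property,value \
    usedbydataset,usedbysnapshots,usedbychildren,usedbyrefreservation \
    void/j/legacy-alpha

# List any snapshots that may be pinning freed blocks.
zfs list -t snapshot -r void/j/legacy-alpha
```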
$ zpool status data
  pool: data
 state: ONLINE
 scrub: resilver completed after 0h9m with 0 errors on Wed Oct 20 03:08:22 2010
config:

        NAME         STATE   READ WRITE CKSUM
        data         ONLINE     0     0     0
          mirror     ONLINE     0     0     0
            ada1     ONLINE     0     0     0
            ada0s1g  ONLINE     0     0     0  26.0G resilvered
$ grep bayes /usr/local/etc/mail/spamassassin/local.cf
use_bayes 1
bayes_auto_learn 1
bayes_ignore_header X-Bogosity
bayes_ignore_header X-Spam-Flag
bayes_ignore_header X-Spam-Status
$ ls -l .spamassassin/
total 4085
-rw-------  1 jdc  users   102192 Mar 27 02:30 bayes_journal
-rw-------  1 jdc  users   360448 Mar 27 02:30 bayes_seen
-rw-------  1 jdc  users  4947968 Mar 27 02:30 bayes_toks
-rw-------  1 jdc  users     8719 Mar 20 04:11 user_prefs
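(For anyone wanting to check whether a given bayes_* file is actually
sparse, comparing allocated blocks against the byte count does the
trick. A minimal sketch -- the temp file and the 10 MiB hole here are
just an illustration, not data from this thread:)

```shell
# Compare a file's apparent size with its allocated size: if du reports
# far fewer bytes on disk than the byte count, the file is sparse.
f=$(mktemp)
# Write a single byte 10 MiB past the start, leaving a hole before it.
dd if=/dev/zero of="$f" bs=1 count=1 seek=10485760 2>/dev/null
apparent=$(( $(wc -c < "$f") ))              # bytes as seen by stat/ls
allocated=$(( $(du -k "$f" | cut -f1) * 1024 ))  # bytes actually on disk
echo "apparent=${apparent} allocated=${allocated}"
rm -f "$f"
```

Running `du -k bayes_toks` next to `ls -l bayes_toks` gives the same
comparison on a live file.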
--
| Jeremy Chadwick jdc at parodius.com |
| Parodius Networking http://www.parodius.com/ |
| UNIX Systems Administrator Mountain View, CA, USA |
| Making life hard for others since 1977. PGP 4BD6C0CB |