ZFS panic space_map.c line 110

Martin nakal at web.de
Fri May 8 07:20:42 UTC 2009

Hi Richard and Kip,

> This panic wouldn't have anything to do with zpool.cache (that's just
> a file to help the system find which devices it should expect to find
> zpools on during boot).   This is a problem with the free space map,
> which is part of the filesystem metadata.  If you're lucky, it's just
> the in-core copy of the free space map that was bogus and there's a
> valid map on disk.  If you're unlucky, the map on disk is trashed,
> and there's no really easy way to recover that pool.

I really cannot tell. I thought it would be nice to have ZFS for jail
management, so that I could create one file system per jail; that is
why I installed -CURRENT with version 13 of ZFS on a production server.
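For context, the setup was roughly one dataset per jail. A minimal
sketch of such a layout (the pool name "tank", the device names, and
the jail names are all made up here) would be:

```shell
# Hypothetical pool and dataset names -- one file system per jail.
zpool create tank mirror da0 da1
zfs create tank/jails
zfs set mountpoint=/jails tank/jails
zfs create tank/jails/www        # one dataset per jail
zfs create tank/jails/db
```

Per-jail datasets make it easy to snapshot, quota, or destroy a single
jail's file system independently of the others.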

> > One more piece of information I can give is that every hour the ZFS
> > file systems create snapshots. Maybe it triggered some
> > inconsistency between the writes to a file system and the snapshot,
> > I cannot tell, because I don't understand the condition.
> I doubt this had anything to do with the problem.  

Well, you said you provoked the panic by mounting and unmounting very
often. The zfs-snapshot-mgmt port that I used shows similar behavior in
certain situations.
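The hourly snapshots came from the zfs-snapshot-mgmt port; a plain cron
equivalent (the dataset name is hypothetical, and the real port also
prunes old snapshots) would be something like this /etc/crontab entry:

```shell
# Sketch only: take a timestamped snapshot of tank/jails every hour.
# Note that % must be escaped as \% inside a crontab line.
0 * * * * root /sbin/zfs snapshot "tank/jails@auto-$(date +\%Y-\%m-\%d_\%H.\%M)"
```

Each snapshot is cheap to create, but a scheme like this generates many
mounts and unmounts over time, which is the behavior mentioned above.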

> This could be a locking bug or a space map corruption (depressing).
> There really isn't enough context here for me to go on. If you can't
> get a core, please at least provide us with a backtrace from ddb.

It does not look like a locking bug to me. I tried several times to get
the pool running, also with an older kernel, and it panicked in the same
way each time. I could only get past the panic the first time, after I
removed zfs_enable="YES" from rc.conf.

ZFS really made me worried, and I have now removed the pools, created a
UFS partition, and restored all the data from backup. Sorry, I did not
investigate the problem more deeply, because I wanted to get the file
server running again; I thought that the exact panic line number and a
mention of the situation (it happened while importing the pool) would be
enough to make the problem clear.
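The fallback path described above (a fresh UFS partition plus a restore
from backup) would, assuming a dump(8)-style backup and a hypothetical
device and dump file name, look roughly like:

```shell
# Hypothetical device and backup file names.
newfs -U /dev/da0s1d                  # create a UFS2 file system with soft updates
mount /dev/da0s1d /jails
cd /jails && restore -rf /backup/jails.dump
```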

Nothing was lost; this ZFS data corruption just ended my ZFS experiment
for now. I will use the good old UFS2 and try ZFS again at a later
time.

Thanks to you both for your advice.
