zpool failmode=continue

Johannes Totz johannes at jo-t.de
Tue Dec 27 16:37:50 UTC 2011


On 13/12/2011 14:53, Johannes Totz wrote:
> On 13/12/2011 14:44, Peter Maloney wrote:
>> Are you using NFS or ZVOLs?
>
> Neither, see below.
>
>> My ZFS hangs (all I/O) if I go into the .zfs/snapshots directory over
>> NFS. (I'm planning to file a PR after I find a way to reproduce it
>> reliably, but it depends on specific snapshots.) My workaround is to
>> mount /var/empty on top of the .zfs directory on the NFS client, and
>> deny everyone else access. Another workaround I thought of is to have
>> another parent directory in the dataset and share the second level
>> down, which doesn't contain the .zfs directory.
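
(For reference, that hiding trick would be something like this on a
Linux client, with /mnt/share standing in as a placeholder for wherever
the NFS share is mounted:

    # bind-mount an empty directory over .zfs so clients can't enter it
    mount --bind /var/empty /mnt/share/.zfs

or mount_nullfs /var/empty /mnt/share/.zfs on a FreeBSD client.)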
>
> My pool is not exported to any clients. My situation is actually the
> other way around; I should have been clearer: the block device on which
> I created the pool is on the network.
> It's kind of a crazy setup:
> - sshfs to another (Linux) machine
> - create a big image file
> - create the pool from a file vdev mounted via sshfs
> Eventually the network drops out and zpool shows read and write errors;
> fine so far. But all new I/O just hangs instead of failing with an error.
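
For reference, the file-vdev setup is roughly this (host name and image
size are made up):

    # mount the remote Linux box's filesystem over sshfs
    sshfs user@linuxbox:/data /mnt/remote
    # create a big image file to serve as the vdev
    truncate -s 100g /mnt/remote/zpool.img
    # create the pool on the file vdev; failmode=continue is supposed to
    # make I/O fail with EIO instead of blocking when the device is gone
    zpool create -o failmode=continue testpool /mnt/remote/zpool.img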

After some observation, it turns out that periodic/security/100.chksetuid 
makes all I/O on the test pool die.
Is find(1) doing something funny? Since it doesn't even search the test 
pool (it's imported but not mounted) or the sshfs mount (only ufs and zfs 
are searched), I have no clue what might be going wrong...
zpool status simply reports read/write errors.
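
For context, 100.chksetuid essentially sweeps all local ufs and zfs
mounts with find(1), roughly like this (paraphrased from the periodic
script, not verbatim):

    # collect local ufs/zfs mount points
    MP=`mount -p | awk '$3 == "ufs" || $3 == "zfs" { print $2 }'`
    # walk each filesystem without crossing mount points (-x) and list
    # setuid/setgid executables
    find -sx $MP /dev/null -type f \
        \( -perm -u+x -o -perm -g+x -o -perm -o+x \) \
        \( -perm -u+s -o -perm -g+s \) -exec ls -liTd {} +

So any mounted zfs dataset gets traversed, but a dataset that isn't
mounted shouldn't even appear in that list.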

I noticed this because, when logging iostat to a file, I/O always stopped 
at 3 a.m. But I can also trigger it by simply running 100.chksetuid. All 
the other scripts in daily and security are fine.
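
To reproduce it here, running the script by hand is enough:

    sh /etc/periodic/security/100.chksetuid

while watching the pool with "zpool iostat testpool 1" in another
terminal.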

Does anybody have an idea what might cause this?


