gvirstor & UFS

Bruce Evans bde at zeta.org.au
Thu Mar 29 22:10:46 UTC 2007


On Thu, 29 Mar 2007, Ivan Voras wrote:

> Ivan Voras wrote:
>
>> The file system on the virstor device was created with softupdates
>> enabled, as shown...
>
> Without softupdates, the I/O requests fail, with the usual spewing of
> kernel messages from g_vfs_done() in the log, but the application (dd)
> doesn't receive any failure codes. In effect, it looks like the
> requests are ignored -- they fail, but dd continues pumping more
> requests.

This might be because the writes are only of data: in ffs without
soft updates, data writes are asynchronous, with little error checking.
No error can be reported on return from bdwrite() or bawrite(), since
the write usually hasn't happened yet, and there is no mechanism other
than kernel messages for reporting errors afterwards.
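
A userland analogy (a standalone sketch only, not the kernel path, and
with a placeholder file name) shows the same effect: a buffered write
reports success while the data is still only in a buffer, and the real
i/o error can only surface later, at flush time, just as a failed
bdwrite()/bawrite() only shows up in the completion path.

/*
 * Userland analogy for delayed writes: fwrite() "succeeds" because
 * the data only reaches the stdio buffer; any ENOSPC/EIO can only
 * be reported later, when the buffer is actually flushed.
 */
#include <stdio.h>
#include <string.h>

int
main(void)
{
	/* Placeholder path: any file on a full or failing device. */
	FILE *fp = fopen("/path/to/full-fs/file", "w");
	char buf[4096];

	if (fp == NULL)
		return (1);
	memset(buf, 'x', sizeof(buf));
	/* Likely reports success: the data is only buffered so far. */
	if (fwrite(buf, 1, sizeof(buf), fp) == sizeof(buf))
		printf("fwrite: ok (buffered, not yet on disk)\n");
	/* The deferred write happens here; only now can an error be
	 * reported, long after the "successful" fwrite() returned. */
	if (fflush(fp) != 0)
		perror("fflush");
	fclose(fp);
	return (0);
}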

I thought that geom was too silent about i/o errors.  Actually, it
seems to be more verbose than my dscheck() about low-level errors
(g_vfs_done() always prints something if there is an error) and less
verbose about the errors that it checks for itself (g_io_check() never
prints anything, and buffers with errors detected by g_io_check()
apparently don't get as far as g_vfs_done()).  The messages printed by
my dscheck() are too verbose, but they are sometimes useful (especially
the details about what caused the error, which g_vfs_done() cannot
print because it sits at too high a level).
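
The two reporting styles amount to something like the following
(a standalone sketch with hypothetical names, modelling the behaviour
described above rather than the actual geom source):

/*
 * Two error-reporting styles: a pre-i/o check that silently returns
 * an errno (like g_io_check() as described above), and a completion
 * handler that always logs failures (like g_vfs_done()).
 */
#include <errno.h>
#include <stdio.h>

struct bio_sketch {
	long	offset;		/* byte offset of the request */
	long	length;		/* request length */
	long	mediasize;	/* size of the underlying provider */
	int	error;		/* errno-style completion status */
};

/* Reject malformed requests before any i/o, without logging. */
static int
io_check_sketch(struct bio_sketch *bp)
{
	if (bp->offset < 0 || bp->offset + bp->length > bp->mediasize)
		return (EIO);	/* caller sees the code; no message */
	return (0);
}

/* Runs at i/o completion and always logs a failure. */
static void
vfs_done_sketch(struct bio_sketch *bp)
{
	if (bp->error != 0)
		printf("vfs_done_sketch(): offset=%ld length=%ld "
		    "error = %d\n", bp->offset, bp->length, bp->error);
}

int
main(void)
{
	struct bio_sketch bp = { .offset = 8192, .length = 4096,
	    .mediasize = 4096, .error = 0 };

	/* A request rejected up front never reaches the done routine,
	   so nothing is logged for it. */
	bp.error = io_check_sketch(&bp);
	if (bp.error == 0)
		vfs_done_sketch(&bp);	/* would log low-level errors */
	else
		/* The real check logs nothing; this printf only shows
		   the demo's outcome. */
		printf("rejected before i/o, errno %d\n", bp.error);
	return (0);
}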

> Something else occurred to me: what if a UFS metadata update (for
> example, in a cg) fails in this way, i.e. it needs an additional
> chunk of physical storage that isn't available -- is there a chance
> that the fs will be corrupted?

ffs is supposed to detect and handle i/o errors for metadata.  It is
sloppy for indirect blocks (it uses (void)bwrite() a lot, discarding
the error), but hopefully when the write of an indirect block fails
the damage is limited to one file.  I/o errors in old blocks can
easily cause corruption, but for ENOSPC-type errors in new blocks the
error handling hopefully aborts writing before any damage is done.
This depends on ffs using a safe order of writes and on the physical
order not differing from it, which I doubt actually holds, especially
for virtual disks -- everything would have to be synchronous, or
otherwise slow, to preserve the order in all lower layers.
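
The "(void)bwrite()" sloppiness comes down to this (a standalone
sketch with hypothetical names, not the actual ffs code): a
synchronous write is the one place an error can be caught in time,
and casting the return value away loses it.

/*
 * Contrast discarding a synchronous write's return value with
 * checking it and aborting the update.
 */
#include <errno.h>
#include <stdio.h>

/* Stand-in for bwrite(): a synchronous write that may fail. */
static int
bwrite_sketch(int fail)
{
	return (fail ? EIO : 0);
}

/* Sloppy caller: the error is silently dropped, as with the
   (void)bwrite() calls for indirect blocks. */
static void
write_indirect_sloppy(int fail)
{
	(void)bwrite_sketch(fail);	/* error lost here */
}

/* Careful caller: an ENOSPC/EIO-type error aborts the update
   before any dependent metadata is written. */
static int
write_indirect_checked(int fail)
{
	int error;

	if ((error = bwrite_sketch(fail)) != 0) {
		printf("aborting update, error = %d\n", error);
		return (error);
	}
	return (0);
}

int
main(void)
{
	write_indirect_sloppy(1);	 /* failure goes unnoticed */
	(void)write_indirect_checked(1); /* failure stops the update */
	return (0);
}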

Bruce

