Need to force sync(2) before umounting UFS1 filesystems?

Garrett Cooper yanegomi at gmail.com
Thu Sep 29 15:55:51 UTC 2011


On Thu, Sep 29, 2011 at 8:40 AM, Attilio Rao <attilio at freebsd.org> wrote:
> 2011/9/29 Kirk McKusick <mckusick at mckusick.com>:
>>> Date: Thu, 29 Sep 2011 12:04:24 +0200
>>> From: Attilio Rao <attilio at freebsd.org>
>>> To: Kirk McKusick <mckusick at mckusick.com>
>>> Cc: Garrett Cooper <yanegomi at gmail.com>, freebsd-fs at freebsd.org,
>>>         Xin LI <delphij at freebsd.org>
>>> Subject: Re: Need to force sync(2) before umounting UFS1 filesystems?
>>>
>>> 2011/9/29 Kirk McKusick <mckusick at mckusick.com>:
>>> > Hi Attilio,
>>> >
>>> > I have been looking into the problem described below and since you
>>> > appear to be the person that put in the change in question, I would
>>> > like to get you opinion on what (if anything) should be changed here.
>>>
>>> Kirk,
>>> please note that I didn't add/change anything wrt. that codepath.
>>>
>>> In the old code there was a lockmgr() acquisition with LK_DRAIN
>>> and LK_NOWAIT. This means that if the lockmgr() lock on the struct
>>> mount was already held by any other consumer, we would immediately
>>> fall back to the codepath you outlined in the patch rather than
>>> sleeping (and note that LK_NOWAIT was only passed in the case of a
>>> non-forced unmount).
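>>>
[Interjecting for readers of the archive: paraphrasing from memory,
the old dounmount() logic amounted to roughly the fragment below. The
lockmgr() arguments are simplified and the field names may not match
the tree exactly -- a sketch, not the actual code.]

    	int lkflags;

    	/*
    	 * LK_NOWAIT was only added for non-forced unmounts, so if
    	 * any other consumer held the mount lock we bailed out at
    	 * once instead of sleeping for it.
    	 */
    	lkflags = LK_DRAIN;
    	if ((flags & MNT_FORCE) == 0)
    		lkflags |= LK_NOWAIT;
    	if (lockmgr(&mp->mnt_lock, lkflags, NULL) != 0)
    		return (EBUSY);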
>>>
>>> That said, I don't really object to making the forced unmount
>>> case the default, but I haven't yet gone through the whole thread
>>> you outlined and I don't have any context on it, so I'm not sure
>>> whether this is the right approach.
>>>
>>> If you want to share more context on the problem you are trying
>>> to solve by switching that policy, we can discuss that too, but
>>> in general I have no problem with adopting the forced-unmount
>>> policy for all unmount cases.
>>>
>>> Attilio
>>> --
>>> Peace can only be achieved by understanding - A. Einstein
>>
>> Thanks for providing a bit more of the history on this codepath.
>>
>> Since 9-stable has now been branched, I believe that the best path
>> forward is to check this change into head and let it sit there for
>> several months so that we can get some experience with it. If it
>> causes folks problems we can back it out. If it does not cause
>> problems, then we can MFC it to 9-stable.
>>
>> Does this seem like a reasonable approach?
>
> In general yes, but I'd like to understand why unmount should fail
> so often with SU... do we keep the filesystem vfs_busy()'ed for
> extended periods?
>
> I need more context here; I'd likely need to look into the PRs too
> before giving an informed answer.
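
[For anyone following along: vfs_busy() is how syscalls pin a mount
point while they operate on it, and unmount has to contend with those
references. Paraphrasing the consumer pattern from memory -- the
MBF_NOWAIT flag usage and the helper function are illustrative, not
verbatim from the tree:]

    	if (vfs_busy(mp, MBF_NOWAIT) != 0)
    		return (EBUSY);		/* an unmount is in progress */
    	error = operate_on(mp);		/* hypothetical filesystem work */
    	vfs_unbusy(mp);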

The case noted in PR 161016 is that data isn't completely flushed out
to disk (or in this case a memory disk) at the end of each nanobsd
build when it creates an md(4) image on the second (and subsequent)
tries; we have to place hacks in nanobsd.sh to sync out the SU data
before umount, so that we can unmount and destroy the md device
without the build blowing up. At the syscall level, the hack boils
down to something like the sketch below.
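
[This is only an illustration of the workaround's effect -- the real
hack is shell code in nanobsd.sh, and "/mnt" here is a placeholder
mount point:]

    #include <sys/param.h>
    #include <sys/mount.h>

    #include <err.h>
    #include <unistd.h>

    int
    main(void)
    {
    	/* Flush dirty SU data down to the md device first. */
    	sync();
    	/* Then a plain, non-forced unmount should succeed. */
    	if (unmount("/mnt", 0) == -1)
    		err(1, "unmount");
    	/* Afterwards "mdconfig -d -u <unit>" destroys the md device. */
    	return (0);
    }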

I know of another company that builds appliance images in a different
way but similar in spirit to nanobsd's md disk images (I don't
remember whether its md generation scripts sync out to disk, though),
and I'm sure that there are more companies that do the same thing.

This was a behavior change between 8.x and 9.x that I only noticed
recently because I started using nanobsd ~1 month ago (the other
company I used to work for might have had this hack in place and I
just didn't realize it at the time).

Thanks,
-Garrett

