Broken gmirror: why /dev/ufs is empty when geom_mirror is not loaded?

Lev Serebryakov lev at serebryakov.spb.ru
Mon Jan 17 14:09:06 UTC 2011


Hello, Ivan.
You wrote on 17 January 2011 at 16:46:46:

>>>>>     I have mirrored PARTITION (/dev/ad4s1d + /dev/ad6s1d) with UFS with
>>>>> label on it. This label is shown in /dev/ufs when geom_mirror is
>>>>> loaded.
>>>>
>>>>>     When geom_mirror is NOT loaded, both ad4s1d and ad6s1d are valid,
>>>>> complete, clean filesystems, but there are no /dev/ufs entries for them,
>>>>> and the kernel cannot mount the FSes at all.
>>>>     And even worse: it sees only ONE of the FSes, and when "geom_mirror"
>>>>     is loaded it picks up one of the components via "/dev/ufs/home"
>>>>     instead of the device node, and everything hangs due to a loop (?)...
>>> Yes, gmirror and glabel are known to interact badly because of such edge
>>> cases - since glabel presents the whole underlying device in pretty much
>>> the same way as the original device entry, gmirror cannot distinguish
>>> between the two. You could use the "-h" argument to "gmirror create" to
>>> get around this.
>>> Since this is so common and has also bitten me in the past, I wonder if
>>> some kind of avoidance detection mechanism could be created in gmirror?
>>
>>    I think it would be better if geom_label always created the ufs/ufsid
>> entries (even if the FS size is smaller than its container (provider)
>> size), but created the providers only as big as the FS itself. In that
>> case geom_mirror would never see its metadata inside "UFS-based" providers,
>> and geom_label would show the FS labels even when the FS is inside a
>> mirror and geom_mirror is not loaded at all. Both problems are solved
>> with one solution :)
> Ah but you see - the UFS metadata *does* record the correct file system
> size - and this size spans the entire container, just like /dev/adXsY 
> etc. - so both glabel and gmirror behave correctly.
   No, no, no. Here is an example.

   Let's imagine /dev/ad0s1a and /dev/ad1s1a both have, say, 1024
 sectors. They are mirrored as "mirror/gm0". It should have a size of
 1023 sectors, am I right? 1 sector is spent on metadata and cannot be
 accessed via /dev/mirror/gm0.
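
   A quick way to reproduce that arithmetic on a test box (just a sketch:
 md(4) devices stand in for ad?s1a, and they are bigger than 1024 sectors
 so that newfs works later on):

    kldload geom_mirror
    mdconfig -a -t malloc -s 10240 -u 0            # 10240-sector test "disk"
    mdconfig -a -t malloc -s 10240 -u 1
    gmirror label -v gm0 /dev/md0 /dev/md1
    diskinfo -v /dev/md0        | grep 'in sectors'   # 10240 sectors
    diskinfo -v /dev/mirror/gm0 | grep 'in sectors'   # 10239 -- the last
                                                      # sector holds the
                                                      # gmirror metadata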

   A UFS2 file system is created on /dev/mirror/gm0 with the volume label
 "rootfs", so it records the file system size as "1023 sectors", OK?
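
   Continuing the sketch, newfs records exactly the size it was given
 (the dumpfs output differs a bit between UFS1 and UFS2, but the idea
 holds):

    newfs -L rootfs /dev/mirror/gm0
    dumpfs /dev/mirror/gm0 | head   # "size" (in fragments) times "fsize"
                                    # corresponds to the 10239-sector mirror,
                                    # one sector short of the raw components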

   When geom_mirror is loaded and "wins" the tasting of ad?s1a, geom_label
 reads the label from /dev/mirror/gm0 and shows the proper
 "/dev/ufs/rootfs" and "/dev/ufsid/whatever" with the proper size of
 "1023 sectors".

   When geom_mirror is not loaded but geom_label is, it will not
 show "/dev/ufs/rootfs", because ad?s1a.size != rootfs.size, am I right?

   If we change geom_label to create labels in such cases too (simply
 remove the size check from the current code), there will be a problem
 when geom_mirror is loaded AFTER geom_label -- it could pick up a mirror
 component via the label instead of via the device itself.

   But if we remove this check AND change geom_label so that it creates
 the provider based on the UFS size (not the underlying provider size),
 it will always work. When geom_mirror is absent, labels will be created,
 but it will be impossible to pick up such a label as a component of a
 mirror, because the label provider will not contain the mirror metadata.
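
   The metadata gmirror tastes for really is only in that last sector, so
 a label provider trimmed to the UFS size could never expose it (sketch
 continued; the on-disk magic is the ASCII string "GEOM::MIRROR"):

    # read the last sector (10239) of the 10240-sector component
    dd if=/dev/md0 bs=512 skip=10239 count=1 2>/dev/null | hexdump -C | head -2
    # the dump should start with "GEOM::MIRROR"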

   And there is one more problem: it is almost always possible to create
 the mirror AFTER the file system (adding a mirror to an existing
 installation without re-creating the FSes), but then the FS size
 recorded in the metadata will be incorrect, and there is no way to
 "shrink" the FS even by one sector. And, yes, in such a case the current
 geom_label implementation will create labels from the mirror components
 (the size check passes), and that allows geom_mirror to use the labels
 as components. Complete mess :( And creating a mirror on a live system
 looks like a useful feature.
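
   For completeness, that conversion is usually done roughly like this (a
 sketch only, from single user or with the partition unmounted; ad0s1a
 stands for the existing, already populated partition):

    kldload geom_mirror
    gmirror label -v gm0 /dev/ad0s1a  # writes metadata into the LAST sector,
                                      # which the existing UFS still counts as
                                      # its own space -- exactly the mess above
    # point /etc/fstab at /dev/mirror/gm0 instead of /dev/ad0s1a, reboot, then:
    gmirror insert gm0 /dev/ad1s1a    # attach the second disk and resync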

-- 
// Black Lion AKA Lev Serebryakov <lev at serebryakov.spb.ru>


