gvinum missing setupstate/setstate

Paul Schenkeveld fb-geom at psconsult.nl
Fri Mar 10 12:05:05 UTC 2006

Hi Lukas,

On Thu, Mar 09, 2006 at 11:18:45AM +0100, Lukas Ertl wrote:
> On Thu, 9 Mar 2006, Paul Schenkeveld wrote:
> >I know that the data on the datadisks are mirrored correctly as they
> >were so while running 4-STABLE, so I'd prefer not to re-synchronize
> >those disks (2 pairs of two 500GB disks per server).  So I tried the
> >'volume <name> setupstate' while creating but that seems not supported.
> >Also, doing setstate up <name.p1> <name.p1.s0> afterwards seems not
> >supported.
> Try "setstate -f up name.p1.s0".

That worked, thanks!

I may have found another reproducible problem with gvinum, however.  My
estimate is that 'gvinum create', run on a set of disks with no vinum
configuration present, writes unreliable metadata if the config file has
more than a couple of volumes.  I can't fix the source (ENOTIME, and
probably EINSUFFICIENTEXPERIENCE with the geom and gvinum code), so
below is the story that made me feel gvinum create is broken.

This server is finally up after a downtime of more than 24 hours, so my
next opportunity to test will be the next server, sometime next month.

The long (& boring) story:

For the upgrade from 4.x to 6-STABLE, I first installed a new couple of
SCSI disks:

  # fdisk -s da0
  /dev/da0: 8924 cyl 255 hd 63 sec
  Part        Start        Size Type Flags
     1:          63     2088387 0xa5 0x00
     2:     2088450   141275610 0xa5 0x80

  # disklabel da0s1
  # /dev/da0s1:
  8 partitions:
  #        size   offset    fstype   [fsize bsize bps/cpg]
    a:  1564099   524288    4.2BSD        0     0     0
    b:   524288        0      swap
    c:  2088387        0    unused        0     0

  # disklabel da0s2
  # /dev/da0s2:
  8 partitions:
  #        size   offset    fstype   [fsize bsize bps/cpg]
    a:   524288  2097433    4.2BSD     2048 16384 32776
    c: 141275610        0    unused        0     0
    e: 141275594       16     vinum

da1 is exactly the same.

The first slice on each disk has 6.0-RELEASE installed on it, which I
use as a kind of Fixit environment (faster than a CD); it also contains
an up-to-date copy of my vinum configuration file.

There are 12 gvinum volumes, including swap, /, /var, /usr, /home.
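
To give an idea of the shape of such a config, a mirrored volume in
/etc/vinum.conf looks roughly like this (the drive names, volume name
and size below are invented for illustration; my actual config file is
not shown here):

```
# hypothetical excerpt of a vinum.conf mirrored-volume definition
drive d0 device /dev/da0s2e
drive d1 device /dev/da1s2e
volume root
  plex org concat
    sd length 512m drive d0
  plex org concat
    sd length 512m drive d1
```

Each volume gets two concat plexes, one subdisk per plex, with the two
subdisks on different physical drives.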

Because of problems I saw with vinum in the past, my standard procedure
for changing anything in any [g]vinum configuration is:
  - reboot the machine and stay in single-user mode
  - vinum resetconfig
    or even: dd bs=512 count=265 < /dev/zero > vinum_partition
  - vinum create /etc/vinum.conf

Since resetconfig is not supported, I use the dd method on FreeBSD 6 and
later.  Note that I always reboot before the dd, so geom_vinum is not
yet loaded at that time.
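
As a sketch, the reset procedure can be scripted like this.  The
partition names are taken from the labels above; the DRYRUN wrapper only
prints the commands, since zeroing metadata is destructive and must be
done in single-user mode with geom_vinum not loaded:

```shell
#!/bin/sh
# Hypothetical sketch of the metadata-wipe-and-recreate procedure.
# Set DRYRUN to empty to actually run the commands (single-user mode,
# geom_vinum NOT loaded).
DRYRUN=1
run() {
    if [ -n "$DRYRUN" ]; then
        echo "(would run) $*"
    else
        "$@"
    fi
}

# zero the on-disk vinum metadata at the start of each vinum partition
for part in /dev/da0s2e /dev/da1s2e; do
    run dd if=/dev/zero of="$part" bs=512 count=265
done

# then recreate everything from the config file
run gvinum create /etc/vinum.conf
```

The count=265 follows the dd command quoted above; adjust the partition
list to match your own disklabels.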

I have tried many times to follow the above procedure on this server
after I added two existing large mirrored filesystems, on 4 ATA disks of
500GB, to this system and added their config to /etc/vinum.conf.  Every
time, everything looks OK after gvinum create, but after a reboot
subdisks, plexes and volumes show up as down, stale or empty, and
sometimes a drive is reported down.  An explicit gvinum saveconfig after
gvinum create does not help either.

To rule out kernel differences between 6.0-R and 6-STABLE, I also tried
with 6.0-R on both da[01]s1a and da[01]s2a, and with 6-STABLE on both
slices.  I always ended up with down/stale/empty objects after reboot,
but sometimes only half of each mirror was gone, sometimes both plexes
of each mirror.

Reducing the vinum config to cover only the 2 SCSI disks, and even
taking the ATA disks and controllers out of the machine, did not help.
Then, around 2:30 am last night, I remembered that when I first built
the new disks I started with only swap, /, /tmp, /var, /usr and /home
and added the other volumes later.  Eureka!  I reduced /etc/vinum.conf
to contain only these volumes, dd'ed /dev/zero over the vinum metadata
for the umpteenth time, ran gvinum create, rebooted to single user, and
everything came up OK!  Both booting F1 (da0s1a) and F2 (gvinum/root)
are stable, and no matter how many reboots, all objects remain up.

Then I added the remaining volumes on my SCSI disks and the two large
volumes on the ATA disks, had to 'setstate -f up' every subdisk of
every second plex, and the system (at least 10 reboots later now) is
still fine.
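
That forcing step can be done in a loop; here is a dry-run sketch.  The
volume names are made up, and it assumes each mirror's second plex is
named <vol>.p1 with a single subdisk .s0, as in the default gvinum
naming scheme:

```shell
#!/bin/sh
# Hypothetical loop forcing the second plex's subdisk of each volume up.
# Set DRYRUN to empty to actually run the commands.
DRYRUN=1
run() {
    if [ -n "$DRYRUN" ]; then
        echo "(would run) $*"
    else
        "$@"
    fi
}

# volume names invented for illustration; adjust to your own config
for vol in root var usr home; do
    run gvinum setstate -f up "$vol.p1.s0"
done
```

Only do this when you are certain both plexes already hold identical
data, as in the mirrors carried over from 4-STABLE above; otherwise let
gvinum resynchronize.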

> regards,
> le

Regards, and thanks a lot for all your efforts geomifying vinum!

Paul Schenkeveld
