Using mirroring to replace drive?

John Nielsen lists at jnielsen.net
Sat Oct 18 19:03:39 PDT 2008


On Saturday 18 October 2008, Chris Pratt wrote:
> Hi, For years I've been upgrading by building a temp
> server, transferring a production function to it, and
> temporarily decommissioning the old server while
> I upgrade and rebuild it. I was thinking of trying a different
> approach after trying out gvinum in the last
> couple of years.
>
> The current scenario is that I have a machine where the
> Adaptec controller is suggesting I replace a failing SCSI
> drive which happens to be the system disk. I purchased
> a couple of new drives and thought I might just plug one in
> and mirror the failing drive onto the new drive, then
> pull the failing drive and plug in the other new drive as
> the second mirrored drive and be done with it. One
> obvious benefit would be having a system drive
> mirror for future such issues. I have never built a mirror
> on the fly, but it seems many have from what I've read,
> and the cookbooks out there make it sound very
> easy. I was going to use GEOM Mirror on 6.2 (then
> upgrade to 7.0 after establishing the new good drives).
>
> 1. Is this an appropriate way to deal with this?

It could be. However, if the new disks are not the same size as the failing 
disk (or perhaps even if they are), I would recommend using dump/restore to 
do the transfer rather than including the failing drive in the mirror. 
Assuming you can only have 2 disks attached at any given time and want to 
mirror at the disk level (as opposed to the partition or slice level), the 
sequence would be something like this (a concrete command sketch follows 
the list):

1.  Connect the new disk.
2.  gmirror label ... (create a single-member ("broken") mirror on the new 
    disk).
3.  Partition (fdisk) and label (bsdlabel) the new mirror device, installing 
    boot blocks as appropriate (fdisk -B and bsdlabel -wB, for example).
4.  newfs and mount (at a temporary location) each filesystem on the mirror.
5.  Dump the contents of each filesystem on the original disk to the 
    corresponding one on the mirror. Use dump's -L flag so that mounted 
    ("live") filesystems are dumped from a snapshot.
6.  Edit <temproot>/etc/fstab and change the relevant device entries to 
    refer to the filesystems on the mirror.
7.  Ensure that <temproot>/boot/loader.conf contains 'geom_mirror_load="YES"'.
8.  Shut down, remove the old disk and connect the second new disk.
9.  Boot (from the first new disk). If this doesn't succeed, switch back to 
    the old disk and figure out why.
10. gmirror insert ... (add the second disk to the mirror).
11. Wait for the rebuild to complete.
12. Finished!
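
To make the above concrete, here is an untested sketch of the commands 
involved. The device names and filesystem layout are assumptions (da0 = the 
failing disk, da1 = whichever new disk is attached at the time, gm0 = the 
mirror name, /mnt = <temproot>); adjust them to match your system.

    gmirror load                      # or kldload geom_mirror, if not loaded
    gmirror label -v gm0 da1          # single-member mirror on the new disk
    fdisk -BI /dev/mirror/gm0         # one slice covering the disk, MBR boot code
    bsdlabel -wB /dev/mirror/gm0s1    # standard label plus boot blocks
    bsdlabel -e /dev/mirror/gm0s1     # recreate your a/b/d/... partitions
    newfs /dev/mirror/gm0s1a
    mount /dev/mirror/gm0s1a /mnt
    dump -0aLf - /dev/da0s1a | (cd /mnt && restore -rf -)
    # ...newfs, mount and dump each remaining filesystem the same way...
    vi /mnt/etc/fstab                 # point entries at /dev/mirror/gm0s1*
    echo 'geom_mirror_load="YES"' >> /mnt/boot/loader.conf

After the hardware swap and a successful boot from the new disk:

    gmirror insert gm0 da1            # the second new disk; name may differ
    gmirror status                    # repeat until the mirror is COMPLETE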

> 2. Are there any high risk aspects of doing this while running
> a server in production? I'm thinking of things like how
> likely it is to trash the original disk, making the
> system unbootable in the process, etc.?

Like other GEOM classes, gmirror stores its metadata in the last sector of 
the provider (the whole disk, in this case). If you decide to include the 
old disk in a mirror, there is a chance that this sector is already in use 
by the filesystem, though in the whole-disk scenario this is somewhat rare. 
Using the approach I outlined above avoids the possibility altogether.
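
If you are curious, that metadata sector is easy to locate and inspect. A 
quick sketch (again assuming the new disk is da1):

    diskinfo -v da1            # mediasize/sectorsize; the last sector holds it
    gmirror dump da1           # decode gmirror metadata stored on the provider
    diskinfo /dev/mirror/gm0   # the mirror is one sector smaller than the disk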

Other risks are minimal. The system will be I/O loaded during the 
dump/restore and mirror resync phases, though decent hardware can make this 
less noticeable. If you manage to tickle a UFS snapshot bug during the dump, 
the system could panic, though in my experience (on lightly-loaded systems 
without other snapshots and not using quotas) this has not happened.
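
If the extra load worries you, one option (a suggestion on my part, not a 
requirement) is to run the dump at idle scheduling priority with idprio(1) 
and keep an eye on the resync afterwards:

    idprio 31 dump -0aLf - /dev/da0s1a | (cd /mnt && restore -rf -)
    gmirror status             # check resync progress after the insert

Note that idprio only throttles the dump's CPU scheduling; it limits the 
I/O pressure indirectly, by slowing the rate at which dump issues requests.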

Having a fallback plan (revert to the unmodified original disk) is another 
selling point of the method I outlined above.

> 3. Are there better approaches that are safer (aside from
> my normal hardware swap MO).

See my response to 1).

> 4. Does using GEOM Mirror RAID-1 make the upgrade from
> 6.2 to 7.0 a dangerous proposition. I do upgrades via
> cvsup and buildworld.

Not really. The gmirror module in 7.x will read and understand (and possibly 
update) the on-disk metadata as soon as it sees it. Just be sure the module 
gets loaded. Worst case, you end up booting from a single drive and have to 
specify your root partition manually.
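
For reference, that worst-case recovery looks something like this (device 
names are again assumptions):

    OK load geom_mirror                  # at the loader prompt, then: boot
    mountroot> ufs:/dev/mirror/gm0s1a    # or ufs:/dev/da0s1a for the bare disk

Once the system is up, confirm with gmirror status and fix 
/boot/loader.conf so it doesn't happen again.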

> The environment is
> 	FreeBSD 6.2
> 	Supermicro with Adaptec SCSI
> 	All ~73 GB Maxtor and Seagate drives
> 		Current da0 system is Maxtor, there
> 		will be minor size differences, the
> 		replacement Cheetah is a hair larger.
> 	Apache, PHP5 and MySQL
> 	No existing RAID Configuration

JN

