Migrating to gmirrored RAID1
John Nielsen
lists at jnielsen.net
Mon Aug 18 20:17:20 UTC 2008
On Monday 18 August 2008 05:39:10 am Henry Karpatskij wrote:
> Hi,
>
> I have a failing IDE disk which is running my 7.0-p1 server. I've been
> investigating the possible solutions and I've decided to go with two
> new IDE disks and gmirror. However, I'm not too familiar with disk
> internals, I know how to install the system and somehow understand the
> concept of slices and partitions, but that's about it.
>
> I found some examples on how to install the gmirror on a running
> system, but they all have in common that they just add new spare disk
> to the system and turn on the mirroring to it, but I need to replace
> the current disk which is not the same size as the new ones.
>
> Any suggestions how one would do such an operation? Should I just re-
> install the server to a new disk, turn on the mirroring and then
> restore the configuration and files from the failing disk? Or is it
> easier to add the disks to the running system, turn on mirroring and
> then somehow dump the current disk to the mirror and then re-configure
> it to boot from the gmirror and remove the failing disk?
I think the latter approach is easier and makes the most sense for your
situation. Install the disks, set up the mirror(s) that you want,
transfer data and then do a boot test.
Something along these lines should work. Substitute device and volume names
to match your hardware and tastes.
#set up a single mirror to use the whole disk (versus mirroring
#individual slices/partitions)
gmirror label myraid1 /dev/ad4 /dev/ad6
#install a partition table and the boot0 code
fdisk -BI /dev/mirror/myraid1
#install a default label and the main boot code
bsdlabel -wB /dev/mirror/myraid1s1
#create BSD partitions by hand. Remember to set EDITOR if you don't like vi.
bsdlabel -e /dev/mirror/myraid1s1
#This is the tricky part. Create the partitions you want on the mirror.
#Use the output of "bsdlabel /dev/ad0s1" as a guide. Remember that "a"
#should be root, "b" is traditionally swap, "c" is the "raw" partition and
#should not be changed, and "d" - "h" are other partitions. I find a
#spreadsheet to be handy for figuring out the correct values, though a
#calculator is adequate (I've used dc more than once). The units you are
#dealing with are 512-byte sectors. Best practice (which sysinstall
#doesn't follow but bsdlabel -w does) is to leave 16 sectors at the start
#of the slice for the boot code (but both swap and UFS will avoid
#clobbering it even if you don't do this). If you follow the best practice
#and do the partitions in order, then the offset for "a" is 16, and the
#offset for any other partition is the offset of the previous one plus the
#size of the previous one. Assuming your last filesystem wants to use the
#remainder of the slice, figure its offset as above, then subtract it from
#the total (the size of "c") for the size. For filesystem partitions the
#fstype should be "4.2BSD", and use "2048 16384 0" for the last three
#columns unless you have reason to do otherwise. (The bps is recalculated
#when you create a filesystem so it won't be 0 later. That's expected.)
#The fstype for swap space is "swap" and the last three columns are
#omitted. Save and exit the editor when finished.
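The offset arithmetic above can be sketched with shell arithmetic. The
sector counts below (a 512 MB root, 2 GB swap, and one partition taking the
remainder of an ~80 GB slice) are made-up example figures, not a
recommendation; your TOTAL comes from the "c" line of your real label:

```shell
# Hypothetical layout math for bsdlabel -e. All values are 512-byte
# sectors; TOTAL would come from the "c" line of "bsdlabel /dev/ad0s1".
TOTAL=156301488             # size of "c" (example: ~80 GB slice)
A_OFF=16                    # leave 16 sectors for the boot code
A_SIZE=1048576              # 512 MB for "a" (root)
B_OFF=$((A_OFF + A_SIZE))   # each offset = previous offset + previous size
B_SIZE=4194304              # 2 GB for "b" (swap)
D_OFF=$((B_OFF + B_SIZE))
D_SIZE=$((TOTAL - D_OFF))   # last partition gets the remainder
printf 'a: %10d %10d\nb: %10d %10d\nd: %10d %10d\n' \
    "$A_SIZE" "$A_OFF" "$B_SIZE" "$B_OFF" "$D_SIZE" "$D_OFF"
```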
#Create filesystems
newfs /dev/mirror/myraid1s1a
#(repeat for other filesystems, changing the partition letter as
#appropriate)
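If you end up with several filesystem partitions, a small loop saves
typing. This is a dry run that only echoes the commands (drop the echo to
actually run them); the letters are assumptions for a label where "b" is
swap, so adjust the list to match yours:

```shell
# Echo one newfs command per filesystem partition; "b" (swap) and
# "c" (raw) are deliberately not in the list.
for p in a d e f; do
    echo "newfs /dev/mirror/myraid1s1$p"
done
```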
#Make temp mountpoints
mkdir /newroot
#(again repeat as needed)
#Mount new filesystems
mount /dev/mirror/myraid1s1a /newroot
#(repeat as needed)
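The mkdir/mount repeats can be looped the same way. Another dry-run sketch;
the letter-to-mountpoint pairs are assumptions taken from the df output
quoted in the original question:

```shell
# Each entry is "partition-letter:mountpoint-suffix"; the empty suffix
# is the new root itself. Drop the echoes to actually run the commands.
for pair in "a:" "d:/var" "e:/tmp" "f:/usr"; do
    p=${pair%%:*}
    m=${pair#*:}
    echo "mkdir -p /newroot$m"
    echo "mount /dev/mirror/myraid1s1$p /newroot$m"
done
```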
#Dump/restore filesystems
cd /newroot
dump -0 -L -C32 -f - / | restore -r -f -
rm restoresymtable
#(repeat as needed, changing the filesystem argument to dump and the cwd
#for your new filesystems. One or two messages from restore about getting
#a different inode than expected is normal.)
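The dump/restore repeats can be scripted too. Yet another dry-run sketch
(it echoes the pipelines rather than running them); the filesystem list is
an assumption based on the df output in the original question:

```shell
# Echo one dump|restore pipeline per filesystem. The cd matters because
# restore -r unpacks into the current directory.
for pair in "/:/newroot" "/var:/newroot/var" "/tmp:/newroot/tmp" "/usr:/newroot/usr"; do
    src=${pair%%:*}
    dst=${pair#*:}
    echo "cd $dst && dump -0 -L -C32 -f - $src | restore -r -f - && rm restoresymtable"
done
```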
#edit /newroot/etc/fstab. Change the device for "/" to
#/dev/mirror/myraid1s1a. Make a similar change for other filesystems.
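After those edits the fstab might look something like this; the partition
letters and mountpoints here are assumptions matching the df output in the
original question, so substitute your own:

```
# Device                   Mountpoint   FStype   Options   Dump   Pass#
/dev/mirror/myraid1s1b     none         swap     sw        0      0
/dev/mirror/myraid1s1a     /            ufs      rw        1      1
/dev/mirror/myraid1s1d     /var         ufs      rw        2      2
/dev/mirror/myraid1s1e     /tmp         ufs      rw        2      2
/dev/mirror/myraid1s1f     /usr         ufs      rw        2      2
```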
#edit /newroot/boot/loader.conf. Make sure it includes this line:
geom_mirror_load="YES"
#shut down, remove the original disk, and try booting
Good luck!
JN
> Current df output:
>
> Filesystem 1K-blocks Used Avail Capacity Mounted on
> /dev/ad0s1a 507630 159262 307758 34% /
> devfs 1 1 0 100% /dev
> /dev/ad0s1e 507630 56 466964 0% /tmp
> /dev/ad0s1f 33573476 6044408 24843190 20% /usr
> /dev/ad0s1d 1762414 381632 1239790 24% /var
> devfs 1 1 0 100% /var/named/dev
>
> Thanks in advance,