HP DL385G1 Smart Array 6i AMD64 FBSD 6.3
jc at irbs.com
Sat May 3 14:36:13 UTC 2008
Quoting Ulf Zimmermann (ulf at Alameda.net):
> On Fri, May 02, 2008 at 09:46:41AM +0200, Rainer Duffner wrote:
> > Xin LI schrieb:
> > >Todorov wrote:
> > >| Hi all,
> > >|
> > >| I want to migrate my RAID1 (2 disks) (automatically assigned as RAID1
> > >| bacause I have two disks inserted in RAID 1+0 Logical drive), add two
> > >| more disks and get actual RAID 1+0 drive of four disks.
> > >|
> > >| I was reading the ACU specs of HP and I see this can be done online. I
> > >| can have downtime - the question is if I can do it w/o dump and restore
> > >| of the filesystem?
> > >|
> > >| I fully realize that the size will be doubled of /dev/da0 device,
> > >| currently 136GB will become 272GB. Can I make a spare partition of it,
> > >| will the whole procedure happen w/o any dump/restore?
> > >
> > >I think you should at least take a backup before resizing anything.
> > That, and I think you will end up with a 2nd DOS-partition that
> > comprises the "added" free space.
> Backup, yes do it. But here is how to do it:
> hpacucli ctrl slot=0 ld 1 add drives=allunassigned
> This should add the additional drives to your first logical drive and
> expand it. Whenever I do this, I reboot now so FreeBSD sees the
> new larger physical disk.
> In most cases you now have to update the partition table, either
> by adding another slice or changing the size of your FreeBSD slice.
> If the file system you want to grow is the last one in the disklabel, you
> can use disklabel to change its size; the total line (c:) also
> needs to grow.
> And then finally you can use growfs on the file system.
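The slice/label/growfs steps above can be sketched as follows. This is a hedged sketch, not a tested recipe: da0, da0s1, and da0s1e are example device names, on 6.x the label editor is bsdlabel(8) (the disklabel name mentioned above), and you should have a backup before touching any of it.

```shell
# 0. Expand the array (from the quoted hpacucli command), then reboot
#    so FreeBSD sees the larger physical disk.
hpacucli ctrl slot=0 ld 1 add drives=allunassigned

# 1. Confirm the kernel now sees the larger disk.
diskinfo -v da0

# 2. Grow the FreeBSD slice in the MBR. fdisk -u walks through the
#    slice table interactively; set the slice length to the new size.
fdisk -u da0

# 3. Edit the BSD label: enlarge the last partition (e.g. da0s1e)
#    and the whole-slice "c" entry to cover the added space.
bsdlabel -e da0s1

# 4. Finally, grow the file system (ideally unmounted).
growfs /dev/da0s1e
```

growfs will warn that it is a good idea to make a backup before growing the file system; given the panic described below, on 6.x that warning should be taken seriously.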
Growfs on 6.X will destroy your UFS2, and maybe UFS1, filesystem.
The patch in bin/115174 appeared to fix growfs, but I recently had
a file system related panic on a machine with a growfs-expanded
file system:

reboot after panic: ffs_alloccg: map corrupted

That panic may be unrelated to growfs, but that machine is the only
6.3 machine I have in production with a grown file system. The
800G filesystem had a few percent used when it panicked. It was being
brought into service as a Cyrus IMAP replica.
A working growfs was a requirement for me to move from rock-solid
4.11. It would really be nice if the filesystem gurus could take
a look at the growfs problem.
> Regards, Ulf.
> Ulf Zimmermann, 1525 Pacific Ave., Alameda, CA-94501, #: 510-865-0204
> You can find my resume at: http://www.Alameda.net/~ulf/resume.html