volume management
Gergely CZUCZY
phoemix at harmless.hu
Mon Apr 9 15:32:06 UTC 2007
On Mon, Apr 09, 2007 at 05:24:01PM +0200, Pawel Jakub Dawidek wrote:
> On Mon, Apr 09, 2007 at 04:38:18PM +0200, Gergely CZUCZY wrote:
> > On Mon, Apr 09, 2007 at 09:28:35AM -0500, Eric Anderson wrote:
> > > On 04/08/07 13:57, Dag-Erling Smørgrav wrote:
> > > >Gergely CZUCZY <phoemix at harmless.hu> writes:
> > > >>yes, I know about ZFS; I assume it will need around 1.5-2 years
> > > >>from now before 7.0-RELEASE is ready.
> > > >No, it's expected this fall.
> > > >>and i'm looking for a solution for a production environment within
> > > >>a year.
> > > >There is no other solution.
> > >
> > > How about gconcat? You could create a mirror, then gconcat another mirror, etc, extending the GEOM.
> > > Then run growfs on that extended volume. Wouldn't that work?
> > why gmirror? gconcat somehow could be used for this,
> > but
> > 1) i see no attach operation for gconcat to add
> > providers on the fly.
> > 2) this would require to always create subpartitions/bsdlabels
> > on the disk, and add a bit more on need.
>
> Slow down:) Implementing off-line 'attach' operation is trivial and
> on-line 'attach' operation is also easy, but because you need to unmount
> file system anyway, off-line attach is ok.
>
> Let's assume you have currently two disks: da0 and da1.
>
> # gconcat label foo da0 da1
> # newfs /dev/concat/foo
> # mount /dev/concat/foo /foo
>
> and you want to extend your storage by adding two disks: da2 and da3:
>
> # umount /foo
> # gconcat stop foo
> # gconcat label foo da0 da1 da2 da3
> # growfs /dev/concat/foo
> # mount /dev/concat/foo /foo
>
> That's all.
>
> You can operate on mirrors too:
>
> # gmirror label foo0 da0 da1
> # gconcat label foo mirror/foo0
> # newfs /dev/concat/foo
> # mount /dev/concat/foo /foo
>
> And extending:
>
> # gmirror label foo1 da2 da3
> # umount /foo
> # gconcat stop foo
> # gconcat label foo mirror/foo0 mirror/foo1
> # growfs /dev/concat/foo
> # mount /dev/concat/foo /foo
Yes, that was the trivial part, but:
1) to grow the volume I need a device (disk/slice/label/etc.)
for each increment; if I grow it many times, I need many devices.
2) each of these increment devices has to be created first,
i.e. chopped out of the storage pool.
Please also look at the bsdlabel issue I mentioned. gconcat
is the easy part; recursively bsdlabeling the pool to carve
out each new increment is the real issue. I really don't
think this is the way to do it...
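To make the objection concrete: every small extension would require
hand-editing a label to carve a new partition out of the pool before
gconcat could use it. A rough sketch of one such round (all device,
label and mount-point names here are hypothetical):

```shell
# Edit the disklabel by hand and add yet another small partition
# (da0e, da0f, ... one new entry per increment) in the pool area:
bsdlabel -e da0

# Then rebuild the concat with the new member and grow the fs:
umount /data
gconcat stop pool
gconcat label pool da0d da0e      # one extra member per round
growfs /dev/concat/pool
mount /dev/concat/pool /data
```

Every extension repeats all of this, and the disklabel accumulates
one partition entry per increment.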
To get down to the details: we run our systems on 3ware cards.
The end of each disk (usually the total minus 20G) is the storage
pool. Under Linux's LVM2 we use this as a pool from which to
allocate space for our services. At startup only a minimal part
of the pool is in use, and as a service needs more space, we
enlarge its volume in small increments.
So we are not adding new disks or anything, as you assumed in
your examples above; we just give a service a bit more space,
nothing special.
New disks are not being added; that's why I said "storage pool",
to reflect this situation. It wasn't just a term for an
abstraction level :)
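For comparison, the LVM2 workflow described above never needs a new
device per increment; one volume group over the pool area is enough.
A minimal sketch under assumed names (the volume group "pool", the
logical volume "websrv", the partition /dev/sda3 and the sizes are
all made up for illustration):

```shell
# The pool area of the disk becomes one physical volume in
# one volume group:
pvcreate /dev/sda3
vgcreate pool /dev/sda3

# Each service starts with a small logical volume:
lvcreate -L 2G -n websrv pool
mkfs.ext3 /dev/pool/websrv

# When the service needs more space, just grow it in place:
lvextend -L +1G /dev/pool/websrv
resize2fs /dev/pool/websrv
```

No new partitions or labels are involved; each increment comes
straight out of the free space in the volume group.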
>
> --
> Pawel Jakub Dawidek http://www.wheel.pl
> pjd at FreeBSD.org http://www.FreeBSD.org
> FreeBSD committer Am I Evil? Yes, I Am!
Bye,
Gergely Czuczy
mailto: gergely.czuczy at harmless.hu
--
Weenies test. Geniuses solve problems that arise.