gmirroring slices
Aaron Hurt
aaron at goflexitllc.com
Mon Nov 16 13:35:37 UTC 2009
Miroslav Lachman wrote:
> Lorenzo Perone wrote:
>>
>> Hello,
>>
>> I was wondering if anyone could give me some advice on how viable and
>> reliable it is to use gmirror on a slice of an MBR-style partitioned
>> disk, and to use the second slice(s) within a zpool.
>>
>> I remember a discussion here about where the metadata is kept (always
>> at the end of the disk, as opposed to the end of the given consumer?),
>> so I wasn't sure how good an idea this might be.
>
> I think metadata is stored at the end of the provider (slice in this
> case), but I am not a GEOM expert.
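
That matches my experience: gmirror writes its metadata into the last
sector of whatever provider you label, so labeling a slice only ever
touches that slice. Once a label exists you can verify this yourself by
dumping it straight off the slice (slice name borrowed from the example
below):

# gmirror dump ad6s1
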
>
>> The reason I'd
>> like to have it like this is that I've had bad experiences in my
>> efforts to use ZFS as a boot and root volume, so I'd rather keep a
>> traditional slice for booting/rooting, and a zpool for the production
>> jails on that machine.
>>
>> The example would be
>>
>> provider: mirror/gm0
>> consumers: ad6s1 and ad8s1
>>
>> zpool mirror made out of
>> ad6s2 and ad8s2
>
> I have been running the following setup for a year without any
> configuration problems.
>
> # gmirror status
>        Name    Status  Components
> mirror/gms1  COMPLETE  ad4s1
>                        ad6s1
>
> # zpool status
>   pool: tank
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME       STATE     READ WRITE CKSUM
>         tank       ONLINE       0     0     0
>           mirror   ONLINE       0     0     0
>             ad4s2  ONLINE       0     0     0
>             ad6s2  ONLINE       0     0     0
>
> The first slice is 20 GB, partitioned as usual:
> # mount -t ufs
> /dev/mirror/gms1a on / (ufs, local)
> /dev/mirror/gms1e on /usr (ufs, local, soft-updates)
> /dev/mirror/gms1d on /var (ufs, local, nosuid, soft-updates)
> /dev/mirror/gms1f on /tmp (ufs, local, noexec, nosuid, soft-updates)
>
> The rest (450GB) is used in a mirrored ZFS pool for the jails (each
> jail has its own filesystem).
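
One ZFS filesystem per jail is essentially free and gives you per-jail
snapshots; roughly, with Miroslav's pool name and made-up jail names:

# zfs create tank/jails
# zfs create tank/jails/www
# zfs create tank/jails/db
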
>
>> While experimenting, I ran into a problem: gmirror label -v -b
>> round-robin gm0 ad6s1 got a permission denied (even with sysctl
>> kern.geom.debugflags=16/17). Any hints on what could cause this? (I
>> might have screwed up something with fdisk/bsdlabel, but after
>> double-checking I wonder what it could be...)
>
> I did it in a non-standard way - converting an already installed
> single-disk system to a mirrored one. While the system was running off
> ad6, I created two slices on ad4, set up the gmirror gms1 from the
> first slice of ad4, created partitions, ran newfs, mounted the new
> filesystems, and transferred the files from the running system with
> dump & restore, then edited fstab. After that I rebooted the system
> from gms1, destroyed the contents of ad6, created slices on ad6, and
> inserted its first slice into gms1.
> After this I had ad4s2 and ad6s2 ready for the zpool.
> All of this was done remotely over ssh.
>
> Miroslav Lachman
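
For anyone wanting to follow that recipe, the rough command sequence
would look something like the following (only a sketch, using
Miroslav's device and mirror names, with the fdisk slicing details left
out):

# gmirror label -v gms1 ad4s1
# echo 'geom_mirror_load="YES"' >> /boot/loader.conf
# bsdlabel -w -B mirror/gms1 && bsdlabel -e mirror/gms1
# newfs /dev/mirror/gms1a
# mount /dev/mirror/gms1a /mnt
# dump -0aL -f - / | (cd /mnt && restore -rf -)

Repeat newfs/mount/dump for the other partitions, fix up
/mnt/etc/fstab, reboot from gms1, re-slice ad6, and then:

# gmirror insert gms1 ad6s1
# zpool create tank mirror ad4s2 ad6s2
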
An example with gpart ... this is how I now have all of my production
dedicated servers set up and running 8.0-RCx ...
net1# gpart show
=>       34  312581741  ad6  GPT  (149G)
         34        128    1  freebsd-boot  (64K)
        162    8388608    2  freebsd-swap  (4.0G)
    8388770   10485760    3  freebsd-ufs   (5.0G)
   18874530  293707245    4  freebsd-zfs   (140G)

=>       34  312581741  ad16  GPT  (149G)
         34        128     1  freebsd-boot  (64K)
        162    8388608     2  freebsd-swap  (4.0G)
    8388770   10485760     3  freebsd-ufs   (5.0G)
   18874530  293707245     4  freebsd-zfs   (140G)
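
For reference, a layout like that can be produced with something along
these lines (sector counts copied from the output above; repeat for
ad16; a sketch, not my exact command history):

net1# gpart create -s gpt ad6
net1# gpart add -b 34 -s 128 -t freebsd-boot ad6
net1# gpart add -s 8388608 -t freebsd-swap ad6
net1# gpart add -s 10485760 -t freebsd-ufs ad6
net1# gpart add -t freebsd-zfs ad6
net1# gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ad6
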
net1# gmirror status
        Name    Status  Components
 mirror/boot  COMPLETE  ad6p1
                        ad16p1
 mirror/swap  COMPLETE  ad6p2
                        ad16p2
 mirror/root  COMPLETE  ad6p3
                        ad16p3
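
Each of those is just a label across the matching partition pair,
roughly:

net1# gmirror label -v boot ad6p1 ad16p1
net1# gmirror label -v swap ad6p2 ad16p2
net1# gmirror label -v root ad6p3 ad16p3
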
net1# zpool status
  pool: pool0
 state: ONLINE
 scrub: scrub completed after 0h5m with 0 errors on Wed Oct 14 12:45:40 2009
config:

        NAME        STATE     READ WRITE CKSUM
        pool0       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            ad6p4   ONLINE       0     0     0
            ad16p4  ONLINE       0     0     0

errors: No known data errors
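
The pool itself is nothing exotic: the two fourth partitions mirrored,
e.g.:

net1# zpool create pool0 mirror ad6p4 ad16p4
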
net1# mount -t ufs
/dev/mirror/root on / (ufs, local, soft-updates)
net1# mount -t zfs
pool0 on /pool0 (zfs, local)
pool0/tmp on /tmp (zfs, local, nosuid)
pool0/usr on /usr (zfs, local)
pool0/usr/home on /usr/home (zfs, local)
pool0/usr/hosting on /usr/hosting (zfs, local, noexec, nosuid)
pool0/usr/ports on /usr/ports (zfs, local, nosuid)
pool0/usr/ports/distfiles on /usr/ports/distfiles (zfs, local, noexec, nosuid)
pool0/usr/ports/packages on /usr/ports/packages (zfs, local, noexec, nosuid)
pool0/usr/src on /usr/src (zfs, local, noexec, nosuid)
pool0/var on /var (zfs, local)
pool0/var/crash on /var/crash (zfs, local, noexec, nosuid)
pool0/var/db on /var/db (zfs, local, noexec, nosuid)
pool0/var/db/pkg on /var/db/pkg (zfs, local, nosuid)
pool0/var/empty on /var/empty (zfs, local, noexec, nosuid, read-only)
pool0/var/log on /var/log (zfs, local, noexec, nosuid)
pool0/var/mail on /var/mail (zfs, local, noexec, nosuid)
pool0/var/qmail on /var/qmail (zfs, local)
pool0/var/run on /var/run (zfs, local, noexec, nosuid)
pool0/var/tmp on /var/tmp (zfs, local, nosuid)
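
The noexec/nosuid/read-only flags in that listing come from per-dataset
ZFS properties rather than fstab entries; the pattern, for a couple of
the datasets above, is:

net1# zfs create -o exec=off -o setuid=off pool0/var/log
net1# zfs set readonly=on pool0/var/empty
net1# zfs set mountpoint=/tmp pool0/tmp
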
It runs great, and I haven't experienced any issues related to sharing
disks between UFS and ZFS using GPT partitioning.
--
Aaron Hurt
Managing Partner
Flex I.T., LLC
611 Commerce Street
Suite 3117
Nashville, TN 37203
Phone: 615.438.7101
E-mail: aaron at goflexitllc.com