What is the minimum free space ... (Full Report)

Peter Jeremy peter at rulingia.com
Tue May 21 09:29:12 UTC 2019


On 2019-May-17 14:49:37 +0200, Peter <pmc at citylink.dinoex.sub.org> wrote:
>On Fri, May 17, 2019 at 03:30:43PM +1000, Peter Jeremy wrote:
>! On 2019-May-17 03:02:39 +0200, Peter <pmc at citylink.dinoex.sub.org> wrote:
>! >The original idea was to check if ZFS can grow a raid5.
>! 
>! I've done this (see https://bugs.au.freebsd.org/dokuwiki/zfsraid), though I
>! also migrated from RAIDZ1 to RAIDZ2 in the process.  If this process no
>! longer works (that page is 4 years old), it would seem that there has been
>! an unfortunate regression.
>
>You don't mention setting "autoexpand=on" - I suppose it would not
>work without that.

Hmmm... After all this time, I don't recall.  I did an export/import, in
which case I don't believe autoexpand is necessary.
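
For reference, a rough sketch of the two ways a pool can pick up a larger
vdev (pool and device names here are examples, not from my original
procedure):

    # Option 1: have ZFS expand automatically when the vdev is reopened
    zpool set autoexpand=on tank
    zpool online -e tank da0p3   # -e: expand to use all available space

    # Option 2: an export/import cycle also reopens every vdev
    zpool export tank
    zpool import tank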

>What we have here is most likely not a problem with the raid or its
>growth, but a kind of "authority conflict" between ZFS and GPT over who
>is going to manage the underlying partitions. (Which doesn't surprise
>me - if I were ZFS, I would be quite frustrated to run under GPT.)

I don't agree with this.  Geom is layered - by definition, a geom partition
manager class (e.g. BSD, GPT) is responsible for managing the partition
layout.  ZFS can only see exposed device entries - which means either entire
raw disks or the partitions exposed via geom.  If ZFS tries to access data
outside the partition it is using, it should receive an error back (or
there's a serious bug in geom).
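
The layering is easy to see (device and pool names are examples only):

    gpart show da0       # the GPT class owns the partition layout
    ls /dev/da0*         # these providers are all that ZFS can taste
    zpool status tank    # shows which providers the pool actually uses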

When juggling disk partitions, the most common problem is unexpected
metadata: Both geom and ZFS store metadata at the edges of the containers
they live in (disks or partitions).  Using gpart to resize a partition does
not touch any data on the disk (other than geom metadata).  Whilst this is
good in some circumstances (if you accidentally mis-partition your disk you
can fix the partitioning and your data will still be there), it can cause
problems if a partitioning change exposes stale metadata.  In particular,
ZFS will "taste" every[*] partition it can see, looking for ZFS metadata.
Resizing a partition could result in seemingly valid but stale ZFS metadata
becoming visible, potentially confusing ZFS.  If you look through my
procedure, you'll notice that I explicitly write zeroes over regions that
contained ZFS metadata to guard against this.

[*] I'm not sure whether ZFS looks at the partition type.
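
To make the zeroing step concrete, here is roughly what it looks like
(device name and OLDSIZE are placeholders, not from my original procedure;
the label geometry is standard ZFS - four 256KB vdev labels, two at each
end of a provider):

    # Growing the partition rewrites only geom metadata; the two ZFS
    # labels that sat in the last 512KB of the old partition are now
    # stranded in the middle of the new one.
    gpart resize -i 3 da0

    # zpool labelclear only clears the standard label locations of the
    # provider at its *current* size:
    zpool labelclear -f /dev/da0p3

    # Stale labels at the old end of the partition must be zeroed by
    # hand.  With OLDSIZE set to the old partition size in bytes:
    dd if=/dev/zero of=/dev/da0p3 bs=256k count=2 \
        oseek=$(( OLDSIZE / 262144 - 2 ))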

-- 
Peter Jeremy