What is the minimum free space between GPT partitions?

Peter pmc at citylink.dinoex.sub.org
Thu May 16 01:13:40 UTC 2019

On Thu, May 16, 2019 at 12:29:16AM +0200, Miroslav Lachman wrote:
! > I found, if I put partitions directly together (so that another starts
! > immediately after one ends), under certain circumstances the volumes
! > become inaccessible and the system (11.2) crashes. Obviously there
! > is a safety margin required - but how big should it be?
! I read your post on forum 
! https://forums.freebsd.org/threads/create-degraded-raid-5-with-2-disks-on-freebsd.70750/#post-426756

Hi, great, that should explain how to make it happen.

! No problems for years.

Me neither with MBR/packlabels, but I only recently switched to GPT.

I suppose either GPT or ZFS autoexpand goes out of bounds; I
couldn't determine which.
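One way to look for that kind of out-of-bounds condition is to check whether any partition overlaps its neighbour or runs past the end of the device. The helper below is purely illustrative (the function name `check_bounds` and the simplified "name start size" input format, in sectors, are my assumptions, not real `gpart show` output):

```shell
#!/bin/sh
# Hypothetical sanity check: read "name start size" lines (sectors) on
# stdin and flag partitions that overlap the previous one or extend
# past the end of the disk. Input lines must be sorted by start sector.
check_bounds() {
    disk_sectors=$1    # total sectors on the device
    awk -v disk="$disk_sectors" '
        {
            start = $2; end = $2 + $3 - 1
            if (end >= disk)
                print $1 ": past end of disk"
            if (NR > 1 && start <= prev_end)
                print $1 ": overlaps previous"
            prev_end = end
        }'
}

# sample data: p2 starts inside p1, p3 runs past a 10000-sector disk
printf '%s\n' "p1 100 2048" "p2 2000 2048" "p3 9000 2048" |
    check_bounds 10000
```

On the sample input this reports p2 as overlapping and p3 as past the end; feeding it real partition tables would just mean reformatting the `gpart show` output into those three columns.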

! I think your case is somewhat different if you split the disk into 3
! partitions later used as 3 devices for one ZFS pool, so maybe there is
! some coincidence with expanding ZFS... and then it is a bug which should 
! be fixed.

If we could fix it, that would be even better! Agreed, it's an ugly
operation, but I love to do ugly things with ZFS, and usually it
withstands them. ;)

! Can you prepare some simple testcase (scriptable) which causes a panic on
! your host? I will try it in some spare VM.

The description in the mentioned forum post is pretty much what I did.

At first I did it on my router, as there is empty space on a disk,
and when that had gone bye-bye, I tried it on the desktop with an
(otherwise empty) USB stick. It takes an eternity to create a ZFS
raidz even on a USB 3 stick - they are not designed for that - but
the outcome was the same.

Procedure is:
1. create a new GPT scheme on the stick.
2. add 3x 1G freebsd-zfs partitions with 1G of free space in between.
3. zpool create test raidz da0p1 da0p2 da0p3
4. resize the 3 partitions to 2G each
5. zpool set autoexpand=on test
6. export the pool
7. zpool online
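
The steps above might look like the following sketch. Everything here is an assumption on my part: da0 must be a scratch USB stick (roughly 8 GB or larger, since the third partition ends at 7G after resizing) whose contents you can afford to lose, and I read the bare "zpool online" of step 7 as re-importing the exported pool.

```shell
#!/bin/sh
# DANGER: destroys all data on $DISK. Run only against a scratch
# device. da0 and the pool name "test" are assumptions from the
# procedure above.
set -e
DISK=da0
POOL=test

# 1. create a new GPT scheme on the stick
gpart destroy -F $DISK 2>/dev/null || true
gpart create -s gpt $DISK

# 2. three 1G freebsd-zfs partitions with 1G of free space in between
gpart add -t freebsd-zfs -b 1g -s 1g $DISK   # p1: 1g..2g
gpart add -t freebsd-zfs -b 3g -s 1g $DISK   # p2: 3g..4g
gpart add -t freebsd-zfs -b 5g -s 1g $DISK   # p3: 5g..6g

# 3. build the raidz pool on the three partitions
zpool create -f $POOL raidz ${DISK}p1 ${DISK}p2 ${DISK}p3

# 4. grow each partition into the free space behind it
for i in 1 2 3; do gpart resize -i $i -s 2g $DISK; done

# 5.-7. enable autoexpand, export, bring the pool back
zpool set autoexpand=on $POOL
zpool export $POOL
zpool import $POOL

# Shrinking the partitions again is what reportedly panics the
# kernel, so it is left commented out:
# for i in 1 2 3; do gpart resize -i $i -s 1g $DISK; done
```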

At that point it will start to complain that (some of) the pool isn't
accessible. Now resize the partitions back to 1G -> kernel crash.
And after going through that and having all partitions back at 1G, the
pool works again. :)

I'll try to reproduce it from a script as soon as my toolchain is
done building the recent patches.


More information about the freebsd-fs mailing list