What is the minimum free space between GPT partitions?

Andrey V. Elsukov bu7cher at yandex.ru
Thu May 16 08:05:22 UTC 2019

On 16.05.2019 03:36, Peter wrote:
> On Thu, May 16, 2019 at 12:29:16AM +0200, Miroslav Lachman wrote:
> ! > I found, if I put partitions directly together (so that another starts
> ! > immediately after one ends), under certain circumstances the volumes
> ! > become inaccessible and the system (11.2) crashes. Obviously there
> ! > is a safety distance required - but how big should it be?
> ! 
> ! I read your post on forum 
> ! https://forums.freebsd.org/threads/create-degraded-raid-5-with-2-disks-on-freebsd.70750/#post-426756
> Hi, great, that should explain how to make it happen.
> ! No problems for years.
> Me neither with MBR/packlabels, but only recently switched to GPT.
> I suppose either GPT or ZFS autoexpand goes out-of-bounds; I
> couldn't determine which.
> ! I think your case is somewhat different if you split disk in to 3 
> ! partitions later used as 3 devices for one ZFS pool, so maybe there is 
> ! some coincidence with expanding ZFS... and then it is a bug which should 
> ! be fixed.
> If we could fix it that would be even better! Agreed, it's an ugly
> operation, but I love to do ugly things with ZFS, and usually it
> stands it. ;)
> ! Can you prepare some simple (scriptable) test case that triggers the 
> ! panic on your host? I will try it in a spare VM.
> The description in mentioned forum-post is pretty much what I did.
> At first I did it on my router, as there is empty space on a disk,
> and when that had gone bye-bye, I tried it on the desktop with an
> (otherwise empty) USB stick. Takes an eternity to create ZFS-raidz
> even on USB-3 stick - they are not designed for that - but the outcome
> was the same.
> Procedure is:
> 1. create new GPT scheme on stick.
> 2. add 3x 1G freebsd-zfs partitions with 1G -free- in between.
> 3. zpool create test raidz da0p1 da0p2 da0p3
> 4. resize 3x partitions to 2G each
> 5. zpool set autoexpand=on test
> 6. export the pool
> 7. zpool online
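For reference, the quoted steps condense into a script roughly like the
following. This is only a sketch of the procedure described above, not a
tested reproduction: the device name (da0), the pool name (test), the
explicit start offsets, and the `zpool online -e` form are all
assumptions filled in around the post's outline. It destroys all data on
the target disk.

```shell
#!/bin/sh
# Sketch of the reproduction steps from the post above. DESTROYS all
# data on the target disk; run as root on a disposable device.
# Device name (da0) and pool name (test) are placeholders.
DISK=da0

# 1. create a new GPT scheme
gpart destroy -F ${DISK}
gpart create -s gpt ${DISK}

# 2. add 3x 1G freebsd-zfs partitions with a 1G gap after each,
#    using explicit start offsets (gpart accepts SI suffixes)
gpart add -t freebsd-zfs -b 1M    -s 1G ${DISK}   # p1: 1M..1025M
gpart add -t freebsd-zfs -b 2049M -s 1G ${DISK}   # p2: 1G gap before it
gpart add -t freebsd-zfs -b 4097M -s 1G ${DISK}   # p3: 1G gap before it

# 3. build the raidz pool on the three partitions
zpool create test raidz ${DISK}p1 ${DISK}p2 ${DISK}p3

# 4. grow each partition into the 1G gap behind it
gpart resize -i 1 -s 2G ${DISK}
gpart resize -i 2 -s 2G ${DISK}
gpart resize -i 3 -s 2G ${DISK}

# 5.-7. enable autoexpand, export, re-import, and bring the vdevs
#       online (the post reports the panic around this point)
zpool set autoexpand=on test
zpool export test
zpool import test
zpool online -e test ${DISK}p1   # -e requests expansion; assumed here,
                                 # the original step only says "zpool online"
```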

Once you export the pool, ZFS can find its labels on the entire da0
disk. That is probably what leads to the panic, not the lack of free
space between partitions.

When the panic happens, take a photo of the panic screen; in most cases
it will show where the problem is. Even better, add debug options to the
kernel, so you will be able to get a core dump.
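For completeness, the usual FreeBSD pieces for capturing such a crash
dump are a dump device plus a kernel built with debug support; the
settings below are the standard ones (paths shown are the defaults):

```shell
# /etc/rc.conf -- let savecore(8) pick a dump device and save
# crash dumps to /var/crash at the next boot
dumpdev="AUTO"
dumpdir="/var/crash"

# kernel configuration additions for a debuggable kernel:
#   makeoptions DEBUG=-g   # build the kernel with debug symbols
#   options     KDB        # kernel debugger framework
#   options     DDB        # interactive in-kernel debugger
```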

WBR, Andrey V. Elsukov


More information about the freebsd-fs mailing list