What is the minimum free space between GPT partitions?

Miroslav Lachman 000.fbsd at quip.cz
Wed May 15 22:29:22 UTC 2019


Peter wrote on 2019/05/15 22:42:
> It appears I can't find an answer anywhere:
> 
> What is the recommended safety distance (free space) between two
> GPT partitions?
> 
> I found that if I put partitions directly together (so that one starts
> immediately after another ends), under certain circumstances the volumes
> become inaccessible and the system (11.2) crashes. Obviously there
> is a safety distance required - but how big should it be?

I read your post on the forums:
https://forums.freebsd.org/threads/create-degraded-raid-5-with-2-disks-on-freebsd.70750/#post-426756

There is no requirement to leave empty space between partitions. These two 
6 TB disks are partitioned with GPT, and each partition is mirrored with 
gmirror:

~/> gpart show
=>         34  11721045101  ada0  GPT  (5.5T)
            34            6        - free -  (3.0K)
            40         1024     1  freebsd-boot  (512K)
          1064          984        - free -  (492K)
          2048      2097152     2  freebsd-ufs  (1.0G)
       2099200     16777216     3  freebsd-swap  (8.0G)
      18876416     16777216     4  freebsd-ufs  (8.0G)
      35653632     16777216     5  freebsd-ufs  (8.0G)
      52430848      6291456     6  freebsd-ufs  (3.0G)
      58722304  11628767232     7  freebsd-ufs  (5.4T)
   11687489536     33555599        - free -  (16G)

=>         34  11721045101  ada1  GPT  (5.5T)
            34            6        - free -  (3.0K)
            40         1024     1  freebsd-boot  (512K)
          1064          984        - free -  (492K)
          2048      2097152     2  freebsd-ufs  (1.0G)
       2099200     16777216     3  freebsd-swap  (8.0G)
      18876416     16777216     4  freebsd-ufs  (8.0G)
      35653632     16777216     5  freebsd-ufs  (8.0G)
      52430848      6291456     6  freebsd-ufs  (3.0G)
      58722304  11628767232     7  freebsd-ufs  (5.4T)
   11687489536     33555599        - free -  (16G)

No problems for years.
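For reference, a layout like the one above can be created simply by letting 
gpart(8) place each new partition at the first free block, so each one starts 
immediately after the previous one. A minimal sketch, assuming a scratch disk 
da1 with nothing on it and arbitrary sizes:

gpart create -s gpt da1
gpart add -t freebsd-swap -s 8G da1   # partition 1
gpart add -t freebsd-ufs -s 8G da1    # partition 2, starts right after p1
gpart add -t freebsd-ufs da1          # partition 3, rest of the disk
gpart show da1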

I think your case is somewhat different: you split the disk into three 
partitions and later used them as three devices for one ZFS pool, so maybe 
there is some interaction with expanding ZFS... and if so, it is a bug which 
should be fixed.

Can you prepare a simple, scriptable test case that triggers the panic on 
your host? I will try it in a spare VM.
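Something along these lines could be a starting point. This is only a sketch 
under my own assumptions (an md(4) memory disk, a raidz pool built from three 
directly adjacent partitions, placeholder pool name and sizes), not a 
confirmed way to reproduce your panic:

#!/bin/sh
# Build a pool from three GPT partitions packed back to back on one md device.
set -e
md=$(mdconfig -a -t swap -s 3g)
gpart create -s gpt "$md"
gpart add -t freebsd-zfs -s 1g "$md"
gpart add -t freebsd-zfs -s 1g "$md"
gpart add -t freebsd-zfs "$md"
gpart show "$md"
zpool create testpool raidz "/dev/${md}p1" "/dev/${md}p2" "/dev/${md}p3"
zpool status testpool
# Cleanup when done:
# zpool destroy testpool && mdconfig -d -u "$md"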

Miroslav Lachman
