zpool imported twice with different names (was Re: Fwd: ZFS)

Nikos Vassiliadis nvass at gmx.com
Mon May 15 18:11:41 UTC 2017


Fixing the e-mail subject.

On 05/15/2017 08:09 PM, Nikos Vassiliadis wrote:
> Hi everybody,
> 
> While trying to rename a zpool from zroot to vega,
> I ended up in this strange situation:
> nik@vega:~ % zfs list -t all
> NAME                 USED  AVAIL  REFER  MOUNTPOINT
> vega                1.83G  34.7G    96K  /zroot
> vega/ROOT           1.24G  34.7G    96K  none
> vega/ROOT/default   1.24G  34.7G  1.24G  /
> vega/tmp             120K  34.7G   120K  /tmp
> vega/usr             608M  34.7G    96K  /usr
> vega/usr/home        136K  34.7G   136K  /usr/home
> vega/usr/ports        96K  34.7G    96K  /usr/ports
> vega/usr/src         607M  34.7G   607M  /usr/src
> vega/var             720K  34.7G    96K  /var
> vega/var/audit        96K  34.7G    96K  /var/audit
> vega/var/crash        96K  34.7G    96K  /var/crash
> vega/var/log         236K  34.7G   236K  /var/log
> vega/var/mail        100K  34.7G   100K  /var/mail
> vega/var/tmp          96K  34.7G    96K  /var/tmp
> zroot               1.83G  34.7G    96K  /zroot
> zroot/ROOT          1.24G  34.7G    96K  none
> zroot/ROOT/default  1.24G  34.7G  1.24G  /
> zroot/tmp            120K  34.7G   120K  /tmp
> zroot/usr            608M  34.7G    96K  /usr
> zroot/usr/home       136K  34.7G   136K  /usr/home
> zroot/usr/ports       96K  34.7G    96K  /usr/ports
> zroot/usr/src        607M  34.7G   607M  /usr/src
> zroot/var            724K  34.7G    96K  /var
> zroot/var/audit       96K  34.7G    96K  /var/audit
> zroot/var/crash       96K  34.7G    96K  /var/crash
> zroot/var/log        240K  34.7G   240K  /var/log
> zroot/var/mail       100K  34.7G   100K  /var/mail
> zroot/var/tmp         96K  34.7G    96K  /var/tmp
> nik@vega:~ % zpool status
>    pool: vega
>   state: ONLINE
>    scan: scrub repaired 0 in 0h0m with 0 errors on Mon May 15 01:28:48 2017
> config:
> 
>      NAME        STATE     READ WRITE CKSUM
>      vega        ONLINE       0     0     0
>        vtbd0p3   ONLINE       0     0     0
> 
> errors: No known data errors
> 
>    pool: zroot
>   state: ONLINE
>    scan: scrub repaired 0 in 0h0m with 0 errors on Mon May 15 01:28:48 2017
> config:
> 
>      NAME        STATE     READ WRITE CKSUM
>      zroot       ONLINE       0     0     0
>        vtbd0p3   ONLINE       0     0     0
> 
> errors: No known data errors
> nik@vega:~ %
> -------------------------------------------
> 
> It seems like there are two pools, sharing the same vdev...
> 
> After running a few commands in this state, such as a scrub,
> the pool was (most probably) destroyed: the system could not
> boot anymore, and I didn't investigate further. Is this a known bug?
> 
> Steps to reproduce:
>    install FreeBSD-11.0 in a pool named zroot
>    reboot into a live-CD
>    zpool import -f zroot vega
>    reboot again
> 
> Thanks,
> Nikos
> 
> PS:
> Sorry for the cross-posting; I am doing this to reach more people,
> because this is a rather easy way to destroy a ZFS pool.
> 
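Judging by the output above, this looks like one pool imported twice under
two names rather than two independent pools sharing a disk. A quick way to
check (only a sketch, using the vdev name vtbd0p3 and the pool names from
the zpool status output above):

   zdb -l /dev/vtbd0p3         # print the on-disk ZFS labels, including the pool name and GUID
   zpool get guid vega zroot   # if both "pools" report the same GUID, it is the same pool imported twice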


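For comparison, the sequence I would normally expect for a rename is to
import the pool under the new name from the live CD and then export it
again before rebooting, so that nothing stays imported under either name.
A minimal sketch of that sequence (untested here; -N just skips mounting
the datasets, which matters for a root pool):

   zpool import -f -N zroot vega   # from the live CD: import the on-disk pool "zroot" under the new name "vega"
   zpool export vega               # export cleanly before rebooting into the renamed pool

Whether the missing export is what actually triggered the duplicate import
in the steps above, I cannot say.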