[Bug 271462] zpool list may wrongly report allocation of zero bytes
Date: Wed, 17 May 2023 01:20:02 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=271462
Bug ID: 271462
Summary: zpool list may wrongly report allocation of zero bytes
Product: Base System
Version: 13.2-STABLE
Hardware: Any
OS: Any
Status: New
Severity: Affects Only Me
Priority: ---
Component: bin
Assignee: bugs@FreeBSD.org
Reporter: dclarke@blastwave.org
Two zpools were imported as readonly:
pluto# zpool import
   pool: tank
     id: 2737444605056550389
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        tank        ONLINE
          ada1p4    ONLINE

   pool: p0
     id: 13515875225729946510
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        p0            ONLINE
          mirror-0    ONLINE
            ada0p4    ONLINE
            gpt/zfs1  ONLINE
          ada2p1      ONLINE
pluto#
pluto# zpool import -f -o readonly=on -o cachefile=none -o autotrim=off -N -R /mnt/p0 13515875225729946510
pluto# zpool import -f -o readonly=on -o cachefile=none -o autotrim=off -N -R /mnt/tank 2737444605056550389
pluto#
I was surprised to see that a zpool imported this way may show zero bytes
allocated:
pluto# zpool list -v -p
NAME            SIZE            ALLOC           FREE            CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
p0              15942918602752  0               15942918602752  -        -         0     0    1.00   ONLINE  /mnt/p0
  mirror-0      7971459301376   0               7971459301376   -        -         0     0    -      ONLINE
    ada0p4      7984109322240   -               -               -        -         -     -    -      ONLINE
    gpt/zfs1    7984109322240   -               -               -        -         -     -    -      ONLINE
  ada2p1        7984109584384   0               7971459301376   -        -         0     0    -      ONLINE
tank            15977278341120  0               15977278341120  -        -         0     0    1.00   ONLINE  /mnt/tank
  ada1p4        15983451570176  0               15977278341120  -        -         0     0    -      ONLINE
z0              987842478080    1774287360      986068190720    -        -         0     0    1.00   ONLINE  -
  ada3p4        995640737792    1774287360      986068190720    -        -         0     0    -      ONLINE
pluto#
There we see two pools reported with 0 bytes allocated, which is clearly wrong.
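For ease of retesting, the failing sequence above condenses into a small script. This is a hedged sketch, not part of the original report: the pool name, GUID, and altroot are taken from the transcript, it assumes root privileges and a disposable pool, and it skips harmlessly on a machine without ZFS.

```shell
#!/bin/sh
# Reproduction sketch for the readonly-import zero-allocation report.
# GUID and altroot are from the reporter's machine; substitute your own.
GUID=13515875225729946510      # pool "p0" in the report
ALTROOT=/mnt/p0

if ! command -v zpool >/dev/null 2>&1; then
    echo "zpool not available; skipping reproduction"
else
    # Import read-only with the same options used in the report ...
    zpool import -f -o readonly=on -o cachefile=none -o autotrim=off \
        -N -R "$ALTROOT" "$GUID"
    # ... then print SIZE/ALLOC/FREE in raw bytes; the bug is ALLOC shown as 0.
    zpool list -p -o name,size,alloc,free p0
    # Export again so a read-write import can be tried next.
    zpool export p0
fi
```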
I can then export the pools:
pluto#
pluto# zpool export p0
pluto# zpool export tank
pluto#
pluto# zpool list
NAME  SIZE  ALLOC  FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
z0    920G  1.65G  918G  -        -         0%    0%   1.00x   ONLINE  -
pluto#
Perform the import again, without the readonly and related options:
pluto#
pluto# zpool import -f -N -R /mnt/p0 13515875225729946510
pluto#
pluto# zpool list
NAME  SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
p0    14.5T  10.1T  4.40T  -        -         17%   69%  1.00x   ONLINE  /mnt/p0
z0    920G   1.65G  918G   -        -         0%    0%   1.00x   ONLINE  -
pluto#
pluto# zpool import -f -N -R /mnt/tank 2737444605056550389
pluto#
pluto# zpool list
NAME  SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
p0    14.5T  10.1T  4.40T  -        -         17%   69%  1.00x   ONLINE  /mnt/p0
tank  14.5T  9.91T  4.62T  -        -         0%    68%  1.00x   ONLINE  /mnt/tank
z0    920G   1.65G  918G   -        -         0%    0%   1.00x   ONLINE  -
pluto#
pluto# zpool list -v -p p0
NAME            SIZE            ALLOC           FREE            CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
p0              15942918602752  11108432945152  4834485657600   -        -         17    69   1.00   ONLINE  /mnt/p0
  mirror-0      7971459301376   7394602434560   576856866816    -        -         25    92   -      ONLINE
    ada0p4      7984109322240   -               -               -        -         -     -    -      ONLINE
    gpt/zfs1    7984109322240   -               -               -        -         -     -    -      ONLINE
  ada2p1        7984109584384   3713830510592   4257628790784   -        -         9     46   -      ONLINE
pluto#
Those numbers look correct.
It seems the options used on the earlier import confuse the reporting of
allocation for both the vdevs and the pools themselves.
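To help narrow down which of the import options is responsible, the three -o options could be tried one at a time. The following is a hypothetical triage sketch, not something from the report itself: the GUID and mount point are the reporter's, and it assumes root privileges, a pool that can be freely imported and exported, and skips harmlessly where ZFS is absent.

```shell
#!/bin/sh
# Triage sketch: import with each -o option in isolation and check whether
# ALLOC is reported as 0. GUID is from this report; substitute your own.
GUID=13515875225729946510

if ! command -v zpool >/dev/null 2>&1; then
    echo "zpool not available; skipping triage"
else
    for opt in readonly=on cachefile=none autotrim=off; do
        zpool import -f -o "$opt" -N -R /mnt/p0 "$GUID"
        printf '%s -> alloc: ' "$opt"
        zpool list -H -p -o alloc p0   # scripted mode: bare byte count
        zpool export p0
    done
fi
```

A result of 0 for only one of the three options would point squarely at that option.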
Dennis Clarke