[Bug 250816] AWS EC2 ZFS cannot import its own export!
bugzilla-noreply@freebsd.org
Mon Nov 2 19:01:02 UTC 2020
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=250816
Bug ID: 250816
Summary: AWS EC2 ZFS cannot import its own export!
Product: Base System
Version: 12.2-RELEASE
Hardware: amd64
OS: Any
Status: New
Severity: Affects Some People
Priority: ---
Component: kern
Assignee: bugs@FreeBSD.org
Reporter: raj@gusw.net
This occurs on a fresh deployment of the most recent official FreeBSD 12.2 EC2 AMI
on Amazon, with no complicated configuration. The only line added to rc.conf is
zfs_enable="YES"
without which zfs would not work at all. The summary overview is this:
1. zpool create ... works and creates the pool, shown by zpool list
2. zpool export ... completes without error
3. zpool import ... reports that one or more devices contain corrupted data
Here is a (ba)sh script; you can run it yourself:
<script>
mkdir zfstc
truncate -s 100M zfstc/0
truncate -s 100M zfstc/1
mkdir zfstd
for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done
zpool create -o feature@embedded_data=enabled -o feature@lz4_compress=enabled -O dedup=on -O compression=lz4 testpool raidz $(for i in zfstd/* ; do readlink $i ; done)
zpool list
zpool export testpool
zpool import -d zfstd
for i in zfstd/* ; do mdconfig -d -u $(readlink $i) && rm $i ; done
rm zfstc/*
truncate -s 100M zfstc/0
truncate -s 100M zfstc/1
for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done
zpool create testpool raidz $(for i in zfstd/* ; do readlink $i ; done)
zpool list
zpool export testpool
zpool import -d zfstd
for i in zfstd/* ; do mdconfig -d -u $(readlink $i) && rm $i ; done
rm zfstc/*
truncate -s 100M zfstc/0
truncate -s 100M zfstc/1
for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done
zpool create testpool mirror $(for i in zfstd/* ; do readlink $i ; done)
zpool list
zpool export testpool
zpool import -d zfstd
for i in zfstd/* ; do mdconfig -d -u $(readlink $i) && rm $i ; done
rm -r zfstc zfstd
</script>
As you can see, it makes repeated attempts with different pool options and vdev
types (raidz with and without features, mirror), none of which makes any difference.
Here is the log from another system where it all worked:
<log>
# mkdir zfstc
# truncate -s 100M zfstc/0
# truncate -s 100M zfstc/1
# mkdir zfstd
# for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done
#
# zpool create -o feature@embedded_data=enabled -o feature@lz4_compress=enabled -O dedup=on -O compression=lz4 testpool raidz $(for i in zfstd/* ; do readlink $i ; done)
# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool   176M   186K   176M        -         -     1%     0%  1.00x  ONLINE  -
# zpool export testpool
# zpool import -d zfstd
   pool: testpool
     id: 14400958070908437474
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	testpool    ONLINE
	  raidz1-0  ONLINE
	    md10    ONLINE
	    md11    ONLINE
#
# for i in zfstd/* ; do mdconfig -d -u $(readlink $i) && rm $i ; done
# rm zfstc/*
# truncate -s 100M zfstc/0
# truncate -s 100M zfstc/1
# for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done
#
# zpool create testpool raidz $(for i in zfstd/* ; do readlink $i ; done)
# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool   176M   156K   176M        -         -     1%     0%  1.00x  ONLINE  -
# zpool export testpool
# zpool import -d zfstd
   pool: testpool
     id: 7399105644867648490
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	testpool    ONLINE
	  raidz1-0  ONLINE
	    md10    ONLINE
	    md11    ONLINE
#
# for i in zfstd/* ; do mdconfig -d -u $(readlink $i) && rm $i ; done
# rm zfstc/*
# truncate -s 100M zfstc/0
# truncate -s 100M zfstc/1
# for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done
#
# zpool create testpool mirror $(for i in zfstd/* ; do readlink $i ; done)
# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool    80M  67.5K  79.9M        -         -     1%     0%  1.00x  ONLINE  -
# zpool export testpool
# zpool import -d zfstd
   pool: testpool
     id: 18245765184438368558
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	testpool    ONLINE
	  mirror-0  ONLINE
	    md10    ONLINE
	    md11    ONLINE
#
# for i in zfstd/* ; do mdconfig -d -u $(readlink $i) && rm $i ; done
# rm -r zfstc zfstd
</log>
And here is the log from the new system, where it fails:
<log>
[root@geli ~]# mkdir zfstc
[root@geli ~]# truncate -s 100M zfstc/0
[root@geli ~]# truncate -s 100M zfstc/1
[root@geli ~]# mkdir zfstd
[root@geli ~]# for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done
[root@geli ~]#
[root@geli ~]# zpool create -o feature@embedded_data=enabled -o feature@lz4_compress=enabled -O dedup=on -O compression=lz4 testpool raidz $(for i in zfstd/* ; do readlink $i ; done)
[root@geli ~]# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool   176M   182K   176M        -         -     1%     0%  1.00x  ONLINE  -
[root@geli ~]# zpool export testpool
[root@geli ~]# zpool import -d zfstd
   pool: testpool
     id: 3796165815934978103
  state: UNAVAIL
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
   see: http://illumos.org/msg/ZFS-8000-5E
 config:

	testpool                 UNAVAIL  insufficient replicas
	  raidz1-0               UNAVAIL  insufficient replicas
	    7895035226656775877  UNAVAIL  corrupted data
	    5600170865066624323  UNAVAIL  corrupted data
[root@geli ~]#
[root@geli ~]# for i in zfstd/* ; do mdconfig -d -u $(readlink $i) && rm $i ; done
[root@geli ~]# rm zfstc/*
[root@geli ~]# truncate -s 100M zfstc/0
[root@geli ~]# truncate -s 100M zfstc/1
[root@geli ~]# for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done
[root@geli ~]#
[root@geli ~]# zpool create testpool raidz $(for i in zfstd/* ; do readlink $i ; done)
[root@geli ~]# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool   176M   146K   176M        -         -     1%     0%  1.00x  ONLINE  -
[root@geli ~]# zpool export testpool
[root@geli ~]# zpool import -d zfstd
   pool: testpool
     id: 17325954959132513026
  state: UNAVAIL
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
   see: http://illumos.org/msg/ZFS-8000-5E
 config:

	testpool                 UNAVAIL  insufficient replicas
	  raidz1-0               UNAVAIL  insufficient replicas
	    7580076550357571857  UNAVAIL  corrupted data
	    9867268050600021997  UNAVAIL  corrupted data
[root@geli ~]#
[root@geli ~]#
[root@geli ~]# for i in zfstd/* ; do mdconfig -d -u $(readlink $i) && rm $i ; done
[root@geli ~]# rm zfstc/*
[root@geli ~]# truncate -s 100M zfstc/0
[root@geli ~]# truncate -s 100M zfstc/1
[root@geli ~]# for i in zfstc/* ; do ln -s /dev/$(mdconfig -a -t vnode -f $i) zfstd/$(basename $i) ; done
[root@geli ~]#
[root@geli ~]# zpool create testpool mirror $(for i in zfstd/* ; do readlink $i ; done)
[root@geli ~]# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool    80M    73K  79.9M        -         -     3%     0%  1.00x  ONLINE  -
[root@geli ~]# zpool export testpool
[root@geli ~]# zpool import -d zfstd
   pool: testpool
     id: 7703888355221758527
  state: UNAVAIL
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
   see: http://illumos.org/msg/ZFS-8000-5E
 config:

	testpool                  UNAVAIL  insufficient replicas
	  mirror-0                UNAVAIL  insufficient replicas
	    23134336724506526     UNAVAIL  corrupted data
	    16413307577104054419  UNAVAIL  corrupted data
[root@geli ~]#
[root@geli ~]# for i in zfstd/* ; do mdconfig -d -u $(readlink $i) && rm $i ; done
[root@geli ~]# rm -r zfstc zfstd
</log>
If you are wondering whether there is anything wrong with the md vnode device, I
can assure you there is not: I computed MD5 hashes of the underlying chunk files
and of the same data read through the /dev/md?? devices, with identical results.
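The principle of that check can be sketched as follows (a minimal, portable sketch; on the actual systems the hashes were taken with md5(1) against the real /dev/md device created by mdconfig, for which dd on a plain file stands in here, and the chunk0 path is illustrative):

```shell
# Compare the checksum of a chunk file read directly against the same
# data read back through dd, the way ZFS would read it via the device
# node. Matching checksums mean the read path returns the file's bytes
# unaltered, so the backing data itself is not corrupt.
truncate -s 1M chunk0                          # illustrative stand-in for zfstc/0
h1=$(cksum < chunk0)                           # checksum of the file itself
h2=$(dd if=chunk0 bs=64k 2>/dev/null | cksum)  # checksum via the read path
[ "$h1" = "$h2" ] && echo "data path intact"
rm chunk0
```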
If you are wondering whether the fault lies in the create or the export rather
than the import, I have evidence that it is the import that is faulty. Why?
Because I first discovered this problem when I moved such chunk files from the
other FreeBSD system to the new one, and the import failed there exactly like
this. The first thing I did was run MD5 hashes over the files to check for
corruption; there was none. And the same files, with the same checksums, could
still be imported on the old system.