[Bug 204646] 10.2 iSCSI backed zpool shows improper warnings about non-native block sizes that 10.1 doesn't show
bugzilla-noreply at freebsd.org
Tue Nov 17 22:01:06 UTC 2015
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=204646
Bug ID: 204646
Summary: 10.2 iSCSI backed zpool shows improper warnings about
non-native block sizes that 10.1 doesn't show
Product: Base System
Version: 10.2-RELEASE
Hardware: amd64
OS: Any
Status: New
Severity: Affects Many People
Priority: ---
Component: misc
Assignee: freebsd-bugs at FreeBSD.org
Reporter: chris at acsi.ca
This is similar to Bug 197513, but that is a 9.3 system.
Consider this scenario:
Virtual FreeBSD Machine Initiator, with a zpool created out of iSCSI disks.
Physical FreeBSD Machine Target, with a zpool holding a sparse file that is the
target for the iSCSI disk.
- The 10.2 machines are 10.2-p7 RELEASE, updated via freebsd-update, no customizations.
- The 10.1 machines are 10.1-p24 RELEASE, updated via freebsd-update, no customizations.
- iSCSI is all CAM iSCSI, not the old istgt platform.
- The iSCSI Target is a sparse file, stored on a zpool (not a vdev Target)
The target machine is the same physical machine, with the same zpools - I
either boot 10.1 or 10.2 for testing, and use the same zpool/disks
to ensure nothing is changing.
If I have a 10.2 iSCSI Initiator (client) connected to a 10.2 iSCSI Target, I
get erroneous warnings.
If I have a 10.1 iSCSI Initiator (client) connected to a 10.1 iSCSI Target, I
don't get the warnings.
On the iSCSI Target, the file that backs the iSCSI disk is stored in a zpool
that has a recordsize=64k set.
It appears that something in the CAM iSCSI Target code is reporting the ZFS
recordsize as the sector size.
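For reference, the recordsize of the backing dataset and what CTL is advertising for the LUN can be checked on the Target with something like the following (a sketch; the dataset name matches my setup, and ctladm ships with the CAM-based target):

```shell
# On the Target: confirm the recordsize of the dataset holding the sparse file
zfs get -H -o value recordsize pool92/iscsi

# On the Target: dump what CTL knows about the configured LUNs, including
# the block size that will be presented to the Initiator
ctladm devlist -v
```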
If you use a 10.1 iSCSI Initiator connected to a 10.2 iSCSI Target, or 10.1
iSCSI Target, this is what you see on the Initiator:
# zpool status
  pool: iscsi-nfs
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Tue Nov 17 15:25:51 2015
config:

	NAME                              STATE     READ WRITE CKSUM
	iscsi-nfs                         ONLINE       0     0     0
	  diskid/DISK-MYSERIAL%20%20%201  ONLINE       0     0     0

errors: No known data errors

  pool: iscsi-nfs1
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Tue Nov 17 15:25:51 2015
config:

	NAME                              STATE     READ WRITE CKSUM
	iscsi-nfs1                        ONLINE       0     0     0
	  diskid/DISK-MYSERIAL%20%20%200  ONLINE       0     0     0

errors: No known data errors
With a 10.2 iSCSI Initiator connected to a 10.2 iSCSI Target, this is what I
see on zpool status:
# zpool status
  pool: iscsi-nfs
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
	Expect reduced performance.
action: Replace affected devices with devices that support the
	configured block size, or migrate data to a properly configured
	pool.
  scan: scrub repaired 0 in 0h0m with 0 errors on Tue Nov 17 15:25:51 2015
config:

	NAME                              STATE     READ WRITE CKSUM
	iscsi-nfs                         ONLINE       0     0     0
	  diskid/DISK-MYSERIAL%20%20%200  ONLINE       0     0     0  block size: 4096B configured, 65536B native

errors: No known data errors

  pool: iscsi-nfs1
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
	Expect reduced performance.
action: Replace affected devices with devices that support the
	configured block size, or migrate data to a properly configured
	pool.
  scan: scrub repaired 0 in 0h0m with 0 errors on Tue Nov 17 15:25:51 2015
config:

	NAME                              STATE     READ WRITE CKSUM
	iscsi-nfs1                        ONLINE       0     0     0
	  diskid/DISK-MYSERIAL%20%20%201  ONLINE       0     0     0  block size: 4096B configured, 65536B native

errors: No known data errors
Notice the block size warnings.
The /etc/ctl.conf on both of the target machines is:
portal-group pg0 {
	discovery-auth-group no-authentication
	listen 0.0.0.0
	listen [::]
}

	lun 0 {
		path /pool92/iscsi/iscsi.zvol
		blocksize 4K
		size 5T
		option unmap "on"
		option scsiname "pool92"
		option vendor "pool92"
		option insecure_tpc "on"
	}
}

target iqn.iscsi1.zvol {
	auth-group no-authentication
	portal-group pg0
	lun 0 {
		path /pool92_1/iscsi/iscsi.zvol
		blocksize 4K
		size 5T
		option unmap "on"
		option scsiname "pool92_1"
		option vendor "pool92_1"
		option insecure_tpc "on"
	}
}
Note the 4k block size for my iSCSI LUNs.
I believe this warning is incorrect: recordsize=64k on the Target will not
affect the ashift value at all, and ashift is what this warning is really
designed for, correct? It should warn when you have ashift=9 on a 4k drive,
or a similar mismatch.
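To make the mismatch concrete: ashift is the base-2 logarithm of the sector size ZFS uses, so the sector size is 2^ashift. A quick shell-arithmetic sketch of the mapping:

```shell
# ZFS sector size = 2^ashift; the 65536B "native" size in the warning
# corresponds to the 64k recordsize, not any real sector size
for ashift in 9 12 13 16; do
    printf 'ashift=%d -> %dB\n' "$ashift" "$((1 << ashift))"
done
```

This prints 512B, 4096B, 8192B, and 65536B for ashift 9, 12, 13, and 16 respectively, so the warning's "65536B native" would require ashift=16 to silence.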
The last interesting bit is that the iSCSI Initiator somehow believes the
Target's sector size _is_ the ZFS recordsize: creating a zpool on an iSCSI
drive in this situation tries to set the pool's ashift to
vfs.zfs.max_auto_ashift of 13 (8k sectors).
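The values the Initiator is working from can be inspected directly (a sketch; /dev/da0 is a placeholder for whichever device node the iSCSI disk attached as):

```shell
# On the Initiator: sector size vs. stripe size as reported by GEOM for
# the iSCSI disk; ZFS chooses ashift from these, so a 65536B stripesize
# here would explain the warning (/dev/da0 is a placeholder device name)
diskinfo -v /dev/da0

# The bounds ZFS will auto-select ashift between
sysctl vfs.zfs.min_auto_ashift vfs.zfs.max_auto_ashift
```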
Thanks.