about zfs and ashift and changing ashift on existing zpool

tech-lists tech-lists at zyxst.net
Tue Apr 9 13:49:00 UTC 2019


On Mon, Apr 08, 2019 at 09:25:43PM -0400, Michael Butler wrote:
>On 2019-04-08 20:55, Alexander Motin wrote:
>> On 08.04.2019 20:21, Eugene Grosbein wrote:
>>> 09.04.2019 7:00, Kevin P. Neal wrote:
>>>
>>>>> My guess (given that only ada1 is reporting a blocksize mismatch) is that
>>>>> your disks reported a 512B native blocksize.  In the absence of any override,
>>>>> ZFS will then build an ashift=9 pool.
>>>
>>> [skip]
>>>
>>>> smartctl 7.0 2018-12-30 r4883 [FreeBSD 11.2-RELEASE-p4 amd64] (local build)
>>>> Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
>>>>
>>>> === START OF INFORMATION SECTION ===
>>>> Vendor:               SEAGATE
>>>> Product:              ST2400MM0129
>>>> Revision:             C003
>>>> Compliance:           SPC-4
>>>> User Capacity:        2,400,476,553,216 bytes [2.40 TB]
>>>> Logical block size:   512 bytes
>>>> Physical block size:  4096 bytes
>>>
>>> Maybe it's time to prefer "Physical block size" over "Logical block size" in relevant GEOMs
>>> like GEOM_DISK, so upper levels such as ZFS would do the right thing automatically.
>>
>> No.  It is a bad idea.  Changing logical block size for existing disks
>> will most likely result in breaking compatibility and inability to read
>> previously written data.  ZFS already uses physical block size when
>> possible -- on pool creation or new vdev addition.  When not possible
>> (pool already created wrong) it just complains about it, so that user
>> would know that his configuration is imperfect and he should not expect
>> full performance.
>
>And some drives just present 512 bytes for both ... no idea if this is
>consistent with the underlying silicon :-( I built a ZFS pool on it
>using 4K blocks anyway.
>
>smartctl 7.0 2018-12-30 r4883 [FreeBSD 13.0-CURRENT amd64] (local build)
>Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
>
>=== START OF INFORMATION SECTION ===
>Device Model:     WDC WDS100T2B0A-00SM50
>Serial Number:    1837B0803409
>LU WWN Device Id: 5 001b44 8b99f7560
>Firmware Version: X61190WD
>User Capacity:    1,000,204,886,016 bytes [1.00 TB]
>Sector Size:      512 bytes logical/physical
>Rotation Rate:    Solid State Device
>Form Factor:      2.5 inches
>Device is:        Not in smartctl database [for details use: -P showall]
>ATA Version is:   ACS-4 T13/BSR INCITS 529 revision 5
>SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
>Local Time is:    Mon Apr  8 21:22:15 2019 EDT
>SMART support is: Available - device has SMART capability.
>SMART support is: Enabled
>AAM feature is:   Unavailable
>APM level is:     128 (minimum power consumption without standby)
>Rd look-ahead is: Enabled
>Write cache is:   Enabled
>DSN feature is:   Unavailable
>ATA Security is:  Disabled, frozen [SEC2]
>Wt Cache Reorder: Unavailable

Yeah, it's weird, isn't it. So as far as I can see this isn't an issue
with ZFS at all. This is one of the drives that was replaced, and it's
identical to the other two making up the array. So ashift=9 was not
unreasonable, as all three drives in the array reported 512 bytes
logical/physical.

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Black
Device Model:     WDC WD4001FAEX-00MJRA0
Firmware Version: 01.01L01
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Size:      512 bytes logical/physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Tue Apr  9 12:47:01 2019 BST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

I replaced one of them with an 8tb drive:

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Archive HDD
Device Model:     ST8000AS0002-1NA17Z
Firmware Version: AR13
User Capacity:    8,001,563,222,016 bytes [8.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5980 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Tue Apr  9 12:55:55 2019 BST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

So the 8 TB drive is emulating 512-byte logical sectors, but ZFS sees
through that and correctly determines it's a 4K drive.
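You can see both values from the kernel side too, without smartctl.
(Device name ada3 below is just an example, not the actual device on
this box.)

```shell
# Logical size shows up as "sectorsize", the drive's reported
# physical size as "stripesize" (4096 on a 512e drive)
diskinfo -v /dev/ada3

# Same information via GEOM
geom disk list ada3
```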

In any case, the fix was to make a new pool (which automatically got
ashift=12 once the 8 TB disk was a member), zfs send from the old pool
to the new one, then destroy the old pool. Fortunately this was easy
because ZFS was added to this system as an afterthought, so there's no
root-on-ZFS; the OS is on an SSD.
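In shell terms the migration was roughly the following. (Pool names,
the vdev layout, and the exact flags are illustrative, not a transcript
of what I actually typed.)

```shell
# Create the replacement pool; with a 4K-physical member disk it
# comes up with ashift=12 automatically
zpool create newpool raidz ada1 ada2 ada3

# Take a recursive snapshot of everything on the old pool and
# replicate it across; -R sends the whole dataset tree with
# properties, -u leaves the received datasets unmounted for now
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -u -d newpool

# Once the copy is verified, retire the old pool
zpool destroy oldpool
```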

All I can say is that the performance of a 4K drive in an ashift=9
zpool is non-ideal. The new pool feels quicker (even though the disks
aren't built for speed), and I've learned something new :D
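For anyone wanting to check their own setup: the ashift a pool was
created with can be read back with zdb, and on FreeBSD a sysctl can
force 4K alignment for newly created pools even when a drive claims
512-byte sectors. (Pool name "tank" is just an example.)

```shell
# Show the ashift recorded in the pool's cached configuration
zdb -C tank | grep ashift

# Force a minimum ashift of 12 (4096-byte blocks) for pools and
# vdevs created from now on, regardless of what the disk reports
sysctl vfs.zfs.min_auto_ashift=12
```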

-- 
J.