extremely slow disk I/O after updating to 12.0
markand at malikania.fr
Wed Jul 3 14:31:52 UTC 2019
On 03/07/2019 at 15:51, Karl Denninger wrote:
> On 7/3/2019 08:42, Trond Endrestøl wrote:
>> On Wed, 3 Jul 2019 13:34+0200, David Demelier wrote:
>>> zpool status indicates that the blocksize is erroneous and that I may expect
>>> performance degradation. But that much is impressive. Can someone confirm?
>>> # zpool status
>>> pool: tank
>>> state: ONLINE
>>> status: One or more devices are configured to use a non-native block size.
>>> Expect reduced performance.
>>> action: Replace affected devices with devices that support the
>>> configured block size, or migrate data to a properly configured pool.
>>> scan: none requested
>>> NAME          STATE  READ WRITE CKSUM
>>> tank          ONLINE    0     0     0
>>>   raidz1-0    ONLINE    0     0     0
>>>     gpt/zfs0  ONLINE    0     0     0  block size: 512B configured, 4096B native
>>>     gpt/zfs1  ONLINE    0     0     0  block size: 512B configured, 4096B native
>>> errors: No known data errors
>>> According to some googling, I must recreate those pools to change the
>>> block size. However, there are not many articles on that, so I'm a bit
>>> afraid of doing this. The zfs0 and zfs1 devices are in raidz.
>>> Any help is very welcome.
> ashift=9 on a 4k native block device is going to do horrible things to
> performance. There's no way to change it on an existing pool, as the
> other respondent noted; you will have to back up the data on the pool,
> destroy the pool and then re-create it.
> Was this pool originally created with 512b disks and then the drives
> were swapped out with a "replace" at some point for advanced-format units?
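For the archives, the back-up/destroy/re-create cycle described above can be sketched roughly as below. This is a hedged outline, not a tested procedure: the snapshot name `migrate` and the backup target `/backup/tank.zfs` are placeholders, and on FreeBSD 12's base ZFS the minimum ashift is steered with the `vfs.zfs.min_auto_ashift` sysctl before pool creation (newer OpenZFS also accepts `-o ashift=12` on `zpool create`).

```shell
# Rough sketch of the migration; names and paths are examples only.
# 1. Take a recursive snapshot and stream the whole pool somewhere safe.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate > /backup/tank.zfs   # or pipe to another pool/host

# 2. Destroy the old ashift=9 pool. Point of no return: verify the backup first.
zpool destroy tank

# 3. Require at least 4 KiB sectors (ashift is log2 of the sector size,
#    so 12 means 2^12 = 4096 bytes), then re-create on the same providers.
sysctl vfs.zfs.min_auto_ashift=12
zpool create tank raidz /dev/gpt/zfs0 /dev/gpt/zfs1

# 4. Restore the data from the saved stream.
zfs receive -F tank < /backup/tank.zfs
```

The send stream can equally be piped straight into `zfs receive` on a second pool or over ssh to another host, which avoids needing scratch space for the intermediate file.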
Thanks for your answers.
No, it was created seven years ago, back in 2012, using FreeBSD 9. I
don't have the shell history for these commands, but it was something like
zpool create tank raidz /dev/gpt/zfs0 /dev/gpt/zfs1
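A pool created that way inherits whatever ashift the drives reported at the time, which is how a 512B layout ends up on 4Kn disks. After any re-creation, the ashift actually in use can be read back from the cached pool configuration with zdb (pool name from this thread):

```shell
# ashift is the log2 of the sector size: 9 -> 512 B, 12 -> 4096 B.
# On a correctly created 4K pool this should report ashift: 12.
zdb -C tank | grep ashift
```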
More information about the freebsd-questions mailing list