extremely slow disk I/O after updating to 12.0

Trond Endrestøl trond.endrestol at ximalas.info
Wed Jul 3 13:43:06 UTC 2019

On Wed, 3 Jul 2019 13:34+0200, David Demelier wrote:

> zpool status indicates that the blocksize is erroneous and that I may expect
> performance degradation. But that much is impressive. Can someone confirm?
> # zpool status
>   pool: tank
>  state: ONLINE
> status: One or more devices are configured to use a non-native block size.
>         Expect reduced performance.
> action: Replace affected devices with devices that support the
>         configured block size, or migrate data to a properly configured
>         pool.
>   scan: none requested
> config:
>         NAME          STATE     READ WRITE CKSUM
>         tank          ONLINE       0     0     0
>           raidz1-0    ONLINE       0     0     0
>             gpt/zfs0  ONLINE       0     0     0  block size: 512B configured, 4096B native
>             gpt/zfs1  ONLINE       0     0     0  block size: 512B configured, 4096B native
> errors: No known data errors
> According to some googling, I must update those pools to change the block
> size. However there are not many articles on that, so I'm a bit afraid of doing
> this. The zfs0 and zfs1 are in raidz.
> Any help is very welcome.

If you want to change the block size, I'm afraid you must back up your 
data somewhere, destroy tank, and recreate it after you set:

sysctl vfs.zfs.min_auto_ashift=12

If you only deal with 4Kn drives, then I suggest you edit 
/etc/sysctl.conf, adding for future use:

vfs.zfs.min_auto_ashift=12
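For reference, ashift is the base-2 logarithm of the pool's minimum
block size, so ashift=12 gives the 4096-byte blocks your drives want,
while the old default of ashift=9 gives 512-byte blocks:

```shell
# ashift=12 -> 2^12 = 4096-byte (4K) blocks
echo $((1 << 12))
# ashift=9 -> 2^9 = 512-byte blocks (the value your pool is using now)
echo $((1 << 9))
```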
Options include replicating the data to another computer as a plain 
file (do this twice, saving to a different filename each time), 
receiving and unpacking the zstream on another computer's zpool, or 
migrating to a new pair of disks.
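The file-based option might look like the sketch below. The remote 
host, user, and /backup paths are placeholders; note I use a recursive 
snapshot (-r) so that zfs send -R can pick up all child filesystems:

```shell
# Snapshot the whole pool recursively so -R has snapshots to send
zfs snap -r tank@transfer
# Save two independent copies of the replication stream as files
zfs send -RLev tank@transfer | ssh user@backuphost "cat > /backup/tank-copy1.zfs"
zfs send -RLev tank@transfer | ssh user@backuphost "cat > /backup/tank-copy2.zfs"
```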

Here's my outline for doing the ZFS transfer:


Prepare computer B for receiving the zstream:

nc -l 1234 > some.file.zfs

Or, still on computer B:

nc -l 1234 | zfs recv -Fduv somepool
# Optional, to be done after the transfer:
zfs destroy -Rv somepool@transfer

In the latter case, existing filesystems beneath the toplevel 
filesystem in somepool will be replaced by whatever is in the zstream. 
Filesystems with "pathnames" unique to somepool will be unaffected.
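Since -F forcibly rolls back or replaces datasets on the receiving 
side, you can preview what it would do with a dry run first (zfs 
recv's -n flag), before running the real transfer:

```shell
# Dry run on computer B: report what would be received/replaced
# in somepool without actually changing anything
nc -l 1234 | zfs recv -nFduv somepool
```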

On computer A:

zfs snap tank@transfer
zfs send -RLev tank@transfer | nc -N computer.B.some.domain 1234
zfs destroy -Rv tank@transfer


Feel free to replace nc (netcat) with ssh or something else.
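For example, the same transfer over ssh would look like this (user and 
hostname are placeholders); ssh adds encryption and authentication at 
some CPU cost:

```shell
# Push the stream straight into zfs recv on computer B over ssh,
# no listener needed on the far end
zfs send -RLev tank@transfer | ssh user@computer.B.some.domain zfs recv -Fduv somepool
```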


zfs send and zfs recv can be piped together if the pools are connected 
to the same computer:

zfs send -RLev tank@transfer | zfs recv -Fduv newtank

newtank can be renamed simply by exporting it and importing it again, 
giving its current name and the desired new name:

zpool export newtank
zpool import -N newtank tank

Note, this must be done while running FreeBSD from some other media, 
such as a DVD or a memstick.

Take care to ensure the bootfs pool property is pointing to the 
correct BE before rebooting.
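You can inspect the property before touching it; the BE dataset name 
below is just a typical FreeBSD example, yours may differ:

```shell
# Show which boot environment the pool will boot from
zpool get bootfs tank
# If it is wrong, point it at the correct BE dataset
zpool set bootfs=tank/ROOT/default tank
```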


To transfer the data back to the new tank pool:

Prepare computer A for receiving the zstream:

nc -l 1234 | zfs recv -Fduv tank
# Do these two commands after the transfer:
zfs destroy -Rv tank@transfer
zpool set bootfs=tank/the/correct/boot/environment tank

On computer B:

nc -N computer.A.some.domain 1234 < some.file.zfs

Or, still on computer B:

zfs snap somepool@transfer # If you removed the previous @transfer snapshot
zfs send -RLev somepool@transfer | nc -N computer.A.some.domain 1234


More information about the freebsd-questions mailing list