Re: Unusual ZFS behaviour

From: Eugene Grosbein <eugen_at_grosbein.net>
Date: Wed, 22 Nov 2023 07:04:39 UTC
22.11.2023 13:49, Jonathan Chen wrote:
> Hi,
> 
> I'm running a somewhat recent version of STABLE-13/amd64: stable/13-n256681-0b7939d725ba: Fri Nov 10 08:48:36 NZDT 2023, and I'm seeing some unusual behaviour with ZFS.
> 
> To reproduce:
>  1. one big empty disk, GPT scheme, 1 freebsd-zfs partition.
>  2. create a zpool, eg: tank
>  3. create 2 sub-filesystems, eg: tank/one, tank/two
>  4. fill each sub-filesystem with large files until the pool is ~80% full. In my case I had 200 10GB files in each.
>  5. in one session run 'md5 tank/one/*'
>  6. in another session run 'md5 tank/two/*'
> 
> For most of my runs, one of the sessions will be starved of I/O while the other remains performant.
> 
> Is anyone else seeing this?

Please try repeating the test with atime updates disabled:

zfs set atime=off tank/one
zfs set atime=off tank/two
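
You can confirm the property took effect with:

zfs get atime tank/one tank/two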

Does it make any difference?
Does it make any difference if you instead import the pool with readonly=on?
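
For the readonly test, something like this should do (assuming the pool is named tank, as above, and nothing on it is in use):

zpool export tank
zpool import -o readonly=on tank

With the pool imported read-only, the md5 runs generate no atime (or any other) writes at all, which isolates reads from write traffic entirely.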

Writing to a pool that is ~80% full is almost always slow for ZFS.