Re: ZFS pool balance and performance
Date: Sun, 24 Aug 2025 14:41:40 UTC
> On Aug 24, 2025, at 08:11, Frank Leonhardt <freebsd-doc@fjl.co.uk> wrote:
>
> It looks like one of the drives has been replaced. I've bought replacement drives of the same model only to discover they've changed to SMR - the array ran very badly until removed.
Well, all of the disks have been replaced. You mention one, but that may just
be because one of them is partitioned rather than used whole?
And da1-da7 are all exactly the same part number:
da1: <WDC WUH721414AL4200 A07G> Fixed Direct Access SPC-4 SCSI device
The front end of da1 is mirrored with a 120G da0 as the system pool. The
rest of da1 and all of da2-7 are this pool.
How can I tell if any of these are SMR? And wouldn't they all be? Though,
as Frank notes, I suppose I can't be sure that didn't happen.
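I suppose the first step is to confirm the exact model strings and check the
vendor datasheet for the WUH721414AL4200, since drive-managed SMR often doesn't
advertise itself. Something along these lines (assuming smartmontools is
installed; the device names and output fields are just what I'd expect, not
verified on this box):

  # list every disk CAM knows about, with its model/revision string
  camcontrol devlist

  # print the identity page for one of the pool disks; host-managed or
  # host-aware SMR drives should identify as zoned here, while
  # drive-managed ones may look like ordinary disks
  smartctl -i /dev/da1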
> You also mentioned you were using NFS, but not what for other than reads. You might want to take a look at this if there are any synchronous writes going on:
I'm hardly using NFS for anything other than reads. Small writes maybe:
inode (atime) updates, small files. Most of the NFS traffic is reads of
large data files. Over NFS it mostly acts as shared storage, but 99% of
the writes come from local processes on the box itself. There are some
other accesses to it over the network that write, but not via NFS; SMB
and AppleTalk backups do a fair number of writes. I suppose some of the
performance problems I was seeing with my NFS reads last night were
affected by other writers I can't directly see. I should find a way to
trace that activity.
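If nothing else, I can watch the pool and the local processes the next time
the NFS reads go slow, roughly like this (the 5-second sample interval is
arbitrary):

  # per-vdev read/write ops and bandwidth for the pool, every 5 seconds
  zpool iostat -v tank 5

  # per-process I/O activity (FreeBSD top in I/O mode)
  top -m io -o total

  # per-disk busy percentage and queue depth
  gstat -p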
> On Aug 24, 2025, at 08:24, Karl Denninger <karl@denninger.net> wrote:
>
> I generally use mirror sets rather than raidzs and on the "large" pools I run I haven't run into this sort of imbalance, despite doing incremental pool expansions (e.g. replace all the vdev elements in one with larger disks, thus expanding the storage) several times.
Yeah. And the replacement system I've been building will be a bunch of
mirror sets, as I've heard that recommended. But that replacement is
still a ways out, so I was looking for any short-term improvements I can
get out of this system. This system really doesn't have much memory
either, so I know that's hurting me; it's a 40GB box, I see now. Yet
another improvement in the new system that's coming: it will have 256
or 512GB.
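Until then I should at least see what the ARC is doing with the 40GB,
something like this (the sysctl names are from memory and may differ
slightly between ZFS versions):

  # current ARC size and its maximum target
  sysctl kstat.zfs.misc.arcstats.size
  sysctl kstat.zfs.misc.arcstats.c_max

  # cumulative hit/miss counts, to get a rough idea of the hit rate
  sysctl kstat.zfs.misc.arcstats.hits
  sysctl kstat.zfs.misc.arcstats.misses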
> As someone else noted you might want to see if one of those elements in radidz1-0 has some sort of problem (e.g. being an SMR disk where the others are not, etc.) zpool status -s might be useful (shows any vdev elements that are slow but do complete the I/O) which could cause zfs to disfavor that vdev.
Okay. Nothing to see there, at least.
NAME                          STATE     READ WRITE CKSUM  SLOW
tank                          ONLINE       0     0     0     -
  raidz1-0                    ONLINE       0     0     0     -
    da1p4                     ONLINE       0     0     0     0
    diskid/DISK-QGH0S3UTp1    ONLINE       0     0     0     0
    diskid/DISK-QGH0Y5ATp1    ONLINE       0     0     0     0
  raidz1-1                    ONLINE       0     0     0     -
    diskid/DISK-9JG7REXTp1    ONLINE       0     0     0     0
    diskid/DISK-9JG3M05Tp1    ONLINE       0     0     0     0
    diskid/DISK-9JG7RRNTp1    ONLINE       0     0     0     0

errors: No known data errors
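Maybe a latency view will show more than the SLOW column does; if this zpool
is new enough to have the -l flag, something like this should break out
per-vdev wait times:

  # per-vdev average latencies, sampled every 5 seconds
  zpool iostat -v -l tank 5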
> The only place I don't get hinky with fragmentation is on SSDs which of course have no rotational or head movement latency; on any rotating media it hurts you with only caching (e.g. ARC buffering) being of use against it. I'm unaware of any good means to rebalance allocation "in-place."
Yeah, a Google search suggested there isn't any way to rebalance. Is there
a way I can identify which vdev a file's blocks are on? If so, I could
delete some files that live mostly on raidz1-1, which would help some.
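From what I can tell reading the zdb man page (I haven't tried this on the
pool yet), the DVAs in a file's block pointers say which top-level vdev each
block landed on, so something like this might answer it (the dataset and
file names below are just placeholders):

  # the ZFS object number for a file is its inode number
  ls -i /tank/some/bigfile

  # dump that object's block pointers; each DVA prints as vdev:offset:size,
  # so the leading digit says which top-level vdev holds the block
  # (0 = raidz1-0, 1 = raidz1-1)
  zdb -ddddd tank/some <object-number>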
Thanks all.
- Chris