Re: zfs with operations like rm -rf takes a very long time recently

From: Mark Millard <marklmi_at_yahoo.com>
Date: Sun, 16 Oct 2022 18:24:05 UTC
void <void_at_f-m.fm> wrote on Sun, 16 Oct 2022 16:48:39 UTC:

> On Sun, 16 Oct 2022, at 15:42, Mark Millard wrote:
> > The book is not explicit about RAM subsystem performance
> > tradeoffs for ZFS. One property of the RPi4B's is that they
> > have very small RAM caches and one core can saturate the
> > memory subsystem if the RAM caches are being fairly
> > ineffective overall. In such contexts, multi-core need not
> > cut the time things take. (But I've no clue how likely such
> > conditions would be for your context.) A cache-busting
> > access pattern over much more than 1 MiByte memory range
> > drops the RPi4B performance greatly compared to such an
> > access pattern fitting in a 1 MiByte or smaller range --no
> > matter if it is 1 core or more cores that is/are trying to
> > be active.
> 
> That's interesting; I can't understand it, though: the system has
> been in use for 9 months or so without this performance penalty.
> The OS has been updated on the following timeline
> 
> main-n258595-226e41467ee1 on 2022-10-13
> main-n258157-f50274674eb on 2022-09-23
> main-n257818-6f7bc8e7a3d on 2022-09-05
> main-n257229-e9a2e4d1d28 on 2022-08-10
> main-n255150-70910e4b55c on 2022-05-04
> 
> Cleaning out /usr/obj and /var/cache/ccache/* is something I'll do
> periodically when a completely clean from-scratch build is required.
> On at least two of those occasions I'd have been doing what I'm trying
> now, but only now has this issue arisen. Of course, it may very well be
> the hardware. But all the things I'd use to monitor it say it's all fine.

One thing we do not have is a set of before-the-problem data
to compare against. That makes it hard to tell specifically
which timings have changed.
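
For building up comparison data going forward, one option (a
minimal sketch; the path is just this thread's example) is to time
the operation itself, e.g.:

# /usr/bin/time -h rm -rf /var/cache/ccache/*

time(1)'s -h prints the figures in a human-friendly form; saving
the output from occasional runs would give the kind of before/after
baseline that is missing here.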

> would output of zfs-stats -a be of use?

Earlier you wrote: "Right now it's rm -rf-ing /var/cache/ccache/*
which is 5GB max". For this context, the number of files and
directories involved is likely more relevant than the total space
they occupy.
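
For a rough count (a sketch, assuming /var/cache/ccache is the
tree in question):

# find /var/cache/ccache -xdev | wc -l

That counts files and directories together; adding -type f or
-type d would split them out.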

My hands-on experience with managing spinning-rust tradeoffs largely
ended 15 or more years ago, apart from some backup-storage use, and
even that backup use is not recent at this point. So my help there
is minimal.

I'd guess that any of the following could produce interesting
background information while the problem is occurring:

# zpool iostat -w

# zpool iostat -l

# zpool iostat -r

# zpool iostat -q

( See: man zpool-iostat )
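
As a usage sketch (zroot below is just a placeholder for the actual
pool name), running one of these with an interval while the rm -rf
is in progress would show how the figures evolve over time, e.g.:

# zpool iostat -l zroot 10

The first report covers activity since boot; each later report
covers roughly the most recent 10 seconds.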

===
Mark Millard
marklmi at yahoo.com