[Bug 277389] Reproduceable low memory freeze on 14.0-RELEASE-p5

From: <bugzilla-noreply_at_freebsd.org>
Date: Sat, 09 Mar 2024 00:03:44 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389

Mark Millard <marklmi26-fbsd@yahoo.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |marklmi26-fbsd@yahoo.com

--- Comment #8 from Mark Millard <marklmi26-fbsd@yahoo.com> ---
I tried the basic test in the type of context that I happen to
have access to, for example: main [so: 15]. It is a rather
simple ZFS context, really used for bectl rather than for other
typical ZFS purposes. It did not show the problem. Still, for
comparison and contrast, I report some context details, starting
with the iozone output:

# iozone -i 0,1 -l 512 -r 4k -s 1g
        Iozone: Performance Test of File I/O
                Version $Revision: 3.506 $
                Compiled for 64 bit mode.
                Build: freebsd 

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                     Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
                     Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
                     Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
                     Vangel Bojaxhi, Ben England, Vikentsi Lapa,
                     Alexey Skidanov, Sudhir Kumar.

        Run began: Fri Mar  8 23:04:51 2024

        Record Size 4 kB
        File size set to 1048576 kB
        Command line used: iozone -i 0,1 -l 512 -r 4k -s 1g
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Min process = 512 
        Max process = 512 
        Throughput test with 512 processes
        Each process writes a 1048576 kByte file in 4 kByte records

        Children see throughput for 512 initial writers  = 2155051.28 kB/sec
        Parent sees throughput for 512 initial writers  = 1450918.13 kB/sec
        Min throughput per process                      =    4138.72 kB/sec 
        Max throughput per process                      =    6173.17 kB/sec
        Avg throughput per process                      =    4209.08 kB/sec
        Min xfer                                        =  702788.00 kB

        Children see throughput for 512 rewriters       = 1160623.87 kB/sec
        Parent sees throughput for 512 rewriters        = 1152920.83 kB/sec
        Min throughput per process                      =    2260.53 kB/sec 
        Max throughput per process                      =    2282.09 kB/sec
        Avg throughput per process                      =    2266.84 kB/sec
        Min xfer                                        = 1039540.00 kB



iozone test complete.

# zpool status
  pool: zoptb
 state: ONLINE
  scan: scrub repaired 0B in 00:01:45 with 0 errors on Sun Jun 19 06:50:48 2022
config:

        NAME           STATE     READ WRITE CKSUM
        zoptb          ONLINE       0     0     0
          gpt/OptBzfs  ONLINE       0     0     0

errors: No known data errors

I'll note that I use:

vfs.zfs.per_txg_dirty_frees_percent=5

in /etc/sysctl.conf on the ZFS FreeBSD systems that I have
access to. A different system had an issue that I reported,
and the person who had increased the default for this setting
recommended I set it back to this now-old default. That worked,
and I have since set the same value on all such systems. I have
no evidence of it being relevant here, but I report the
contextual oddity anyway.
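
For completeness, a sketch of how such a setting can be checked
and changed at runtime, should anyone want to try the same value
(the /etc/sysctl.conf line above only makes it persistent across
reboots):

# sysctl vfs.zfs.per_txg_dirty_frees_percent
# sysctl vfs.zfs.per_txg_dirty_frees_percent=5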

I used:

# zfs list -ospace,compression,mountpoint
NAME                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  COMPRESS  MOUNTPOINT
. . .
zoptb/poudriere/data/wrkdirs   652G   360K        0B    360K             0B         0B  off       /usr/local/poudriere/data/wrkdirs
. . .

to show the compression-off storage used for the iozone activity.
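
Should anyone want to match that aspect of the context, a sketch
of checking and (if need be) forcing the property on the dataset
in question (dataset path as in the zfs list output above):

# zfs get compression zoptb/poudriere/data/wrkdirs
# zfs set compression=off zoptb/poudriere/data/wrkdirs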

The system has 192 GiBytes of RAM, 32 hardware threads (16 cores).

# gpart show -p
. . .

=>        40  2930277088    nda2  GPT  (1.4T)
          40      532480  nda2p1  efi  (260M)
      532520        2008          - free -  (1.0M)
      534528  1073741824  nda2p2  freebsd-swap  (512G)
  1074276352  1845493760  nda2p3  freebsd-zfs  (880G)
  2919770112    10507016          - free -  (5.0G)

. . .

# swapinfo
Device          1K-blocks     Used    Avail Capacity
/dev/gpt/OptBswp364 536870912        0 536870912     0%

NOTE: There is no evidence that the swap space was ever
used to store anything during the test.
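
A crude way to watch for such use while the iozone run is active
is simply to sample swapinfo repeatedly, for example:

# while sleep 30; do swapinfo; done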

# uname -apKU
FreeBSD 7950X3D-ZFS 15.0-CURRENT FreeBSD 15.0-CURRENT #137 main-n268520-5e248c23d995-dirty: Sat Feb 24 15:46:10 PST 2024     root@7950X3D-ZFS:/usr/obj/BUILDs/main-amd64-nodbg-clang/usr/main-src/amd64.amd64/sys/GENERIC-NODBG amd64 amd64 1500014 1500014

The build is a personal build, not an official FreeBSD build.
(I'd be surprised if that distinction somehow made a difference
for this type of test.)


Maybe having the mirror involved is important? Or some other
difference from my context? The amount of RAM? . . .?
