Re: bsdinstall: system requirements: memory/RAM: ZFS: 2048 MB for an ordinary installation plus a desktop environment

From: Sulev-Madis Silber <freebsd-stable-freebsd-org730_at_ketas.si.pri.ee>
Date: Wed, 01 Oct 2025 01:05:44 UTC

On October 1, 2025 12:58:05 AM GMT+03:00, Rick Macklem <rick.macklem@gmail.com> wrote:
>On Tue, Sep 30, 2025 at 1:18 PM Warner Losh <imp@bsdimp.com> wrote:
>>
>>
>>
>> On Tue, Sep 30, 2025 at 2:03 PM Rick Macklem <rick.macklem@gmail.com> wrote:
>>>
>>> On Tue, Sep 30, 2025 at 12:50 PM Warner Losh <imp@bsdimp.com> wrote:
>>> >
>>> > Yea, I've not seen that to be the case. ZFS isn't that big of a memory hog these days... There are times you do need to tune the arc, but they are the exception, not the rule.
>>> Unfortunately, using ZFS as an NFS server seems to be an exception.
>>> Peter Errikson still uses 13.5 on his servers, since he doesn't find 14.n
>>> stable enough.
>>> There is this email thread:
>>> https://lists.freebsd.org/archives/freebsd-stable/2025-September/003126.html
>>
>>
>> OK. Since I no longer do NFS, I've not hit that....
>I didn't figure you had hit it.
>I was hoping that you (or someone else reading this) might
>know someone willing to tackle the problem?
>
>rick
>
>>
>>>
>>>
>>> I'd like to see this resolved, but I don't know enough about VM or
>>> ZFS's arc code
>>> and I have miniscule hardware, so I cannot replicate it.
>>>

I've hit it. Or at least that's what I assume (assume = makes an ass out of u and me).

I'm also on 13.5, with 4 GB of RAM. But he can reproduce it with >=128 GB.

Same thing: ARC is low, wired is high, and the kernel kills processes. Apparently it was triggered by mmap before; now it's scrub.
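
If anyone wants to watch that symptom themselves, here is a minimal sketch in C (assuming stock FreeBSD/OpenZFS sysctl names) that reads the ARC size and the wired page count and prints both, so you can see the ARC staying flat while wired climbs:

/* wiredvsarc.c - compare ZFS ARC size against total wired memory.
 * Build: cc -o wiredvsarc wiredvsarc.c
 * Sysctl names are the stock FreeBSD/OpenZFS ones; adjust if your
 * version exposes them differently.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
        uint64_t arc_size;      /* bytes currently held by the ARC */
        u_int wired_pages;      /* pages wired by the whole kernel */
        size_t len;
        int page_size;

        len = sizeof(arc_size);
        if (sysctlbyname("kstat.zfs.misc.arcstats.size", &arc_size, &len,
            NULL, 0) == -1) {
                perror("arcstats.size");
                return (1);
        }

        len = sizeof(wired_pages);
        if (sysctlbyname("vm.stats.vm.v_wire_count", &wired_pages, &len,
            NULL, 0) == -1) {
                perror("v_wire_count");
                return (1);
        }

        page_size = getpagesize();

        printf("ARC:   %8ju MB\n", (uintmax_t)(arc_size / 1024 / 1024));
        printf("wired: %8ju MB\n",
            (uintmax_t)((uint64_t)wired_pages * page_size / 1024 / 1024));
        return (0);
}

Run it in a loop while a scrub is going and compare the two numbers against total RAM.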

That machine has one single-disk 4 TB pool and two 2-disk mirrors, 160 GB and 12 TB.

I tried to choke git and dovecot and that worked, but I can't limit scrub.
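
To give an idea of what I mean by choking, one way on FreeBSD is an RCTL memory limit. The sketch below is only illustrative, not what anyone's setup necessarily looks like: the uid and the 512m ceiling are made-up examples, it needs kern.racct.enable=1 set at boot, and normally you would just use rctl(8) from the shell instead of the raw syscall.

/* chokeuser.c - put a hard memory ceiling on everything one uid runs,
 * through the RCTL kernel interface.  The uid (1001) and the 512m
 * limit are made-up examples; rctl(8) does the same thing from the
 * shell and also translates user names to uids for you.
 */
#include <sys/types.h>
#include <sys/rctl.h>

#include <stdio.h>
#include <string.h>

int
main(void)
{
        const char *rule = "user:1001:memoryuse:deny=512m";

        /* Output buffer is unused for rctl_add_rule(); the length
         * passed in includes the terminating NUL. */
        if (rctl_add_rule(rule, strlen(rule) + 1, NULL, 0) == -1) {
                perror("rctl_add_rule");
                return (1);
        }
        printf("added rule: %s\n", rule);
        return (0);
}

A ceiling like that works for userland daemons such as git and dovecot; it does nothing for kernel-side memory like whatever the scrub wires down.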

I've battled this for years!

By the way, I ran ZFS in 512 MB back in 2015. It worked: buildworld, scrub, a single-disk 40 GB pool. The reason it was 512 MB is that I had 1 GB, skipped memtest86+ (it wouldn't boot from USB), and only found out later, after booting it from an actual CD or DOS or whatever I used back then, that one of the 512 MB modules was bad.

That all worked until the HDD/motherboard/PSU/... went belly up. The HDD corrupted data in the filesystem (I blamed the RAM at first), then apparently failed outright; the motherboard had visibly bulged and even burst capacitors (I wasn't there to see it happen). So I'm no stranger to running ZFS on low-resource, or maybe just stupidly bad, hardware. Yet it somehow worked better then than it does now. How can that be?

But ZFS has gotten better since then. On a Core 2 Quad, read/write was 40 MB/s; now read/write on ZFS is a proper >=200 MB/s on a Core 2 Duo.

Things have improved, but I wish there were a fix for this cursed wired-memory-full problem.

By the way, if you think low RAM and CPU means throwaway second-hand hardware only, it doesn't. Nowadays there are embedded systems like that, and they could benefit from the checksums, compression, or copies=3 (a wild guess at "fixing" future flash block failures) that ZFS brings.

Yeah, I bet Sun's engineers are rolling in their beds and graves: "ZFS runs where?!", "they castrated it!"

But there's a reason it's such a good filesystem design that no one has replicated it yet, at least not together with the volume manager, which I think is the right idea: the filesystem should have direct control over the devices, because only the filesystem knows where the actual data is.

But I have no idea how to fix the memory issues; I've tried asking everywhere. I don't think this is a low-RAM problem either. It's a memory utilization issue, not an amount issue. I'm sure you could run out of 2 TB too.