ZFS ARC vs Inactive memory on 10-STABLE: is it Ok?

Mark Saad nonesuch at longcount.org
Mon Feb 15 14:55:42 UTC 2016

On Mon, Feb 15, 2016 at 9:29 AM, Lev Serebryakov <lev at freebsd.org> wrote:

>  I have mostly-storage server with 8GB of physical RAM and 9TB (5x2TB
> HDD) raidz ZFS pool (so, about 6.5TB usable space).
>  ARC is limited to 3GB by vfs.zfs.arc_max.
>  This server runs Samba (of course), CrashPlan backup client (Linux
> Java!), and torrent client (transmission-daemon).
Wow, someone else as crazy as I was. :)
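For anyone following along, that 3 GB cap is the vfs.zfs.arc_max loader tunable. A minimal sketch of checking and setting it (the 3 GB value is just Lev's number; the sysctl name is standard on FreeBSD):

```shell
# Show the current ARC ceiling in bytes (0 means auto-sized from RAM)
sysctl vfs.zfs.arc_max

# To cap the ARC at 3 GB, set the tunable in /boot/loader.conf:
# vfs.zfs.arc_max="3221225472"
```

On 10.x this is traditionally set in /boot/loader.conf and picked up at boot.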

>  And I'm noticing this regularly ("screenshot" of top(1)):
> Mem: 1712M Active, 3965M Inact, 2066M Wired, 137M Cache, 822M Buf,
> 4688K Free
> ARC: 421M Total, 132M MFU, 54M MRU, 1040K Anon, 7900K Header, 227M Other
> Swap: 4096M Total, 248M Used, 3848M Free, 6% Inuse
>  As you can see, there are almost 4G of Inactive memory and only 421M
> of ARC!
>  Is this OK? Why is Inactive memory (non-dirty buffers?) pushing the
> ARC out of memory?
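Worth noting: the top(1) classes in that snapshot do add up to roughly the full 8 GB, so nothing is unaccounted for; the ARC really has been squeezed down. A quick sanity check of the sum (MB values taken from the screenshot above; Free rounded to 5M):

```shell
# Sum the top(1) memory classes from the screenshot (values in MB):
# Active + Inact + Wired + Cache + Free
echo $((1712 + 3965 + 2066 + 137 + 5))   # → 7885
```

That is about 7.7 GB of the 8 GB, with the remainder in kernel overhead.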
Lev, I ran a similar setup on 10.1-RELEASE with 40TB in a RAID 1+0-style
zpool. My top looked similar, though it's been a while, and that box had
24G of RAM with a 12G ARC max. I always wondered what was going on there,
but I suspected it was an interaction between Java and the ARC.
The CrashPlan app is terrible: it would "start doing something new" and
look hung, disk I/O would go to hell, etc. Then things would settle down
and it would start chugging away again.
Keep in mind CrashPlan would take about a month to back up 2TB of changes
on this thing. I eventually convinced management to move to an automated
tape library and a normal backup client (NetBackup) for the backups. I
also abandoned this project about 18 months ago.

> --
> // Lev Serebryakov
> _______________________________________________
> freebsd-fs at freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe at freebsd.org"

mark saad | nonesuch at longcount.org
