is TMPFS still highly experimental?

Attila Nagy bra at fsn.hu
Mon Oct 3 18:34:00 UTC 2011


  On 10/03/2011 04:58 PM, Artem Belevich wrote:
>> For me, the bug is still here:
>> $ uname -a
>> FreeBSD b 8.2-STABLE FreeBSD 8.2-STABLE #5: Wed Sep 14 15:01:25 CEST 2011
>>    root at buildervm:/data/usr/obj/data/usr/src/sys/BOOTCLNT  amd64
>> $ df -h /tmp
>> Filesystem    Size    Used   Avail Capacity  Mounted on
>> tmpfs           0B      0B      0B   100%    /tmp
>>
>> I have no swap configured. The machine has 64 GB RAM.
>> vm.kmem_size=60G; vfs.zfs.arc_max=55G; vfs.zfs.arc_min=20G
> I'm curious -- does your ARC size ever reach the configured limit of
> 55G? My hunch is that it's probably hovering around some noticeably
> lower number.
Yes, within a few minutes.
Current counters:
kstat.zfs.misc.arcstats.c_min: 21474836480
kstat.zfs.misc.arcstats.c_max: 59055800320
kstat.zfs.misc.arcstats.size: 45691792856
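(For reference, these are plain sysctl OIDs, so a single invocation reads 
all three at once:

$ sysctl kstat.zfs.misc.arcstats.c_min kstat.zfs.misc.arcstats.c_max \
    kstat.zfs.misc.arcstats.size
)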

> On my ZFS setups a lot of memory seems to be lost to fragmentation.
> On a system with 24G of RAM and arc_max=16G, I typically see more
> than 20G of memory wired.
> With kmem_size=60G, ARC is likely to use up most of the available
> kmem space, and that's probably what affects tmpfs. Besides, with
> kmem_size that close to arc_max you risk hitting a "kmem_map too
> small" panic, though with a kmem_size this large the chances are
> smaller than on a system with less memory and a smaller kmem_size.
Sounds plausible. BTW, the ARC limits may not be needed anymore; they 
date from the times when, on a 64 GB machine, the ARC hovered around 
2-5 GB without them (arc_min was even higher then).
BTW, user space programs typically fit into around 1-2 GB of RAM on 
this machine. Well, most of the time. :)
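(For context: these limits are loader tunables, so on this box they 
would have been set in /boot/loader.conf along these lines -- the file 
name is the standard FreeBSD location, the values are the ones quoted 
above:

vm.kmem_size="60G"
vfs.zfs.arc_max="55G"
vfs.zfs.arc_min="20G"

and they only take effect after a reboot.)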
> I'd start with doubling kmem_size and, possibly, reducing arc_max to
> the point where it stops putting pressure on tmpfs.
I know there are several differences, but it would be very good to have 
behaviour similar to UFS. There it's quite evident that tmpfs can eat 
into the file system cache, and I know it may not be so trivial to 
solve this with ZFS. :)

Will try it, thanks.
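(Concretely, the suggestion above amounts to something like this in 
/boot/loader.conf -- the exact numbers are illustrative only, doubling 
kmem_size and trimming arc_max until the tmpfs pressure goes away:

vm.kmem_size="120G"
vfs.zfs.arc_max="48G"

As I understand it, on amd64 vm.kmem_size sizes kernel virtual address 
space rather than physical RAM, so setting it above the 64 GB of 
physical memory is fine.)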

