10.0-RC1: bad mbuf leak?

Michael Tuexen Michael.Tuexen at lurchi.franken.de
Thu Dec 19 09:09:49 UTC 2013


On Dec 19, 2013, at 9:41 AM, Adrian Chadd <adrian at freebsd.org> wrote:

> Hm, try reverting just the em code to that from a 10.0-BETA? Just in
> case something changed there?
I saw similar behaviour even without the patches we are discussing (the
ones about ignoring the error).
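
For reference, one way to see what actually changed in the em(4) code
between the builds, assuming a Subversion checkout of /usr/src (the
driver lives in sys/dev/e1000), would be something like:

# cd /usr/src
# svn log --stop-on-copy sys/dev/e1000 | less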

Best regards
Michael
> 
> 
> 
> -a
> 
> On 16 December 2013 06:35, Mark Felder <feld at freebsd.org> wrote:
>> Hi all,
>> 
>> I think I'm experiencing a bad mbuf leak, or something of the sort, and
>> I don't know how to diagnose it further.
>> 
>> I have a machine at home that is mostly used for transcoding video for
>> viewing on my TV via the multimedia/plexmediaserver port. This software
>> runs in a jail and gets the actual files from my NAS via NFSv4. It's a
>> pretty simple setup and sits idle unless I am watching TV.
>> 
>> Did something network-related that could affect mbufs change between
>> the 10.0-BETAs and 10.0-RC1? Ever since I upgraded this machine to RC1
>> it has been "crashing", which I diagnosed as actually being mbuf
>> exhaustion. Raising the mbuf limits brings it back to life, and the
>> exhaustion is reported on the system console.
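>> 
>> Raising them looks something like the following at runtime (the values
>> here are purely illustrative, not a recommendation):
>> 
>> # sysctl kern.ipc.nmbufs=8000000
>> # sysctl kern.ipc.nmbclusters=2000000
>> 
>> The same tunables can be set in /boot/loader.conf to persist across
>> reboots.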
>> 
>> Last night, for example, I rebooted the machine, and it has been
>> sitting mostly idle since. This morning I woke up to see this:
>> 
>> # vmstat -z
>> 
>> ITEM                   SIZE   LIMIT     USED    FREE      REQ   FAIL SLEEP
>> mbuf_packet:            256, 6511095,    1023,   1727,  8322474,      0,   0
>> mbuf:                   256, 6511095, 2811247,   1563, 56000603, 121933,   0
>> mbuf_cluster:          2048, 1017358,    2750,      0,     2750,   2740,   0
>> mbuf_jumbo_page:       4096,  508679,       0,    152,  2831466,    137,   0
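>> 
>> A quick sketch for pulling out just the zones with a nonzero FAIL
>> column from that output (FAIL is the sixth comma-separated field):
>> 
>> # vmstat -z | awk -F, '$6 > 0'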
>> 
>> # netstat -m
>> 2812270/3290/2815560 mbufs in use (current/cache/total)
>> 1023/1727/2750/1017358 mbuf clusters in use (current/cache/total/max)
>> 1023/1727 mbuf+clusters out of packet secondary zone in use (current/cache)
>> 0/152/152/508679 4k (page size) jumbo clusters in use (current/cache/total/max)
>> 0/0/0/150719 9k jumbo clusters in use (current/cache/total/max)
>> 0/0/0/84779 16k jumbo clusters in use (current/cache/total/max)
>> 705113K/4884K/709998K bytes allocated to network (current/cache/total)
>> 121933/2740/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
>> 0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
>> 0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
>> 137/0/0 requests for jumbo clusters denied (4k/9k/16k)
>> 0 requests for sfbufs denied
>> 0 requests for sfbufs delayed
>> 0 requests for I/O initiated by sendfile
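>> 
>> To see whether the denied counters keep climbing while the box sits
>> idle, a simple watch loop (interval arbitrary) would be something like:
>> 
>> # while true; do date; netstat -m | grep denied; sleep 60; done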
>> 
>> 
>> The network interface is em(4).
>> 
>> Things I've tried:
>> 
>> - restarting all software/services including the jail
>> - down/up the network interface
>> 
>> The only thing that works is rebooting.
>> 
>> Also, the only possibly "strange" part of this setup is that the NFS
>> mounts used by the jail are not direct: they are nullfs-mounted into
>> the jail, because I want access to them outside of the jail as well.
>> I'm not sure whether nullfs+NFS could cause something this bizarre.
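>> 
>> For illustration, the mount layering is roughly as follows (the paths
>> here are made up): the host mounts the NAS via NFSv4, then re-exposes
>> that mount inside the jail with nullfs:
>> 
>> # mount -t nfs -o nfsv4 nas:/media /mnt/media
>> # mount -t nullfs /mnt/media /jails/plex/root/media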
>> 
>> If anyone has any hints on what I can do to track this down it would be
>> appreciated.