[Bug 270636] Netmap leaks mbufs when receiving frames in generic mode on AMD Ryzen
Date: Tue, 04 Apr 2023 12:54:13 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=270636
Bug ID: 270636
Summary: Netmap leaks mbufs when receiving frames in generic mode on AMD Ryzen
Product: Base System
Version: 13.1-STABLE
Hardware: amd64
OS: Any
Status: New
Severity: Affects Only Me
Priority: ---
Component: kern
Assignee: bugs@FreeBSD.org
Reporter: koverskeid@gmail.com
$uname -a
FreeBSD freebsd 13.1-RELEASE FreeBSD 13.1-RELEASE
releng/13.1-n250148-fc952ac2212 GENERIC amd64
$sysctl hw.model hw.machine hw.ncpu
hw.model: AMD Ryzen Embedded V1202B with Radeon Vega Gfx
hw.machine: amd64
hw.ncpu: 2
$pciconf -lv | grep -A1 -B3 network
igb0@pci0:1:0:0: class=0x020000 rev=0x03 hdr=0x00 vendor=0x8086 device=0x1533 subvendor=0x8086 subdevice=0x1533
vendor = 'Intel Corporation'
device = 'I210 Gigabit Network Connection'
class = network
subclass = ethernet
$sysctl net.inet.tcp.functions_available
net.inet.tcp.functions_available:
Stack D Alias PCB count
freebsd * freebsd 6
In netmap native mode after bombarding the machine with packets:
$netstat -m
6145/2240/8385 mbufs in use (current/cache/total)
2048/2270/4318/438928 mbuf clusters in use (current/cache/total/max)
0/508 mbuf+clusters out of packet secondary zone in use (current/cache)
0/508/508/219464 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/65026 9k jumbo clusters in use (current/cache/total/max)
0/0/0/36577 16k jumbo clusters in use (current/cache/total/max)
5632K/7132K/12764K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 sendfile syscalls
0 sendfile syscalls completed without I/O request
0 requests for I/O initiated by sendfile
0 pages read by sendfile as part of a request
0 pages were valid at time of a sendfile request
0 pages were valid and substituted to bogus page
0 pages were requested for read ahead by applications
0 pages were read ahead by sendfile
0 times sendfile encountered an already busy page
0 requests for sfbufs denied
0 requests for sfbufs delayed
In generic mode:
$sysctl dev.netmap.admode=2
***bombarding the machine with frames***
$netstat -m
442160/1330/443490 mbufs in use (current/cache/total)
438080/848/438928/438928 mbuf clusters in use (current/cache/total/max)
0/508 mbuf+clusters out of packet secondary zone in use (current/cache)
0/508/508/219464 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/65026 9k jumbo clusters in use (current/cache/total/max)
0/0/0/36577 16k jumbo clusters in use (current/cache/total/max)
986700K/4060K/990760K bytes allocated to network (current/cache/total)
0/264033/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 sendfile syscalls
0 sendfile syscalls completed without I/O request
0 requests for I/O initiated by sendfile
0 pages read by sendfile as part of a request
0 pages were valid at time of a sendfile request
0 pages were valid and substituted to bogus page
0 pages were requested for read ahead by applications
0 pages were read ahead by sendfile
0 times sendfile encountered an already busy page
0 requests for sfbufs denied
0 requests for sfbufs delayed
$dmesg
[zone: mbuf_cluster] kern.ipc.nmbclusters limit reached
The number of mbuf clusters in use doesn't decrease even if I remove the
network traffic or close the netmap port.
I am testing without sending any frames from the machine. I have reproduced
the behavior on 13.2-STABLE and 14.0-CURRENT.
I am running exactly the same test program on hardware with an Intel Atom
processor and the same NIC (I210) without a problem.
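For reference, a receive-only test along the lines described above might look
like the minimal sketch below. This is an assumption, not the reporter's actual
test program; it uses the nm_open()/nm_nextpkt() helpers from
net/netmap_user.h and the igb0 interface named in the report.

#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>

#include <poll.h>
#include <stdio.h>

int
main(void)
{
	struct nm_desc *d;
	struct nm_pkthdr h;
	struct pollfd pfd;
	unsigned long npkts = 0;

	/*
	 * "netmap:igb0" matches the igb0 interface from the report; with
	 * dev.netmap.admode=2 the port is attached in generic (emulated) mode.
	 */
	d = nm_open("netmap:igb0", NULL, 0, NULL);
	if (d == NULL) {
		perror("nm_open");
		return (1);
	}
	pfd.fd = d->fd;
	pfd.events = POLLIN;

	for (;;) {
		if (poll(&pfd, 1, 1000) < 0)
			break;
		/* Drain every frame currently queued on the RX rings. */
		while (nm_nextpkt(d, &h) != NULL)
			npkts++;	/* receive only; nothing is transmitted */
	}
	printf("received %lu frames\n", npkts);
	nm_close(d);
	return (0);
}

Running this while flooding the interface with traffic, then checking
netstat -m after stopping the traffic, should show whether the mbuf clusters
are returned or leaked.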
--
You are receiving this mail because:
You are the assignee for the bug.