GET BUF: dmamap load failure - 12 + kernel panic [was: NETIO UDP benchmark problem]

Viktor CISTICZ viktor at cisti.cz
Fri Aug 21 12:26:50 UTC 2009


Hello! Thanks for changing the subject, this is better.

On 20/8/2009, "Miroslav Lachman" <000.fbsd at quip.cz> wrote:

>I changed the subject to a more descriptive one, because the problem is
>not in netio itself.
>
>Viktor CISTICZ wrote:
>> Hello,
>> recently I have been testing two servers connected via a crosslink. Both
>> have FreeBSD installed >
>>
>> twin1$ uname -a
>> FreeBSD twin1 8.0-BETA2 FreeBSD 8.0-BETA2 #0: Sun Aug 16
>> 22:57:29 CEST 2009
>> viktor at twin1:/usr/obj/usr/src/sys/GEN_NO_DBG  amd64
>>
>> twin2$ uname -a
>> FreeBSD kitt.twin2 8.0-BETA2 FreeBSD 8.0-BETA2 #0: Wed Jul 22 15:05:19
>> CEST 2009     viktor at kitt.twin2:/usr/obj/usr/src/sys/GEN_NO_DBG  amd64
>>
>> GEN_NO_DBG is the GENERIC kernel without debugging options
>>
>> Twin2 is a virtual machine on the same hardware as twin1, running under VMware.
>>
>> I have this set of packages installed on both machines >
>> twin1$ pkg_info
>> NetPIPE-3.7.1       A self-scaling network benchmark
>> gettext-0.17_1      GNU gettext package
>> gmake-3.81_3        GNU version of 'make' utility
>> iperf-2.0.4         A tool to measure maximum TCP and UDP bandwidth
>> libiconv-1.13.1     A character set conversion library
>> libtool-2.2.6a      Generic shared library support script
>> netio-1.26          Network benchmark
>> netperf-2.4.5       Network performance benchmarking package
>> portaudit-0.5.13    Checks installed ports against a list of security vulnerabi
>> portmaster-2.9      Manage your ports without external databases or languages
>> screen-4.0.3_6      A multi-screen window manager
>> ttcp-1.12           Benchmarking tool for analysing TCP and UDP performance
>> unzip-5.52_5        List, test and extract compressed files in a ZIP archive
>>
>> Both machines are connected via crosslink >
>> twin1# ifconfig
>> igb0: public interface
>> igb1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
>>         options=13b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,TSO4>
>>         ether 00:30:48:c8:f3:91
>>         inet 10.10.10.10 netmask 0xffffff00 broadcast 10.10.10.255
>>         media: Ethernet autoselect (1000baseT <full-duplex>)
>>         status: active
>>
>> twin2# ifconfig
>> em0: public interface
>> em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
>>         options=9b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM>
>>         ether 00:0c:29:4b:f1:76
>>         inet 10.10.10.20 netmask 0xffffff00 broadcast 10.10.10.255
>>         media: Ethernet autoselect (1000baseT <full-duplex>)
>>         status: active
>>
>> I have set up twin2 as a server for netio >
>> kitt.twin2# netio -s -p 1122
>>
>> And then ran a TCP test from twin1 >
>> twin1# netio -p 1122 -t 10.10.10.20
>>
>> It was all right, I got some results.
>>
>> But when I tried UDP, it failed.
>>
>> The server is still the same >
>> kitt.twin2# netio -s -p 1122
>>
>> Client >
>> twin1# netio -p 1122 -u 10.10.10.20
>>
>> After around 1 minute, the twin1 server stopped responding on the public
>> interface and I was disconnected. Via the remote console I could still
>> access the machine and it was acting normally. I could ping both
>> interfaces, and I could even ping the private interface from the other
>> side of the crosslink (from twin2), but there was no reply on the public
>> interface. The interface was shown as UP in ifconfig and there were no
>> messages in /var/log/messages.
>>
>> Then I executed ifconfig igb1 down && ifconfig igb1 up and it worked
>> again.
>
>Did you ifconfig down + up igb1 or igb0? As you said "server stopped
>responding on public interface", but you have igb0 marked as public
>interface and igb1 as private crosslink interface in ifconfig above.
The server stopped responding on igb0 (the public interface) from outside of
the box. Via igb1 there was still a ping response.
>Are you really saying that heavy UDP load on the private interface igb1
>caused a freeze of the public interface igb0 and then a kernel panic?
Yes, the test was run via igb1; the server stopped responding on the public
interface igb0 and then crashed.

>
>> I then ran the netio UDP test again, with 2k UDP packets
>>
>> The server is still the same >
>> kitt.twin2# netio -s -p 1122
>>
>> twin1# netio -p 1122 -u -b 2k 10.10.10.20
>>
>> The same problem occurred on twin1, but this time it crashed the machine;
>> via the remote console I could see this >
>> twin1# GET BUF: dmamap load failure - 12
>> GET BUF: dmamap load failure - 12
>> GET BUF: dmamap load failure - 12
>> GET BUF: dmamap load failure - 12
>> GET BUF: dmamap load failure - 12
>> GET BUF: dmamap load failure - 12
>> GET BUF: dmamap load failure - 12
>> GET BUF: dmamap load failure - 12
>> GET BUF: dmamap load failure - 12
>> GET BUF: dmamap load failure - 12
>>
>> Then it wrote a core dump and restarted.
>>
>> I am lost, thanks for any advice.
>
>Did you run it as root or regular user?
>
>Can you add the output of netstat -m from right before the interface
>freeze / kernel panic?
>
>And last - can you reproduce it with a kernel with debugging options enabled?
>
>Miroslav Lachman
>
I have done more testing; this time I booted the twin1 server with the
GENERIC kernel from 8.0-BETA2-amd64 and it happened again.
This is what I did on twin1 (the client), running as a normal
user >

twin1$ netio -u -p 1122  10.10.10.20

NETIO - Network Throughput Benchmark, Version 1.26
(C) 1997-2005 Kai Uwe Rommel

UDP connection established.
Packet size  1k bytes:  6494 KByte/s (32%) Tx,  114228 KByte/s (73%) Rx.
Packet size  2k bytes:  115122 KByte/s (0%) Tx,

On kitt.twin2 (the server) >

kitt.twin2# netio -s -p 1122

This appeared on kitt.twin2 >

sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
sendto(): No buffer space available
Sending to client, packet size  1k ...
Receiving from client, packet size  2k ...

Then twin1 stopped responding and the kernel panicked.

On another terminal on kitt.twin2 I ran netstat -m in a loop, once per second.
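The loop was roughly this (csh syntax):

while ( 1 )
    netstat -m
    sleep 1
end

This is what it printed >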
67272K/12510K/79782K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines
35585/8200/43785 mbufs in use (current/cache/total)
3892/4926/8818/25600 mbuf clusters in use (current/cache/total/max)
3892/4812 mbuf+clusters out of packet secondary zone in use (current/cache)
12648/152/12800/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
67272K/12510K/79782K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines

I captured via the remote console what happened on twin1 (retyped from a
screenshot) >
twin1#
the same netstat -m loop as on kitt.twin2 >

0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
60243K/5841K/66085K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines
34079/2791/36870 mbufs in use
514/2320/2834/25600 mbuf clusters in use
0/512 mbuf+clusters out of packet secondary
12674/126/12800/12800 4k (page size) jumbo clusters in use
0/0/0/6400 9k jumbo clusters in use
0/0/0/3200 16k jumbo clusters in use
60243K/5841K/66085K bytes allocated to network
..

and during the crash >
twin1#

Memory modified after free 0xffffff0083eb800(256) val=a8 @ 0xffffff00083eb818
Memory modified after free 0xffffff0083ec600(256) val=a8 @ 0xffffff00083ec618
There were many more lines of similar text.
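If a backtrace from the crash dump would help, I can try to pull one out of
the core dump that twin1 wrote before restarting. Assuming savecore(8) saved
it under the default /var/crash, it would be roughly:

twin1# kgdb /boot/kernel/kernel /var/crash/vmcore.0
(kgdb) bt

(The exact vmcore number and kernel path depend on the setup, so this is just
a sketch.)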

VC

