Re: patm, idt, ipfw - next adventures

Harti Brandt brandt at fokus.fraunhofer.de
Mon Oct 6 03:14:06 PDT 2003


On Fri, 3 Oct 2003, Franky wrote:

I suppose there is a bad interaction between HARP and IPFW. Can you
tell me how I would need to configure IPFW to reproduce this? The simplest
configuration would be best (I suppose that would be to pass all packets).
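
For reference, a minimal pass-all ruleset would be something like the
following (the rule number is arbitrary):

  ipfw -f flush
  ipfw add 65000 allow ip from any to any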

harti

F>> Like every unpaid open source project, FreeBSD is developed by volunteers.
F>> If someone finds a problem and helps the developer get it solved, things
F>> will get better with time. I have asked you for the panic message and a
F>> stack trace. These are really simple to get. How do you expect me to fix
F>> your problem if you're not going to help me fix it?
F>OK, but right now I don't have the panic problem (I am not using the patm
F>driver and the PROATM-155 card - I will come back to that later); I have a
F>problem with the idt driver and ipfw on FreeBSD 5.1.
F>Last night I ran new tests on FreeBSD 5.1 with a ForeRunner LE155 and the
F>idt driver:
F>- part of the kernel config:
F>options         DDB                     #Enable the kernel debugger
F>options         INVARIANTS              #Enable calls of extra sanity checking
F>options         INVARIANT_SUPPORT       #Extra sanity checks of internal structures
F>options         WITNESS                 #Enable checks to detect deadlocks and cycles
F>options         WITNESS_SKIPSPIN        #Don't run witness on spinlocks for speed
F>device          isa
F>device          pci
F>#device         patm
F>#device         utopia
F>device          atm
F>device          harp
F>options         ATM_CORE
F>options         ATM_IP
F>options         ATM_SIGPVC
F>options         LIBMBPOOL
F>options         NATM
F>- idt is now loaded as a module:
F># kldstat
F>Id Refs Address    Size     Name
F>1    2 0xc0400000 31d0c4   kernel
F>2    1 0xcb4ac000 9000     idt.ko
F>- the kernel is a debug version (size 15827K)
F>- ipfw has these lines:
F>ipfw pipe 1 config bw 5000Kbit/s queue 4Kbytes
F>ipfw queue 10 config weight 65 pipe 1 buckets 4096 mask dst-ip 0x0000ffff
F>ipfw queue 11 config weight 35 pipe 1 buckets 4096 mask dst-ip 0x0000ffff
F>
F>ipfw add 510 queue 10 all from 192.168.192.0/26 to any out via x0
F>ipfw add 511 queue 11 all from not 192.168.192.0/26 to any out via x0
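F>
F>(For reference, whether dummynet is queueing or dropping packets can be
F>checked with the usual status commands, e.g.:
F># ipfw pipe show
F># ipfw queue show
F>which list the pipes/queues together with their current traffic and drop
F>counters.)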
F>
F>About 5 min. after boot all PVCs stop transmitting. I ran tcpdump on each
F>network interface - all of them show in/out packets - but "atm show stats
F>vcc" shows that only the IN counters keep changing; the OUT counters have
F>stopped.
F>                        Input    Input  Input  Output   Output Output
F>Interface  VPI   VCI     PDUs    Bytes   Errs    PDUs    Bytes   Errs
F>idt0         0   140    54126  6010496      1  105624 134713668      0
F>idt0         0   141     1137    81719      0   30811  8441340      0
F>idt0         0   142    25280 11764794      0   17640  3217976      0
F>idt0         0   143       30     2520      0       8      800      0
F>idt0         0   144    12658 13571079      0   10451  5502752      0
F>idt0         0   145        0        0      0       3      168      0
F>idt0         0   146     2648   198906      0    6257  8558032      0
F>idt0         0   147    39718 16771801      0   23808 16353952      0
F>idt0         0   148       19     4344      0      54     4704      0
F>idt0         0   149       10      896      0     108    10916      0
F>idt0         0   150    16403 10339586      0   13689  4619084      0
F>idt0         0   151     9363  4467235      0    6046  1040544      0
F>idt0         0   152        0        0      0       0        0      0
F>idt0         0   153     5336   361945      0    7903 10678104      0
F>idt0         0   154     5518  1276588      0    9960 12457632      0
F>idt0         0   155        0        0      0       0        0      0
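F>
F>(To pinpoint when the OUT counters stop, the stats could be logged
F>periodically with a small loop - the 30 second interval is just an
F>example:
F>while true; do date; atm show stats vcc | grep idt0; sleep 30; done
F>)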
F>
F>
F>at this moment netstat -m shows this:
F>mbuf usage:
F>GEN cache:      0/0 (in use/in pool)
F>CPU #0 cache:   52155/52160 (in use/in pool)
F>Total:          52155/52160 (in use/in pool)
F>Mbuf cache high watermark: 512
F>Maximum possible: 131072
F>Allocated mbuf types:
F>52155 mbufs allocated to data
F>39% of mbuf map consumed
F>mbuf cluster usage:
F>GEN cache:      0/0 (in use/in pool)
F>CPU #0 cache:   51537/51544 (in use/in pool)
F>Total:          51537/51544 (in use/in pool)
F>Cluster cache high watermark: 128
F>Maximum possible: 65536
F>14% of cluster map consumed
F>116128 KBytes of wired memory reserved (27% in use)
F>0 requests for memory denied
F>0 requests for memory delayed
F>0 calls to protocol drain routines
F>after the next 5 min.:
F>mbuf usage:
F>GEN cache:      0/0 (in use/in pool)
F>CPU #0 cache:   66153/66176 (in use/in pool)
F>Total:          66153/66176 (in use/in pool)
F>Mbuf cache high watermark: 512
F>Maximum possible: 131072
F>Allocated mbuf types:
F>66153 mbufs allocated to data
F>50% of mbuf map consumed
F>mbuf cluster usage:
F>GEN cache:      0/0 (in use/in pool)
F>CPU #0 cache:   65535/65536 (in use/in pool)
F>Total:          65535/65536 (in use/in pool)
F>Cluster cache high watermark: 128
F>Maximum possible: 65536
F>4% of cluster map consumed
F>147616 KBytes of wired memory reserved (14% in use)
F>716877 requests for memory denied
F>0 requests for memory delayed
F>0 calls to protocol drain routines
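F>
F>(The cluster pool is right at its ceiling here - 65535/65536 in use - so
F>besides netstat -m it may be worth confirming the configured limit and
F>watching the "denied" counter, e.g.:
F># sysctl kern.ipc.nmbclusters
F># netstat -m | grep denied
F>)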
F>after this all interfaces halt (even the ethernet ones, fxp0 and fxp1) and
F>for the first time these messages appear in /var/log/messages:
F>Oct  3 07:24:29 ordos kernel: Out of mbuf address space!
F>Oct  3 07:24:30 ordos kernel: Consider increasing NMBCLUSTERS
F>Oct  3 07:24:30 ordos kernel: All mbufs or mbuf clusters exhausted, please see tuning(7).
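F>
F>(The NMBCLUSTERS limit mentioned above can be raised at boot, e.g. via
F>/boot/loader.conf - the value below is only an example, and if mbufs are
F>really leaking this will only postpone the exhaustion:
F>kern.ipc.nmbclusters="131072"
F>)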
F>
F>This is the end.
F>Maybe the bug is in ipfw, but I use queue/pipe very often on an Intel Gbit
F>interface with vlans and there it works.
F>
F>________________________________________________
F>http://www.is.net.pl
F>

-- 
harti brandt,
http://www.fokus.fraunhofer.de/research/cc/cats/employees/hartmut.brandt/private
brandt at fokus.fraunhofer.de, harti at freebsd.org

