FreeBSD 8.2 and MPD5 stability issues - update
Mike Tancsa
mike at sentex.net
Mon Jul 4 17:21:28 UTC 2011
What do you have net.graph.threads set to? With the load average so
high, perhaps you are just running into processing limits with so many
connections? amotin would know.
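
For reference, net.graph.threads is a read-only tunable: it can be
inspected at runtime with sysctl, but has to be raised from
/boot/loader.conf and takes effect at the next boot (the value 16 below
is only an illustration, not a tuned recommendation):

# sysctl net.graph.threads
# echo 'net.graph.threads=16' >> /boot/loader.conf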
---Mike
On 7/4/2011 1:16 PM, Adrian Minta wrote:
>> It seems enough. But are you sure your L2TP client will wait for the
>> overloaded daemon to complete the connection? The change will
>> proportionally increase mpd's responsiveness - it does not have
>> enough CPU horsepower to process requests in a timely manner.
>>
>> Eugene Grosbein
>
> Actually something else is happening.
>
> I increased the queue length in msg.c:
> #define MSG_QUEUE_LEN 65536
> ... and in ppp.h:
> #define SETOVERLOAD(q) do { \
>     int t = (q); \
>     if (t > 600) { \
>         gOverload = 100; \
>     } else if (t > 100) { \
>         gOverload = (t - 100) * 2; \
>     } else { \
>         gOverload = 0; \
>     } \
> } while (0)
>
> Now the overload message is very rare, but the behaviour is the same.
> Around 5500 sessions the count stops growing and instead begins to
> decrease.
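
As an aside, with those thresholds the middle branch no longer scales
cleanly: (t - 100) * 2 already reaches 100 at a queue length of 150 and
overshoots far past 100 until the t > 600 clamp kicks in. Below is a
minimal sketch of the idea, illustrative only and not mpd's actual
source: the divisor 5 is what a linear 0-100 ramp over the 100..600
range would need, and the probabilistic gate in accept_incoming_call()
is an assumed consumer of gOverload, not a function from mpd.

#include <stdlib.h>

static int gOverload;           /* 0..100 overload percentage */

static void
set_overload(int qlen)
{
        if (qlen > 600)
                gOverload = 100;
        else if (qlen > 100)
                gOverload = (qlen - 100) / 5;   /* 0 at 100, 100 at 600 */
        else
                gOverload = 0;
}

/* Assumed consumer: shed new calls with probability gOverload percent. */
static int
accept_incoming_call(void)
{
        return ((random() % 100) >= gOverload);
}

int
main(void)
{
        set_overload(350);      /* mid-range queue: gOverload == 50 */
        return (accept_incoming_call() ? 0 : 1);
}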
>
> The mpd log says something like this:
> # tail -f /var/log/mpd.log | grep -v "\["
> Jul 4 19:56:46 lns mpd: Incoming L2TP packet from 10.42.10.16 1701
> Jul 4 19:56:46 lns mpd: L2TP: Incoming call #32 via connection
> 0x80ae96c10 received
> Jul 4 19:56:46 lns mpd: Link: packet from unexisting link "6310"
> Jul 4 19:56:46 lns mpd: Link: packet from unexisting link "6251"
> Jul 4 19:56:46 lns mpd: Link: packet from unexisting link "6250"
> Jul 4 19:56:46 lns mpd: L2TP: Control connection 0x80b06b710 10.42.1.48
> 1701 <-> 10.42.9.210 1701 connected
> Jul 4 19:56:46 lns mpd: Incoming L2TP packet from 10.42.10.4 1701
> Jul 4 19:56:46 lns mpd: Incoming L2TP packet from 10.42.10.10 1701
> Jul 4 19:56:46 lns mpd: L2TP: Incoming call #48 via connection
> 0x80b06b710 received
> Jul 4 19:56:46 lns mpd: Link: packet from unexisting link "6311"
> Jul 4 19:56:46 lns mpd: Link: packet from unexisting link "6312"
> Jul 4 19:56:46 lns mpd: Link: packet from unexisting link "6252"
> Jul 4 19:56:46 lns mpd: L2TP: Control connection 0x80ad99110 10.42.1.23
> 1701 <-> 10.42.9.244 1701 connected
> Jul 4 19:56:46 lns mpd: L2TP: Control connection 0x80ad99410 10.42.1.4
> 1701 <-> 10.42.10.16 1701 connected
> Jul 4 19:56:46 lns mpd: Incoming L2TP packet from 10.42.9.234 1701
> Jul 4 19:56:46 lns mpd: Incoming L2TP packet from 10.42.10.2 1701
> Jul 4 19:56:47 lns mpd: L2TP: Incoming call #4 via connection 0x80ad99410
> received
> Jul 4 19:56:47 lns mpd: L2TP: Incoming call #23 via connection
> 0x80ad99110 received
> Jul 4 19:56:47 lns mpd: Link: packet from unexisting link "6253"
> Jul 4 19:56:47 lns mpd: L2TP: Control connection 0x80ad99a10 10.42.1.7
> 1701 <-> 10.42.10.4 1701 connected
> Jul 4 19:56:47 lns mpd: Incoming L2TP packet from 10.42.9.214 1701
> Jul 4 19:56:47 lns mpd: Incoming L2TP packet from 10.42.9.220 1701
> Jul 4 19:56:47 lns mpd: L2TP: Incoming call #7 via connection 0x80ad99a10
> received
> Jul 4 19:56:47 lns mpd: L2TP: Control connection 0x80ad99d10 10.42.1.7
> 1701 <-> 10.42.10.10 1701 connected
> Jul 4 19:56:47 lns mpd: Link: packet from unexisting link "6254"
> Jul 4 19:56:47 lns mpd: Link: packet from unexisting link "6303"
> Jul 4 19:56:47 lns mpd: Link: packet from unexisting link "6302"
> Jul 4 19:56:47 lns mpd: L2TP: Control connection 0x80ab22b10 10.42.1.32
> 1701 <-> 10.42.9.234 1701 connected
> Jul 4 19:56:47 lns mpd: L2TP: Control connection 0x80ab22810 10.42.1.13
> 1701 <-> 10.42.10.2 1701 connected
> Jul 4 19:56:47 lns mpd: Incoming L2TP packet from 10.42.10.14 1701
>
> A top command reveals that the server is around 50% idle:
>
> last pid: 63542; load averages: 4.93, 2.98, 1.40 up 0+22:32:42 19:44:23
> 24 processes: 2 running, 22 sleeping
> CPU 0: 4.5% user, 0.0% nice, 5.6% system, 36.8% interrupt, 53.0% idle
> CPU 1: 2.6% user, 0.0% nice, 7.5% system, 48.5% interrupt, 41.4% idle
> CPU 2: 3.7% user, 0.0% nice, 7.9% system, 32.6% interrupt, 55.8% idle
> CPU 3: 3.0% user, 0.0% nice, 7.9% system, 33.5% interrupt, 55.6% idle
> CPU 4: 5.6% user, 0.0% nice, 13.9% system, 33.8% interrupt, 46.6% idle
> CPU 5: 2.3% user, 0.0% nice, 7.5% system, 36.1% interrupt, 54.1% idle
> CPU 6: 3.0% user, 0.0% nice, 9.8% system, 36.1% interrupt, 51.1% idle
> CPU 7: 0.8% user, 0.0% nice, 2.6% system, 43.2% interrupt, 53.4% idle
> Mem: 148M Active, 695M Inact, 753M Wired, 108K Cache, 417M Buf, 2342M Free
> Swap: 4096M Total, 4096M Free
>
> PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
> 75502 root 2 76 0 194M 168M select 4 4:32 63.57% mpd5
> 2131 root 1 46 0 7036K 1544K select 1 4:27 5.18% syslogd
> 1914 root 1 44 0 5248K 3176K select 2 0:17 0.00% devd
> 73229 root 1 44 0 16384K 8464K wait 1 0:02 0.00% bash
> 2434 root 1 44 0 12144K 4156K select 2 0:01 0.00% sendmail
> 73222 media 1 44 0 28500K 4360K select 6 0:00 0.00% sshd
> 2445 root 1 76 0 7964K 1624K nanslp 0 0:00 0.00% cron
> 1861 root 1 44 0 8060K 1372K select 0 0:00 0.00% moused
> 73219 root 1 44 0 28500K 4284K sbwait 4 0:00 0.00% sshd
> 2438 smmsp 1 44 0 12144K 3952K pause 1 0:00 0.00% sendmail
> 73225 root 1 45 0 10300K 2756K pause 5 0:00 0.00% csh
> 73224 media 1 44 0 21680K 2024K wait 0 0:00 0.00% su
> 2419 root 1 44 0 16532K 3768K select 0 0:00 0.00% sshd
> 2538 root 1 76 0 6904K 1284K ttyin 3 0:00 0.00% getty
> 2536 root 1 76 0 6904K 1284K ttyin 1 0:00 0.00% getty
> 2539 root 1 76 0 6904K 1284K ttyin 2 0:00 0.00% getty
> 63542 root 1 44 0 9368K 2444K CPU0 0 0:00 0.00% top
> 2533 root 1 76 0 6904K 1284K ttyin 0 0:00 0.00% getty
> 2537 root 1 76 0 6904K 1284K ttyin 6 0:00 0.00% getty
> 73223 media 1 45 0 8336K 1900K wait 4 0:00 0.00% sh
> 2535 root 1 76 0 6904K 1284K ttyin 5 0:00 0.00% getty
> 2540 root 1 76 0 6904K 1284K ttyin 7 0:00 0.00% getty
> 2534 root 1 76 0 6904K 1284K ttyin 4 0:00 0.00% getty
>
>
> The incoming call rate is around 30/sec. If I lower it to 10/sec, I am
> able to reach 7000 sessions.
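
That pattern - stable at 10/sec, collapsing at 30/sec - suggests pacing
call acceptance rather than queueing more of it. A hypothetical
token-bucket throttle shows the shape of such a cap; this is
illustrative only, not an existing mpd knob, and RATE, BURST and
admit_call() are made-up names:

#include <stdio.h>
#include <sys/time.h>

#define RATE    10.0    /* accepted calls per second */
#define BURST   10.0    /* bucket depth: tolerated burst size */

static double tokens = BURST;
static struct timeval last;

static int
admit_call(void)
{
        struct timeval now;
        double elapsed;

        gettimeofday(&now, NULL);
        if (last.tv_sec != 0) {
                elapsed = (now.tv_sec - last.tv_sec) +
                    (now.tv_usec - last.tv_usec) / 1e6;
                tokens += elapsed * RATE;
                if (tokens > BURST)
                        tokens = BURST;
        }
        last = now;

        if (tokens < 1.0)
                return (0);     /* over the cap: defer or reject */
        tokens -= 1.0;
        return (1);
}

int
main(void)
{
        int i, admitted = 0;

        for (i = 0; i < 100; i++)       /* 100 back-to-back attempts */
                admitted += admit_call();
        printf("admitted %d of 100 burst attempts\n", admitted);
        return (0);
}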
>
--
-------------------
Mike Tancsa, tel +1 519 651 3400
Sentex Communications, mike at sentex.net
Providing Internet services since 1994 www.sentex.net
Cambridge, Ontario Canada http://www.tancsa.com/