ACK and RST packets sent after successfully terminating TCP connection

n j nino80 at gmail.com
Tue Feb 16 09:25:08 UTC 2010


> Packet #9:  client --> server: client requests TCP connection close (FIN+ACK)
> Packet #10: server --> client: server sends ACK
> <approximately 0.6 seconds passes>
> Packet #11: server --> client: server announces TCP window size of 0,
>            indicating TCP receive buffers are exhausted and that the
>            client should wait before doing anything more
> Packet #12: server --> client: identical re-sent ACK of packet #10

That is exactly the point: why is the server sending any packets at all
when the connection was already FIN'd successfully by both sides?
To my understanding of the networking protocols, packets #11 and #12
should never have been sent.
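
For comparison, the server side of a clean, client-initiated close is
nothing more than the following (a schematic Perl sketch with placeholder
variable names, not the actual server code):

    # the client has sent its FIN (packet #9); our read sees end-of-file
    while (sysread($client, my $buf, 4096)) {
        # consume any data still in flight
    }
    close($client);   # sends our FIN; after the client ACKs it, the
                      # server should put nothing else on the wire
                      # (apart from genuine retransmissions)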

> <approximately 0.75 seconds passes>
> Packet #5: server --> client: server announces TCP window size of 0,
>           indicating TCP receive buffers are exhausted and that the
>           client should wait before doing anything more
> Packet #6: server --> client: identical re-sent RST of packet #4
> Packet #7: client --> server: confirms reset (RST+ACK)

Actually, it seems packet #7 confirms the SYN-ACK and not the RST
(packet #7 is "ack 2849043653", which corresponds to packet #2 - I'm
running tcpdump with -S to see absolute sequence numbers).

> Whatever this client/server protocol is, it isn't normal/standard.  It's
> not something like, for example, HTTP, SSH, or FTP; It's a custom
> protocol and one I haven't seen before.

It is a custom protocol at the application layer, meaning that instead
of an HTTP payload it carries a different (XML) payload, but everything
below that is (hopefully) standard.
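
Schematically, an exchange looks like the following (a Perl sketch purely
for illustration - the real client is Java, and the host, port and payload
below are placeholders):

    use IO::Socket::INET;

    # an ordinary TCP stream socket, exactly as an HTTP client would use
    my $sock = IO::Socket::INET->new(
        PeerAddr => 'server.example.org',   # placeholder
        PeerPort => 5000,                   # placeholder
        Proto    => 'tcp',
    ) or die "connect failed: $!";

    my $xml_request = "<request>...</request>\n";   # placeholder payload
    print $sock $xml_request;     # only the payload differs from HTTP
    shutdown($sock, 1);           # done writing; half-close our side
    my $xml_reply = do { local $/; <$sock> };   # read the reply until EOF
    close($sock);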

> Do you see the above awkward behaviour (zero-sized TCP window packets
> followed by a retransmission of a prior packet) when using standardised
> protocols or software, such as Apache (HTTP), OpenSSH (SSH), or FTP?

I'll see what I can find out.

> If not, then the client/server software is probably to blame.  It may be
> operating on a raw socket level, populating IP and/or TCP portions of
> the packet itself rather than relying on socket(2) entirely.

The client software is Java on Windows, but according to the pcaps the
client is not misbehaving. On the other hand, the server software uses
Perl's IO::Socket::INET, which is pretty much a standard library and
shouldn't be the problem either.
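
For reference, the usage in question is along these lines (a generic
IO::Socket::INET sketch with a placeholder port and reply, not the actual
application code):

    use IO::Socket::INET;

    # a plain PF_INET/SOCK_STREAM listener set up via socket(2)/bind(2)/listen(2)
    my $server = IO::Socket::INET->new(
        LocalPort => 5000,          # placeholder
        Proto     => 'tcp',
        Listen    => 5,
        Reuse     => 1,
    ) or die "cannot listen: $!";

    while (my $client = $server->accept()) {
        my $request = <$client>;               # read the request (framing is
                                               # protocol-specific)
        print $client "<reply>ok</reply>\n";   # placeholder reply
        close($client);
    }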

> If it uses standard kernel socket(2) functionality with PF_INET and
> SOCK_STREAM, then I'd ask if the source is available publicly to be
> analysed to determine if this behaviour is intentional or not.

If needed, posting the relevant code snippets shouldn't be a problem.

> Is there VPN and/or NAT involved between the client and server
> (re: NAT: particularly around the server)?

No.

> Finally, is it possible to get "ifconfig -a" and "netstat -m" output
> from the server?

Certainly.

# ifconfig -a
(inet addr anonymized)
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=9b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM>
        ether 00:0b:db:92:51:ec
        inet aaa.bbb.cc.dd netmask 0xffffff00 broadcast aaa.bbb.cc.255
        media: Ethernet autoselect (1000baseTX <full-duplex>)
        status: active
rl0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=8<VLAN_MTU>
        ether 00:c0:df:06:4e:8c
        media: Ethernet autoselect
        status: no carrier
plip0: flags=108810<POINTOPOINT,SIMPLEX,MULTICAST,NEEDSGIANT> metric 0 mtu 1500
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x4
        inet6 ::1 prefixlen 128
        inet 127.0.0.1 netmask 0xff000000

# netstat -m
259/1526/1785 mbufs in use (current/cache/total)
256/1360/1616/25600 mbuf clusters in use (current/cache/total/max)
256/768 mbuf+clusters out of packet secondary zone in use (current/cache)
0/552/552/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
576K/5309K/5886K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/22/6656 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
209 requests for I/O initiated by sendfile
0 calls to protocol drain routines

Your help is really appreciated.

Regards,
-- 
Nino

