HEADS UP: SACK committed to HEAD

Kevin Oberman oberman at es.net
Fri Jun 25 08:24:37 PDT 2004

> Date: Fri, 25 Jun 2004 09:59:24 -0400 (EDT)
> From: Robert Watson <rwatson at freebsd.org>
> Sender: owner-freebsd-current at freebsd.org
> On Fri, 25 Jun 2004, Daniel Lang wrote:
> > liamfoy at sepulcrum.org wrote on Fri, Jun 25, 2004 at 11:24:01AM +0200:
> > [..]
> > > > > I just committed the work done at Yahoo! to implement SACK in our tcp
> > > > > stack.  Please report any bugs or problems and we'll work on getting
> > > > > them addressed.
> > [..]
> > > What is SACK anyway?
> > [..]
> > 
> > "Selective Acknowledgement", it allows a host/router to explicitly
> > acknowledge TCP segments and retransmit them, such that if a segment
> > gets lost, it can be retransmitted from the last hop instead of the
> > connection endpoint, which would result in a much larger delay.
> > Especially if you have wireless links, SACK can be a huge improvement. 
> > 
> > Please correct/elaborate, I'm not sure if I got that entirely right,
> > except for the idea. ;-) 
> Mostly right, except that it's only the end-hosts in the TCP connection. 
> Technically, one can be a router, but I'm guessing that's not the common
> case.  Basically, the original TCP said "retransmit everything" when it
> realized a packet was dropped, and TCP SACK allows it to be more
> selective, which conserves bandwidth, which has the effect of reducing
> load, reducing latency, etc. 
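To make the difference concrete, here is a small illustrative sketch (not the FreeBSD implementation, and the segment numbers are made up): with plain cumulative ACKs the sender only learns where the first hole is and resends everything from that point on, while SACK lets the receiver report exactly which segments are missing.

```python
def retransmit_cumulative(sent, lost):
    """Without SACK the sender only knows the first unacknowledged
    segment; it resends that segment and everything after it."""
    first_hole = min(lost)
    return [seq for seq in sent if seq >= first_hole]

def retransmit_sack(sent, lost):
    """With SACK the receiver reports which blocks arrived, so the
    sender resends only the segments that were actually lost."""
    return sorted(lost)

sent = list(range(1, 11))   # segments 1..10 in flight
lost = {3}                  # only segment 3 was dropped

print(retransmit_cumulative(sent, lost))  # -> [3, 4, 5, 6, 7, 8, 9, 10]
print(retransmit_sack(sent, lost))        # -> [3]
```

With one drop out of ten segments in flight, the cumulative scheme resends eight segments where SACK resends one; the gap widens as more segments are outstanding.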

The claim about wireless networks is questionable, too. SACK is needed
for good throughput on fast links with relatively large latencies. It is
valuable any time a large number of packets are launched before an ACK
is received.
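"A large number of packets launched before an ACK is received" is just the bandwidth-delay product. A back-of-the-envelope sketch (the link speed, RTT, and MSS below are illustrative figures, not measurements):

```python
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bytes outstanding on the path = bandwidth * round-trip time."""
    return bandwidth_bps * rtt_seconds / 8

# A 1 Gbps coast-to-coast path with an assumed ~70 ms RTT:
inflight = bdp_bytes(1e9, 0.070)
segments = inflight / 1460          # assuming a 1460-byte MSS
print(f"{inflight / 1e6:.2f} MB in flight, ~{segments:.0f} segments")
```

Thousands of segments can be unacknowledged at once on such a path, so knowing *which* one to resend matters enormously.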

Satellite links are a valid (if fairly uncommon) case where SACK is
really valuable. More important in my case is when you have very high
bandwidth streams over a geographically long distance, e.g. Los Angeles
to Boston or across any ocean. Even if the line is running clean, a
single dropped packet can really kill performance by dropping the stream
into slow start. Even with fast recovery options, just the cost of
retransmitting the large number of frames in transit can be a big hit.
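The Mathis et al. approximation for steady-state TCP throughput, rate ≈ MSS / (RTT · √loss), shows why the same tiny loss rate hurts a long-haul path far more than a LAN. The loss rate and RTTs below are illustrative assumptions:

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_seconds, loss_rate):
    """Mathis approximation: achievable rate ~ MSS / (RTT * sqrt(p))."""
    return (mss_bytes * 8) / (rtt_seconds * math.sqrt(loss_rate))

mss, loss = 1460, 1e-5            # assume 0.001% packet loss
for rtt in (0.001, 0.070):        # LAN RTT vs. cross-country RTT
    rate = mathis_throughput_bps(mss, rtt, loss)
    print(f"RTT {rtt * 1000:.0f} ms -> {rate / 1e6:.0f} Mbps")
    # -> roughly 3694 Mbps on the LAN, 53 Mbps coast-to-coast
```

Same loss rate, same segment size, yet the long path is capped about 70x lower, which is why recovering from a drop without collapsing the window matters so much on these streams.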

Single streams of > 1 Gbps are not common on the commercial network, but
in the research community, where a physics experiment at Stanford can
generate terabytes of data that has to be sent to FermiLab and CERN,
this is a very big issue. The lack of SACK in the FreeBSD stack has
caused many to switch to Linux, and FreeBSD is no longer the "standard"
for high performance networking.
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: oberman at es.net			Phone: +1 510 486-8634
