curl question - not exactly on-topic

Dan Nelson dnelson at
Wed Feb 10 18:03:31 UTC 2010

In the last episode (Feb 10), Kurt Buff said:
> On Tue, Feb 9, 2010 at 21:05, Dan Nelson <dnelson at> wrote:
> > In the last episode (Feb 09), Kurt Buff said:
> >> Actually, it's not merely a curl question, it's a "curl and squid"
> >> question.
> >>
> >> I'm trying to determine the cause of a major slowdown in web browsing on
> >> our network, so I've put curl on the squid box, and am using the following
> >> incantations to see if I can determine the cause of the slowdown:
> >>
> >>   curl -s -w "%{time_total}\n" "%{time_namelookup}\n" -o /dev/null
> >>
> >> and
> >>
> >>   curl -s -w "%{time_total}\n" "%{time_namelookup}\n" -o /dev/null -x
> >>
> >> The problem arises with the second version, which uses the proxy. The
> >> first incantation just returns the times, which is exactly what I want.
> >>
> >> However, when I use the -x parameter, to use the proxy, I get html
> >> returned as well as the times, which is a pain to separate out.
> >
> > Your problem is what's after -w.  You want one argument:
> > "%{time_total}\n%{time_namelookup}\n", not two.   With your original
> > command, "%{time_namelookup}\n" is treated as another URL to fetch. 
> > With no proxy option, curl immediately realizes it's not a URL and
> > skips to the next argument on the command line.
> > With a proxy, curl has to send each url to the proxy for processing. 
> > The proxy probably returns a "400 Bad Request" error on the first
> > (invalid) url, which is redirected to /dev/null.   The next url doesn't
> > have another -o so it falls back to printing to stdout.
> >
> > Adding -v to the curl commandline will help you diagnose problems like
> > this.
> Thanks for that, though it's unfortunate.
> I would really like a better understanding of the times, to help further
> diagnose the problem, and 'man curl' says that multiple invocations of
> '-w' will result in the last one winning, which I've verified.
> Do you have any suggestions for a way to get the timing of these
> operations without resorting to tcpdump?

Does -w "%{time_total}\n%{time_namelookup}\n" not do what you want?  There
are a bunch of other time_* variables you could add, too.
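For example, something along these lines; the URL is a placeholder, and the
exact set of %{time_*} variables available depends on your curl version
(check 'man curl' under --write-out):

```shell
# All timers in a single -w format string (one argument, not several,
# since a later -w replaces an earlier one).  Values are in seconds,
# each measured from the start of the operation, so they are cumulative.
curl -s -o /dev/null \
  -w 'namelookup:    %{time_namelookup}\nconnect:       %{time_connect}\npretransfer:   %{time_pretransfer}\nstarttransfer: %{time_starttransfer}\ntotal:         %{time_total}\n' \
  http://example.com/
```

Large gaps between successive timers point at the phase that is slow: a big
jump from namelookup to connect means slow TCP setup, and a big jump from
pretransfer to starttransfer means the server (or proxy) sat on the request.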

Also, there's nothing wrong with tcpdump (or Wireshark).  If your traffic
passes through multiple proxies or content-analyzing firewalls, you can run
multiple simultaneous tcpdumps, one on each interface.  Then you can run
your curl command, and compare the traces side-by-side and see if any
servers are taking longer than expected to forward the data.  If you have a
managed switch, you might even be able to configure a "monitor" (mirror)
port that receives a copy of the traffic on the other ports, so a single
tcpdump can see the same packet multiple times as it passes from server to
server.
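
A rough sketch of the two-interface capture; the interface names (em0/em1),
the proxy port (3128, squid's default), and the pcap filenames are all
placeholders for whatever applies on your squid box:

```shell
# Capture the same request on both sides of the proxy at once.
tcpdump -n -i em0 -w inside.pcap  tcp port 3128 &
tcpdump -n -i em1 -w outside.pcap tcp port 80 &

# ...run the curl test through the proxy here...

# Stop both captures; then compare the timestamps in inside.pcap and
# outside.pcap side by side to see where the delay is introduced.
kill %1 %2
```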

	Dan Nelson
	dnelson at

More information about the freebsd-questions mailing list