curl question - not exactly on-topic

Kurt Buff kurt.buff at gmail.com
Wed Feb 10 18:36:44 UTC 2010


On Wed, Feb 10, 2010 at 10:03, Dan Nelson <dnelson at allantgroup.com> wrote:
> In the last episode (Feb 10), Kurt Buff said:
>> On Tue, Feb 9, 2010 at 21:05, Dan Nelson <dnelson at allantgroup.com> wrote:
>> > In the last episode (Feb 09), Kurt Buff said:
>> >> Actually, it's not merely a curl question, it's a "curl and squid"
>> >> question.
>> >>
>> >> I'm trying to determine the cause of a major slowdown in web browsing on
>> >> our network, so I've put curl on the squid box, and am using the following
>> >> incantations to see if I can determine the cause of the slowdown:
>> >>
>> >>   curl -s -w "%{time_total}\n" "%{time_namelookup}\n" -o /dev/null http://www.example.com
>> >>
>> >> and
>> >>
>> >>   curl -s -w "%{time_total}\n" "%{time_namelookup}\n" -o /dev/null -x 192.168.1.72 http://www.example.com
>> >>
>> >> The problem arises with the second version, which uses the proxy. The
>> >> first incantation just returns the times, which is exactly what I want.
>> >>
>> >> However, when I use the -x parameter, to use the proxy, I get html
>> >> returned as well as the times, which is a pain to separate out.
>> >
>> > Your problem is what's after -w.  You want one argument:
>> > "%{time_total}\n%{time_namelookup}\n", not two.  With your original
>> > command, "%{time_namelookup}\n" is treated as another URL to fetch.
>> > With no proxy option, curl immediately realizes it isn't a URL and
>> > skips to the next argument on the command line - http://www.example.com.
>> > With a proxy, curl has to send each URL to the proxy for processing.
>> > The proxy probably returns a "400 Bad Request" error on the first
>> > (invalid) URL, and that response is redirected to /dev/null.  The next
>> > URL doesn't have its own -o, so its output falls back to stdout.
>> >
>> > Adding -v to the curl command line will help you diagnose problems
>> > like this.
>>
>> Thanks for that, though it's unfortunate.
>>
>> I would really like a better understanding of the times, to help further
>> diagnose the problem, and 'man curl' says that multiple invocations of
>> '-w' will result in the last one winning, which I've verified.
>>
>> Do you have any suggestions for a way to get the timing of these
>> operations without resorting to tcpdump?
>
> Does -w "%{time_total}\n%{time_namelookup}\n" not do what you want?  There
> are a bunch of other time_* variables you could add, too.
>
> Also, there's nothing wrong with tcpdump (or wireshark).  If your traffic
> passes through multiple proxies or content-analyzing firewalls, you can run
> multiple simultaneous tcpdumps, one on each interface.  Then you can run
> your curl command and compare the traces side by side to see whether any
> server is taking longer than expected to forward the data.  If you have a
> managed switch, you might even be able to configure a "monitor" port that
> receives a copy of all the traffic the switch sees; then you can run just
> one tcpdump and see the same packet multiple times as it passes from
> server to server.

Sigh. A failure of imagination on my part. Putting multiple variables
inside a single quoted -w argument works exactly as I need.
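
For the record, the working incantation against the proxy is:

  curl -s -w "%{time_total}\n%{time_namelookup}\n" -o /dev/null \
       -x 192.168.1.72 http://www.example.com

With a single quoted -w argument there's only one URL on the command
line, so curl prints just the two times and nothing else.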

Nothing wrong with wireshark/tcpdump, but I'm not nearly as competent
with them as I'd like to be, and curl offers a pretty easy way to
break out the timing of the various parts of the conversation - in
particular, name resolution vs. everything else for any given
transaction.
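
In case it's useful to anyone else: the other time_* write-out
variables Dan mentions let you split a transaction up further. A
sketch along these lines shows DNS, TCP connect, time to first byte,
and the total:

  curl -s -o /dev/null \
       -w "namelookup: %{time_namelookup}\nconnect: %{time_connect}\nstarttransfer: %{time_starttransfer}\ntotal: %{time_total}\n" \
       http://www.example.com

Running the same command again with -x 192.168.1.72 and comparing the
two sets of numbers should show what the proxy path adds.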

I do indeed have monitor ports set up on my switches, and use them to
feed ntop, with an occasional tcpdump capture to figure out problems.
If it comes to that, I'll get those traces and work on the comparisons
as you suggest. However, given what I've seen so far, I suspect my
firewall is the culprit, and further testing with curl should confirm
or rule that out.
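
If I do end up taking traces, the plan would be roughly one capture
per interface, something like the following (em0/em1 and the file
names are placeholders for whatever the boxes actually use; squid
listens on 3128 by default):

  # inside interface: clients talking to squid
  tcpdump -i em0 -s 0 -w inside.pcap port 3128
  # outside interface: squid talking to the web servers
  tcpdump -i em1 -s 0 -w outside.pcap port 80

Then the two traces can be lined up side by side, as you describe.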

Thanks ever so much for your help. Greatly appreciated.

Kurt

