fetch: Non-recoverable resolver failure

Miroslav Lachman 000.fbsd at quip.cz
Tue Sep 28 20:59:10 UTC 2010

Jeremy Chadwick wrote:
> On Tue, Sep 28, 2010 at 08:12:00PM +0200, Miroslav Lachman wrote:
>> Hi,
>> we are using fetch command from cron to run PHP scripts periodically
>> and sometimes cron sends error e-mails like this:
>> fetch: https://hiden.example.com/cron/fiveminutes: Non-recoverable
>> resolver failure


>> Note: the target domains are hosted on the server itself, and named runs there too.
>> The system is FreeBSD 7.3-RELEASE-p2 i386 GENERIC
>> Can somebody help me to diagnose this random fetch+resolver issue?
> The error in question comes from the resolver library returning
> EAI_FAIL.  This return code can be returned to all sorts of applications
> (not just fetch), although how each app handles it may differ.  So,
> chances are you really do have something going on upstream from you (one
> of the nameservers you use might not be available at all times), and it
> probably clears very quickly (before you have a chance to
> manually/interactively investigate it).
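For what it's worth, fetch maps that EAI_FAIL into the "Non-recoverable resolver failure" message above. Until the cause is found, the cron entry could be wrapped so that a one-off transient failure is retried instead of mailed; a minimal sketch (the `retry` helper name is mine):

```shell
#!/bin/sh
# retry N CMD...: run CMD, retrying up to N times with a short pause
# between attempts, so a transient resolver failure does not turn
# into a cron error mail on the first miss.
retry() {
    tries=$1; shift
    n=1
    while ! "$@"; do
        [ "$n" -ge "$tries" ] && return 1
        n=$((n + 1))
        sleep 2
    done
    return 0
}

# crontab usage (sketch):
# retry 3 fetch -qo /dev/null https://hiden.example.com/cron/fiveminutes
```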

The strange thing is that I have only one nameserver listed in 
resolv.conf and it is the local one! (There were two 
"remote" nameservers, but I switched to the local one to rule out 
remote nameservers / network problems.)
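To catch the moments when the local named does not answer, I could also poll the system resolver independently of fetch and log timestamps of the failures; a sketch using getent(1), which goes through the same libc resolver (the helper name and log path are mine):

```shell
#!/bin/sh
# resolver_check NAME: one probe of the system resolver; prints a
# timestamped line and returns non-zero only when the lookup fails.
resolver_check() {
    if ! getent hosts "$1" > /dev/null 2>&1; then
        echo "$(date '+%Y-%m-%d %H:%M:%S') resolver failure for $1"
        return 1
    fi
    return 0
}

# run it once a second and collect only the failures (sketch):
# while :; do resolver_check hiden.example.com; sleep 1; done >> /var/log/dnscheck.log
```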

> You're probably going to have to set up a combination of scripts that do
> tcpdump logging, and ktrace -t+ -i (and probably -a) logging (ex. ktrace
> -t+ -i -a -f /var/log/ktrace.fetch.out fetch -qo ...) to find out what's
> going on behind the scenes.  The irregularity of the problem (re:
> "sometimes") warrants such.  I'd recommend using something other than
> as your resolver if you need to do tcpdump.

I will try it... there will be a lot of output, as there are many 
cronjobs and relatively high traffic on the webserver, but the fetch 
resolver failure occurs only a few times a day.
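Since most runs succeed, the amount of ktrace output could be kept down by discarding the dump whenever fetch exits cleanly, along the lines of Jeremy's suggestion; a sketch (the function name is mine, and KTRACE is made overridable so the wrapper can be exercised without ktrace):

```shell
#!/bin/sh
# trace_on_failure DUMPFILE CMD...: run CMD under ktrace, but keep the
# dump file only when CMD fails, so only the rare bad runs are logged.
KTRACE=${KTRACE:-"ktrace -t+ -i"}

trace_on_failure() {
    out=$1; shift
    if $KTRACE -f "$out" "$@"; then
        rm -f "$out"                     # success: discard the trace
        return 0
    fi
    echo "command failed, trace kept in $out" >&2
    return 1
}

# cron usage (sketch):
# trace_on_failure /var/log/ktrace.fetch.$$ fetch -qo /dev/null https://hiden.example.com/cron/fiveminutes
```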

> Providing contents of your /etc/resolv.conf, as well as details about
> your network configuration on the machine (specifically if any
> firewall stacks (pf or ipfw) are in place) would help too.  Some folks
> might want netstat -m output as well.

There is nothing special in the network: the machine is a Sun Fire X2100 
M2 with its bge1 NIC connected to a Cisco Linksys switch (100Mbps port) whose 
uplink (1Gbps port) goes to a Cisco router with dual 10Gbps 
connectivity. No firewalls in the path. There are more than 10 other 
servers in the rack, and we see no DNS-related problems or error 
messages in the logs of their services / daemons.

# cat /etc/resolv.conf

# netstat -m
279/861/1140 mbufs in use (current/cache/total)
257/553/810/25600 mbuf clusters in use (current/cache/total/max)
257/313 mbuf+clusters out of packet secondary zone in use (current/cache)
5/306/311/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
603K/2545K/3149K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
13/470/6656 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
3351782 requests for I/O initiated by sendfile
0 calls to protocol drain routines

(real IPs were replaced)

# ifconfig bge1
bge1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
         ether 00:1e:68:2f:71:ab
         inet netmask 0xffffff80 broadcast
         inet netmask 0xffffffff broadcast
         inet netmask 0xffffffff broadcast
         media: Ethernet autoselect (100baseTX <full-duplex>)
         status: active

NIC is:

bge1 at pci0:6:4:1: class=0x020000 card=0x534c108e chip=0x167814e4 rev=0xa3 hdr=0x00
     vendor     = 'Broadcom Corporation'
     device     = 'BCM5715C 10/100/1000 PCIe Ethernet Controller'
     class      = network
     subclass   = ethernet

There is PF with some basic rules, mostly blocking incoming packets, 
allowing all outgoing, and scrubbing:

scrub in on bge1 all fragment reassemble
scrub out on bge1 all no-df random-id min-ttl 24 max-mss 1492 fragment 

pass out on bge1 inet proto udp all keep state
pass out on bge1 inet proto tcp from to any flags S/SA modulate 
pass out on bge1 inet proto tcp from to any flags S/SA modulate 
pass out on bge1 inet proto tcp from to any flags S/SA modulate 

modified PF options:

set timeout { frag 15, interval 5 }
set limit { frags 2500, states 5000 }
set optimization aggressive
set block-policy drop
set loginterface bge1
# Let loopback and internal interface traffic flow without restrictions
set skip on lo0

Thank you for your suggestions

Miroslav Lachman

More information about the freebsd-stable mailing list