Blocking undesirable domains using BIND
mkhitrov at gmail.com
Sun Dec 30 15:44:14 PST 2007
On Dec 30, 2007 12:31 PM, Darren Spruell <phatbuckett at gmail.com> wrote:
> On Dec 30, 2007 9:52 AM, Maxim Khitrov <mkhitrov at gmail.com> wrote:
> > > I was trying to do something similar. I didn't research too hard, but figured the only way to use BIND would be to make my server authoritative for all those domains, which meant a huge config file and potential overhead, as well as
> > > possibly breaking access to desirable servers in those domains.
> > >
> > > So hosts seemed easier, but apparently BIND never looks at hosts. I did find that Squid (which I already had installed and in limited use) has its own DNS resolver, and it does look at hosts first before going to the nameserver.
> > >
> > > Then I found this site: http://everythingisnt.com/hosts.html and put their list in hosts, and now client PCs get a squid error in place of ad junk. Works ok for me ;)
> > Well... you were right about overhead. In the last two days I wrote a
> > script that would fetch a list of domains from several different
> > sites, and output a valid BIND configuration file that could be
> > included in the main config. I just ran the second test and the
> > results are extremely poor. With only 27,885 blocked domains the
> > server is now consuming 208 MB of RAM. The first time I tried
> > reloading the full list of domains (91,137 of them) and that nearly
> > crashed my server. I had to kill BIND, remove two of the largest
> > sources, and try a second time.
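The generator script described above wasn't posted, but it might look roughly like this minimal sketch (the blacklist URL, file paths, and helper names are all illustrative assumptions, not the poster's actual code):

```python
# Sketch of a blacklist-to-BIND-config generator (hypothetical; the
# poster's actual sources and paths were not shown).
import urllib.request

def fetch_domains(url):
    """Fetch a newline-separated domain blacklist, skipping comments/blanks."""
    with urllib.request.urlopen(url) as resp:
        lines = resp.read().decode("utf-8", "replace").splitlines()
    return [ln.strip() for ln in lines if ln.strip() and not ln.startswith("#")]

def make_bind_config(domains, zone_file="/etc/namedb/block.db"):
    """Emit one master-zone stanza per blocked domain; every stanza
    points at the same shared zone file to avoid per-domain files."""
    stanzas = ['zone "%s" { type master; file "%s"; };' % (d, zone_file)
               for d in sorted(set(domains))]
    return "\n".join(stanzas) + "\n"

# Usage (not run here): write the result to a file and `include` it
# from named.conf, e.g.
#   open("/etc/namedb/blacklist.conf", "w").write(
#       make_bind_config(fetch_domains("http://example.com/list.txt")))
```

Each of the resulting stanzas can then be pulled into the main config with a single `include` directive.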
> Nearly 100,000 zones on that server is a fairly impressive amount.
> Give it credit for what you're trying to do. :) Nonetheless, crashing
> is unacceptable.
> > Honestly, I can't figure out what BIND could possibly be using so much
> > memory for. It's taking up about 7 KB for each zone. The zone file
> > itself is not even 1 KB, and given that all the records are pointing
> > to the exact same thing it seems to be needlessly wasting memory. In
> > addition to that, if I comment out the blacklist config file and run
> > rndc reload, it only frees up about 16 MB. So it doesn't even release
> > memory when it is no longer needed.
> My experience, albeit with a smaller number of zones, is a bit different.
> First you need to account for main program memory and memory utilized
> by the nameserver's cache, if any. You may also be running your own
> authoritative zones which will add memory utilization outside of that.
> You can't account for all of the utilized memory in your additional
> blocking zones.
> Without my blocking zones loaded, I have 6 native zones on my
> nameserver and the resident memory size of named is 2.2 MB. After a
> fresh server startup, I expect minimum memory for cached records, so
> that comes out to be about 375 KB/zone, unscientifically. If I restart
> named (kill and start server fresh) with my blocking zones in the
> config, I come out with 17239 zones and a resident process memory size
> of 59 MB. (Unscientifically again,) this breaks down to about 3.5 KB per zone.
> In my configuration, each of these blocking zones points to a simple
> zone file 244B in size on disk:
> $TTL 86400
> @       IN SOA  ns.local. admin.local. (
>                 1       ; serial
>                 1h      ; refresh
>                 30m     ; retry
>                 7d      ; expiration
>                 1h )    ; minimum
>         IN NS   ns.local.
>         IN A    127.0.0.1
> *       IN A    127.0.0.1
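Each blocking zone is then declared in named.conf with a short stanza pointing at that shared file; a sketch (the domain and path here are illustrative, not from the posted config):

```
zone "ads.example.net" {
        type master;
        file "/etc/namedb/block.db";   // the shared zone file shown above
};
```

Because every stanza references the same file on disk, the wildcard A record answers 127.0.0.1 for any host in any blocked domain.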
> So all told, I seem to notice somewhat slimmer utilization than you
> (roughly half the memory utilization per zone, and though I have 61%
> as many zones loaded, my named takes only 28% of the memory yours does).
> > It looks like my plan of using BIND for filtering purposes will not
> > work. Given how poorly it performed on this test I'm actually inclined
> > to try another name server to see if something else would be more
> > memory-efficient.
> You will almost certainly find most of the popular alternatives to be
> much more resource efficient. djbdns in particular would be my next
> choice if memory efficiency and stability are concerns.
I was using the exact same zone file as you, one real master zone, and
the three slave root zones from the default config. I'm not sure why
BIND reacted as it did to the blacklist config, but I think I've now
found a perfect solution. This morning I played around with MaraDNS,
which is actually a pretty good DNS server. One problem with it is that
it doesn't allow includes in the main config, so everything has to live
in a single file, which is a bit messy. It did a lot better with memory
usage, taking up about 70 MB for 27 or 28 thousand domains, but that's
still not great.
I then installed dnsmasq, which is able to read domain info from the
hosts file. Just for the fun of it, I loaded the domains from all the
sources I've gathered into a separate hosts file - a total of 155,150
entries. Dnsmasq loaded that file and has been running for several
minutes now, currently taking up a total of 17 MB! Granted, it doesn't
need to deal with whole zone files, but this still goes to show the
level of efficiency that can be achieved even with this many entries.
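For reference, the hosts-file approach can be wired up with dnsmasq's `addn-hosts` option; a sketch with illustrative paths and domains (not the actual 155,150-entry file):

```
# /etc/dnsmasq.conf (fragment)
addn-hosts=/etc/blocked-hosts    # extra hosts(5)-format file for the blacklist

# /etc/blocked-hosts (one line per blocked hostname)
127.0.0.1   ads.example.net
127.0.0.1   tracker.example.org
```

Note that plain hosts entries match exact hostnames only; they don't wildcard an entire domain the way the BIND zone file's `*` record does.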
Dnsmasq also provides a DHCP server, which was the next item on my
to-configure list. Unfortunately, dnsmasq can only forward DNS requests
to an upstream server (like the one provided by your ISP); it can't do
recursive resolution on its own. So here's what I'll do: BIND will stay,
but only to serve the root zones and my local zone file. I'll keep its
caching to a minimum and have it listen only on 127.0.0.1:54. Dnsmasq
will then listen on *:53 and use BIND as its upstream server. It will be
responsible for filtering domains and caching query results, while BIND
serves the real zone files and resolves any queries received from
dnsmasq. The memory usage for all DNS-related processes should be no
more than 30 MB, and I'll have my filtering solution in place :)
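A sketch of what that split setup might look like (both fragments are assumptions based on the description above, not posted config; addresses and option values are illustrative):

```
# named.conf (fragment): BIND answers only dnsmasq on localhost port 54
options {
        listen-on port 54 { 127.0.0.1; };
        recursion yes;       // still resolves queries forwarded by dnsmasq
};

# /etc/dnsmasq.conf (fragment): dnsmasq fronts the network on port 53
server=127.0.0.1#54           # forward unfiltered queries to local BIND
addn-hosts=/etc/blocked-hosts # blacklisted hostnames answer 127.0.0.1
cache-size=10000              # let dnsmasq cache query results
```

With this arrangement dnsmasq answers blocked names directly from the hosts file and everything else is resolved through BIND.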
More information about the freebsd-questions mailing list