Blocking undesirable domains using BIND

Darren Spruell phatbuckett at
Sun Dec 30 09:31:37 PST 2007

On Dec 30, 2007 9:52 AM, Maxim Khitrov <mkhitrov at> wrote:
> > I was trying to do something similar.  I didn't research too hard, but figured the only way to use Bind would be to make my server authoritative for all those domains, which meant a huge config file and potential overhead, as well as
> > possibly breaking access to desirable servers in the domains.
> >
> > So hosts seemed easier, but apparently Bind never looks at hosts.  I did find that Squid (which I already had installed and in limited use) has its own DNS resolver, and it does look at hosts first before going to the nameserver.
> >
> > Then I found this site: and put their list in hosts, and now client PCs get a squid error in place of ad junk.  Works ok for me ;)
> Well... you were right about overhead. In the last two days I wrote a
> script that would fetch a list of domains from several different
> sites, and output a valid BIND configuration file that could be
> included in the main config. I just ran the second test and the
> results are extremely poor. With only 27,885 blocked domains the
> server is now consuming 208 MB of ram. The first time I tried
> reloading the full list of domains (91,137 of them) and that nearly
> crashed my server. Had to kill bind, remove two of the largest
> sources, and try a second time.
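For anyone trying the same approach: turning a domain list into an includable BIND config is only a few lines of shell. A minimal sketch (the function names and zone-file path are mine, not from the thread; adjust the path to your layout):

```shell
#!/bin/sh
# Emit a named.conf master-zone clause for one domain.  Every blocked
# domain points at the same tiny zone file, so only the path varies.
ZONEFILE="master/blockzone.db"   # assumed path, relative to named's directory

gen_zone() {
    printf 'zone "%s" { type master; file "%s"; };\n' "$1" "$ZONEFILE"
}

# Read domains (one per line) from stdin and print an includable config:
#   gen_blocklist < domains.txt > blocked-zones.conf
gen_blocklist() {
    while IFS= read -r domain; do
        case "$domain" in ""|\#*) continue ;; esac   # skip blanks/comments
        gen_zone "$domain"
    done
}
```

The resulting file can then be pulled into the main config with an include statement, which sounds like what was done here.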

Nearly 100,000 zones on that server is a fairly impressive amount.
Give it credit for what you're trying to do. :) Nonetheless, crashing
is unacceptable.

> Honestly, I can't figure out what BIND could possibly be using so much
> memory for. It's taking up about 7 KB for each zone. The zone file
> itself is not even 1 KB, and given that all the records are pointing
> to the exact same thing it seems to be needlessly wasting memory. In
> addition to that, if I comment out the blacklist config file and run
> rndc reload, it only frees up about 16 MB. So it doesn't even release
> memory when it is no longer needed.

My experience, albeit with a smaller number of zones, is a bit different.

First, you need to account for the main program's memory and any memory
used by the nameserver's cache. You may also be running your own
authoritative zones, which add memory utilization on top of that. So
you can't attribute all of the utilized memory to your additional
blocking zones.

Without my blocking zones loaded, I have 6 native zones on my
nameserver and the resident memory size of named is 2.2 MB. After a
fresh server startup, I expect minimum memory for cached records, so
that comes out to be about 375 KB/zone, unscientifically. If I restart
named (kill and start server fresh) with my blocking zones in the
config, I come out with 17239 zones and a resident process memory size
of 59 MB. (Unscientifically again,) this breaks down to about 3.5
KB/zone.

In my configuration, each of these blocking zones points to a simple
zone file 244B in size on disk:

$TTL 86400
@               IN      SOA     ns.local. admin.local. (
                                1       ; serial
                                1h      ; refresh
                                30m     ; retry
                                7d      ; expiration
                                1h )    ; minimum

                IN      NS      ns.local.

                IN      A       127.0.0.1       ; sinkhole address (example)
*               IN      A       127.0.0.1       ; wildcard catches every hostname

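For reference, each blocking zone is then declared in named.conf with a stanza along these lines (the domain and file path here are placeholders, not from my actual config):

```
zone "ads.example.com" {
        type master;
        file "master/blockzone.db";     // one shared file for all blocked domains
};
```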
So all told, I seem to notice somewhat slimmer utilization than you:
roughly half the memory utilization per zone, and though I have 61%
as many zones loaded, my named takes only 28% of the memory yours
does.

> It looks like my plan of using BIND for filtering purposes will not
> work. Given how poorly it performed on this test I'm actually inclined
> to try another name server to see if something else would be more
> memory-efficient.

You will almost certainly find most of the popular alternatives to be
much more resource efficient. djbdns in particular would be my next
choice if memory efficiency and stability are concerns.


More information about the freebsd-questions mailing list