Small Redundant web/mail setup

Ted Mittelstaedt tedm at toybox.placo.com
Tue Oct 24 06:11:17 UTC 2006


----- Original Message ----- 
From: "Damian Wiest" <dwiest at vailsys.com>
To: <freebsd-questions at freebsd.org>
Sent: Monday, October 23, 2006 1:00 PM
Subject: Re: Small Redundant web/mail setup


> On Wed, Oct 18, 2006 at 11:57:04PM -0700, Ted Mittelstaedt wrote:
> >
> > ----- Original Message ----- 
> > From: "Ian Lord" <mailing-lists at msdi.ca>
> > To: <freebsd-questions at freebsd.org>
> > Sent: Wednesday, October 18, 2006 5:34 AM
> > Subject: Small Redundant web/mail setup
> >
> >
> > > Hi,
> > >
> > > I need to setup a high-availability setup for mail/web setup
> > >
> > > I was thinking about the following setup:
> > >
> > > 4 servers total:
> > >
> >
> > overkill, just asking for trouble.
> >
> > > Data Servers:
> > >          1 server holding all the websites' data and mail messages. It
> > > would serve these files via nfs to the application servers.
> > >          It would also run mysql.
> > >
> > >          A second server, also sharing its content via nfs,
> > > replicating its data through rsync every ?? minutes. The mysql would
> > > run as a slave of the primary.
> > >
> > > Application Servers:
> > >          Both servers would be running apache, php, sendmail and
> > > postfix and would serve content from the shared nfs drive.
> > >
> > > 1- Is this a viable solution? By that I mean, is this how the big ISPs
> > > are set up?
> > >
> >
> > no
> >
> > The really big ISPs use proprietary commercial clustering solutions
> > that make multiple systems appear as one single system.  We are talking
> > hundreds of thousands to millions of users.  We are not talking 5000
> > users or fewer.
> >
> > You can easily serve 5K users on a single server.  You just need to
> > get good hardware.  In other words, costs start at $5000 and go up.
> >
> > A lot of people are under the misconception that they can get several
> > cheap $900 servers and assemble them into a redundant setup that is
> > highly reliable.
> >
> > The real secret is in getting expensive name-brand hardware that
> > doesn't go down.  If you can afford that, you're fine.  If you can't,
> > then you need to find a different table to play at.
> >
> > Ted
>
> Isn't part of the point in running a redundant configuration that you
> can buy cheap(er) hardware?

No.  The point of a redundant setup is to attain 100% uptime.

All hardware eventually dies; it is just a question of how likely that is
to happen soon.  Cheaper hardware has a much higher chance of dying
unexpectedly or having incompatibilities or other problems.  More expensive
hardware has a lower chance.

A $600 machine that does not have a good 6 months of burn-in time
on it has, in my experience, about a 30% chance of failing unexpectedly.
If you put two of them together the chance of both dying at the same time
is much lower, of course - but it is still higher than the chance of
a $5,000 machine dying after 24 hours of burn-in time.
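
To put rough numbers on that, here is a quick back-of-the-envelope sketch
in Python.  Only the 30% figure comes from the paragraph above; the 5%
figure for the $5,000 machine is an assumption made up for the example, and
treating the failures as independent over the same period ignores repair
time, so take it as an illustration only:

# Illustration of the reliability argument above.
# ASSUMPTIONS: failures are independent and the probabilities cover the
# same period; the 5% figure is invented for the example (only the 30%
# figure comes from the post).
p_cheap_fails = 0.30      # $600 box, little burn-in (figure from above)
p_expensive_fails = 0.05  # $5,000 box, 24h burn-in (assumed)

# A pair of cheap boxes is only dead if both fail in the same period.
p_both_cheap_fail = p_cheap_fails ** 2

print("both cheap boxes fail:   %.0f%%" % (p_both_cheap_fail * 100))   # 9%
print("one expensive box fails: %.0f%%" % (p_expensive_fails * 100))   # 5%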

And once the machine does die, it costs tech time to put things back
together.  Ultimately, the pursuit of clustering as a cost-effective way
of increasing reliability is doomed.

Clustering works great if what you're intending to do with it is increase
the power of the cluster beyond what is attainable with a single machine.
It also works great in life-and-health situations where you cannot afford
anything less than 99.999999999% uptime.

> A $600 machine should be powerful enough
> to handle that many users.  Just make sure you are using RAID 1+0
> filesystems, keeping replacement parts on hand, and performing regular
> backups.

Baloney.

> The real question to ask is what the provider's SLA is and
> how much an hour of downtime costs the provider.
>
> In my experience, the only things to die on servers have been fans,
> disks (really the motors), and the occasional power supply.  The only
> things a more expensive system may give you are additional power
> supplies, hot-swap drive bays and multiple CPUs.  Other than the system
> board and possibly the processors, the server's components come from the
> same sources as your commodity hardware.
>

It's irrelevant.  It may come as a surprise to you, but a Seagate ST11950N
purchased from someplace like Walmart or Costco is different from a
Seagate ST11950N that ships from Dell in a server, and this is true of most
other expensive computer components.  The component manufacturers build the
parts for the retail/desktop market from cheaper materials and to sloppier
tolerances than the parts for the server market.  For example, a builder
like Dell may spec a 20,000-hour MTBF sleeve-bearing case fan from Panasonic
for the desktop, and a 70,000-hour MTBF Panasonic Panaflo hydro wave fan for
the servers.
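
If you want to see what those MTBF figures work out to, here is a short
Python sketch using the standard exponential-lifetime approximation (the two
MTBF numbers are the ones from the fan example above; nothing else here is
vendor data):

import math

HOURS_PER_YEAR = 24 * 365  # a case fan runs continuously

def annual_failure_probability(mtbf_hours):
    # Standard approximation: P(failure within t hours) = 1 - exp(-t / MTBF)
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

for label, mtbf in [("20,000 h sleeve bearing fan", 20000),
                    ("70,000 h hydro wave fan", 70000)]:
    print("%s: about %.0f%% chance of failing in its first year"
          % (label, annual_failure_probability(mtbf) * 100))

That works out to roughly a 35% versus a 12% chance of the fan dying in its
first year of continuous use, which is the sort of difference you are
paying for.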

You really need to read up on hardware; there's tons of info on the
Internet.  It is possible to spec your own system and build a clone that is
as reliable as a name-brand server - I've done it.  But it won't cost $600.

> I think the setup described above is viable, though I would consider
> running the database (with master-slave replication) and application
> services on the same server assuming it can handle the load.  Also, you
> can probably get away with using something like rsync to push changes to
> your WWW servers.  I'm not sure about email, but you could NFS export
> your mail directories from a central server to the two application
> servers.  Just be aware of NFS' failure modes.
>
> So, I'd go with two user-facing systems and an administrative
> system that receives email and possibly hosts your code repository.
> If you can afford it, get systems with redundant power supplies and
> hot-swap drive bays.

That's not a $600 system.
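
(For what it's worth, the "rsync to push changes to your WWW servers" idea
mentioned above can be as simple as the sketch below.  The host names and
the document root path are made up for the example; this is just Python
driving plain rsync over ssh, nothing more.)

# Push a local master copy of the docroot out to each web server.
# Host names and paths are examples only.
import subprocess

DOCROOT = "/usr/local/www/data/"
WEB_SERVERS = ["www1.example.com", "www2.example.com"]

for host in WEB_SERVERS:
    # -a preserves permissions/times, -z compresses over the wire,
    # --delete removes files that were removed from the master copy.
    subprocess.run(["rsync", "-az", "--delete", "-e", "ssh",
                    DOCROOT, host + ":" + DOCROOT],
                   check=True)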

> Depending on your userbase, you may want to
> consider a robotic tape library so you don't have to manually change
> tapes.  I've heard some talk of people using raw disks for backups, but
> I don't have any experience with that type of setup.
>

Nowadays the cost per megabyte for backup to hard disk is lower than for
tape.

Ted


