Backup advice

Jason Lixfeld jason+lists.freebsd-questions at lixfeld.ca
Thu May 24 07:10:52 UTC 2007


On 24-May-07, at 12:33 AM, Doug Hardie wrote:

>
> On May 23, 2007, at 19:03, Jason Lixfeld wrote:
>
>>
>> On 23-May-07, at 9:23 PM, Doug Hardie wrote:
>>
>>> The criterion for selecting a backup approach is not the backup
>>> methodology but the restore methodology.
>>
>> Excellent point.
>>
>> Perhaps I'm asking the wrong question, so let me try it this way  
>> instead:
>>
>> I'm looking for a backup solution that I can rely on in the event
>> of a catastrophic server failure.  Ideally this backup would
>> look and act much like a clone of the production system.  In the
>> worst case, I'd re-format the server array, copy the clone back
>> to the server, set up the boot blocks, and that would be it.
>>
>> Ideally this clone should be verifiable, meaning I should be able
>> to verify its integrity so that it's not going to let me down if
>> I need it.
>>
>> I'm thinking external USB hard drive of at least equal size to the  
>> server array size as far as hardware goes, but I'm lost as far as  
>> software goes.
>
> What kind of data are you backing up?  If you are backing up the
> system and your data, then you have to be very careful about links.
> Some backup solutions will copy linked files as separate files.  When
> you restore, the link is gone, and an update to one of the linked
> files will no longer be seen through the other names.  The OS uses a
> lot of links.  If all you are backing up is data, it's probably not
> an issue.  I have used both dump and tar successfully.  I currently
> use tar, as I have many directories I don't want to back up.  Tar
> requires some care and feeding to handle links properly; it
> doesn't do it by default.  Dump does handle them properly by
> default.  Another option is rsync.  Its advantage is that it
> only copies the changes in each file, so it will run a lot faster
> than dump or tar, which copy everything each time.  You do have to
> be careful with /dev if you are copying the root partition.
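One quick way to see how the tar on a given system actually treats hard
links is a small round-trip test.  This is just a sketch with throwaway
paths under /tmp:

```shell
#!/bin/sh
# Hedged sketch: check whether tar preserves hard links across a
# create/extract round trip.  All paths here are scratch examples.
set -e
rm -rf /tmp/linktest
mkdir -p /tmp/linktest/src
echo "payload" > /tmp/linktest/src/a
ln /tmp/linktest/src/a /tmp/linktest/src/b   # hard link: same inode as a
tar -cf /tmp/linktest/t.tar -C /tmp/linktest src
mkdir -p /tmp/linktest/out
tar -xf /tmp/linktest/t.tar -C /tmp/linktest/out
# If links survived, both extracted names share one inode (link count 2).
stat -c %h /tmp/linktest/out/src/a 2>/dev/null \
    || stat -f %l /tmp/linktest/out/src/a
```

If the final stat prints 2, that tar round-trips hard links; if it
prints 1, the two names came back as independent copies.  (The stat
invocation tries the GNU flag first, then the BSD one.)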

I'm backing up my entire system.  To me, it's easier this way in the
long run.  In the event of a failure, you just copy everything from
the backup back to the system without having to worry about
re-installing applications, library dependencies, configuration files,
nuances, idiosyncrasies, etc.  I've been doing this for years with my
OS X laptop, and it's the quickest way to get back on your feet in a
worst-case scenario.

Dump seems to be the best at doing what I'm looking to do, better
than tar or rsync.  I think dd would beat out dump, but dd is far
less of a backup tool than dump is, so I think dump is still the
winner.  The caveat of a full dump taking the most time and resources
can be reasonably mitigated by doing a full dump at some fixed
interval with incrementals in between.  That seems a fair compromise,
given how cheap hard drive space is these days.
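A rotation like that is easy to script around dump's numbered levels.
The sketch below only echoes the command it would run, since the device
name (/dev/ad0s1a) and the backup path are assumptions:

```shell
#!/bin/sh
# Sketch of a weekly rotation: level-0 (full) dump on Sunday, incremental
# levels 1-6 the rest of the week.  Device and paths are hypothetical,
# so the command is echoed rather than executed.
LEVEL=$(date +%w)            # day of week: 0 (Sunday) .. 6 (Saturday)
FS=/dev/ad0s1a               # assumed root filesystem device
OUT=/backup/root.level$LEVEL
# -u records the dump in /etc/dumpdates so later levels know what changed;
# -a sizes the output automatically; -n notifies operators on error;
# -L dumps a snapshot of a live filesystem (FreeBSD).
echo dump -${LEVEL}uanL -f "$OUT" "$FS"
```

Run from cron, each day's level-N dump then captures only what changed
since the most recent dump of a lower level.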

2x the system space would be enough for a full dump plus plenty of
incrementals, I'd say.  No?  Is there a rule of thumb?  3x?  4x?

As far as restoring goes, let's assume my machine blew up one full
backup and 15 incrementals ago and I want to restore the entire system
in its entirety from my backup.  How is that done?  Do I point restore
at the last incremental and it figures it out for itself, or is it a
manual process where I have to work out which backups make up the
complete system?
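(For reference, restore(8) documents the manual route: rebuild the
target filesystem, apply the level-0 dump with restore -r, then apply
each incremental in increasing level order.  A sketch with hypothetical
file names, echoing the commands rather than running them:)

```shell
#!/bin/sh
# Sketch of a full-system restore order: level 0 first, then each
# incremental from lowest to highest level.  The target directory and
# dump file names are hypothetical; commands are echoed, not executed.
set -e
TARGET=/tmp/restore-target    # stand-in for the freshly newfs'ed, mounted fs
mkdir -p "$TARGET"
cd "$TARGET"
for f in /backup/root.level0 /backup/root.level1 /backup/root.level2; do
    echo restore -rf "$f"     # -r rebuilds the filesystem from the dump
done
```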

> One backup disk is not all that great a safety approach.  You will
> never know whether that drive has failed until you try to use it.
> By then it's too late.  Failures do not require that the drive
> hardware has failed: any interruption in the copy can cause an issue
> that may not be detected during the backup.  Sectors generally don't
> just go bad sitting on the shelf, but it does happen.  That was a
> significant problem with tapes.  Generally 25% of the tapes I used
> to get back from off-site storage after a month were no longer
> readable.

There has to be some way for the OS to know whether a drive is bad, or
to verify the state of the data that was just copied from one location
to another.  Is there no method of doing error checking?  The laptop
backup programs I've been using for years show me information at the
end of each run: files copied, speed, time, errors, etc.
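For what it's worth, one portable way to sketch that kind of post-copy
verification is a checksum manifest compared between source and backup.
The paths below are throwaway examples, the cp stands in for the real
backup step, and cksum is the POSIX tool:

```shell
#!/bin/sh
# Sketch: verify a copied tree against its source with checksum manifests.
# Scratch paths only; cp stands in for whatever the real backup copy is.
set -e
rm -rf /tmp/verify
mkdir -p /tmp/verify/src/etc /tmp/verify/dst
echo "important data" > /tmp/verify/src/file1
echo "config" > /tmp/verify/src/etc/rc.conf
cp -R /tmp/verify/src/. /tmp/verify/dst/
# Build a sorted checksum manifest on each side, then compare them.
( cd /tmp/verify/src && find . -type f -exec cksum {} \; | sort ) \
    > /tmp/verify/src.sums
( cd /tmp/verify/dst && find . -type f -exec cksum {} \; | sort ) \
    > /tmp/verify/dst.sums
cmp /tmp/verify/src.sums /tmp/verify/dst.sums && echo "backup verified"
```

Any bit difference between the trees shows up as a cmp failure, which
gives exactly the end-of-run success/error signal described above.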

If a UNIX backup process is as unreliable as you're making it out to  
be, then I could buy 10 drives and still potentially have each one  
fail and be screwed if I were to need to rely on it at some point.

I'd feel more comfortable backing up off a RAID1 to a single backup
drive that provided some sort of error
protection/correction/notification than backing up off a RAID1 to 100
backup drives that didn't give me any indication as to the success of
the copy.


