dump directly to remote HDD over ssh <-- take it up a notch

Dave [Hawk-Systems] dave at hawk-systems.com
Fri Aug 1 05:37:21 PDT 2003


>> to ensure that we don't get too many servers trying to back up to the big
>> archive server at once, we want to run a script from the controller server...
>>
>> #!/bin/sh
>> ssh server1 "dump -3uf - /usr | ssh big_backup_server dd
>> of=/backups/server1.usr.dump"
> In this case you allow logon without a password as _root_ to your server[12]
> AND allow logon without a password as _user_who_can_read_dumps_ to your
> big_backup_server. That's a lot of security risk, isn't it?

Not if the process is run as a user "backup" which exists on all systems, and
the public key for that user is distributed from the controller to the remote
machines.  That way only the backup user can connect to the servers and run
these tasks from the controller server, no?
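
Roughly, each dumped server would carry an entry like this in
~backup/.ssh/authorized_keys (a sketch, assuming OpenSSH; the hostname and key
material are placeholders, and on FreeBSD the backup account would also need to
be in group operator so dump can read the raw disk devices):

# only accept this key from the controller, and never give it a shell session
from="controller.example.com",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAA...base64key... backup@controller

To go further, a command="..." option on the same line can pin the key to a
single forced command, so even a stolen key can only kick off the dump.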

Dave

>> ssh server1 "dump -3uf - / | ssh big_backup_server dd
>> of=/backups/server1.root.dump"
>> ssh server2 "dump -3uf - /usr | ssh big_backup_server dd
>> of=/backups/server2.usr.dump"
>> ssh server2 "dump -3uf - / | ssh big_backup_server dd
>> of=/backups/server2.root.dump"
>>
>> running it in this way should ensure that each dump completes before the next
>> one is started, keeping them queued rather than overlapping each other and
>> effectively DOSing the box with all that data, correct?
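
Rolled into one script, that looks roughly like the sketch below (same
hostnames and paths as above; the do_dump helper and the failure message are
additions for illustration, and note that the exit status the controller sees
is dd's, the last command in the remote pipeline, not dump's):

#!/bin/sh
# Run one dump at a time; each ssh blocks until its remote pipeline finishes,
# so only a single stream hits big_backup_server at any moment.
do_dump() {
    host=$1; fs=$2; out=$3
    ssh $host "dump -3uf - $fs | ssh big_backup_server dd of=/backups/$out" ||
        echo "backup of $host:$fs reported failure" >&2
}

do_dump server1 /usr server1.usr.dump
do_dump server1 /    server1.root.dump
do_dump server2 /usr server2.usr.dump
do_dump server2 /    server2.root.dump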



