Crash when copying large files

Chuck Swiger cswiger at mac.com
Mon Sep 12 23:42:51 UTC 2011


Hi--

On Sep 12, 2011, at 2:14 PM, Toomas Aas wrote:
> I've mounted the new FS under /mnt and use tar to transfer the files:
> 
> cd /mnt
> tar -c -v -f - -C /docroot . | tar xf -

You probably wanted the -p flag on the extract side.
The manpage recommends one of the following constructs:

     To move file hierarchies, invoke tar as
           tar -cf - -C srcdir . | tar -xpf - -C destdir
     or more traditionally
           cd srcdir ; tar -cf - . | (cd destdir ; tar -xpf -)
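
Applied to your paths, that would be something like:

           tar -cf - -C /docroot . | tar -xpf - -C /mnt

(runnable from any directory, since both sides take -C).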

However, that isn't going to resolve the system panicking.
Certainly, that's not reasonable behavior...  :-)

> It seems that these large files cause a problem. Sometimes when the process reaches one of these files, the machine reboots. It doesn't create a crash dump in /var/crash, which may be because the system has less swap (2 GB) than RAM (8 GB). Fortunately the machine comes back up OK, except that the target FS (/mnt) is corrupt and needs to be fsck'd. I've tried to re-run the process three times now, and each time the machine crashed when it reached one large file or another. Any ideas what I should do to avoid the crash?

Right, a machine with 8GB of RAM isn't going to be able to dump to a 2GB swap area.  (I seem to recall some folks working on compressed crash dumps, but I don't know what state that work is in.)  However, you can set hw.physmem in loader.conf to limit the RAM in use to 2GB, so that you can generate a crash dump if you want to debug this further.
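
For example, a minimal /boot/loader.conf entry might look like this (2GB picked here only to match your swap size; remove it once you're done debugging):

     # Limit usable RAM so a full memory dump fits in the 2GB dump area
     hw.physmem="2G"

You'd also want a dump device configured (e.g. dumpdev="AUTO" in /etc/rc.conf) so savecore(8) can recover the dump after the reboot.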

How big are your multi-GB files, anyway?

If you want a workaround to avoid the crash, consider using either rsync or dump/restore to copy the filesystem, rather than using tar.
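
For example, assuming /docroot is a UFS filesystem of its own (dump(8) works on whole filesystems, not subtrees) and the new filesystem is still mounted at /mnt:

     # level-0 dump via a snapshot of the live filesystem, restored into /mnt
     dump -0 -L -a -f - /docroot | (cd /mnt && restore -r -f -)

or, with rsync from ports (net/rsync):

     # -a preserves permissions/times/ownership, -H preserves hard links
     rsync -aH /docroot/ /mnt/

If one of those copies the same files without a panic, that also helps narrow down whether tar's I/O pattern is what's triggering the bug.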

Regards,
-- 
-Chuck


