Crash when copying large files

Polytropon freebsd at
Mon Sep 12 23:33:29 UTC 2011

On Tue, 13 Sep 2011 00:14:45 +0300, Toomas Aas wrote:
> Hello!
> I'm trying to move a filesystem to a new larger RAID volume. The old  
> filesystem was using gjournal, and I have also created the new  
> filesystem with gjournal. The FS in question holds the DocumentRoot of  
> our web server, and in its depths, a couple of fairly large (several  
> gigabytes) files are lurking.
> I've mounted the new FS under /mnt and use tar to transfer the files:
> cd /mnt
> tar -c -v -f - -C /docroot . | tar xf -
> It seems that these large files cause a problem. Sometimes when the  
> process reaches one of these files, the machine reboots. It doesn't  
> create a crashdump in /var/crash, which may be because the system has  
> less swap (2 GB) than RAM (8 GB). Fortunately the machine comes back  
> up OK, except that the target FS (/mnt) is corrupt and needs to be  
> fsck'd. I've tried to re-run the process three times now, and caused  
> the machine to crash as it reaches one or another large file. Any  
> ideas what I should do to avoid the crash?
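
A quick aside on the missing crash dump: modern FreeBSD writes
minidumps by default, which contain only kernel pages and therefore
usually fit into a swap area much smaller than RAM, so your 2 GB of
swap may well be enough. As a minimal sketch (the device name
ada0s1b is just a placeholder for your actual swap partition), you
can check and enable the dump device like this:

	# sysctl debug.minidump        # 1 = minidumps enabled (the default)
	# dumpon /dev/ada0s1b          # point crash dumps at the swap device

and make the setting persistent in /etc/rc.conf:

	dumpdev="AUTO"                 # use the first configured swap device
	dumpdir="/var/crash"           # where savecore(8) places the dump

With that in place, the next crash should at least leave something
to analyze.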

The tar program operates on a per-file basis. In case that is what
causes the problem, leave this route and use the "old-fashioned"
tools dump and restore instead.

Make sure the source file system isn't mounted (dump reads the raw
device), then use:

	# cd /your/target/directory
	# dump -0 -f - /dev/<sourcedev> | restore -r -f -

where <sourcedev> refers to the device you've initially
mounted /docroot from.
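
As an end-to-end sketch (the device names da0s1d and da1s1d are
placeholders for your actual source and target providers), the whole
transfer could look like this:

	# umount /docroot                  # quiesce the source FS
	# mount /dev/da1s1d /mnt           # mount the new, larger FS
	# cd /mnt
	# dump -0 -f - /dev/da0s1d | restore -r -f -
	# rm restoresymtable               # restore -r leaves this bookkeeping file behind

If /docroot cannot be taken offline, dump's -L flag makes it operate
on a snapshot of the live file system instead.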

Magdeburg, Germany
Happy FreeBSD user since 4.0
Andra moi ennepe, Mousa, ...
