Sparc64 partitions compatible with PC?

Miles Nordin carton at Ivy.NET
Thu Jul 17 20:17:08 UTC 2008


>>>>> "dm" == Didrik Madheden <didrik at kth.se> writes:

    dm> dd if=/dev/ad0 of=bakfile

dd if=/dev/ad0 of=bakfile bs=512 count=$(( 2 * 1024 ))
dd if=bakfile of=/dev/ad0 bs=512 conv=notrunc

the 'notrunc' is unnecessary when the 'of' is a real hard disk, but it
makes the restore scheme also work when ad0 is a disk image file, as
you'd use with vnd, instead of a block device.
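
(count=$(( 2 * 1024 )) at bs=512 is just the first 1MB of the disk.)
If ad0 were an image file rather than a real disk, the restore line
works the same way; a rough sketch, using FreeBSD's md(4) where I said
vnd, and a hypothetical disk.img:

dd if=bakfile of=disk.img bs=512 conv=notrunc    # patch the saved sectors back into the image
mdconfig -a -t vnode -f disk.img                 # attach the image as /dev/mdN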

    dm> even for simple stream copying,

I use dd from habit.  For big copies on ATA disks, bs=$(( 56 * 1024 ))
makes the copy go faster, so that is one merit of 'dd'.  For recovering
bad disks, 'bs=512 conv=noerror,sync' replaces unreadable blocks with
zeroes, which cat cannot do.
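
Spelled out, a rescue copy of a dying ad0 into a file might look like
this (rescued.img is just a made-up name):

dd if=/dev/ad0 of=rescued.img bs=512 conv=noerror,sync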

For properly working disks I think cat is likely to work fine
but...ymmv.  I'm not sure why dd is so habitual.  For tape, in my
experience with ancient DAT, you must always use dd because the
``files'' on tapes are not ordinary ordered-sequence-of-octets files.
They have a block size recorded on the tape, and they can only be read
with the same block size with which they were written.  If you don't
know at what block size the tape was written, it can actually be
difficult to read the tape, though there is probably some trick to it.
If the tape is written like this:

tar cf - . | gzip > /dev/nrst0

you may never figure out how to read it back, because gzip will write
blocks of various random sizes, so instead you need something like:

tar cf - . | gzip | dd of=/dev/nrst0 bs=5120

or let tar invoke gzip itself with the z flag, which IIRC does this
blocking for you implicitly.
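
To read it back you give dd the same block size on the way out; a
sketch, assuming the tape really was written at bs=5120 as above:

dd if=/dev/nrst0 bs=5120 | gunzip | tar xf -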

Disks can be read at a variety of block sizes, but depending on the
driver some OS's may be more obstinate than others, so I think it
might be best to always use dd, with a block size that's a multiple
of 512 for disks and 2048 for CD's.  If you want to use gzip, do it
like so:

dd if=/dev/ad0a bs=<whatever> | gzip > file

gunzip < file | dd of=/dev/ad0a bs=<whatever>
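
For a CD the same pattern works with a multiple of 2048; a sketch,
assuming an ATAPI drive at /dev/acd0:

dd if=/dev/acd0 bs=$(( 2048 * 16 )) | gzip > cd.iso.gz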

That gunzip-into-dd restore, I don't fully understand its quirks.  I've
actually had problems with dd through ssh pipes, where it says things like

gunzip < backup | ssh "dd of=/dev/ad0 bs=$(( 56 * 1024 ))"
2+520894 blocks input
520896+0 blocks output

which I think means it may have written garbage all over ad0 by
splitting the output stream into 56k chunks arbitrarily, depending on
how TCP broke up dd's reads of stdin.  I would expect this problem to
happen with conv=sync, but it might happen at other times too, depending
on your dd version.  I think QNX might have been involved, which in my
experience has a lot of bugs.  I forget what I did to fix it.  I
remember trying things like:

gunzip < backup | ssh "dd ibs=1000 obs=$(( 56 * 1024 )) | dd of=/dev/ad0 bs=$(( 56 * 1024 ))"

but I don't know what finally got my disk properly restored, nor how
much of the problem was my not understanding the ways of dd, how much
was getting lucky by using one OS improperly while another was less
forgiving, and how much was QNX bugs.  I was really impatient and
didn't fully understand what was going on.

If someone knows how to tell dd, ``please keep reading the input file
until you get either a full obs block, or an EOF.  Do not write
anything to the output file until one of those two things, EITHER of
those things, has happened,'' I would like to hear it.  But in
practice, stumbling along seems to work OK for me, if I go back and
check my work with md5sum or 'mount ...; pax -w /mnt > /dev/null'
before deleting the backup file.
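
(A sufficiently new GNU dd has iflag=fullblock, which is supposed to
mean exactly that: keep reading until a full input block or EOF before
writing anything.  It only helps if the dd on the receiving end is the
GNU one.  A sketch, with somehost standing in for the target machine:

gunzip < backup | ssh somehost "dd of=/dev/ad0 bs=$(( 56 * 1024 )) iflag=fullblock")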

HTH. :(