[ fbsd_questions ] tar(1) vs. msdos_fs: a death_spiral ?

spellberg_robert emailrob at emailrob.com
Thu Mar 4 23:58:21 UTC 2010

greetings, all ---

i confess that this one has me flummoxed.
the short question:  does tar(1) spit_up when extracting onto an msdos_fs hard_drive ?

[ i tried the mailing_list archives "tar AND msdos", for -questions, -chat, -bugs, -newbies, -performance ]
[ other research as indicated ]

i have no problem using tar(1) on ufs.
large files, small files; if i am on ufs, everything is fine.

i have been creating tarballs from medium_size msdos_fs drives, also.
this worked fine.
i would check them by extracting into a ufs root_point.
no problem.

this week, i tried to do something new.
i wanted to take a tarball, already on ufs, that was created from an msdos_fs drive and
   extract it onto an msdos_fs drive.
this, to me, actually seems like a reasonable idea; but, what do i know ?

well, it starts out just fine, but, it rapidly degenerates into what is, normally, infinite_loop land.
when ps(1) says cpu_% of 1%, 2%, 5%; ok, it is an active process.
in about ten minutes, tar(1) enters 90% cpu.
after 20 minutes, 99%.

it does not matter if X_windows is running.
foreground or background process, no difference.

it seems to be working correctly because the error_file is always of zero_size.
i suspect that if i left it alone, after a few days, it would finish.
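for the record, the cpu_% numbers above came from ps(1); a minimal sketch of the sampling loop i used, watching its own pid ($$) just so it runs anywhere --- substitute the real tar(1) pid, and sleep 60, for an actual measurement:

```shell
#!/bin/sh
# sample a process's cpu_% and mem_% a few times; for the real
# measurement, substitute the tar(1) pid (from ps ax) for $$
# and use a 60 second interval.
pid=$$
n=0
while [ "$n" -lt 3 ]; do
    sample=$(ps -o pid,%cpu,%mem -p "$pid")
    echo "$sample"
    n=$((n + 1))
    sleep 1
done
```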

some details
   [ everything is ufs, using 8kB/1kB, except "/mnt", which is clustered as indicated;
     of course, the tarball is not named "ball",
     nor is the path, to the tarball, named "path"; but, then, you knew that ]

mkdir /path_c
mkdir /path_c/88_x

mkdir /path_d
mkdir /path_d/88_x

mount -v -t msdos /dev/ad1s1 /mnt                   [ fat_32, about 6_GB, 4_KB cluster, the "c:\" drive, primary partition. ]
cd /mnt
( tar cvplf /path_c/99_ball.tar . > /path_c/90_cvpl.out ) >& /path_c/91_cvpl.err &
                                                    [ real time 16m 07s, exit_status 0 ]
cd / ; umount /mnt

mount -v -t msdos /dev/ad1s5 /mnt                   [ fat_32, about 12_GB, 8_KB cluster, the "d:\" drive, extended partition. ]
cd /mnt
( tar cvplf /path_d/99_ball.tar . > /path_d/90_cvpl.out ) >& /path_d/91_cvpl.err &
                                                    [ real time 20m 15s, exit_status 0 ]
cd / ; umount /mnt

cd /path_c/88_x
( tar xvplf ../99_ball.tar > ../92_xvpl.out ) >& ../93_xvpl.err &
                                                    [ real time 08m 11s; exit_status 0 ]
diff ../9[02]*                                      [ exit_status 0; the tables_of_contents are the same ]
ls -l ..                                            [ visually inspect the error_files to be of zero_size - verified ]

cd /path_d/88_x
( tar xvplf ../99_ball.tar > ../92_xvpl.out ) >& ../93_xvpl.err &
                                                    [ real time 12m 37s; exit_status 0 ]
diff ../9[02]*                                      [ exit_status 0; the tables_of_contents are the same ]
ls -l ..                                            [ visually inspect the error_files to be of zero_size - verified ]

[ note that this approach works; it is a good excuse to refill my coffee_cup. ]

[ physically replace the source hard_drive w/ an 80_GB capacity, 32_KB cluster, primary_partition only, virgin hard_drive.
   this destination hard_drive was "fdisk"ed and "format"ed yesterday_morning;
   this drive was "scandisk"ed yesterday for 12 hours, using the "thorough" option;
   it has zero bad clusters [ i wanted to eliminate the drive as the problem ] ]

mount -v -t msdos /dev/ad1s1 /mnt

mkdir /mnt/path_cc
cd    /mnt/path_cc

( tar xvplf /path_c/99_ball.tar > ../92_xvpl.out ) >& ../93_xvpl.err &
                                                    [ started this at 18:05_utc, it is now about 21:35_utc;
                                                      the toc_file, from the 8_minute extraction above, has 87517 lines in it;
                                                      the current toc_file has only 12667 lines. ]

[ this is the second hard_drive i have tried this on, this week;
   i will probably kill the process, as the xterm updates are about 8 seconds apart, now. ]

on the first hard_drive [ i have not done this on the second drive, yet ]
   i noted that i had a successful extraction on the ufs drive.
not being the smartest person around, i had, what i thought to be, a --brilliant-- idea,
   "what if i try a recursive copy of the successful extraction" ?

this is interesting;
   the recursive copy started_out like gang_busters, then, just like the extraction, slowly bogged_down to 99%_cpu.
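a tar_pipe is one common way to do such a recursive copy while preserving modes; a self_contained sketch using throw_away directories --- in the real case, src was the successful ufs extraction tree and dst was the msdos_fs mount:

```shell
#!/bin/sh
# recursive copy via a tar_pipe, preserving permissions.
# throw_away directories here; in the real case, src is the ufs
# extraction tree and dst is the msdos_fs mount_point.
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir "$src/sub"
echo data > "$src/a"
echo more > "$src/sub/b"
( cd "$src" && tar cpf - . ) | ( cd "$dst" && tar xpf - )
```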

hmmm..., two different msdos_fs hard_drives, two different normally_reliable utilities, same progressive_hogging of the cpu.
this makes me wonder about the msdos_fs hard_drive, which is, rapidly, becoming the only remaining common factor.
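one way to take tar(1) and cp(1) out of the picture entirely: create a pile of small files directly on the suspect mount and watch whether each batch takes longer than the last. a sketch --- it uses a throw_away directory so it runs anywhere; point dir at the msdos_fs mount (e.g. a fresh directory under /mnt) for the real test:

```shell
#!/bin/sh
# create small files in batches, printing elapsed seconds after each
# batch; if the per_batch cost keeps growing, the file_system (not
# tar) is the bottleneck.  throw_away directory here; point dir at
# the msdos_fs mount for the real test.
dir=$(mktemp -d)
start=$(date +%s)
i=0
while [ "$i" -lt 3000 ]; do
    echo x > "$dir/f$i"
    i=$((i + 1))
    if [ $((i % 1000)) -eq 0 ]; then
        echo "$i files, $(( $(date +%s) - start ))s elapsed"
    fi
done
```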

i tried the mailing lists.
right now, i am web_page searching;
   tar(1) seems to be slow in some situations, but, i have not, yet, found --this-- situation.
also, in reading the man_pages for mount(1) and tar(1), i am starting to wonder if this could be a tar(1) "block_size" issue.
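if it is a block_size issue, tar(1)'s -b flag sets the blocking factor in 512_byte records, so, e.g., -b 126 moves 63_KB per read; whether that actually helps on msdos_fs is a guess on my part, not a confirmed fix. a self_contained sketch on a throw_away tarball --- substitute the real /path_c/99_ball.tar and the msdos_fs target:

```shell
#!/bin/sh
# extraction with an explicit blocking factor: -b counts 512_byte
# records, so -b 126 moves 63_KB at a time.  throw_away tarball here;
# substitute the real tarball and the msdos_fs destination.
work=$(mktemp -d)
echo hello > "$work/file"
( cd "$work" && tar -b 126 -cf ball.tar file )
mkdir "$work/out"
( cd "$work/out" && tar -b 126 -xpf ../ball.tar )
```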

i am not doing any encryption or compression, in either direction.
last check at about 22:45_utc:  99.0%_cpu, 0.1%_mem.

does anyone have any thoughts ?

please cc.
