Batch file question - average size of file in directory

James Long list at museum.rain.com
Tue Jan 2 20:21:03 PST 2007


> Message: 28
> Date: Tue, 2 Jan 2007 10:20:08 -0800
> From: "Kurt Buff" <kurt.buff at gmail.com>
> Subject: Batch file question - average size of file in directory
> To: questions at freebsd.org
> 
> All,
> 
> I don't even have a clue how to start this one, so am looking for a little help.
> 
> I've got a directory with a large number of gzipped files in it (over
> 110k) along with a few thousand uncompressed files.
> 
> I'd like to find the average uncompressed size of the gzipped files,
> and ignore the uncompressed files.
> 
> How on earth would I go about doing that with the default shell (no
> bash or other shells installed), or in Perl, or something like that?
> I'm no scripter of any great expertise, and am just stumbling over
> this trying to find an approach.
> 
> Many thanks for any help,
> 
> Kurt

Hi, Kurt.

Can I make some assumptions that simplify things?  No kinky filenames, 
just [a-zA-Z0-9.].  Colons in particular would confuse the "name : size" 
output below, and embedded newlines would break the read loop.  Also, 
you say gzipped, so I'm assuming it's ONLY gzip, no bzip2, etc.

Here's a first draft that might give you some ideas.  It will output one
line per gzipped file, like:

foo.gz : 3456
bar.gz : 1048576
(etc.)

# For each file under the current directory, ask file(1) whether it's
# compressed; if so, print its name and decompressed size in bytes.
find . -type f | while read fname; do
  file "$fname" | grep -q "compressed" && echo "$fname : $(zcat "$fname" | wc -c)"
done
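
If zcat'ing 110k+ files turns out to be too slow, gzip -l might be worth
a look: it reads the uncompressed size out of each file's trailer instead
of decompressing the whole thing (the stored size is modulo 2^32, so
anything over 4GB will report wrong).  A rough sketch along the same
lines, with the awk column-picking being my guess at gzip -l's output
layout:

find . -type f | while read fname; do
  # Second line, second column of "gzip -l" should be the uncompressed size.
  file "$fname" | grep -q "compressed" &&
    echo "$fname : $(gzip -l "$fname" | awk 'NR==2 {print $2}')"
done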


If you really need a script that will do the math for you, then
pipe the output of this into bc:

#!/bin/sh

# Print a bc(1) expression of the form "(size1+size2+...+0) / count",
# with two decimal places of precision.
find . -type f | {

n=0
echo "scale=2"
echo -n "("
while read fname; do
  if file "$fname" | grep -q "compressed"
  then
    # Decompressed size of this file becomes one more term in the sum.
    echo -n "$(zcat "$fname" | wc -c)+"
    n=$(($n+1))
  fi
done
# The "+0" closes out the sum, then divide by the number of files seen.
echo "0) / $n"

}
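
If you save that as, say, avgsize.sh (the name is arbitrary), running it
would look something like:

sh avgsize.sh | bc

and bc should print a single number: the average, with two decimal
places.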

That should give you the average decompressed size of the gzip'ped
files under the current directory (find recurses, so files in
subdirectories are counted too).
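
For what it's worth, awk could do the averaging in one pass if you'd
rather not bother with bc -- same filename caveats apply, and this is
just a sketch, not something I've run against a directory that size:

find . -type f | while read fname; do
  file "$fname" | grep -q "compressed" && zcat "$fname" | wc -c
done | awk '{ total += $1; n++ } END { if (n) printf "%.2f\n", total / n }'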


