Batch file question - average size of file in directory

James Long list at
Tue Jan 2 20:21:03 PST 2007

> Message: 28
> Date: Tue, 2 Jan 2007 10:20:08 -0800
> From: "Kurt Buff" <kurt.buff at>
> Subject: Batch file question - average size of file in directory
> To: questions at
> Message-ID:
> 	<a9f4a3860701021020g1468af4ah26c8a5fe90610719 at>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> All,
> I don't even have a clue how to start this one, so am looking for a little help.
> I've got a directory with a large number of gzipped files in it (over
> 110k) along with a few thousand uncompressed files.
> I'd like to find the average uncompressed size of the gzipped files,
> and ignore the uncompressed files.
> How on earth would I go about doing that with the default shell (no
> bash or other shells installed), or in perl, or something like that.
> I'm no scripter of any great expertise, and am just stumbling over
> this trying to find an approach.
> Many thanks for any help,
> Kurt

Hi, Kurt.

Can I make some assumptions that simplify things?  No kinky filenames, 
just [a-zA-Z0-9.].  My approach specifically doesn't like colons or 
spaces, I bet.  Also, you say gzipped, so I'm assuming it's ONLY gzip, 
no bzip2, etc.

Here's a first draft that might give you some ideas.  It will output:

foo.gz : 3456
bar.gz : 1048576

find . -type f | while read fname; do
  file "$fname" | grep -q "compressed" && echo "$fname : $(zcat "$fname" | wc -c)"
done

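One practical note: with 110k files, forking file(1) once per file will take
a while.  If the gzipped files all carry a .gz suffix (an assumption on my
part), letting find match the name is much cheaper.  A self-contained
demonstration, with a throwaway directory and gzip -dc standing in for zcat:

```shell
#!/bin/sh
# Demo setup: a throwaway directory holding one gzipped file and one plain file.
dir=$(mktemp -d)
printf 'hello world' | gzip > "$dir/foo.gz"     # 11 bytes uncompressed
echo "plain text" > "$dir/notes.txt"            # should be skipped

# Matching on the .gz suffix avoids running file(1) once per file.
out=$(find "$dir" -type f -name '*.gz' | while read fname; do
  echo "$fname : $(gzip -dc "$fname" | wc -c)"
done)
echo "$out"
rm -rf "$dir"
```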
If you really need a script that will do the math for you, then
pipe the output of this into bc:


find . -type f | {
  echo "scale=2"
  echo -n "("
  n=0
  while read fname; do
    if file "$fname" | grep -q "compressed"; then
      echo -n "$(zcat "$fname" | wc -c)+"
      n=$((n+1))
    fi
  done
  echo "0) / $n"
}


That should give you the average decompressed size of the gzip'ped
files in the current directory.
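And if bc isn't to your taste, awk can keep the running sum and count
itself, so the whole thing is one pipeline.  A self-contained sketch (the
sample files and gzip -dc are my stand-ins for demonstration):

```shell
#!/bin/sh
# Demo setup: two gzipped files (11 and 5 bytes uncompressed) and one plain file.
dir=$(mktemp -d)
printf 'hello world' | gzip > "$dir/foo.gz"
printf '12345' | gzip > "$dir/bar.gz"
echo "plain text" > "$dir/plain.txt"

# Print each compressed file's decompressed size, then let awk average them.
avg=$(find "$dir" -type f | while read fname; do
  file "$fname" | grep -q "compressed" && gzip -dc "$fname" | wc -c
done | awk '{ sum += $1; n++ } END { if (n) printf "%.2f\n", sum / n }')

echo "$avg"    # (11 + 5) / 2 = 8.00
rm -rf "$dir"
```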

More information about the freebsd-questions mailing list