Managing very large files

Giorgos Keramidas keramida at ceid.upatras.gr
Thu Oct 4 08:34:56 PDT 2007


On 2007-10-04 08:43, Steve Bertrand <iaccounts at ibctech.ca> wrote:
> Hi all,
> I've got a 28GB tcpdump capture file that I need to (hopefully) break
> down into a series of 100,000k lines or so, hopefully without the need
> of reading the entire file all at once.
> 
> I need to run a few Perl processes on the data in the file, but AFAICT,
> doing so on the entire original file is asking for trouble.
> 
> Is there any way to accomplish this, preferably with the ability to
> incrementally name each newly created file?

If you only want specific parts of the dump in the 'split' output, you
may have luck with something like:

	tcpdump -r input.pcap -w output.pcap 'filter rules here'

This reads the file sequentially, which can be slower than having it
all in memory, but with a file this large it is probably a good idea :)
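
If instead you want fixed-size chunks with incrementally named output
files, one rough sketch is to decode the capture to text and feed it
through split(1).  The line count and the 'chunk.' prefix below are
only placeholders; adjust them to whatever chunk size you actually want:

	tcpdump -nn -r input.pcap | split -l 100000 - chunk.

This writes chunk.aa, chunk.ab, chunk.ac, and so on, each holding
roughly 100,000 lines of decoded output, which your Perl scripts can
then work through one file at a time.  Alternatively, you can skip the
intermediate files entirely and have a Perl script read tcpdump's
output line by line from a pipe.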


