Fast diff command for large files?

Andrew P. infofarmer at gmail.com
Fri Nov 4 20:04:24 GMT 2005


On 11/4/05, Kirk Strauser <kirk at strauser.com> wrote:
> On Friday 04 November 2005 10:22, Chuck Swiger wrote:
>
> > Multigigabyte?  Find another approach to solving the problem, a text-based
> > diff is going to require excessive resources and time.  A 64-bit platform
> > with 2 GB of RAM & 3 GB of swap requires ~1000 seconds to diff ~400 MB.
>
> There really aren't many options.  For the patient, here's what's happening:
>
> Our legacy application runs on FoxPro.  Our web application runs on a
> PostgreSQL database that's a mirror of the FoxPro tables.
>
> We do the mirroring by running a program that dumps the FoxPro tables out as
> tab-delimited files.  Thus far, we'd been using PostgreSQL's "copy from"
> command to read those files into the database.  In reality, though, a very,
> very small percentage of rows in those tables actually change.  So, I wrote
> a program that takes the output of diff and converts it into a series of
> "delete" and "insert" commands; benchmarking shows that this is roughly 300
> times faster in our use.
>
> And that's why I need a fast diff.  Even if it takes as long as the database
> bulk loads, we can run it on another server and use 20 seconds of CPU for
> PostgreSQL instead of 45 minutes.  The practical upshot is that the
> database will never get sluggish, even if the other "diff server" is loaded
> to the gills.
> --
> Kirk Strauser
>

Does the overall order of lines change every time
you dump the tables? If it does, is there an inexpensive
way to sort them (not alphabetically, just so the rows
come out in the same order on every dump)? If the order
stays the same, or can be made to, then there's a trivial
solution (a few lines of perl, or a hundred lines of C)
that runs at roughly the speed of the I/O.
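
To show what I mean, here's a rough, untested sketch of that single-pass
comparison. It assumes both dumps are tab-delimited and ordered by the
same stable key (I'm taking the first field as the key here), and the
DELETE/INSERT lines it prints are just placeholders for whatever your
converter turns into real SQL:

#!/usr/bin/perl
# Sketch only: one merge-style pass over two key-ordered dumps, so the
# cost is roughly that of reading the two files once.
use strict;
use warnings;

my ($old_file, $new_file) = @ARGV;
open my $old_fh, '<', $old_file or die "open $old_file: $!";
open my $new_fh, '<', $new_file or die "open $new_file: $!";

my $o = <$old_fh>;
my $n = <$new_fh>;

while (defined $o or defined $n) {
    my ($ok) = defined $o ? split(/\t/, $o, 2) : ();
    my ($nk) = defined $n ? split(/\t/, $n, 2) : ();

    if (!defined $n or (defined $o and $ok lt $nk)) {
        print "DELETE\t$o";          # row is gone from the new dump
        $o = <$old_fh>;
    } elsif (!defined $o or $ok gt $nk) {
        print "INSERT\t$n";          # row only exists in the new dump
        $n = <$new_fh>;
    } else {
        if ($o ne $n) {              # same key, row content changed
            print "DELETE\t$o";
            print "INSERT\t$n";
        }
        $o = <$old_fh>;
        $n = <$new_fh>;
    }
}

Run it as "script.pl old.dump new.dump". Since it never holds more than
one line of each file in memory, it should stay fast no matter how big
the tables get, as long as the key ordering assumption holds.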

