allocating 14KB memory per packet compression/decompression results in v

Giorgos Keramidas keramida at
Fri Nov 4 09:20:09 PST 2005

On 2005-11-04 11:14, Sergey Babkin <babkin at> wrote:
>Giorgos Keramidas <keramida at> wrote:
>>On 2005-11-03 22:56, kamal kc <kamal_ckk at> wrote:
>>> Since I am using the adaptive LZW compression scheme, it
>>> requires construction of a string table for
>>> compression/decompression. So an IP packet of size 1500 bytes
>>> requires a table of size (4KB + 4KB + 2KB = 12KB).
>> I may be stating the obvious or something totally wrong, but
>> couldn't the string table be constructed once instead of each
>> time a packet goes down?  It is my intuition that this would
>> perform much much better than re-doing the work of the string
>> table each time a packet goes out.
> No, the table changes as data is compressed. It records the
> knowledge about the strings that have already occurred in the
> data.
> Keeping the table between packets would improve the
> compression ratio, but the packets would then have to be
> transmitted over a reliable medium, since to decompress one
> packet you would first have to decompress all the preceding
> packets (essentially you get stream compression).

Ah, yes, I see now.  You're right of course.  I was thinking of
something resembling a "compressed tunnel" when I wrote the
reply, but that doesn't work with IP very well, unless some other
sort of encapsulation is at work.

> To keep the packets separate, the compression state
> must be reset between them.
> But of course resetting the compression state does not
> mean that the memory should be deallocated.

Very true :)

More information about the freebsd-hackers mailing list