allocating 14KB memory per packet compression/decompression results in vm_fault

Giorgos Keramidas keramida at ceid.upatras.gr
Fri Nov 4 05:13:38 PST 2005


On 2005-11-03 22:56, kamal kc <kamal_ckk at yahoo.com> wrote:
>>> for my compression/decompression i use string tables and
>>> temporary buffers which take about 14KB of memory per
>>> packet.
>>
>> If you're allocating 14 KB of data just to send
>> (approximately) 1.4 KB
>> and then you throw away the 14 KB immediately, it
>> sounds terrible.
>
> yes that's true.
>
> since i am using the adaptive LZW compression scheme, it
> requires construction of a string table for
> compression/decompression. So an IP packet of size 1500 bytes
> requires tables of total size (4KB + 4KB + 2KB = 12KB).

I may be stating the obvious or something totally wrong, but
couldn't the string table's memory be set up once instead of
each time a packet goes out?  It is my intuition that this would
perform much better than redoing the allocation work for the
string table on every packet.

> what would be the best possible way to allocate/deallocate 14KB
> memory per packet without causing vm_faults ??

Bearing in mind that packets may be as small as 34 bytes, there's
no good way, IMHO.

- Giorgos
