allocating 14KB memory per packet compression/decompression results in vm_fault

Sergey Babkin babkin at verizon.net
Fri Nov 4 04:32:47 PST 2005


>From: kamal kc <kamal_ckk at yahoo.com>

>Since I am using the adaptive LZW
>compression scheme, it requires construction of a string
>table for compression/decompression. So an IP packet
>of size 1500 bytes requires a table of size (4KB +
>4KB + 2KB = 12KB).
>
>Further, I copy the IP packet
>data into another data buffer (about 1.4KB) and
>then compress it.
>
>So all this adds up to about 14KB. 
>
>Right now I can't do with less than 14KB.
>
>As I said before, the compression/decompression works
>fine, but soon the kernel panics with one
>of the vm_fault: error messages.

Most likely you overrun your buffer somewhere and 
damage some unrelated memory area.
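Since the poster's code isn't shown, here is a hypothetical sketch of the kind of check that catches such an overrun: every write into the output area is validated against the buffer end, so an unexpectedly large expansion fails cleanly instead of scribbling past the 14KB area into unrelated memory. The outbuf/outbuf_put names are illustrative assumptions, not anyone's actual API.

    /*
     * Hypothetical sketch (not the poster's code): each write into the
     * output buffer is checked against the remaining capacity, so a
     * malformed or oversized packet fails cleanly instead of corrupting
     * memory beyond the preallocated area.
     */
    #include <stddef.h>

    struct outbuf {
        unsigned char *base;   /* start of the preallocated buffer */
        size_t         len;    /* total capacity                   */
        size_t         used;   /* bytes written so far             */
    };

    /* Returns 0 on success, -1 if the write would overrun the buffer. */
    static int
    outbuf_put(struct outbuf *ob, const unsigned char *src, size_t n)
    {
        if (n > ob->len - ob->used)
            return (-1);       /* would overflow: stop, don't write */
        for (size_t i = 0; i < n; i++)
            ob->base[ob->used + i] = src[i];
        ob->used += n;
        return (0);
    }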

>What would be the best possible way to
>allocate/deallocate 14KB of memory per packet without
>causing vm_faults?

The best possible way is to not do it at all.
Allocate your 14KB buffer once and then reuse it
for every packet. Obviously, your code would then
have to either be single-threaded, synchronize
access to the buffer, or use a separate buffer
per CPU.
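
A minimal sketch of that approach, in plain C with a pthread mutex standing in for whatever locking primitive the kernel code would actually use; the 14KB size and the lzw_scratch/compress_packet names are assumptions for illustration:

    #include <pthread.h>
    #include <stddef.h>
    #include <stdlib.h>

    #define SCRATCH_SIZE (14 * 1024)   /* string tables + staging area */

    /*
     * One scratch buffer, allocated once at startup and reused for every
     * packet.  A real kernel module would use its own locking primitive,
     * or a separate buffer per CPU, instead of a pthread mutex.
     */
    static unsigned char  *lzw_scratch;
    static pthread_mutex_t lzw_lock = PTHREAD_MUTEX_INITIALIZER;

    int
    lzw_scratch_init(void)
    {
        lzw_scratch = malloc(SCRATCH_SIZE);
        return (lzw_scratch == NULL ? -1 : 0);
    }

    /*
     * Hypothetical per-packet entry point: take the lock, reuse the same
     * scratch memory, release the lock.  No per-packet allocation at all.
     */
    int
    compress_packet(const unsigned char *pkt, size_t pktlen)
    {
        int error;

        (void)pkt;
        (void)pktlen;

        pthread_mutex_lock(&lzw_lock);
        /* ... build string tables in lzw_scratch and compress pkt ... */
        error = 0;
        pthread_mutex_unlock(&lzw_lock);
        return (error);
    }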

>Is there anything I am missing?

Also, the extra memory-to-memory copy is a bad idea;
it hurts performance.
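
To illustrate, assuming a compressor that takes separate source and destination pointers (the lzw_compress prototype here is hypothetical), it can read straight from the packet data, making the 1.4KB staging copy unnecessary:

    #include <stddef.h>

    /* Hypothetical compressor that reads its input in place; prototype only. */
    size_t lzw_compress(const unsigned char *src, size_t srclen,
                        unsigned char *dst, size_t dstlen);

    /*
     * Compress a packet without the intermediate copy: hand the packet's
     * own data pointer to the compressor instead of memcpy()ing it into a
     * separate staging buffer first.
     */
    size_t
    compress_in_place(const unsigned char *pkt_data, size_t pkt_len,
                      unsigned char *out, size_t out_len)
    {
        return (lzw_compress(pkt_data, pkt_len, out, out_len));
    }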

-SB

