[patch] Adding optimized kernel copying support - Part III

Attilio Rao asmrookie at gmail.com
Wed May 31 13:29:41 PDT 2006

2006/5/31, Suleiman Souhlal <ssouhlal at freebsd.org>:
> Hello Attilio,

Hello Suleiman,

> Nice work. Any chance you could also port it to amd64? :-)

Not in the near future, I think. :P

> Does that mean it won't work with SMP and PREEMPTION?

Yes, it will work (even if I think it needs more testing), but it may
give lower performance with SMP|PREEMPTION due to extra traffic on
memory/cache. For that case I was planning to use non-temporal stores
(obviously, benchmarks would be very much appreciated).
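The non-temporal idea mentioned above could look roughly like the sketch
below: SSE2 streaming stores (MOVNTDQ via `_mm_stream_si128`) write the
destination around the cache, so a large copy does not evict hot lines, which
is the memory/cache-traffic concern. The function name is illustrative, not
from the patch, and it assumes a 16-byte-aligned destination and a length
that is a multiple of 16; a real kernel routine would handle the unaligned
head/tail and save/restore FPU state.

```c
#include <emmintrin.h>	/* SSE2 intrinsics */
#include <stddef.h>

/* Hypothetical sketch: copy with non-temporal (streaming) stores so the
 * destination bypasses the cache.  Assumes dst is 16-byte aligned and
 * len is a multiple of 16. */
static void
copy_nontemporal(void *dst, const void *src, size_t len)
{
	__m128i *d = dst;
	const __m128i *s = src;
	size_t i;

	for (i = 0; i < len / 16; i++)
		_mm_stream_si128(d + i, _mm_loadu_si128(s + i));
	_mm_sfence();		/* order the streaming stores */
}
```
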

> What kind of performance improvements did you see in your benchmarks?

I'm sorry, but I haven't benchmarked on a P4 (with xmm instructions).
On a P3, using integer copies, I measured about a 2% improvement with dd
and time; I hope for more on a P4 (and you could add xmm usage too).

> I wonder if we could get rid of the memcpy_vector (copyin/copyout_vector
> before this patch), bzero_vector and bcopy_vector function pointers and
> do boot-time patching of the callers to the right version

Mmm, please note that on i386 the boot-time code (which I've never
studied) seems to require the vectorized versions of bcopy/bzero. The
memcpy_vector I introduced is used in a slightly different way from the
others, so I don't think applying your idea to it would be so simple.
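For reference, the function-pointer indirection being discussed could be
sketched as below. The vector names echo the mail, but the signatures and
helper names here are assumptions, not the patch's actual code: a safe
generic routine is installed by default, and once the CPU is identified at
boot a tuned version is hung off the same pointer. Boot-time patching would
instead rewrite the call sites to call the chosen routine directly, removing
the indirect call.

```c
#include <stddef.h>
#include <string.h>

/* Generic fallback, always correct on any CPU. */
static void
kbcopy_generic(const void *src, void *dst, size_t len)
{
	memmove(dst, src, len);
}

/* Tuned variant; a real one would use xmm loads/stores.  Plain memmove
 * keeps this sketch runnable in userland. */
static void
kbcopy_sse(const void *src, void *dst, size_t len)
{
	memmove(dst, src, len);
}

/* The indirection point, cf. bcopy_vector/bzero_vector/memcpy_vector. */
static void (*kbcopy_vector)(const void *, void *, size_t) = kbcopy_generic;

void
kbcopy(const void *src, void *dst, size_t len)
{
	(*kbcopy_vector)(src, dst, len);
}

/* Called once during boot, after CPU feature detection (hypothetical). */
void
kbcopy_select(int cpu_has_sse)
{
	if (cpu_has_sse)
		kbcopy_vector = kbcopy_sse;
}
```
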

> I have a linux-inspired proof-of-concept demo of this boot-time patching
> at http://people.freebsd.org/~ssouhlal/testing/bootpatch-20060527.diff.
> It prefetches the next element in the *_FOREACH() macros in sys/queue.h.
> The patching that it does is to use PREFETCH instruction instead of
> PREFETCHNTA if the cpu is found to support SSE2.

It would be very much appreciated to have it MI (yes, I mean an MD + MI structure :PP)
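The quoted prefetching idea can be sketched as an SLIST-style loop that
prefetches the next element while the current one is processed. Here the
portable `__builtin_prefetch` (GCC/Clang) stands in for the PREFETCH /
PREFETCHNTA instructions that the linked diff patches at boot time; the
macro, struct, and function names below are illustrative, not taken from
that diff.

```c
#include <stddef.h>
#include <sys/queue.h>

/* Hypothetical prefetching variant of SLIST_FOREACH: issue a prefetch
 * for the next element before the loop body runs.  Prefetching NULL at
 * the list tail is harmless (prefetch never faults). */
#define SLIST_FOREACH_PREFETCH(var, head, field)			\
	for ((var) = SLIST_FIRST(head);					\
	    (var) != NULL &&						\
	    (__builtin_prefetch(SLIST_NEXT((var), field)), 1);		\
	    (var) = SLIST_NEXT((var), field))

struct node {
	int val;
	SLIST_ENTRY(node) link;
};
SLIST_HEAD(nodelist, node);

int
sum_list(struct nodelist *head)
{
	struct node *np;
	int sum = 0;

	SLIST_FOREACH_PREFETCH(np, head, link)
		sum += np->val;
	return (sum);
}
```
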


Peace can only be achieved by understanding - A. Einstein

More information about the freebsd-hackers mailing list