Packet loss every 30.999 seconds
brde at optusnet.com.au
Wed Dec 19 10:09:34 PST 2007
On Thu, 20 Dec 2007, Bruce Evans wrote:
> On Wed, 19 Dec 2007, David G Lawrence wrote:
>> Considering that the CPU clock cycle time is on the order of 300ps, I
>> would say 125ns to do a few checks is pathetic.
> As I said, 125 nsec is a short time in this context. It is approximately
> the time for a single L2 cache miss on a machine with slow memory like
> freefall (Xeon 2.8 GHz with L2 cache latency of 155.5 ns). [...]
Perfmon counts for the cache misses during sync(1):
==> /tmp/kg1/z0 <==
misses = 4%
==> /tmp/kg1/z1 <==
misses = 10%
==> /tmp/kg1/z2 <==
misses = 13%
==> /tmp/kg1/z3 <==
misses = 16%
==> /tmp/kg1/z4 <==
misses = 16%
I forgot to count only the active vnodes in the above; vfs.freevnodes was
small (< 5%).
I set kern.maxvnodes to 200000, but vfs.numvnodes saturated at 138557
(probably all that fits in kvm or main memory on i386 with 1GB RAM).
With 138557 vnodes, a null sync(2) takes 39673 us according to kdump -R.
That is 35.1 ns per miss. This is consistent with lmbench2's estimate
of 42.5 ns for main memory latency.
Watching vfs.*vnodes confirmed that vnode caching still works as you
would expect:
o "find /home/ncvs/ports -type f" only gives a vnode for each directory.
o a repeated "find /home/ncvs/ports -type f" is fast because everything
  remains cached by VMIO. FreeBSD performed very badly at this benchmark
  before VMIO existed and was used for directories.
o "tar cf /dev/zero /home/ncvs/ports" gives a vnode for files too.