Packet loss every 30.999 seconds

David G Lawrence dg at
Tue Dec 18 06:17:43 PST 2007

> >Right, it's a non-optimal loop when N is very large, and that's a fairly
> >well understood problem.  I think what DG was getting at, though, is
> >that this massive flush happens every time the syncer runs, which
> >doesn't seem correct.  Sure, maybe you just rsynced 100,000 files 20
> >seconds ago, so the upcoming flush is going to be expensive.  But the
> >next flush 30 seconds after that shouldn't be just as expensive, yet it
> >appears to be so.
> I'm sure it doesn't cause many bogus flushes.  iostat shows zero writes
> caused by calling this incessantly using "while :; do sync; done".

   I didn't say it caused any bogus disk I/O. My original problem
(after a day or two of uptime) was an occasional large scheduling delay
for a process that needed to process VoIP frames in real-time. It was
happening every 31 seconds and was causing voice frames to be dropped
because the added latency pushed the frames outside of the jitter
window. I wrote a program that measures the scheduling delay by
sleeping for one tick and then comparing the gettimeofday(2) offset
against what was expected. This revealed that every 31 seconds, the
process was seeing a 17ms delay in scheduling. Further investigation found that 1) the
syncer was the process that was running every 31 seconds and causing
the delay (and it was the only one in the system with that timing
interval), and that 2) lowering the kern.maxvnodes to something lowish
(5000) would mostly mitigate the problem. The patch to limit the number
of vnodes to process in the loop before sleeping was then developed
and it completely resolved the problem. Since the wait that I added
is at the bottom of the loop and the limit is 500 vnodes, this tells
me that every 31 seconds, there are a whole lot of vnodes that are
being "synced", when there shouldn't have been any (this fact wasn't
apparent to me at the time, but when I later realized this, I had
no time to investigate further). My tests and analysis have all been
on an otherwise quiet system (no disk I/O), so the bottom of the
ffs_sync vnode loop should not have been reached at all, let alone
tens of thousands of times every 31 seconds. All machines were
uniprocessor, FreeBSD 6+. I don't know if this problem is present in 5.2.
I didn't see ffs_syncvnode in your call graph, so it probably is not.
   Anyway, someone needs to instrument the vnode loop in ffs_sync and
figure out what is going on. As you've pointed out, it is necessary
to first read a lot of files (I use tar to /dev/null and make sure it
reads at least 100K files) in order to get the vnodes allocated. As
I mentioned previously, I suspect that either ip->i_flag is not getting
completely cleared in ffs_syncvnode or its children, or that the
v_bufobj.bo_dirty.bv_cnt accounting is broken.
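The shape of the patch described above -- bound the work done per pass
and sleep at the bottom of the loop -- can be illustrated with a
self-contained user-space analogue. The batch size of 500 mirrors the
description earlier in this message; the function names are invented
for illustration and this is not the actual ffs_sync code.

```c
#include <stddef.h>

#define SYNC_BATCH 500   /* vnodes to process before yielding */

/* Stand-in for the per-vnode flush work done in the loop body. */
static void sync_one(size_t vnode_index)
{
    (void)vnode_index;
}

/* Stand-in for the wait added at the bottom of the loop; in the
 * kernel this sleep gives other threads a chance to be scheduled. */
static size_t yields;
static void yield_cpu(void)
{
    yields++;
}

/* Walk all vnodes, but never process more than SYNC_BATCH of them
 * without yielding, so a huge vnode list cannot monopolize the CPU
 * for tens of milliseconds in one pass. Returns the count processed. */
static size_t sync_all(size_t nvnodes)
{
    size_t done = 0;

    while (done < nvnodes) {
        size_t batch = nvnodes - done;
        if (batch > SYNC_BATCH)
            batch = SYNC_BATCH;
        for (size_t i = 0; i < batch; i++)
            sync_one(done + i);
        done += batch;
        yield_cpu();      /* the wait at the bottom of the loop */
    }
    return done;
}
```

With 100,000 cached vnodes this yields 200 times per syncer pass
instead of holding the CPU for the entire walk -- which caps the
per-run scheduling delay, but does not explain why an idle system has
that many vnodes to "sync" in the first place.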


David G. Lawrence
Download Technologies, Inc. - - (866) 399 8500
The FreeBSD Project -
Pave the road of life with opportunities.

More information about the freebsd-net mailing list