[vfs] buf_daemon() slows down write() severely on low-speed CPU

Svatopluk Kraus onwahe at gmail.com
Wed Apr 4 21:18:26 UTC 2012


2012/3/21 Konstantin Belousov <kostikbel at gmail.com>:
> On Thu, Mar 15, 2012 at 08:00:41PM +0100, Svatopluk Kraus wrote:
>> 2012/3/15 Konstantin Belousov <kostikbel at gmail.com>:
>> > On Tue, Mar 13, 2012 at 01:54:38PM +0100, Svatopluk Kraus wrote:
>> >> On Mon, Mar 12, 2012 at 7:19 PM, Konstantin Belousov
>> >> <kostikbel at gmail.com> wrote:
>> >> > On Mon, Mar 12, 2012 at 04:00:58PM +0100, Svatopluk Kraus wrote:
>> >> >> Hi,
>> >> >>
>> >> >>    I have been dealing with the following problem: if a big file
>> >> >> (relative to 'hidirtybuffers') is being written, the write speed is
>> >> >> very poor.
>> >> >>
>> >> >>    This is observed on a system with an Elan 486 and 32MB RAM (i.e.,
>> >> >> a low-speed CPU and not much memory) running FreeBSD-9.
>> >> >>
>> >> >>    Analysis: A file is being written. All or almost all dirty
>> >> >> buffers belong to the file. The file vnode is locked by the writing
>> >> >> process almost all of the time, so buf_daemon() cannot flush any
>> >> >> dirty buffer, as its chance of acquiring the file vnode lock is very
>> >> >> low. The number of dirty buffers grows very slowly, and more slowly
>> >> >> with each new dirty buffer, because buf_daemon() eats more and more
>> >> >> CPU time looping over the dirty buffer queue (with very little or no
>> >> >> effect).
>> >> >>
>> >> >>    This slowdown is started by buf_daemon() itself, when
>> >> >> 'numdirtybuffers' reaches the 'lodirtybuffers' threshold and
>> >> >> buf_daemon() is woken up by its own timeout. The timeout fires with
>> >> >> an 'hz' period, but immediately starts firing with an 'hz/10' period
>> >> >> once buf_daemon() fails to get back under the 'lodirtybuffers'
>> >> >> threshold. When 'numdirtybuffers' (now growing slowly) reaches the
>> >> >> ((lodirtybuffers + hidirtybuffers) / 2) threshold, buf_daemon() can
>> >> >> be woken up from bdwrite() too, and things get much worse. Finally,
>> >> >> very slowly, 'hidirtybuffers' or 'dirtybufthresh' is reached, the
>> >> >> dirty buffers are flushed, and everything starts over from the
>> >> >> beginning...
>> >> > Note that for some time now, bufdaemon work has been distributed
>> >> > between the bufdaemon thread itself and any thread that fails to
>> >> > allocate a buffer, esp. a thread that owns a vnode lock and covers a
>> >> > long queue of dirty buffers.
>> >>
>> >> However, the problem starts when numdirtybuffers reaches the
>> >> lodirtybuffers count and ends around the hidirtybuffers count. There
>> >> are still plenty of free buffers in the system.
>> >>
>> >> >>
>> >> >>    On the system, the buffer size is 512 bytes and the default
>> >> >> thresholds are as follows:
>> >> >>
>> >> >>    vfs.hidirtybuffers = 134
>> >> >>    vfs.lodirtybuffers = 67
>> >> >>    vfs.dirtybufthresh = 120
>> >> >>
>> >> >>    For example, a 2MB file is copied to a flash disk in about 3
>> >> >> minutes and 15 seconds. If dirtybufthresh is set to 40, the copy
>> >> >> time is about 20 seconds.
>> >> >>
>> >> >>    My solution is a mix of three things:
>> >> >>    1. Suppression of buf_daemon() wakeups by setting bd_request to
>> >> >> 1 in the main buf_daemon() loop.
>> >> > I cannot understand this. Please provide a patch that shows what you
>> >> > mean there.
>> >> >
>> >>       curthread->td_pflags |= TDP_NORUNNINGBUF | TDP_BUFNEED;
>> >>       mtx_lock(&bdlock);
>> >>       for (;;) {
>> >> -             bd_request = 0;
>> >> +             bd_request = 1;
>> >>               mtx_unlock(&bdlock);
>> > Is this a complete patch ? The change just causes lost wakeups for bufdaemon,
>> > nothing more.
>> Yes, it's a complete patch. And exactly, it causes lost wakeups, which
>> are:
>> 1. !! UNREASONABLE !!, because bufdaemon is not sleeping;
>> 2. not wanted, because being woken up looks like the correct behaviour
>> for the sleep with the hz/10 period. However, if the sleep with the
>> hz/10 period is expected to be woken up by bd_wakeup(), then
>> bd_request should be set to 0 just before the sleep() call, and then
>> the bufdaemon behaviour will be clear.
> No, your description is wrong.
>
> If bufdaemon is unable to flush enough buffers and numdirtybuffers is
> still greater than lodirtybuffers, then bufdaemon enters the qsleep
> state without resetting bd_request, with a timeout of one tenth of a
> second. Your patch will cause all wakeups for this case to be lost.
> This is exactly the situation where we want bufdaemon to run harder to
> avoid possible deadlocks, not to slow down.

OK. Let's focus on the bufdaemon implementation. As it stands, the
qsleep state is entered with a random bd_request value. If someone
calls bd_wakeup() during the bufdaemon iteration over the dirty buffer
queues, then bd_request is set to 1. Otherwise, bd_request remains 0.
I.e., sometimes the qsleep state can only time out, and sometimes it
can be woken up by bd_wakeup(). So, is this random behaviour what is
wanted?

>> All the stuff around bd_request and the bufdaemon sleep is under
>> bd_lock, so if bd_request is 0 and bufdaemon is not sleeping, then all
>> wakeups are unreasonable! The patch is mainly about that.
> Wakeups themselves are very cheap for the running process. Mostly, it
> comes down to locking the sleepq and waking all threads that are
> present in the sleepq blocked queue. If there are no threads in the
> queue, nothing is done.

Are you serious? Is a spin mutex really cheap? Many calls may each be
cheap, but that does not hold regardless of where they are made.

Svata


More information about the freebsd-hackers mailing list