bad sector in gmirror HDD

perryh at
Sun Aug 21 10:40:46 UTC 2011

Jeremy Chadwick <freebsd at> wrote:
> On Sun, Aug 21, 2011 at 02:00:33AM -0700, perryh at
> wrote:
> > Jeremy Chadwick <freebsd at> wrote:
> > > ... using dd to find the bad LBAs is the only choice he has.
> > or sysutils/diskcheckd ...
> That software has a major problem where it runs constantly, rather
> than periodically.

Even in light of the discussion below, I would not consider that a
problem for the particular purpose under discussion, where the
daemon is presumably going to be terminated after completing a
single pass.  The "dd" approach is also going to soak the drive
for the duration.

> I know because I'm the one who opened the PR on it:
> There's a discussion about this port/issue from a few days ago
> (how sweet!):
> With comments from you stating that the software is behaving as
> designed and that I misread the man page, but also stating point
> blank that "either way the software runs continuously" (which is
> what the PR was about in the first place):
> ...
> Back to my PR.
> I state that I set up diskcheckd.conf using the option you
> describe as "a length of time over which to spread each pass",
> yet what happened was that it did as much I/O as it could
> (read the entire disk in 45 minutes) then proceeded to do
> it again (no sleep()) ...

Agreed, that is not what is supposed to happen.

What I see as a misreading of the manpage is reflected in your
assertion, in the closing comment on 7/1/2008, that "the code does
not do what the manpage says (or vice-versa)."  Having looked at
both the code and the manpage, I don't agree with that assessment.

As I read it, the manpage sentence

    Naturally, it would be contradictory to specify both the
    frequency and the rate, so only one of these should be
    specified.
has to mean that the "days" (frequency) setting is simply an
alternative way of specifying the rate.  Is there some other
interpretation that I'm missing?

Based on the code, it looks to me as if diskcheckd is supposed to
read 64KB checking for errors, then sleep for a calculated length
of time before reading the next 64KB, so as to average out to the
(directly or indirectly) specified rate.  Thus it is intended to
run "continuously" in the sense that its I/O load is supposed to
be as uniform as possible, consistent with reading 64KB at a time,
rather than imposing a heavier load for some period of time and
then pausing for the balance of the specified number of days.
This is entirely consistent with my understanding of the manpage.

Given that 115853 was closed (which AFAIK is supposed to mean
"no longer considered a problem"), and seemed to have involved
a misunderstanding of how diskcheckd was intended to operate,
I decided to investigate the open 143566 instead -- and 143566
explicitly stated that "diskcheckd runs fine when gmirror is not
involved ..."  So I've been running diskcheckd on a gmirrored
system and it seems to be working.

As to what is actually going on:  Earlier this evening I started
looking into the failure to call updateproctitle() as mentioned
in 115853's closing comment, which I had also noticed in my own
testing, and it seems that this _is_ related to the now-clarified
problem of diskcheckd running flat-out instead of pausing between
each 64KB read.  When the specified or calculated rate exceeds
64KB/sec, the required sleep interval between 64KB chunks is less
than one second.  Since diskcheckd calculates the interval in
whole seconds -- because it calls sleep() rather than usleep() or
nanosleep() -- an interval of less than one second is calculated as
zero.  That zero "interval" gets passed to sleep(), which dutifully
returns immediately or nearly so, and the same zero is also used to
"increment" the counter that is supposed to cause updateproctitle()
to be called every 300 seconds.

I suspect the fix will be to calculate in microseconds, and call
usleep() instead of sleep().  And yes, I am planning to fix it --
and clarify the manpage -- but not tonight.

> ... and besides, such a utility really shouldn't be a daemon
> anyway but a periodic(8)-called utility with appropriate locks put
> in place to ensure more than one instance can't be run at once.

I suppose that can be argued either way.  It's not obvious to me
that using, say, 7x as much bandwidth for one day and then taking
6 days off is somehow better than spreading the testing over an
entire week.  Furthermore, using periodic(8) could get _really_
messy when checking multiple drives at different frequencies --
unless one wanted to run a separate instance of the program for
each drive (and then we would have to prevent multiple simultaneous
instances for any one drive, while still allowing different drives
to be checked at the same time).

More information about the freebsd-stable mailing list