geom_raid5 livelock?

R. B. Riddick arne_woerner at yahoo.com
Mon Jan 22 13:15:38 UTC 2007


--- "R. B. Riddick" <arne_woerner at yahoo.com> wrote:
> It looks like the same consumer returns false data again and again in
> this strange situation, although at the same time a dd to the same consumer
> at the same offset returns data that is consistent with the parity block.
> 
> Does anybody here have an idea why GEOM does that?
> Could it be that graid5 somehow ruined memory management?
> Could it be that GEOM gets confused by simultaneous requests?
>
I think it is not graid5 that ruined memory management, but <tadah> UFS
changes memory areas while a read request that has to use the same memory
area is not yet completed.</tadah>

Hints:
1. Since graid5's SAFEOP mode now uses only graid5-private memory for the
parity check, no parity errors show up anymore (see the second sketch after
this list).
2. Whenever I checked, it was always the user-data memory chunk that held the
bad data.
3. It also happened in a quite simple special case (I used just 2 disks, so
that graid5 behaved like a 2-disk gmirror with round-robin balance).
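
And that is why hint 1 makes the parity errors disappear: when the check runs
on data held in graid5-private memory and the caller's buffer is only filled
afterwards, a caller that scribbles over its own buffer can no longer fake a
parity mismatch. Roughly like this (again just a sketch with simulated
"disks", not the actual SAFEOP code):

#include <stdio.h>
#include <string.h>

#define BLK 512

/* Simulated on-disk contents (stand-ins for the real consumers). */
static unsigned char disk0[BLK], disk1[BLK], diskp[BLK];

/*
 * Read one stripe and verify parity in graid5-private memory; only after the
 * check does the caller's buffer get touched, so the caller rewriting its own
 * buffer cannot fake a parity error.  (Hypothetical function, not the actual
 * SAFEOP implementation.)
 */
static int
read_and_verify(unsigned char *caller_buf)
{
	unsigned char priv0[BLK], priv1[BLK], privp[BLK]; /* private memory */
	int i, bad = 0;

	memcpy(priv0, disk0, BLK);	/* "read" from the consumers */
	memcpy(priv1, disk1, BLK);
	memcpy(privp, diskp, BLK);

	for (i = 0; i < BLK; i++)	/* check runs on private memory */
		if ((priv0[i] ^ priv1[i]) != privp[i])
			bad++;

	memcpy(caller_buf, priv0, BLK);	/* hand data to the caller last */
	return (bad);
}

int
main(void)
{
	unsigned char buf[BLK];
	int i;

	memset(disk0, 0xaa, BLK);
	memset(disk1, 0x55, BLK);
	for (i = 0; i < BLK; i++)
		diskp[i] = disk0[i] ^ disk1[i];

	printf("mismatches: %d\n", read_and_verify(buf));
	return (0);
}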

For further details, see:
http://perforce.freebsd.org/chv.cgi?CH=113310

Is there anyone here who can validate my theory (it feels so _wrong_!)? :-)

-Arne


 

