gvinum raid5 degraded with Promise PDC20375 up with Sil 3114
Marius Nuennerich
marius.nuennerich at gmx.net
Wed Jan 4 12:29:57 PST 2006
Hi folks,
after running fine for a while, I got the following this morning:
> ad6: TIMEOUT - WRITE_DMA48 retrying (1 retry left) LBA=475111936
> ad6: FAILURE - WRITE_DMA48 status=51<READY,DSC,ERROR> error=10<NID_NOT_FOUND> LBA=475111936
> GEOM_VINUM: subdisk data.p0.s2 state change: up -> down
> GEOM_VINUM: plex data.p0 state change: up -> degraded
> g_vfs_done():gvinum/data[WRITE(offset=486513836032, length=131072)]error = 5
> GEOM_VINUM: lost drive 'seag2'
> g_vfs_done():gvinum/data[WRITE(offset=486514491392, length=131072)]error = 6
> g_vfs_done():gvinum/data[WRITE(offset=486514622464, length=131072)]error = 6
> g_vfs_done():gvinum/data[WRITE(offset=486514360320, length=131072)]error = 6
> g_vfs_done():gvinum/data[WRITE(offset=486514753536, length=131072)]error = 6
> g_vfs_done():gvinum/data[WRITE(offset=486514884608, length=131072)]error = 6
ad6 is connected to the onboard VIA 6420 SATA150 controller; it is
data.p0.s2.
It seems the machine rebooted itself some time after these messages.
I did a gvinum start data.p0 and the RAID is up again, but the fsck I
ran afterwards reported many unexpected inconsistencies :( and over 700
files ended up in lost+found.
smartmontools tells me the disk is OK so far. Any idea what happened
here, or what I can do to keep this from happening again?
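For reference, a minimal sketch of the commands involved in checking the
array state and the disk's health after an event like this (device and
object names are taken from my setup above; adjust for yours):

```shell
# Show the state of all gvinum drives, subdisks, and plexes
gvinum list

# Restart/rebuild the degraded plex, as I did above
gvinum start data.p0

# Dump full SMART attributes and the drive's internal error log
smartctl -a /dev/ad6

# Queue an extended offline self-test; check results later with -l selftest
smartctl -t long /dev/ad6
```

Note that a clean SMART attribute table does not rule out cabling or
controller problems, which can also produce WRITE_DMA timeouts.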
I'm currently updating to the newest 6-STABLE...
regards
Marius
More information about the freebsd-geom mailing list