graid often resyncs raid1 array after clean reboot/shutdown

Lawrence Stewart lstewart at freebsd.org
Tue Nov 20 00:22:14 UTC 2012


On 10/30/12 10:24, Lawrence Stewart wrote:
> Hi Alexander,
> 
> On 10/30/12 01:25, Alexander Motin wrote:
>> On 29.10.2012 11:17, Alexander Motin wrote:
>>> On 29.10.2012 10:29, Alexander Motin wrote:
>>>> Hi.
>>>>
>>>> On 29.10.2012 06:55, Lawrence Stewart wrote:
>>>>> I have a fairly new HP Compaq 8200 Elite desktop PC with 2 x 1TB
>>>>> Seagate ST1000DM003 HDDs in raid1 using the on-board Intel Matrix RAID
>>>>> controller. The system is configured to boot from ZFS off the raid1
>>>>> array, and I use it as a KDE desktop (with the on-CPU GPU and KMS).
>>>>>
>>>>> Everything works great, except that after a "shutdown -r now" of the
>>>>> system, graid almost always (I believe I've noted a few times where
>>>>> everything comes up fine) detects one of the disks in the array as
>>>>> stale and does a full resync of the array over the course of a few
>>>>> hours. Here's an example of what I see when starting up:
>>>>
>>>> From the log messages it indeed looks like the result of an unclean
>>>> shutdown. I've never seen this problem with UFS, but I never tested
>>>> graid with ZFS. I guess there may be some difference in the shutdown
>>>> process that leaves the RAID metadata with the dirty flag set on
>>>> reboot. I'll try to reproduce it now.
>>>
>>> I confirm the problem. It seems to happen only when ZFS is used as the
>>> root file system. Probably ZFS issues some last-moment write that marks
>>> the volume dirty. I will trace it further.
>>
>> I've found the problem: ZFS doesn't seem to close its devices on
>> shutdown, which prevents graid from shutting down gracefully. r242314 in
>> HEAD fixes that by marking volumes clean more aggressively on shutdown.
> 
> Thanks for the quick detective work and fix. I'll merge r242314 back to
> my local stable/9 tree and test it.

I've rebooted the machine a few times now and the array has come up in
the optimal state each time without requiring a rebuild. Thanks again for
the fix.
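
For anyone following along without reading the commit: my rough
understanding is that graid now hooks the kernel's shutdown_post_sync
eventhandler, so volume metadata gets marked clean after the final
filesystem sync even if a consumer such as ZFS never closes the provider.
The sketch below is just that understanding expressed in code. The struct
and helpers are simplified placeholders, not the actual g_raid code from
r242314:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/queue.h>
#include <sys/eventhandler.h>

/* Hypothetical, simplified volume record standing in for g_raid's own. */
struct raid_volume {
	TAILQ_ENTRY(raid_volume) v_next;
	int	v_dirty;		/* on-disk metadata says "dirty" */
};
static TAILQ_HEAD(, raid_volume) raid_volumes =
    TAILQ_HEAD_INITIALIZER(raid_volumes);

static eventhandler_tag raid_shutdown_tag;

/*
 * Runs after the final filesystem sync, just before halt/reboot, so we
 * still get a chance to clear the dirty flag even when the last provider
 * was never closed by its consumer.
 */
static void
raid_shutdown_post_sync(void *arg __unused, int howto __unused)
{
	struct raid_volume *vol;

	TAILQ_FOREACH(vol, &raid_volumes, v_next) {
		if (vol->v_dirty) {
			vol->v_dirty = 0;
			/*
			 * The real driver would write the updated metadata
			 * back to the member disks here.
			 */
		}
	}
}

static void
raid_shutdown_init(void *arg __unused)
{
	raid_shutdown_tag = EVENTHANDLER_REGISTER(shutdown_post_sync,
	    raid_shutdown_post_sync, NULL, SHUTDOWN_PRI_FIRST);
}
SYSINIT(raid_shutdown, SI_SUB_DRIVERS, SI_ORDER_ANY, raid_shutdown_init, NULL);

Whatever the exact mechanism, the observable effect matches what I see
now: the metadata is already marked clean when graid tastes the disks on
the next boot, so no rebuild gets triggered.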

Cheers,
Lawrence

