ZFS...

Borja Marcos borjam at sarenet.es
Thu May 9 09:41:25 UTC 2019



> On 9 May 2019, at 00:55, Michelle Sullivan <michelle at sorbs.net> wrote:
> 
> 
> 
> This is true, but I’m inclined to agree with the ZFS devs that this might not be a good idea... if ZFS can’t work it out already, the best thing to do will probably be to get everything off it and reformat…

That’s true. I would rescue what I could and create the pool again, but only after testing the setup thoroughly.

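If the pool can still be imported at all, a read-only import is probably the safest way to copy things off before destroying it. Roughly something like this (pool name and destination are placeholders, rsync from ports is just an example, and this assumes the import still works):

    zpool import -o readonly=on -f tank
    rsync -a /tank/ /some/other/storage/

or a zfs send of any existing snapshots to another pool or host.
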
It would be worth having a look at the excellent guide offered by the FreeNAS people. It’s full of sound advice and a
priceless list of “don’ts”, such as SATA port multipliers, etc.

> 
>> That should not be hard to write if everything else on the disk has no
>> issues. Didn’t you say in another message that the system is now returning
>> 100’s of drive errors?
> 
> No, one disk in the 16-disk RAIDZ2... previously unseen, but it could be that the errors have occurred in the last 6 weeks... every time I reboot it starts resilvering, gets to 761M resilvered and then stops.

That’s a really bad sign. It shouldn’t happen. 

>> How does that relate to the statement =>Everything on
>> the disk is fine except for a little bit of corruption in the freespace map?
> 
> Well, I think it goes through until it hits that little bit of corruption that stops it mounting... then stops again...
> 
> I’m seeing 100s of hard errors at the beginning of one of the drives... they were reported in syslog, but only just, so it could be a new thing. Could be previously undetected... no way to know.

As for disk monitoring, smartmontools can be pretty good, although only as an indicator. I also monitor my systems using Orca (I wrote a crude “devilator” many years
ago), and I gather disk I/O statistics from GEOM, of which the read/write/delete/flush times are very valuable. An ailing disk can be returning valid data yet become very slow due to retries.
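For the sake of illustration, something along these lines is what I have in mind (device names are just placeholders, adjust to taste):

    smartctl -a /dev/da0         # SMART attributes, error counters and self-test log
    smartctl -t short /dev/da0   # kick off a short self-test
    gstat -pd                    # live per-disk GEOM stats, including delete ops and ms per transaction

The ms/r and ms/w columns in gstat are what usually give away a drive that still returns valid data but has become painfully slow.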





Borja.



