zio_done panic in 10.3

Shiva Bhanujan Shiva.Bhanujan at Quorum.com
Fri Dec 15 23:44:54 UTC 2017


I've updated both of the bug reports. I was hoping that setting secondarycache=metadata on the destination ZFS dataset where the snapshots are being received would limit the impact to the receive side. That isn't the case, and the crashes have started again. I was really hoping there could be a solution to this.
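(For reference: the secondarycache property takes the values all, none, or metadata and is set per dataset. The commands below show how it is set and verified; the pool/dataset name is a placeholder.)

    # zfs set secondarycache=metadata tank/recv
    # zfs get secondarycache tank/recv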

From: owner-freebsd-fs at freebsd.org [owner-freebsd-fs at freebsd.org] on behalf of Shiva Bhanujan [shiva.bhanujan at quorum.net]
Sent: Wednesday, November 29, 2017 5:32 AM
To: Youzhong Yang; Andriy Gapon
Cc: freebsd-fs at freebsd.org
Subject: RE: zio_done panic in 10.3

Hi Andriy,

Could you please let me know when a fix for this could be available?

Regards,
Shiva

From: Youzhong Yang [youzhong at gmail.com]
Sent: Wednesday, November 22, 2017 8:26 AM
To: Andriy Gapon
Cc: Shiva Bhanujan; cem at freebsd.org; freebsd-fs at freebsd.org
Subject: Re: zio_done panic in 10.3

Thanks, Andriy.

Two bug reports filed:

https://www.illumos.org/issues/8857
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=223803

On Wed, Nov 22, 2017 at 10:22 AM, Andriy Gapon <avg at freebsd.org> wrote:

On 22/11/2017 16:40, Youzhong Yang wrote:
> Hi Andriy,
>
> This is nice! I am 100% sure it's exactly the same issue I experienced and then
> reported to the illumos mailing list. In all the crash dumps zio->io_done =
> l2arc_read_done, so I thought the crash must be related to L2ARC. Once I set
> secondarycache=metadata, the frequency of crashes went from one every 2 days
> down to one per week. I've been puzzled by what could have caused a zio to be
> destroyed while there's still a child zio. Your explanation definitely makes sense!
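(A note for anyone checking their own crash dumps: the io_done callback mentioned above can be confirmed from kgdb roughly as follows. The kernel path, vmcore name, and frame number are placeholders.)

    # kgdb /boot/kernel/kernel /var/crash/vmcore.0
    (kgdb) bt                    # locate the frame running zio_done()
    (kgdb) frame 8               # placeholder frame number
    (kgdb) print zio->io_done    # shows <l2arc_read_done> when L2ARC is involved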

Oh, I now recall seeing your report:
https://illumos.topicbox.com/groups/zfs/Tccd8b4463865899e
I remember that it raised my interest, but then I forgot about it and didn't
correlate it with the latest reports.

> By the way, is there a FreeBSD bug report or an illumos bug number tracking this
> issue? I would be more than happy to create one if needed, and also test your
> potential fix here in our environment.

I am not aware of any existing bug report.
It would be great if you could open one [or two :-)].
If you open an illumos issue, please also add George Wilson as a watcher.
I think that George is also interested in fixing this issue and he knows the
relevant code better than I do.

Thank you!

> On Tue, Nov 21, 2017 at 3:46 PM, Andriy Gapon <avg at freebsd.org> wrote:
>
> On 21/11/2017 21:30, Shiva Bhanujan wrote:
> > It did get compressed to 0.5G - still too big to send via email. I did send some more debug information by running kgdb on the core file to Andriy, and I'm waiting for any analysis that he might provide.
>
> Yes, kgdb-over-email turned out to be a far more efficient compression :-)
> I already have an analysis based on the information provided by Shiva and by
> another user who has the same problem and contacted me privately.
> I am discussing possible ways to fix the problem with George Wilson, who was
> very kind to double-check the analysis, complete it, and suggest possible fixes.
>
> The short version is that the dbuf_prefetch and dbuf_prefetch_indirect_done
> functions chain new zio-s under the same parent zio (the completion of one
> child zio may create another child zio). They do it using arc_read, which can
> create either a logical zio (in most cases) or a vdev zio for a read from a
> cache device (l2arc). zio_done() has a check for the completion of a parent
> zio's children, but that check is not completely safe and can be broken by the
> pattern that dbuf_prefetch can create. So, under some specific circumstances
> the parent zio may complete and get destroyed while there is still a child zio.
>
> I believe this problem to be rather rare, but there could be configurations
> and workloads where it is triggered more often.
> The problem does not happen if there are no cache devices.
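(To make the unsafe window concrete, the interleaving described above can be reduced to the standalone sketch below. This is a hypothetical, heavily simplified illustration rather than OpenZFS code: the struct, its fields, and the sequential replay of the race are invented for clarity. In the real code the completion check and the child's done callback run concurrently; the sketch simply pins the losing order.)

    /* race_sketch.c - sequential replay of the losing interleaving. */
    #include <stdio.h>

    struct zio {            /* pared-down stand-in for the real zio_t */
        int children;       /* number of outstanding child zios */
        int destroyed;      /* set once the zio has been torn down */
    };

    int main(void)
    {
        struct zio parent = { .children = 1, .destroyed = 0 };

        /* 1. The last outstanding child (an l2arc/vdev read issued for a
         *    prefetch) finishes and drops the parent's child count before
         *    its done callback has run. */
        parent.children--;

        /* 2. zio_done() on the parent observes zero children; the check
         *    being unsafe, the parent completes and is destroyed. */
        if (parent.children == 0)
            parent.destroyed = 1;

        /* 3. The child's done callback (dbuf_prefetch_indirect_done in the
         *    description above) now issues the next read via arc_read,
         *    chaining a new child onto the already-destroyed parent - the
         *    condition behind the panic. */
        parent.children++;

        printf("parent destroyed=%d with %d child(ren) outstanding\n",
               parent.destroyed, parent.children);
        return 0;
    }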

> > From: Conrad Meyer [cem at freebsd.org]
> > Sent: Tuesday, November 21, 2017 9:04 AM
> > To: Shiva Bhanujan
> > Cc: Andriy Gapon; freebsd-fs at freebsd.org
> > Subject: Re: zio_done panic in 10.3
> >
> > Have you tried compressing it with e.g. xz or zstd?
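(For reference, a multi-gigabyte crash dump is usually squeezed with something along these lines; the filename is a placeholder.)

    # zstd -19 -T0 vmcore.0
    # xz -9 -T0 vmcore.0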

> --
> Andriy Gapon

--
Andriy Gapon

_______________________________________________
freebsd-fs at freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-fs
To unsubscribe, send any mail to "freebsd-fs-unsubscribe at freebsd.org"