From nobody Thu Sep 07 23:32:34 2023
Date: Thu, 7 Sep 2023 16:32:34 -0700
From: Mark Millard
To: Alexander Motin
Cc: Glen Barber, Martin Matuska, Current FreeBSD,
 FreeBSD-STABLE Mailing List, Pawel Jakub Dawidek
Subject: Re: main [and, likely, stable/14]: do not set
 vfs.zfs.bclone_enabled=1 with that zpool feature enabled because it
 still leads to panics
Message-Id: <354C5B8C-4216-4171-B8C2-8E827817F8E5@yahoo.com>
In-Reply-To: <4f4e2b68-57e0-a475-e2bd-1f2b8844ebfe@FreeBSD.org>
References: <7CE2CAAF-8BB0-4422-B194-4A6B0A4BC12C@yahoo.com>
 <08B7E72B-78F1-4ACA-B09D-E8C34BCE2335@yahoo.com>
 <20230907184823.GC4090@FreeBSD.org>
 <4f4e2b68-57e0-a475-e2bd-1f2b8844ebfe@FreeBSD.org>
List-Id: Discussions about the use of FreeBSD-current
List-Archive: https://lists.freebsd.org/archives/freebsd-current

On Sep 7, 2023, at 13:07, Alexander Motin wrote:

> Thanks, Mark.
>
> On 07.09.2023 15:40, Mark Millard wrote:
>> On Sep 7, 2023, at 11:48, Glen Barber wrote:
>>> On Thu, Sep 07, 2023 at 11:17:22AM -0700, Mark Millard wrote:
>>>> When I next have time, should I retry based on a more recent
>>>> vintage of main that includes 969071be938c ?
>>>
>>> Yes, please, if you can.
>> As it stands, I rebooted that machine into my normal
>> environment, so the after-crash-with-dump-info
>> context is preserved. I'll presume there is no need
>> to preserve that context unless I hear otherwise.
>> (But I'll work on this until later today.)
>> Even my normal environment predates the commit in
>> question by a few commits. So I'll end up doing a
>> more general round of updates overall.
>> Someone can let me know if there is a preference
>> for debug over non-debug for the next test run.
>
> It is not unknown for some bugs to disappear once debugging is
> enabled, due to different execution timings, but generally a debug
> kernel may detect the problem closer to its origin instead of
> looking at random consequences.
> I am only starting to look at this report (unless Pawel or somebody
> beats me to it), and don't have additional requests yet, but if you
> can repeat the same with a debug kernel (the in-base ZFS's ZFS_DEBUG
> setting follows the kernel's INVARIANTS), it may give us some
> additional information.

So I did a zpool import, rewinding to the checkpoint.
(This depends on the questionable zfs behaving fully as
desired for this. Notably, the normal environment has
vfs.zfs.bclone_enabled=0, including when it was doing
this activity.)

My normal environment reported no problems.

Note: the earlier snapshot from my first setup was still
in place, since it was made just before the original
checkpoint used above. However, the rewind did remove the
/var/crash/ material that had been added.

I did the appropriate zfs mount.

I installed a debug kernel and world to the import. Again,
no problems reported.

I did the appropriate zfs umount.

I did the appropriate zpool export.

I rebooted with the test media.

# sysctl vfs.zfs.bclone_enabled
vfs.zfs.bclone_enabled: 1

# zpool trim -w zamd64
# zpool checkpoint zamd64

# uname -apKU
FreeBSD amd64-ZFS 15.0-CURRENT FreeBSD 15.0-CURRENT amd64 1500000 #74 main-n265188-117c54a78ccd-dirty: Tue Sep 5 21:29:53 PDT 2023 root@amd64-ZFS:/usr/obj/BUILDs/main-amd64-dbg-clang/usr/main-src/amd64.amd64/sys/GENERIC-DBG amd64 amd64 1500000 1500000

(So, before the 969071be938c vintage: the same sources as
for my last run, but a debug build.)

# poudriere bulk -jmain-amd64-bulk_a -a
. . .
[00:03:23] Building 34214 packages using up to 32 builders
[00:03:23] Hit CTRL+t at any time to see build progress and stats
[00:03:23] [01] [00:00:00] Builder starting
[00:04:19] [01] [00:00:56] Builder started
[00:04:20] [01] [00:00:01] Building ports-mgmt/pkg | pkg-1.20.6
[00:05:33] [01] [00:01:14] Finished ports-mgmt/pkg | pkg-1.20.6: Success
[00:05:53] [01] [00:00:00] Building print/indexinfo | indexinfo-0.3.1
[00:05:53] [02] [00:00:00] Builder starting
. . .
[00:05:54] [32] [00:00:00] Builder starting
[00:06:11] [01] [00:00:18] Finished print/indexinfo | indexinfo-0.3.1: Success
[00:06:12] [01] [00:00:00] Building devel/gettext-runtime | gettext-runtime-0.22_1
[00:08:24] [01] [00:02:12] Finished devel/gettext-runtime | gettext-runtime-0.22_1: Success
[00:08:31] [01] [00:00:00] Building devel/libtextstyle | libtextstyle-0.22
[00:10:06] [05] [00:04:13] Builder started
[00:10:06] [05] [00:00:00] Building devel/autoconf-switch | autoconf-switch-20220527
[00:10:06] [31] [00:04:12] Builder started
[00:10:06] [31] [00:00:00] Building devel/libatomic_ops | libatomic_ops-7.8.0
. . .

Crashed again, with 158 *.pkg files in .building/All/ after rebooting.

The crash is similar to the non-debug one. No extra output
from the debug build.
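For context on how a poudriere bulk run reaches this ZFS code at all:
the syscall frame at the bottom of the backtrace below is
copy_file_range(2), which tools such as cp(1) can use on recent
FreeBSD, and which ZFS turns into a block-clone request when
vfs.zfs.bclone_enabled=1. A minimal userland sketch of such a call
follows; the file names are made up for illustration and error
handling is trimmed, so treat it as a sketch rather than what
poudriere literally runs:

#include <sys/types.h>
#include <err.h>
#include <fcntl.h>
#include <limits.h>
#include <unistd.h>

int
main(void)
{
	/* "in.dat" and "out.dat" are hypothetical names for illustration. */
	int infd = open("in.dat", O_RDONLY);
	int outfd = open("out.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	ssize_t n;

	if (infd == -1 || outfd == -1)
		err(1, "open");

	/*
	 * One request for "as much as possible"; SSIZE_MAX matches the
	 * len=9223372036854775807 visible in the kern_copy_file_range()
	 * frame of the backtrace. On ZFS with block cloning enabled this
	 * goes VOP_COPY_FILE_RANGE() -> zfs_freebsd_copy_file_range() ->
	 * zfs_clone_range(), the same path that panics here.
	 */
	while ((n = copy_file_range(infd, NULL, outfd, NULL, SSIZE_MAX, 0)) > 0)
		;
	if (n == -1)
		err(1, "copy_file_range");

	close(infd);
	close(outfd);
	return (0);
}

The sketch is only to show how the failing path is reached from
userland; nothing below depends on it.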
For reference:

Unread portion of the kernel message buffer:

panic: Solaris(panic): zfs: accessing past end of object 422/10b1c02 (size=2560 access=2560+2560)
cpuid = 15
time = 1694127988
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe02e783b5a0
vpanic() at vpanic+0x132/frame 0xfffffe02e783b6d0
panic() at panic+0x43/frame 0xfffffe02e783b730
vcmn_err() at vcmn_err+0xeb/frame 0xfffffe02e783b860
zfs_panic_recover() at zfs_panic_recover+0x59/frame 0xfffffe02e783b8c0
dmu_buf_hold_array_by_dnode() at dmu_buf_hold_array_by_dnode+0xb8/frame 0xfffffe02e783b970
dmu_brt_clone() at dmu_brt_clone+0x61/frame 0xfffffe02e783b9f0
zfs_clone_range() at zfs_clone_range+0xaa3/frame 0xfffffe02e783bbc0
zfs_freebsd_copy_file_range() at zfs_freebsd_copy_file_range+0x18a/frame 0xfffffe02e783bc40
vn_copy_file_range() at vn_copy_file_range+0x114/frame 0xfffffe02e783bce0
kern_copy_file_range() at kern_copy_file_range+0x36c/frame 0xfffffe02e783bdb0
sys_copy_file_range() at sys_copy_file_range+0x78/frame 0xfffffe02e783be00
amd64_syscall() at amd64_syscall+0x14f/frame 0xfffffe02e783bf30
fast_syscall_common() at fast_syscall_common+0xf8/frame 0xfffffe02e783bf30
--- syscall (569, FreeBSD ELF64, copy_file_range), rip = 0xa8a8d32b55a, rsp = 0xa8a8c8fa158, rbp = 0xa8a8c8fa5f0 ---
KDB: enter: panic

__curthread () at /usr/main-src/sys/amd64/include/pcpu_aux.h:57
57              __asm("movq %%gs:%P1,%0" : "=r" (td) : "n" (offsetof(struct pcpu,
(kgdb) #0  __curthread () at /usr/main-src/sys/amd64/include/pcpu_aux.h:57
#1  doadump (textdump=textdump@entry=0)
    at /usr/main-src/sys/kern/kern_shutdown.c:405
#2  0xffffffff804a505a in db_dump (dummy=, dummy2=, dummy3=, dummy4=)
    at /usr/main-src/sys/ddb/db_command.c:591
#3  0xffffffff804a4e5d in db_command (last_cmdp=, cmd_table=, dopager=true)
    at /usr/main-src/sys/ddb/db_command.c:504
#4  0xffffffff804a4b1d in db_command_loop ()
    at /usr/main-src/sys/ddb/db_command.c:551
#5  0xffffffff804a81f6 in db_trap (type=, code=)
    at /usr/main-src/sys/ddb/db_main.c:268
#6  0xffffffff80bb0793 in kdb_trap (type=type@entry=3, code=code@entry=0, tf=tf@entry=0xfffffe02e783b4e0)
    at /usr/main-src/sys/kern/subr_kdb.c:790
#7  0xffffffff810623c9 in trap (frame=0xfffffe02e783b4e0)
    at /usr/main-src/sys/amd64/amd64/trap.c:608
#8
#9  kdb_enter (why=, msg=)
    at /usr/main-src/sys/kern/subr_kdb.c:556
#10 0xffffffff80b61b33 in vpanic (fmt=0xffffffff82c3914f "%s%s", ap=ap@entry=0xfffffe02e783b710)
    at /usr/main-src/sys/kern/kern_shutdown.c:958
#11 0xffffffff80b61913 in panic (fmt=0xffffffff82369800 "W\346\025\201\377\377\377\377")
    at /usr/main-src/sys/kern/kern_shutdown.c:894
#12 0xffffffff8299464b in vcmn_err (ce=, fmt=0xffffffff82c6f791 "zfs: accessing past end of object %llx/%llx (size=%u access=%llu+%llu)", adx=0xfffffe02e783b8a0)
    at /usr/main-src/sys/contrib/openzfs/module/os/freebsd/spl/spl_cmn_err.c:60
#13 0xffffffff82ab1ab9 in zfs_panic_recover (fmt=0x12 )
    at /usr/main-src/sys/contrib/openzfs/module/zfs/spa_misc.c:1594
#14 0xffffffff82a10138 in dmu_buf_hold_array_by_dnode (dn=0xfffff80b9edd67f0, offset=offset@entry=2560, length=length@entry=2560, read=read@entry=0, tag=0xffffffff82c1c507, numbufsp=numbufsp@entry=0xfffffe02e783b9bc, dbpp=0xfffffe02e783b9c0, flags=0)
    at /usr/main-src/sys/contrib/openzfs/module/zfs/dmu.c:543
#15 0xffffffff82a13c91 in dmu_buf_hold_array (os=, object=, read=0,
    numbufsp=0xfffffe02e783b9bc, dbpp=0xfffffe02e783b9c0, offset=, length=, tag=)
    at /usr/main-src/sys/contrib/openzfs/module/zfs/dmu.c:654
#16 dmu_brt_clone (os=os@entry=0xfffff8011e989800, object=, offset=offset@entry=2560, length=length@entry=2560, tx=tx@entry=0xfffff81b68d0bb00, bps=bps@entry=0xfffffe04bcc60000, nbps=1, replay=0)
    at /usr/main-src/sys/contrib/openzfs/module/zfs/dmu.c:2301
#17 0xffffffff82b7e393 in zfs_clone_range (inzp=0xfffff805a65f7000, inoffp=0xfffff805c5e13548, outzp=0xfffff809d42c2cb0, outoffp=0xfffff80134c184f8, lenp=lenp@entry=0xfffffe02e783bc00, cr=0xfffff8041841f700)
    at /usr/main-src/sys/contrib/openzfs/module/zfs/zfs_vnops.c:1302
#18 0xffffffff829b77ca in zfs_freebsd_copy_file_range (ap=0xfffffe02e783bc58)
    at /usr/main-src/sys/contrib/openzfs/module/os/freebsd/zfs/zfs_vnops_os.c:6294
#19 0xffffffff80c74694 in VOP_COPY_FILE_RANGE (invp=, inoffp=, outvp=0xffffffff81183b4f, outvp@entry=0xfffffe02e783bce0, outoffp=0xffffffff812265a2, lenp=0x0, lenp@entry=0xfffffe02e783bc70, flags=0, incred=0xfffff8041841f700, outcred=0x0, fsizetd=0xfffffe021b5f4740)
    at ./vnode_if.h:2385
#20 vn_copy_file_range (invp=invp@entry=0xfffff80435eef8c0, inoffp=inoffp@entry=0xfffff805c5e13548, outvp=0xffffffff81183b4f, outvp@entry=0xfffff80ce8c8ee00, outoffp=0xffffffff812265a2, lenp=0x0, lenp@entry=0xfffffe02e783bd40, flags=flags@entry=0, incred=0xfffff8041841f700, outcred=0xfffff8041841f700, fsize_td=0xfffffe021b5f4740)
    at /usr/main-src/sys/kern/vfs_vnops.c:3085
#21 0xffffffff80c6f22c in kern_copy_file_range (td=td@entry=0xfffffe021b5f4740, infd=, inoffp=0xfffff805c5e13548, inoffp@entry=0x0, outfd=, outoffp=0xffffffff812265a2, outoffp@entry=0x0, len=9223372036854775807, flags=0)
    at /usr/main-src/sys/kern/vfs_syscalls.c:4971
#22 0xffffffff80c6f338 in sys_copy_file_range (td=0xfffffe021b5f4740, uap=0xfffffe021b5f4b40)
    at /usr/main-src/sys/kern/vfs_syscalls.c:5009
#23 0xffffffff8106323f in syscallenter (td=0xfffffe021b5f4740)
    at /usr/main-src/sys/amd64/amd64/../../kern/subr_syscall.c:187
#24 amd64_syscall (td=0xfffffe021b5f4740, traced=0)
    at /usr/main-src/sys/amd64/amd64/trap.c:1197
#25
#26 0x00000a8a8d32b55a in ?? ()
Backtrace stopped: Cannot access memory at address 0xa8a8c8fa158
(kgdb)

>> Looking at "git: 969071be938c - main", the relevant
>> part seems to be just (white space possibly not
>> preserved accurately):
>>
>> diff --git a/sys/kern/vfs_vnops.c b/sys/kern/vfs_vnops.c
>> index 9fb5aee6a023..4e4161ef1a7f 100644
>> --- a/sys/kern/vfs_vnops.c
>> +++ b/sys/kern/vfs_vnops.c
>> @@ -3076,12 +3076,14 @@ vn_copy_file_range(struct vnode *invp, off_t *inoffp, struct vnode *outvp,
>>  		goto out;
>>
>>  	/*
>> -	 * If the two vnode are for the same file system, call
>> +	 * If the two vnodes are for the same file system type, call
>>  	 * VOP_COPY_FILE_RANGE(), otherwise call vn_generic_copy_file_range()
>> -	 * which can handle copies across multiple file systems.
>> +	 * which can handle copies across multiple file system types.
>>  	 */
>>  	*lenp = len;
>> -	if (invp->v_mount == outvp->v_mount)
>> +	if (invp->v_mount == outvp->v_mount ||
>> +	    strcmp(invp->v_mount->mnt_vfc->vfc_name,
>> +	    outvp->v_mount->mnt_vfc->vfc_name) == 0)
>>  		error = VOP_COPY_FILE_RANGE(invp, inoffp, outvp, outoffp,
>>  		    lenp, flags, incred, outcred, fsize_td);
>>  	else
>>
>> That looks to call VOP_COPY_FILE_RANGE in more contexts and
>> vn_generic_copy_file_range in fewer.
>> The backtrace I reported involves: VOP_COPY_FILE_RANGE
>> So it appears this change is unlikely to invalidate my
>> test result, although failure might happen sooner if
>> more VOP_COPY_FILE_RANGE calls happen with the newer code.
>
> Your logic is likely right, but if you have block cloning requests
> both within and between datasets, this patch may change the pattern.
> Though it is obviously not a fix for the issue. I responded to the
> commit email only because it makes no difference while
> vfs.zfs.bclone_enabled is 0.
>
>> That in turn means that someone may come up with some
>> other change for me to test by the time I get around to
>> setting up another test. Let me know if so.
>
> --
> Alexander Motin

===
Mark Millard
marklmi at yahoo.com
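P.S.: Decoding the panic text above for anyone skimming: 422/10b1c02
are the dataset and object numbers, size=2560 looks to be the object's
data size as the DMU sees it, and access=2560+2560 means the clone
request's offset and length were both 2560, i.e. the request starts
exactly at the current end of the object. My paraphrase of the check
that fires in dmu_buf_hold_array_by_dnode() is roughly the following
sketch (not the literal OpenZFS source; dsobj here just stands for the
owning dataset's object number):

	/*
	 * Paraphrased sketch of the bounds check behind
	 * "zfs: accessing past end of object ... (size=%u access=%llu+%llu)".
	 * For an object whose data fits in a single block, an access that
	 * extends past dn_datablksz is rejected; zfs_panic_recover()
	 * panics unless the vfs.zfs.recover tunable is set.
	 */
	if (offset + length > dn->dn_datablksz) {
		zfs_panic_recover("zfs: accessing past end of object "
		    "%llx/%llx (size=%u access=%llu+%llu)",
		    (u_longlong_t)dsobj, (u_longlong_t)dn->dn_object,
		    dn->dn_datablksz, (u_longlong_t)offset,
		    (u_longlong_t)length);
		return (SET_ERROR(EIO));
	}

If that reading is right, the interesting question seems to be why
dmu_brt_clone() is handed a destination range that lies entirely at or
past the end of the destination object as the DMU currently sees it.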