From nobody Thu Aug 10 23:28:54 2023
From: Kevin Bowling <kevin.bowling@kev009.com>
Date: Thu, 10 Aug 2023 16:28:54 -0700
Subject: Re: ZFS deadlock in 14
To: Cy Schubert
Cc: Dag-Erling Smørgrav, current@freebsd.org
List-Id: Discussions about the use of FreeBSD-current
List-Archive: https://lists.freebsd.org/archives/freebsd-current
References: <86leeltqcb.fsf@ltc.des.no> <20230810133745.D0EC0178@slippy.cwsent.com>
In-Reply-To: <20230810133745.D0EC0178@slippy.cwsent.com>

The two MFVs on head have improved/fixed stability with poudriere for me
on a 48-core bare-metal machine.
On Thu, Aug 10, 2023 at 6:37 AM Cy Schubert wrote:
>
> In message, Kevin Bowling writes:
> > Possibly
> > https://github.com/openzfs/zfs/commit/2cb992a99ccadb78d97049b40bd442eb4fdc549d
> >
> > On Tue, Aug 8, 2023 at 10:08 AM Dag-Erling Smørgrav wrote:
> > >
> > > At some point between 42d088299c (4 May) and f0c9703301 (26 June), a
> > > deadlock was introduced in ZFS. It is still present as of 9c2823bae9 (4
> > > August) and is 100% reproducible just by starting poudriere bulk in a
> > > 16-core VM and waiting a few hours until deadlkres kicks in. In the
> > > latest instance, deadlkres complained about a bash process:
> > >
> > > #0  sched_switch (td=td@entry=0xfffffe02fb1d8000, flags=flags@entry=259) at /usr/src/sys/kern/sched_ule.c:2299
> > > #1  0xffffffff80b5a0a3 in mi_switch (flags=flags@entry=259) at /usr/src/sys/kern/kern_synch.c:550
> > > #2  0xffffffff80babcb4 in sleepq_switch (wchan=0xfffff818543a9e70, pri=64) at /usr/src/sys/kern/subr_sleepqueue.c:609
> > > #3  0xffffffff80babb8c in sleepq_wait (wchan=<unavailable>, pri=<unavailable>) at /usr/src/sys/kern/subr_sleepqueue.c:660
> > > #4  0xffffffff80b1c1b0 in sleeplk (lk=lk@entry=0xfffff818543a9e70, flags=flags@entry=2121728, ilk=ilk@entry=0x0, wmesg=wmesg@entry=0xffffffff8222a054 "zfs", pri=<optimized out>, pri@entry=64, timo=timo@entry=6, queue=1) at /usr/src/sys/kern/kern_lock.c:310
> > > #5  0xffffffff80b1a23f in lockmgr_slock_hard (lk=0xfffff818543a9e70, flags=2121728, ilk=<optimized out>, file=0xffffffff812544fb "/usr/src/sys/kern/vfs_subr.c", line=3057, lwa=0x0) at /usr/src/sys/kern/kern_lock.c:705
> > > #6  0xffffffff80c59ec3 in VOP_LOCK1 (vp=0xfffff818543a9e00, flags=2105344, file=0xffffffff812544fb "/usr/src/sys/kern/vfs_subr.c", line=3057) at ./vnode_if.h:1120
> > > #7  _vn_lock (vp=vp@entry=0xfffff818543a9e00, flags=2105344, file=<unavailable>, line=<unavailable>, line@entry=3057) at /usr/src/sys/kern/vfs_vnops.c:1815
> > > #8  0xffffffff80c4173d in vget_finish (vp=0xfffff818543a9e00, flags=<unavailable>, vs=vs@entry=VGET_USECOUNT) at /usr/src/sys/kern/vfs_subr.c:3057
> > > #9  0xffffffff80c1c9b7 in cache_lookup (dvp=dvp@entry=0xfffff802cd02ac40, vpp=vpp@entry=0xfffffe046b20ac30, cnp=cnp@entry=0xfffffe046b20ac58, tsp=tsp@entry=0x0, ticksp=ticksp@entry=0x0) at /usr/src/sys/kern/vfs_cache.c:2086
> > > #10 0xffffffff80c2150c in vfs_cache_lookup (ap=<optimized out>) at /usr/src/sys/kern/vfs_cache.c:3068
> > > #11 0xffffffff80c32c37 in VOP_LOOKUP (dvp=0xfffff802cd02ac40, vpp=0xfffffe046b20ac30, cnp=0xfffffe046b20ac58) at ./vnode_if.h:69
> > > #12 vfs_lookup (ndp=ndp@entry=0xfffffe046b20abd8) at /usr/src/sys/kern/vfs_lookup.c:1266
> > > #13 0xffffffff80c31ce1 in namei (ndp=ndp@entry=0xfffffe046b20abd8) at /usr/src/sys/kern/vfs_lookup.c:689
> > > #14 0xffffffff80c52090 in kern_statat (td=0xfffffe02fb1d8000, flag=<optimized out>, fd=-100, path=0xa75b480e070 <error: Cannot access memory at address 0xa75b480e070>, pathseg=pathseg@entry=UIO_USERSPACE, sbp=sbp@entry=0xfffffe046b20ad18) at /usr/src/sys/kern/vfs_syscalls.c:2441
> > > #15 0xffffffff80c52797 in sys_fstatat (td=<unavailable>, uap=0xfffffe02fb1d8400) at /usr/src/sys/kern/vfs_syscalls.c:2419
> > > #16 0xffffffff81049398 in syscallenter (td=<unavailable>) at /usr/src/sys/amd64/amd64/../../kern/subr_syscall.c:190
> > > #17 amd64_syscall (td=0xfffffe02fb1d8000, traced=0) at /usr/src/sys/amd64/amd64/trap.c:1199
> > > #18
> > >
> > > The lock it is trying to acquire in frame 5 belongs to another bash
> > > process which is in the process of creating a fifo:
> > >
> > > #0  sched_switch (td=td@entry=0xfffffe046acd8e40, flags=flags@entry=259) at /usr/src/sys/kern/sched_ule.c:2299
> > > #1  0xffffffff80b5a0a3 in mi_switch (flags=flags@entry=259) at /usr/src/sys/kern/kern_synch.c:550
> > > #2  0xffffffff80babcb4 in sleepq_switch (wchan=0xfffff8018acbf154, pri=87) at /usr/src/sys/kern/subr_sleepqueue.c:609
> > > #3  0xffffffff80babb8c in sleepq_wait (wchan=<unavailable>, pri=<unavailable>) at /usr/src/sys/kern/subr_sleepqueue.c:660
> > > #4  0xffffffff80b59606 in _sleep (ident=ident@entry=0xfffff8018acbf154, lock=lock@entry=0xfffff8018acbf120, priority=priority@entry=87, wmesg=0xffffffff8223af0e "zfs teardown inactive", sbt=sbt@entry=0, pr=pr@entry=0, flags=256) at /usr/src/sys/kern/kern_synch.c:225
> > > #5  0xffffffff80b45dc0 in rms_rlock_fallback (rms=0xfffff8018acbf120) at /usr/src/sys/kern/kern_rmlock.c:1015
> > > #6  0xffffffff80b45c93 in rms_rlock (rms=<optimized out>, rms@entry=0xfffff8018acbf120) at /usr/src/sys/kern/kern_rmlock.c:1036
> > > #7  0xffffffff81fb147b in zfs_freebsd_reclaim (ap=<optimized out>) at /usr/src/sys/contrib/openzfs/module/os/freebsd/zfs/zfs_vnops_os.c:5164
> > > #8  0xffffffff8111d245 in VOP_RECLAIM_APV (vop=0xffffffff822e71a0 <zfs_vnodeops>, a=a@entry=0xfffffe0410f1c9c8) at vnode_if.c:2180
> > > #9  0xffffffff80c43569 in VOP_RECLAIM (vp=0xfffff802cdbaca80) at ./vnode_if.h:1084
> > > #10 vgonel (vp=vp@entry=0xfffff802cdbaca80) at /usr/src/sys/kern/vfs_subr.c:4143
> > > #11 0xffffffff80c3ef61 in vtryrecycle (vp=0xfffff802cdbaca80) at /usr/src/sys/kern/vfs_subr.c:1693
> > > #12 vnlru_free_impl (count=count@entry=1, mnt_op=mnt_op@entry=0x0, mvp=0xfffff8010864da00) at /usr/src/sys/kern/vfs_subr.c:1344
> > > #13 0xffffffff80c49553 in vnlru_free_locked (count=1) at /usr/src/sys/kern/vfs_subr.c:1357
> > > #14 vn_alloc_hard (mp=mp@entry=0x0) at /usr/src/sys/kern/vfs_subr.c:1744
> > > #15 0xffffffff80c3f6f0 in vn_alloc (mp=0x0) at /usr/src/sys/amd64/include/atomic.h:375
> > > #16 getnewvnode_reserve () at /usr/src/sys/kern/vfs_subr.c:1888
> > > #17 0xffffffff81faa072 in zfs_create (dzp=0xfffff812200261d0, name=0xfffff8011b8ac805 "sh-np.yPbxoo", vap=0xfffffe0410f1cc20, excl=<optimized out>, mode=<optimized out>, zpp=zpp@entry=0xfffffe0410f1cbc8, cr=0xfffff80140fb1100, flag=<optimized out>, vsecp=0x0, mnt_ns=0x0) at /usr/src/sys/contrib/openzfs/module/os/freebsd/zfs/zfs_vnops_os.c:1146
> > > #18 0xffffffff81faea57 in zfs_freebsd_create (ap=0xfffffe0410f1cda0) at /usr/src/sys/contrib/openzfs/module/os/freebsd/zfs/zfs_vnops_os.c:4618
> > > #19 0xffffffff8111aa9a in VOP_MKNOD_APV (vop=0xffffffff822e71a0 <zfs_vnodeops>, a=a@entry=0xfffffe0410f1cda0) at vnode_if.c:372
> > > #20 0xffffffff80c50207 in VOP_MKNOD (dvp=<optimized out>, cnp=0xfffffe0410f1cd50, vap=0xfffffe0410f1cc20, vpp=<optimized out>) at ./vnode_if.h:188
> > > #21 kern_mkfifoat (td=0xfffffe046acd8e40, fd=-100, path=0x12772f073500, pathseg=UIO_USERSPACE, mode=<optimized out>) at /usr/src/sys/kern/vfs_syscalls.c:1492
> > > #22 0xffffffff81049398 in syscallenter (td=<unavailable>) at /usr/src/sys/amd64/amd64/../../kern/subr_syscall.c:190
> > > #23 amd64_syscall (td=0xfffffe046acd8e40, traced=0) at /usr/src/sys/amd64/amd64/trap.c:1199
> > > #24
> > >
> > > Frame 7 is trying to acquire the ZFS teardown inactive lock, which is
> > > held by a process which is performing a ZFS rollback and is waiting for
> > > the transaction to sync:
> > >
> > > #0  sched_switch (td=td@entry=0xfffffe0422ef8560, flags=flags@entry=259) at /usr/src/sys/kern/sched_ule.c:2299
> > > #1  0xffffffff80b5a0a3 in mi_switch (flags=flags@entry=259) at /usr/src/sys/kern/kern_synch.c:550
> > > #2  0xffffffff80babcb4 in sleepq_switch (wchan=0xfffff8011b83d540, pri=0) at /usr/src/sys/kern/subr_sleepqueue.c:609
> > > #3  0xffffffff80babb8c in sleepq_wait (wchan=<unavailable>, wchan@entry=0xfffff8011b83d540, pri=<unavailable>, pri@entry=0) at /usr/src/sys/kern/subr_sleepqueue.c:660
> > > #4  0xffffffff80ad7f75 in _cv_wait (cvp=cvp@entry=0xfffff8011b83d540, lock=lock@entry=0xfffff8011b83d4d0) at /usr/src/sys/kern/kern_condvar.c:146
> > > #5  0xffffffff820b42fb in txg_wait_synced_impl (dp=dp@entry=0xfffff8011b83d000, txg=8585097, wait_sig=wait_sig@entry=0) at /usr/src/sys/contrib/openzfs/module/zfs/txg.c:726
> > > #6  0xffffffff820b3cab in txg_wait_synced (dp=<unavailable>, dp@entry=0xfffff8011b83d000, txg=<unavailable>) at /usr/src/sys/contrib/openzfs/module/zfs/txg.c:736
> > > #7  0xffffffff8206d5b5 in dsl_sync_task_common (pool=pool@entry=0xfffffe0401d15000 "zroot/poudriere/jails/13amd64-default-ref/15", checkfunc=<optimized out>, syncfunc=0xffffffff8203fbc0, sigfunc=sigfunc@entry=0x0, arg=arg@entry=0xfffffe02fb827a90, blocks_modified=blocks_modified@entry=1, space_check=ZFS_SPACE_CHECK_RESERVED, early=0) at /usr/src/sys/contrib/openzfs/module/zfs/dsl_synctask.c:93
> > > #8  0xffffffff8206d3c7 in dsl_sync_task (pool=<unavailable>, pool@entry=0xfffffe0401d15000 "zroot/poudriere/jails/13amd64-default-ref/15", checkfunc=<optimized out>, syncfunc=<optimized out>, arg=<optimized out>, arg@entry=0xfffffe02fb827a90, blocks_modified=<optimized out>, blocks_modified@entry=1, space_check=<optimized out>, space_check@entry=ZFS_SPACE_CHECK_RESERVED) at /usr/src/sys/contrib/openzfs/module/zfs/dsl_synctask.c:132
> > > #9  0xffffffff8204075b in dsl_dataset_rollback (fsname=<unavailable>, fsname@entry=0xfffffe0401d15000 "zroot/poudriere/jails/13amd64-default-ref/15", tosnap=<unavailable>, owner=<unavailable>, result=result@entry=0xfffff81c826a9ea0) at /usr/src/sys/contrib/openzfs/module/zfs/dsl_dataset.c:3261
> > > #10 0xffffffff82168dd9 in zfs_ioc_rollback (fsname=0xfffffe0401d15000 "zroot/poudriere/jails/13amd64-default-ref/15", fsname@entry=<error reading variable: value is not available>, innvl=<optimized out>, innvl@entry=<error reading variable: value is not available>, outnvl=0xfffff81c826a9ea0, outnvl@entry=<error reading variable: value is not available>) at /usr/src/sys/contrib/openzfs/module/zfs/zfs_ioctl.c:4405
> > > #11 0xffffffff82164522 in zfsdev_ioctl_common (vecnum=vecnum@entry=25, zc=zc@entry=0xfffffe0401d15000, flag=flag@entry=0) at /usr/src/sys/contrib/openzfs/module/zfs/zfs_ioctl.c:7798
> > > #12 0xffffffff81f97fca in zfsdev_ioctl (dev=<unavailable>, zcmd=<unavailable>, zcmd@entry=<error reading variable: value is not available>, arg=0xfffffe02fb827d50 "\017", arg@entry=<error reading variable: value is not available>, flag=<unavailable>, td=<unavailable>) at /usr/src/sys/contrib/openzfs/module/os/freebsd/zfs/kmod_core.c:168
> > > #13 0xffffffff809d6212 in devfs_ioctl (ap=0xfffffe02fb827c50) at /usr/src/sys/fs/devfs/devfs_vnops.c:935
> > > #14 0xffffffff80c585f2 in vn_ioctl (fp=0xfffff8052cdd80f0, com=<optimized out>, data=0xfffffe02fb827d50, active_cred=0xfffff80122ab1e00, td=<unavailable>) at /usr/src/sys/kern/vfs_vnops.c:1704
> > > #15 0xffffffff809d68ee in devfs_ioctl_f (fp=<unavailable>, fp@entry=<error reading variable: value is not available>, com=<unavailable>, com@entry=<error reading variable: value is not available>, data=<unavailable>, data@entry=<error reading variable: value is not available>, cred=<unavailable>, cred@entry=<error reading variable: value is not available>, td=<unavailable>, td@entry=<error reading variable: value is not available>) at /usr/src/sys/fs/devfs/devfs_vnops.c:866
> > > #16 0xffffffff80bc57e6 in fo_ioctl (fp=0xfffff8052cdd80f0, com=3222821401, data=<unavailable>, active_cred=<unavailable>, td=0xfffffe0422ef8560) at /usr/src/sys/sys/file.h:367
> > > #17 kern_ioctl (td=td@entry=0xfffffe0422ef8560, fd=4, com=com@entry=3222821401, data=<unavailable>, data@entry=0xfffffe02fb827d50 "\017") at /usr/src/sys/kern/sys_generic.c:807
> > > #18 0xffffffff80bc54f2 in sys_ioctl (td=0xfffffe0422ef8560, uap=0xfffffe0422ef8960) at /usr/src/sys/kern/sys_generic.c:715
> > > #19 0xffffffff81049398 in syscallenter (td=<unavailable>) at /usr/src/sys/amd64/amd64/../../kern/subr_syscall.c:190
> > > #20 amd64_syscall (td=0xfffffe0422ef8560, traced=0) at /usr/src/sys/amd64/amd64/trap.c:1199
> [...]
>
> The backtrace looks different, though it certainly smells like PR/271945.
>
> I've had panics similar to PR/271945 on an amd64 machine with a mirrored
> zpool with four vdevs running poudriere with AMD64 jails. My other amd64
> machine, with a mirrored zpool with two vdevs using i386 jails, has no
> such issue. All other workloads are unaffected.
>
> On the affected machine, running poudriere bulk with -J N:1 circumvents
> the issue, so far. There were two openzfs cherry-picks this morning. I
> intend to try them against a full bulk build later today.
>
>
> --
> Cheers,
> Cy Schubert
> FreeBSD UNIX: Web: https://FreeBSD.org
> NTP: Web: https://nwtime.org
>
> e^(i*pi)+1=0
>
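[Editor's note: taken together, the three quoted backtraces describe a wait-for chain: the stat'ing bash waits on a vnode lock held by the mkfifo'ing bash, which waits on the "zfs teardown inactive" lock held by the rollback, which waits in txg_wait_synced. The toy model below (plain Python, not FreeBSD kernel code) sketches how such a chain closes into a detectable cycle; thread labels are illustrative, and the final edge, explaining why the txg sync never completes, is an assumption made here since the quoted traces stop at txg_wait_synced.]

```python
# Toy wait-for-graph model of the deadlock described in the backtraces.
# Illustrative only: the edge from the rollback back to the reclaiming
# thread is an assumption used to close the cycle for demonstration.

def find_cycle(waits_for):
    """Return the members of a cycle in a thread -> blocking-thread map,
    or None if every chain terminates."""
    for start in waits_for:
        seen, node = [], start
        while node in waits_for:
            if node in seen:
                return seen[seen.index(node):]  # cycle = the repeated tail
            seen.append(node)
            node = waits_for[node]
    return None

# Edges taken from the three stack traces (the last edge is hypothetical):
waits_for = {
    "bash (fstatat)": "bash (mkfifoat)",  # vnode lock, trace 1 frames 4-8
    "bash (mkfifoat)": "zfs rollback",    # "zfs teardown inactive" rms lock
    "zfs rollback": "bash (mkfifoat)",    # txg sync stalled (assumed edge)
}

print(find_cycle(waits_for))  # the two threads forming the cycle
```

deadlkres, mentioned above, flags threads that have stayed blocked too long rather than computing a cycle explicitly; the model here only illustrates why all three threads appear stuck at once.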