10.2-stable, zvol + ctld = crash

Steven Hartland killing at multiplay.co.uk
Thu Dec 10 21:06:30 UTC 2015


It should always run on boot, but if this is the first time the machine 
has been rebooted since the entry was added to rc.conf, and 
/etc/rc.d/dumpon start was not run manually, then no dump device was 
active at the time of the panic and you could see this.

If that's not the case, it may be worth checking /var/log/messages, or 
the console if you have access to it, to see if an error occurred that 
prevented the dump from being read.
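
For reference, a quick way to check (just a sketch, using the default 
/var/crash directory and the ada0p3 swap device reported later in this 
thread):

sysctl kern.shutdown.dumpdevname    # confirm the kernel has a dump device set
service dumpon start                # apply dumpdev="AUTO" from rc.conf
savecore -v /var/crash /dev/ada0p3  # extract any saved kernel core from swap

If savecore finds nothing, the dump was never written at panic time.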

On 10/12/2015 20:50, mxb wrote:
> So my guess is that savecore was never called to extract the core from swap?
>
>
>> On 10 dec. 2015, at 21:43, mxb <mxb at alumni.chalmers.se> wrote:
>>
>>
>> root at nas:~ # sysctl kern.shutdown.dumpdevname
>> kern.shutdown.dumpdevname: ada0p3
>> root at nas:~ #
>>
>>> On 10 dec. 2015, at 18:35, Steven Hartland <killing at multiplay.co.uk> wrote:
>>>
>>> dumpdev="AUTO" only works if you have swap configured.
>>>
>>> You can check whether it's properly configured with: sysctl kern.shutdown.dumpdevname; if it's blank then it's not configured.
>>>
>>> dumpdir defaults to /var/crash so no need to set that.
>>>
>>> On 10/12/2015 17:16, mxb wrote:
>>>> No core found, but the system is configured to save one:
>>>>
>>>> dumpdev="AUTO"
>>>> dumpdir="/var/crash"
>>>>
>>>> Only minfree ascii file found.
>>>>
>>>>> On 10 dec. 2015, at 17:52, Steven Hartland <killing at multiplay.co.uk> wrote:
>>>>>
>>>>> As a workaround you can disable TRIM:
>>>>> sysctl vfs.zfs.trim.enabled=0
>>>>>
>>>>> Could you get the argument details for frames #6 and #8 from the kernel core for further investigation, please?
>>>>>
>>>>> On 10/12/2015 16:41, mxb wrote:
>>>>>> Hey,
>>>>>> just got panic and reboot:
>>>>>>
>>>>>>
>>>>>> Dec 10 17:22:32 nas kernel: panic: solaris assert: start < end, file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/trim_map.c, line: 219
>>>>>> Dec 10 17:22:32 nas kernel: cpuid = 6
>>>>>> Dec 10 17:22:32 nas kernel: KDB: stack backtrace:
>>>>>> Dec 10 17:22:32 nas kernel: #0 0xffffffff80981be0 at kdb_backtrace+0x60
>>>>>> Dec 10 17:22:32 nas kernel: #1 0xffffffff80945716 at vpanic+0x126
>>>>>> Dec 10 17:22:32 nas kernel: #2 0xffffffff809455e3 at panic+0x43
>>>>>> Dec 10 17:22:32 nas kernel: #3 0xffffffff81c931fd at assfail+0x1d
>>>>>> Dec 10 17:22:32 nas kernel: #4 0xffffffff81a6cd01 at trim_map_segment_add+0x41
>>>>>> Dec 10 17:22:32 nas kernel: #5 0xffffffff81a6c1ef at trim_map_free_locked+0x9f
>>>>>> Dec 10 17:22:32 nas kernel: #6 0xffffffff81a6c128 at trim_map_free+0x98
>>>>>> Dec 10 17:22:32 nas kernel: #7 0xffffffff819b8af0 at arc_release+0x100
>>>>>> Dec 10 17:22:32 nas kernel: #8 0xffffffff819c1b27 at dbuf_dirty+0x357
>>>>>> Dec 10 17:22:32 nas kernel: #9 0xffffffff819c90dc at dmu_write+0xfc
>>>>>> Dec 10 17:22:32 nas kernel: #10 0xffffffff81a6900f at zvol_strategy+0x23f
>>>>>> Dec 10 17:22:32 nas kernel: #11 0xffffffff81a67531 at zvol_geom_start+0x51
>>>>>> Dec 10 17:22:32 nas kernel: #12 0xffffffff808a551e at g_io_request+0x38e
>>>>>> Dec 10 17:22:32 nas kernel: #13 0xffffffff81e34d5d at ctl_be_block_dispatch_dev+0x20d
>>>>>> Dec 10 17:22:32 nas kernel: #14 0xffffffff81e356fd at ctl_be_block_worker+0x5d
>>>>>> Dec 10 17:22:32 nas kernel: #15 0xffffffff809901d5 at taskqueue_run_locked+0xe5
>>>>>> Dec 10 17:22:32 nas kernel: #16 0xffffffff80990c68 at taskqueue_thread_loop+0xa8
>>>>>> Dec 10 17:22:32 nas kernel: #17 0xffffffff8090f07a at fork_exit+0x9a
>>>>>>
>>>>>>
>>>>>> This is FreeBSD 10.2-STABLE #0 r289883: Sat Oct 24 23:14:33 CEST 2015
>>>>>>
>>>>>> Any ideas?
>>>>>>
>>>>>> Several ZVOLs are exported to ESXi 6.x.
>>>>>> LUN 2 was added, iSCSI was configured on ESXi and initiated, and ctld was restarted.
>>>>>>
>>>>>> root at nas:~ # cat /etc/ctl.conf
>>>>>>
>>>>>> portal-group pg0 {
>>>>>> 	discovery-auth-group no-authentication
>>>>>> 	listen 0.0.0.0
>>>>>> }
>>>>>>
>>>>>> target iqn.2015-03.com.unixconn:target0 {
>>>>>> 	auth-group no-authentication
>>>>>> 	portal-group pg0
>>>>>>
>>>>>> 	lun 0 {
>>>>>> 		path /dev/zvol/zfspool/iscsi0
>>>>>> 		size 250G
>>>>>> 	}
>>>>>>
>>>>>> 	lun 1 {
>>>>>> 		path /dev/zvol/zfspool/grey_timemachine0
>>>>>> 		size 500G
>>>>>> 	}
>>>>>>
>>>>>> 	lun 2 {
>>>>>> 		path /dev/zvol/zfspool/vcenter
>>>>>> 		size 200G
>>>>>> 	}
>>>>>>
>>>>>> }
>>>>>>
>>>>>> //mxb
>>>>>>


