[Bug 209099] ata2: already running! panic: bad link elm 0xfffff80003b7e6a0 prev->next != elm

bugzilla-noreply at freebsd.org bugzilla-noreply at freebsd.org
Fri Apr 29 13:58:19 UTC 2016


https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=209099

--- Comment #3 from Greg Furstenwerth <furstenwerth at gmail.com> ---
It seems that when running 'salt-call -l debug state.apply' there is no real
issue: it executes the command and does not create any problem. Once states are
applied, if I reboot and the minion restarts, the machine panics just after the
minion loads. Running 'salt machine state.apply' still shows the same behavior.
Running salt-call I get enormous output.
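
For clarity, here is the reproduction sequence as a sketch; the reboot step is
written out explicitly, and the minion ID 'machine' stands in for the real
hostname:

salt-call -l debug state.apply    # run locally on the minion: completes, no problem
salt machine state.apply          # run from the master: same behavior
shutdown -r now                   # reboot after the states have been applied
# on restart, the box panics just after the minion loads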

However, here is the paste. If you take a look at line 9, "Please install
'virt-what'": when running 'salt machine state.apply', that is the last line
the minion prints with debugging turned on before the panic.

[DEBUG   ] Reading configuration from /usr/local/etc/salt/minion
[DEBUG   ] Including configuration from '/usr/local/etc/salt/minion.d/_schedule.conf'
[DEBUG   ] Reading configuration from /usr/local/etc/salt/minion.d/_schedule.conf
[DEBUG   ] Configuration file path: /usr/local/etc/salt/minion
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[DEBUG   ] Reading configuration from /usr/local/etc/salt/minion
[DEBUG   ] Including configuration from '/usr/local/etc/salt/minion.d/_schedule.conf'
[DEBUG   ] Reading configuration from /usr/local/etc/salt/minion.d/_schedule.conf
[DEBUG   ] Please install 'virt-what' to improve results of the 'virtual' grain.
[DEBUG   ] Initializing new SAuth for ('/usr/local/etc/salt/pki/minion', 'machine.lan', 'tcp://salt.master.lan:4506')
[DEBUG   ] Generated random reconnect delay between '1000ms' and '11000ms' (1950)
[DEBUG   ] Setting zmq_reconnect_ivl to '1950ms'
[DEBUG   ] Setting zmq_reconnect_ivl_max to '11000ms'
[INFO    ] Determining pillar cache
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for ('/usr/local/etc/salt/pki/minion', 'machine.lan', 'tcp://salt.master.lan:4506', 'aes')
[DEBUG   ] Initializing new SAuth for ('/usr/local/etc/salt/pki/minion', 'machine.lan', 'tcp://salt.master.lan:4506')
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for ('/usr/local/etc/salt/pki/minion', 'machine.lan', 'tcp://salt.master.lan:4506', 'clear')
[DEBUG   ] Decrypting the current master AES key
[DEBUG   ] Loaded minion key: /usr/local/etc/salt/pki/minion/minion.pem
[DEBUG   ] Loaded minion key: /usr/local/etc/salt/pki/minion/minion.pem
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] LazyLoaded state.apply
[DEBUG   ] LazyLoaded grains.get
[DEBUG   ] LazyLoaded saltutil.is_running
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for ('/usr/local/etc/salt/pki/minion', 'machine.lan', 'tcp://salt.master.lan:4506', 'aes')
[DEBUG   ] Initializing new SAuth for ('/usr/local/etc/salt/pki/minion', 'machine.lan', 'tcp://salt.master.lan:4506')
[INFO    ] Determining pillar cache
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for ('/usr/local/etc/salt/pki/minion', 'machine.lan', 'tcp://salt.master.lan:4506', 'aes')
[DEBUG   ] Initializing new SAuth for ('/usr/local/etc/salt/pki/minion', 'machine.lan', 'tcp://salt.master.lan:4506')
[DEBUG   ] Loaded minion key: /usr/local/etc/salt/pki/minion/minion.pem
[INFO    ] Loading fresh modules for state activity
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] In saltenv 'prod', looking at rel_path u'top.sls' to resolve u'salt://top.sls'
[DEBUG   ] In saltenv 'prod', ** considering ** path u'/var/cache/salt/minion/files/prod/top.sls' to resolve u'salt://top.sls'
[INFO    ] Fetching file from saltenv 'prod', ** skipped ** latest already in cache u'salt://top.sls'
[DEBUG   ] compile template: /var/cache/salt/minion/files/prod/top.sls
[DEBUG   ] Jinja search path: ['/var/cache/salt/minion/files/prod']
[PROFILE ] Time (in seconds) to render '/var/cache/salt/minion/files/prod/top.sls' using 'jinja' renderer: 0.00688982009888
[DEBUG   ] Rendered data from file: /var/cache/salt/minion/files/prod/top.sls:

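If a backtrace would help, this is roughly how I would collect one; a sketch
assuming a configured dump device (the device name and paths are illustrative):

# /etc/rc.conf: have the kernel pick a dump device at boot
#   dumpdev="AUTO"
savecore /var/crash /dev/gpt/swap0             # rc(8) normally runs this at boot
kgdb /boot/kernel/kernel /var/crash/vmcore.0   # then 'bt' for the backtrace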