Minion not working anymore after update to 12.2

Andrea Venturoli ml at netfence.it
Mon Nov 16 07:11:53 UTC 2020


Hello.

A minion of mine stopped connecting to the master after I upgraded it to 
12.2.

Error follows:
> ...
> [DEBUG   ] Connecting to master. Attempt 1 of 1
> [ERROR   ] An un-handled exception was caught by salt's global exception handler:
> KeyError: 'inet'
> Traceback (most recent call last):
>   File "/usr/local/bin/salt-call", line 11, in <module>
>     load_entry_point('salt==3002', 'console_scripts', 'salt-call')()
>   File "/usr/local/lib/python3.7/site-packages/salt/scripts.py", line 449, in salt_call
>     client.run()
>   File "/usr/local/lib/python3.7/site-packages/salt/cli/call.py", line 48, in run
>     caller = salt.cli.caller.Caller.factory(self.config)
>   File "/usr/local/lib/python3.7/site-packages/salt/cli/caller.py", line 55, in factory
>     return ZeroMQCaller(opts, **kwargs)
>   File "/usr/local/lib/python3.7/site-packages/salt/cli/caller.py", line 320, in __init__
>     super().__init__(opts)
>   File "/usr/local/lib/python3.7/site-packages/salt/cli/caller.py", line 80, in __init__
>     self.minion = salt.minion.SMinion(opts)
>   File "/usr/local/lib/python3.7/site-packages/salt/minion.py", line 935, in __init__
>     io_loop.run_sync(lambda: self.eval_master(self.opts, failed=True))
>   File "/usr/local/lib/python3.7/site-packages/salt/ext/tornado/ioloop.py", line 459, in run_sync
>     return future_cell[0].result()
>   File "/usr/local/lib/python3.7/site-packages/salt/ext/tornado/concurrent.py", line 249, in result
>     raise_exc_info(self._exc_info)
>   File "<string>", line 4, in raise_exc_info
>   File "/usr/local/lib/python3.7/site-packages/salt/ext/tornado/gen.py", line 309, in wrapper
>     yielded = next(result)
>   File "/usr/local/lib/python3.7/site-packages/salt/minion.py", line 804, in eval_master
>     opts.update(resolve_dns(opts))
>   File "/usr/local/lib/python3.7/site-packages/salt/minion.py", line 209, in resolve_dns
>     if not opts["ipv6"]
> KeyError: 'inet'
> [the same traceback is then printed a second time]


I tracked it down to interface em1 having no IP address (it only has 
VLAN children).
This was not a problem on 12.1. I have other hosts with a similar setup 
that I have yet to upgrade: when I get the chance, I'll check whether 
they all show the problem.
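
For what it's worth, something along these lines should show which 
interfaces come back without an IPv4 entry (this is only a sketch of 
mine, assuming salt.utils.network.interfaces() behaves on FreeBSD as it 
does elsewhere; it is not part of salt):

    # Rough diagnostic: list every interface whose info dict has no
    # 'inet' key, which seems to be what the KeyError above trips over.
    import salt.utils.network

    for name, info in salt.utils.network.interfaces().items():
        if "inet" not in info:
            print("{}: no IPv4 ('inet') entry".format(name))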

I have INET6 commented out in the kernel config.

Meanwhile, is this something I should report as a FreeBSD bug or upstream?
Any workaround?
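
The only local stop-gap I can think of (just a guess on my part, not 
something I have tested, and not the actual salt code) would be to 
guard the interface lookup with .get() wherever the 'inet' key is 
indexed, roughly like this:

    # Hypothetical defensive pattern only, not a patch against the real
    # salt sources: skip interfaces that have no IPv4 ('inet') entry
    # instead of indexing the key directly.
    def ipv4_addrs(interfaces):
        addrs = []
        for info in interfaces.values():
            for entry in info.get("inet", []):  # avoids KeyError: 'inet'
                if entry.get("address"):
                    addrs.append(entry["address"])
        return addrs

Other than patching, I suppose giving em1 a placeholder IPv4 address 
would sidestep the crash, but I'd rather not do that on this box.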

  bye & Thanks
	av.



