[PATCH] Netdump for review and testing -- preliminary version
Robert N. M. Watson
rwatson at freebsd.org
Thu Oct 14 08:01:34 UTC 2010
On 13 Oct 2010, at 18:46, Ryan Stone wrote:
> On Fri, Oct 8, 2010 at 9:15 PM, Robert Watson <rwatson at freebsd.org> wrote:
>> + /*
>> + * get and fill a header mbuf, then chain data as an
>> + * mbuf.
>> + */
>> + MGETHDR(m, M_DONTWAIT, MT_DATA);
>> The idea of calling into the mbuf allocator in this context is just freaky,
>> and may have some truly awful side effects. I suppose this is the cost of
>> trying to combine code paths in the network device driver rather than have
>> an independent path in the netdump case, but it's quite unfortunate and will
>> significantly reduce the robustness of netdumps in the face of, for example,
>> mbuf starvation.
> Changing this will require very invasive changes to the network
> drivers. I know that the Intel drivers allocate their own mbufs for
> their receive rings and I imagine that all other drivers have to do
> something similar. Plus the drivers are responsible for freeing mbufs
> after they have been transmitted. It seems to me that the cost of
> making significant changes to the network drivers to support an
> alternate lifecycle for netdump mbufs far outweighs the cost of losing
> a couple of kernel dumps in extreme circumstances.
My concern is less about occasional lost dumps than about destabilising the dumping process: calls into the memory allocator can currently trigger a lot of interesting behaviours, such as further calls back into the VM system, which can then trigger calls into other subsystems. What I'm suggesting is that if we want the mbuf allocator to be useful in this context, we need to teach it about things not to do in the dumping / crash / ... context, which probably means helping UMA out a bit in that regard. We'd also want a watchdog to make sure the dump is making progress.
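To make the watchdog idea concrete, here's a rough user-space sketch of the sort of progress check I have in mind: the dump path bumps a counter as each block goes out, and a periodic check gives up if the counter stops advancing. All of the names here are invented for illustration — nothing like this exists in the patch as posted.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical dump-progress watchdog.  The dump path calls
 * watchdog_progress() after each block is transmitted; a periodic
 * timer calls watchdog_check(), which reports a stall once the
 * counter has failed to advance for max_stalls consecutive checks.
 */
struct netdump_watchdog {
	uint64_t progress;	/* blocks written so far */
	uint64_t last_seen;	/* progress value at last check */
	int	 stalls;	/* consecutive checks without progress */
	int	 max_stalls;	/* abort the dump after this many */
};

static void
watchdog_init(struct netdump_watchdog *wd, int max_stalls)
{
	wd->progress = 0;
	wd->last_seen = 0;
	wd->stalls = 0;
	wd->max_stalls = max_stalls;
}

/* Called from the dump path after each block goes out the wire. */
static void
watchdog_progress(struct netdump_watchdog *wd)
{
	wd->progress++;
}

/*
 * Called periodically.  Returns true when the dump has stalled and
 * should be abandoned rather than left wedged in the crash path.
 */
static bool
watchdog_check(struct netdump_watchdog *wd)
{
	if (wd->progress == wd->last_seen) {
		if (++wd->stalls >= wd->max_stalls)
			return (true);
	} else {
		wd->stalls = 0;
		wd->last_seen = wd->progress;
	}
	return (false);
}
```

In the kernel the check would presumably be driven off a hardclock-style callback rather than a timer, since we can't rely on much infrastructure while dumping — but the bookkeeping itself is this simple.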