svn commit: r209026 - in head/sys/ia64: ia64 include

Marcel Moolenaar xcllnt at
Fri Jun 11 17:32:52 UTC 2010

On Jun 11, 2010, at 10:21 AM, Scott Long wrote:

> On Jun 11, 2010, at 11:04 AM, Marcel Moolenaar wrote:
>> On Jun 11, 2010, at 9:12 AM, Scott Long wrote:
>>> On Jun 11, 2010, at 5:51 AM, John Baldwin wrote:
>>>> On Thursday 10 June 2010 11:00:33 pm Marcel Moolenaar wrote:
>>>>> Author: marcel
>>>>> Date: Fri Jun 11 03:00:32 2010
>>>>> New Revision: 209026
>>>>> URL:
>>>>> Log:
>>>>> Bump MAX_BPAGES from 256 to 1024. It seems that a few drivers, bge(4)
>>>>> in particular, do not handle deferred DMA map load operations at all.
>>>>> Any error, and especially EINPROGRESS, is treated as a hard error and
>>>>> typically aborts the current operation. The fact that the busdma code
>>>>> queues the load operation for when resources (i.e. bounce buffers in
>>>>> this particular case) are available makes this especially problematic.
>>>>> Bounce buffering, unlike what the PR synopsis would suggest, works
>>>>> fine.
>>>>> While on the subject, properly implement swi_vm().
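For readers unfamiliar with the deferral the log refers to: when the bounce
pool is exhausted, busdma queues the load and invokes the driver's callback
once pages are returned. A minimal toy model of that behavior (all names are
hypothetical; this is a sketch, not the real busdma implementation):

```c
#include <assert.h>
#include <stddef.h>

#define TOY_MAX_BPAGES   4    /* toy stand-in for MAX_BPAGES */
#define TOY_EINPROGRESS  36   /* toy stand-in for errno EINPROGRESS */

struct toy_deferred {
    void (*callback)(int error);      /* run when the load completes */
    struct toy_deferred *next;
};

static int toy_free_bpages = TOY_MAX_BPAGES;
static struct toy_deferred *toy_waitq;    /* deferred-load queue */

/* Load: take a bounce page if one is free, otherwise defer. */
static int
toy_load(struct toy_deferred *d)
{
    if (toy_free_bpages == 0) {
        d->next = toy_waitq;          /* queue it; retried on unload */
        toy_waitq = d;
        return (TOY_EINPROGRESS);
    }
    toy_free_bpages--;
    d->callback(0);
    return (0);
}

/* Unload: return a page and hand it straight to a waiter, if any. */
static void
toy_unload(void)
{
    toy_free_bpages++;
    if (toy_waitq != NULL) {
        struct toy_deferred *d = toy_waitq;

        toy_waitq = d->next;
        toy_free_bpages--;
        d->callback(0);
    }
}
```

The point of the model: EINPROGRESS is not a hard error; the load completes
later via the callback, which is exactly what the drivers in question fail
to handle.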
>>>> NIC drivers do not handle deferred load operations at all (note that 
>>>> bus_dmamap_load_mbuf() and bus_dmamap_load_mbuf_sg() enforce BUS_DMA_NOWAIT).
>>>> It is common practice to just drop the packet in that case.
>>> Yes, long ago when network drivers started being converted to busdma, it was agreed that EINPROGRESS simply doesn't make sense for them.  Any platform that winds up making extensive use of bounce buffers for network hardware is going to perform poorly no matter what, and should hopefully have some sort of IOMMU that can be used instead.
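The NOWAIT point can be made concrete with a toy wrapper (hypothetical
names; the real flag is BUS_DMA_NOWAIT): with the flag set, a load that
would otherwise defer fails immediately, so the driver can drop the packet
on the spot instead of having to cope with a callback later.

```c
#include <assert.h>

/* Hypothetical constants modeling the real errno/flag values. */
#define TOY_EINPROGRESS 36
#define TOY_ENOMEM      12
#define TOY_DMA_NOWAIT  0x001

static int toy_pages_free;    /* simulated bounce-page pool */

/*
 * With TOY_DMA_NOWAIT set, a load that cannot get resources fails
 * immediately with ENOMEM; without it, the load "defers" and the
 * caller would see EINPROGRESS plus a later callback.
 */
static int
toy_dmamap_load(int flags)
{
    if (toy_pages_free > 0) {
        toy_pages_free--;
        return (0);
    }
    return ((flags & TOY_DMA_NOWAIT) ? TOY_ENOMEM : TOY_EINPROGRESS);
}
```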
>> Unfortunately, things aren't as simple as presented.
>> For one, bge(4) wedges as soon as the platform runs out of bounce
>> buffers when they're needed. The box needs to be reset in order to
>> get the interface back. I'll pick an implementation that remains
>> functional over a mis-optimized one that breaks. A deferred load
>> operation still performs better than outright failure.
> This sounds like a bug in the bge driver.  I don't see it through casual inspection, but the driver should be able to either drop the mbuf entirely, or requeue it on the ifq and then restart the ifq later.
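The requeue option sketched above could look something like this (toy types,
not the real bge(4)/ifq code; all names are made up): on a failed load the
mbuf goes back on the head of the queue instead of being leaked, so a later
restart can retry it once resources return.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the drop-vs-requeue choice; hypothetical names. */
enum load_result { LOAD_OK, LOAD_ENOMEM };

struct toy_mbuf { int id; struct toy_mbuf *next; };

struct toy_ifq {                  /* stand-in for the driver's if_snd */
    struct toy_mbuf *head;
};

static int toy_bounce_pages;      /* simulated bounce-page pool */

/* Pretend DMA load: fails while the simulated bounce pool is empty. */
static enum load_result
toy_dmamap_load(struct toy_mbuf *m)
{
    (void)m;
    return ((toy_bounce_pages > 0) ? LOAD_OK : LOAD_ENOMEM);
}

/*
 * Try to send the packet at the head of the queue. On load failure,
 * put the mbuf back on the queue head so a later restart retries it,
 * rather than leaving the interface wedged.
 * Returns 1 if the packet was sent, 0 if it was requeued.
 */
static int
toy_start_one(struct toy_ifq *q)
{
    struct toy_mbuf *m = q->head;

    if (m == NULL)
        return (0);
    q->head = m->next;
    if (toy_dmamap_load(m) != LOAD_OK) {
        m->next = q->head;        /* requeue at the head; retry later */
        q->head = m;
        return (0);
    }
    return (1);
}
```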
>> Also: the kernel does nothing to guarantee maximum availability
>> of DMA-able memory under load, so bounce buffers (or use of I/O
>> MMUs for that matter) are a reality. Here too the performance
>> argument doesn't necessarily hold because the kernel may be
>> busy with more than just sending and receiving packets and the
>> need to defer load operations is very appropriate. If the
>> alternative is just dropped packets, I'm fine with that too,
>> but I for one cannot say that failing to fill a H/W ring
>> with buffers won't wedge the hardware in some cases.
>> Plus: SGI Altix does not have any DMA-able memory for 32-bit
>> hardware. The need for an I/O MMU is absolute and since there
>> are typically fewer mapping registers than packets on a ring,
>> the need for deferred operation seems quite acceptable if the
>> alternative is, again, failure to operate.
> I'm not against you upping the bounce buffer limit for a particular platform, but it's still unclear to me if (given bug-free drivers) it's worth the effort to defer a load rather than just drop the packet and let the stack retry it.  One question that would be good to answer is whether the failed load is happening in the RX or the TX path.

RX path I believe.

Marcel Moolenaar
xcllnt at
