svn commit: r209026 - in head/sys/ia64: ia64 include

John Baldwin jhb at freebsd.org
Fri Jun 11 17:28:33 UTC 2010


On Friday 11 June 2010 1:04:36 pm Marcel Moolenaar wrote:
> 
> On Jun 11, 2010, at 9:12 AM, Scott Long wrote:
> 
> > On Jun 11, 2010, at 5:51 AM, John Baldwin wrote:
> >> On Thursday 10 June 2010 11:00:33 pm Marcel Moolenaar wrote:
> >>> Author: marcel
> >>> Date: Fri Jun 11 03:00:32 2010
> >>> New Revision: 209026
> >>> URL: http://svn.freebsd.org/changeset/base/209026
> >>> 
> >>> Log:
> >>> Bump MAX_BPAGES from 256 to 1024. It seems that a few drivers, bge(4)
> >>> in particular, do not handle deferred DMA map load operations at all.
> >>> Any error, and especially EINPROGRESS, is treated as a hard error and
> >>> typically aborts the current operation. The fact that the busdma code
> >>> queues the load operation for when resources (i.e. bounce buffers in
> >>> this particular case) are available makes this especially problematic.
> >>> Bounce buffering, unlike what the PR synopsis would suggest, works
> >>> fine.
> >>> 
> >>> While on the subject, properly implement swi_vm().
> >> 
> >> NIC drivers do not handle deferred load operations at all (note that
> >> bus_dmamap_load_mbuf() and bus_dmamap_load_mbuf_sg() enforce
> >> BUS_DMA_NOWAIT).  It is common practice to just drop the packet in
> >> that case.
> >> 
> > 
> > Yes, long ago when network drivers started being converted to busdma,
> > it was agreed that EINPROGRESS simply doesn't make sense for them.  Any
> > platform that winds up making extensive use of bounce buffers for
> > network hardware is going to perform poorly no matter what, and should
> > hopefully have some sort of IOMMU that can be used instead.
> 
> Unfortunately, things aren't as simple as presented.
> 
> For one, bge(4) wedges as soon as the platform runs out of bounce
> buffers when they're needed. The box needs to be reset in order to
> get the interface back. I'll take any implementation that remains
> functional over a mis-optimized one that breaks. Deferred load
> operations perform better than outright failure does.
> 
> Also: the kernel does nothing to guarantee maximum availability
> of DMA-able memory under load, so bounce buffers (or use of I/O
> MMUs for that matter) are a reality. Here too the performance
> argument doesn't necessarily hold: the kernel may be busy with
> more than just sending and receiving packets, and the need to
> defer load operations is entirely appropriate. If the
> alternative is just dropped packets, I'm fine with that too,
> but I for one cannot guarantee that *not* filling a H/W ring
> with buffers won't wedge the hardware in some cases.
> 
> Plus: SGI Altix does not have any DMA-able memory for 32-bit
> hardware. The need for an I/O MMU is absolute, and since there
> are typically fewer mapping registers than packets on a ring,
> deferred operation seems quite acceptable if the alternative
> is, again, failure to operate.
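
For context, handling a deferred load means treating EINPROGRESS as
"pending" rather than as failure: busdma invokes the callback later, once
bounce pages (or mapping registers) free up. A rough sketch of the pattern,
where the `xx_` driver names, softc layout, and helpers are hypothetical,
not taken from bge(4):

```c
/*
 * Hypothetical sketch of a driver handling a deferred DMA map load.
 * EINPROGRESS means busdma has queued the request; the callback will
 * run later, when resources become available.
 */
static void
xx_rxbuf_load_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
{
	struct xx_rxbuf *rb = arg;

	if (error != 0) {
		/* Hard failure: recycle the buffer instead of wedging. */
		xx_rxbuf_free(rb);
		return;
	}
	/* Hand the now-mapped buffer to the hardware ring. */
	xx_post_rxbuf(rb, segs[0].ds_addr, segs[0].ds_len);
}

static int
xx_newbuf(struct xx_softc *sc, struct xx_rxbuf *rb)
{
	int error;

	/* BUS_DMA_WAITOK allows busdma to defer instead of failing. */
	error = bus_dmamap_load(sc->xx_rx_tag, rb->rb_map, rb->rb_buf,
	    MCLBYTES, xx_rxbuf_load_cb, rb, BUS_DMA_WAITOK);
	if (error == EINPROGRESS)
		return (0);	/* Deferred: the callback fires later. */
	return (error);		/* 0 on immediate success, else hard error. */
}
```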

I think in this case, since you have already accepted the cost of copying the 
data via bounce buffers, you would be better off allocating slabs of memory 
via bus_dmamem_alloc() so that multiple receive buffers fit into a single 
IOMMU entry, and then copying received packet data into an mbuf that gets 
passed up the stack during rx interrupt handling.
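
A hedged sketch of that scheme (the `xx_` names, sizes, and softc fields
are all illustrative, and error handling is abbreviated): allocate one
large DMA-safe slab with bus_dmamem_alloc(), load it once so the whole
slab consumes a single IOMMU mapping, carve it into fixed-size rx
buffers, and copy each received frame into a fresh mbuf at rx interrupt
time:

```c
/*
 * Hypothetical sketch: one bus_dmamem_alloc() slab holds many rx
 * buffers, so the IOMMU maps the whole slab once.  Received data is
 * copied out into an mbuf at interrupt time; the slab buffer is then
 * immediately reusable by the hardware.
 */
#define XX_RXSLAB_SIZE	(64 * 1024)
#define XX_RXBUF_SIZE	2048

static int
xx_rxslab_init(struct xx_softc *sc)
{
	int error;

	error = bus_dmamem_alloc(sc->xx_slab_tag, &sc->xx_slab,
	    BUS_DMA_NOWAIT | BUS_DMA_COHERENT, &sc->xx_slab_map);
	if (error != 0)
		return (error);
	/* One load covers every rx buffer carved from the slab. */
	return (bus_dmamap_load(sc->xx_slab_tag, sc->xx_slab_map,
	    sc->xx_slab, XX_RXSLAB_SIZE, xx_slab_load_cb, sc,
	    BUS_DMA_NOWAIT));
}

static void
xx_rxeof(struct xx_softc *sc, int idx, int len)
{
	struct mbuf *m;
	char *src;

	src = (char *)sc->xx_slab + idx * XX_RXBUF_SIZE;
	/* Copy the frame into an mbuf chain and pass it up the stack. */
	m = m_devget(src, len, ETHER_ALIGN, sc->xx_ifp, NULL);
	if (m != NULL)
		(*sc->xx_ifp->if_input)(sc->xx_ifp, m);
}
```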

-- 
John Baldwin


More information about the svn-src-head mailing list