svn commit: r344027 - in stable/12/sys: dev/vmware/vmxnet3 modules/vmware/vmxnet3 net

Michael Tuexen Michael.Tuexen at
Wed Feb 13 09:52:33 UTC 2019

> On 13. Feb 2019, at 00:54, Marius Strobl <marius at> wrote:
> On Mon, Feb 11, 2019 at 05:24:18PM -0800, Rodney W. Grimes wrote:
>>> On 2/11/19 4:26 PM, Rodney W. Grimes wrote:
>>>>> Author: pkelsey
>>>>> Date: Mon Feb 11 23:24:39 2019
>>>>> New Revision: 344027
>>>>> URL:
>>>>> Log:
>>>>>  MFC r343291:
>>>>>  Convert vmx(4) to being an iflib driver.
>>>> I strongly object to this MFC. Given the current number
>>>> of 12.0-RELEASE-related iflib problems we have, it is
>>>> foolish of us to convert any more drivers to iflib in 12.0.
>>> This isn't the release branch though and presumably we have some time before
>>> 12.1 ships.  If there are reports of vmx(4) breakage on stable before 12.1
>>> we could always revert this commit then?
>> At this point the status of iflib in stable/12 is not certain, but
>> what is certain is that this merge to 12 is probably going to break
>> someone's system, and at best it is unknown whether it works.
>> People DO run stable/12; breaking it is a no-no.
>> Has the committer even booted this code on a stable/12 system
>> and run a serious amount of testing on it?
>>> I've heard of some EN's for 12.0 for iflib fixes.  Are those fixes in stable/12
>>> yet or are we still waiting for them to land in HEAD and/or be merged?
>> I sent a ping out earlier today trying to find that out.  I believe that
>> some of them are merged to stable/12, some are waiting to be merged, and I
>> do believe most if not all are committed to head.
> As for the iflib(4)-converted Intel Ethernet MAC drivers, it's
> hard to imagine how these drivers could have a chance of properly
> working on arm64 without r344060 and r344062 (which just hit head,
> with the latter breaking KBI) but also some previous iflib(4)
> fixes that were already MFCed to stable/12 (but aren't part of
> 12.0) in place. However, despite em(4) and ix(4) being in its
> GENERIC, I don't know what relevance these drivers actually have
> for arm64.
Hi Marius,

I would love to use an ix(4) card on an Overdrive 3000 system. However,
this doesn't really work, since there is a PCI-related problem when
booting. Sometimes the box works for a while, sometimes it doesn't
come up:

pci0: <PCI bus> on pcib0
pcib1: <PCI-PCI bridge> at device 2.1 on pci0
pcib0: pci_host_generic_core_alloc_resource FAIL: type=4, rid=28, start=0000000000000000, end=0000000000000fff, count=0000000000001000, flags=0
pcib1: failed to allocate initial I/O port window: 0-0xfff
pci1: <PCI bus> on pcib1
pcib0: pci_host_generic_core_alloc_resource FAIL: type=4, rid=28, start=0000000000000000, end=0000000000000fff, count=0000000000001000, flags=3000
ix0: <Intel(R) PRO/10GbE PCI-Express Network Driver> port 0x1000-0x101f mem 0x7fffe80000-0x7fffefffff,0x7ffff04000-0x7ffff07fff at device 0.0 on pci1
ix0: Using 2048 tx descriptors and 2048 rx descriptors
ix0: Using 8 rx queues 8 tx queues
ix0: Using MSI-X interrupts with 9 vectors
ix0: allocated for 8 queues
ix0: allocated for 8 rx queues
ix0: Ethernet address: 90:e2:ba:f7:48:74
ix0: PCI Express Bus: Speed 5.0GT/s Width x8
ix1: <Intel(R) PRO/10GbE PCI-Express Network Driver> mem 0x7fffe00000-0x7fffe7ffff,0x7ffff00000-0x7ffff03fff at device 0.1 on pci1
ix1: Using 2048 tx descriptors and 2048 rx descriptors
ix1: Using 8 rx queues 8 tx queues
ix1: Using MSI-X interrupts with 9 vectors
ix1: allocated for 8 queues
ix1: allocated for 8 rx queues
ix1: Ethernet address: 90:e2:ba:f7:48:75
ix1: PCI Express Bus: Speed 5.0GT/s Width x8

jhb@ said that this is related to some PCI memory allocation limitation on arm64, if I remember it correctly.

I think that uses an igb card. But I can't log in to verify...

Best regards
> So far, r343934 isn't in stable/12 either (I'll probably merge it
> tomorrow); it fixes the problem with non-working 82583V that a bunch
> of people ran into, judging by the PRs. Last time I checked, there
> were also some other iflib(4)-related changes not done by me, e.g.
> to converted drivers, still missing in stable/12.
> As for the iflib(4) status in head, I'm aware of two remaining
> user-visible regressions I ran into myself when trying to use
> em(4) in production. 1) TX UDP performance is abysmal even when
> using multiple queues and, thus, MSI-X. In a quick test with
> netperf I see ~690 Mbits/s with 9216 bytes and 282 Mbits/s with
> 42080 bytes on a Xeon E3-1245V2 and 82574 with GigE connection
> (stable/11 e1000 drivers forward-ported to 12+ achieve 957 Mbit/s
> in both cases). 2) TX TCP performance is abysmal when using MSI
> or INTx (that's likely also PR 235031).
> I have an upcoming iflib(4) fix for 2) but so far have no idea
> what's causing 1). I've identified two bugs in iflib(4)
> that likely have a minimal (probably more so with ixl(4), though)
> impact on UDP performance but don't explain the huge drop.
> Moreover, I have no idea so far how these relate to PR 234550.
> Regarding the latter, one obvious difference is that prior to
> the iflib(4)-conversions, the Intel Ethernet MAC drivers didn't
> engage software LRO when a VLAN ID is set. The actual reason
> why they didn't do that isn't obvious to me, though, and I
> found no other in-tree driver which behaves the same way, i.e.
> all employ software LRO now even when a VLAN ID is set.
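
For what it's worth, if I understand the pre-conversion behavior you
describe correctly, it amounts to something like the following sketch
(the names and flag value here are illustrative stand-ins, not the
actual e1000/iflib or mbuf(9) code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for the mbuf VLAN-tag flag; not the real value. */
#define PKT_VLANTAG 0x00000010  /* frame carries an 802.1Q VLAN ID */

struct pkt {
    uint32_t flags;
};

/*
 * Behavior described above: the converted drivers now always hand
 * received frames to software LRO, whereas the old drivers skipped
 * LRO whenever a VLAN ID was set on the frame.
 */
static bool
old_driver_tries_lro(const struct pkt *p, bool lro_capable)
{
    if (!lro_capable)
        return false;
    /* Pre-conversion drivers bypassed software LRO for tagged frames. */
    return (p->flags & PKT_VLANTAG) == 0;
}
```

So the open question would be whether that bypass was intentional
(e.g. some interaction between LRO and VLAN tagging) or just an
accident that the conversion happened to remove.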
> Personally, I don't see much point in issuing an iflib(4) EN
> for 12.0 before the above regressions have been fixed. Judging by
> some PRs, people started using net/intel-em-kmod instead of the
> in-tree drivers, which IMO is the better option for now.
> Marius

More information about the svn-src-all mailing list