hw.igb.num_queues default

Eggert, Lars lars at netapp.com
Thu Jun 20 10:34:54 UTC 2013


Hi,

I just popped a new four-port igb card into a -STABLE system and encountered severe issues even with no load on the system right after boot, to the point where I couldn't even ssh into it anymore. The box has 2x4 cores:

CPU: Intel(R) Xeon(R) CPU           X5450  @ 3.00GHz (2992.60-MHz K8-class CPU)
  Origin = "GenuineIntel"  Id = 0x10676  Family = 0x6  Model = 0x17  Stepping = 6
  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0xce3bd<SSE3,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,DCA,SSE4.1>
  AMD Features=0x20100800<SYSCALL,NX,LM>
  AMD Features2=0x1<LAHF>
  TSC: P-state invariant, performance statistics
real memory  = 8589934592 (8192 MB)
avail memory = 8239513600 (7857 MB)
MPTable: <DELL     PE 01B2     >
Event timer "LAPIC" quality 400
FreeBSD/SMP: Multiprocessor System Detected: 8 CPUs
FreeBSD/SMP: 2 package(s) x 4 core(s)

By default, the igb driver seems to set up one queue per detected CPU. Googling around, I found suggestions that limiting the number of queues makes things work better. I can confirm that setting hw.igb.num_queues=2 seems to have fixed the issue. (Two was the first value I tried; maybe other non-zero values would work, too.)
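In case it helps others, here is the exact change I made; a minimal sketch assuming the stock igb(4) driver, where hw.igb.num_queues is a boot-time loader tunable:

  # /boot/loader.conf
  # Cap igb(4) at two RX/TX queue pairs instead of one per CPU.
  hw.igb.num_queues="2"

After a reboot, sysctl hw.igb.num_queues should report the active value, and the igbN attach lines in dmesg show how many queues and interrupt vectors were actually allocated.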

In order to uphold POLA, should the igb driver maybe default to a conservative value for hw.igb.num_queues that may not deliver optimal performance, but at least works out of the box?
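To make the suggestion concrete, the clamping could look something like the sketch below. This is illustrative C with made-up names (IGB_SAFE_MAX_QUEUES, igb_pick_num_queues), not the actual if_igb.c code:

  /*
   * Hypothetical default-queue logic: honor an explicit
   * hw.igb.num_queues setting, but cap the automatic choice
   * at a conservative value instead of one queue per CPU.
   */
  #define IGB_SAFE_MAX_QUEUES 2

  static int
  igb_pick_num_queues(int tunable, int ncpus)
  {
          if (tunable > 0)
                  return (tunable);       /* admin asked for this many */
          return (ncpus < IGB_SAFE_MAX_QUEUES ?
              ncpus : IGB_SAFE_MAX_QUEUES);
  }

Users who want one queue per CPU could still opt in explicitly via the tunable; only the out-of-the-box default would change.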

Lars
