Can't boot 8/CURRENT hvm on Quad-Core Opteron 2352

Tim Bisson bissont at
Fri Jan 15 03:43:01 UTC 2010

On Jan 12, 2010, at 4:51 AM, Deploy IS INFO wrote:

> Kostik Belousov wrote:
>> On Mon, Jan 11, 2010 at 11:36:00PM -0800, Timothy Bisson wrote:
>>> Hi,
>>> I'm trying to run a FreeBSD 8/CURRENT HVM on Xen, but the kernel panics while booting from the iso on a quad-core Opteron 2352 box (disabling ACPI doesn't help).
>>> However, I can successfully run a FreeBSD 8/CURRENT HVM on Xen on an Intel Xeon Nehalem box. I tried booting the same installed disk image (from the Nehalem box) on the Opteron box, but that also panicked during boot.
>>> The CURRENT iso I'm using is from:
>>> I'm using Xen-3.3.1 on both physical boxes, and a FreeBSD 6 HVM works on both the Nehalem and Opteron boxes...
>>> Here's the backtrace from the Opteron box:
>>> kernel trap 9 with interrupts disabled
>>> Fatal trap 9: general protection fault while in kernel mode
>>> cpuid = 0; apic id = 00
>>> instruction pointer	= 0x20:0xffffffff80878193
>>> stack pointer	        = 0x28:0xffffffff81044bb0
>>> frame pointer	        = 0x28:0xffffffff81044bc0
>>> code segment		= base 0x0, limit 0xfffff, type 0x1b
>>> 			= DPL 0, pres 1, long 1, def32 0, gran 1
>>> processor eflags	= resume, IOPL = 0
>>> current process		= 0 ()
>>> [thread pid 0 tid 0 ]
>>> Stopped at      pmap_invalidate_cache_range+0x43:        clflushl        (%rdi)
>>> db> bt
>>> bt
>>> Tracing pid 0 tid 0 td 0xffffffff80c51fc0
>>> pmap_invalidate_cache_range() at pmap_invalidate_cache_range+0x43
>>> pmap_change_attr_locked() at pmap_change_attr_locked+0x368
>>> pmap_change_attr() at pmap_change_attr+0x43
>>> pmap_mapdev_attr() at pmap_mapdev_attr+0x112
>>> lapic_init() at lapic_init+0x29
>>> madt_setup_local() at madt_setup_local+0x26
>>> apic_setup_local() at apic_setup_local+0x13
>>> mi_startup() at mi_startup+0x59
>>> btext() at btext+0x2c
>>> I took a look through the bug database and didn't see any similar problem reports. Is it reasonable to file a bug report? Is there additional information that I should include?
>> Set hw.clflush_disable=1 at the loader prompt.
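For anyone else hitting the same clflush general protection fault, this is the usual way to apply that tunable (standard FreeBSD loader usage; the boot-menu key to reach the prompt varies by release):

```
# At the loader prompt (escape to it from the boot menu):
OK set hw.clflush_disable=1
OK boot

# To make it persistent, add the line to /boot/loader.conf:
hw.clflush_disable=1
```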
> Hi,
> I'm trying also FreeBSD8 on a nehalem box. When I configure more than 2 vcpus for the HVM guest I met with the following:
> - With the XENHVM kernel the boot simply stops when the message about WITNESS performance came. No debug messages, nothing, just stops there and the PV drivers just attaches before the message
> - With the GENERIC kernel the re driver somehow fails to receive packages when 4 vcpus configured. Tcpdump showed that packages are going out, but somehow none received. I'd say it's not really a FreeBSD problem, but it's wierd enough.
> Could you verify these these two problem on the Opteron and your Nehalem  machine? We are also using Xen-3.3.1.
> Regards,
> Andras

The XENHVM kernel works fine (both boxes) with more than 2 vcpus. I received a panic regarding the xn driver, but that went away once I configured the vif to use netfront.

I don't know what model you specify to get the re driver, but the ed and em drivers work fine for me.
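As a sketch of what the vif line might look like, assuming the xm domain-config syntax of Xen 3.3 (the bridge name and file path here are just examples): type=netfront attaches the PV network frontend (the xn driver in the guest), while model= selects which emulated NIC qemu-dm presents, which in turn determines the FreeBSD driver that attaches.

```
# Fragment of an HVM guest config, e.g. /etc/xen/freebsd8.cfg (example path).
# PV network frontend -> FreeBSD xn driver:
vif = [ 'type=netfront, bridge=xenbr0' ]

# Or an emulated NIC; the model determines the guest driver:
#   model=rtl8139  -> re(4)
#   model=e1000    -> em(4)
#   model=ne2k_pci -> ed(4)
# vif = [ 'type=ioemu, model=e1000, bridge=xenbr0' ]
```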

More information about the freebsd-xen mailing list