bhyve issues on Dell C6220 node

Rodney W. Grimes freebsd-rwg at gndrsh.dnsmgr.net
Wed Jan 8 21:37:45 UTC 2020


> Hello,
> 
> I've recently got hold of some Dell C6220 systems (2 servers in a single chassis) which I was hoping to use for bhyve.
> The basic spec of a single server is as follows -
> 
> Xeon(R) CPU E5-2670 (VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID)
> 64GB DDR3
> LSI MegaRaid 9265-8i
> FreeBSD 12.1
> 
> I had a bit of trouble getting the disks configured and booting (**probably unrelated but see below for the details).
> 
> However, my main problem at the moment is that booting either 11.3 or 12.1 in a bhyve guest (using bhyveload) gets as far as starting to boot from the install ISO, then just hangs:
> 
> "
> Loading kernel...
> /boot/kernel/kernel text=0x168fdf1 data=0x1d0a68+0x768d80 syms=[0x8+0x178bc0+0x8+0x1969d5]
> Loading configured modules...
> can't find '/boot/entropy'
> /
> "
> 
> I get either a "|" or "/" character, then nothing else.

My experience has been that when I see this it is a "wrong console" issue, i.e. the
kernel has decided to use something else for its console and your output is going there.

It might be that your setup uses UEFI with a framebuffer console up to the end of the
loader, and then the kernel decides it is using a serial console.  Or vice versa.
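One way to rule this out is to pin the guest kernel to the serial console and make
sure that console is wired to your terminal.  A sketch (the memory size, disk image
path, and VM name below are placeholders, not taken from your setup):

```shell
# In the GUEST's /boot/loader.conf, force the kernel onto the serial
# console so its output goes where bhyveload/bhyve are listening:
console="comconsole"
comconsole_speed="115200"

# When starting the VM, attach the guest's com1 to your terminal:
bhyveload -c stdio -m 2G -d /vm/guest.img testvm
bhyve -c 2 -m 2G -H -A \
  -s 0,hostbridge -s 1,lpc \
  -s 2,virtio-blk,/vm/guest.img \
  -l com1,stdio \
  testvm
```

With both the loader and the kernel on the serial console you should at least see
where the boot actually stops, rather than losing the output to a console nothing
is attached to.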

> 
> I've also tested a Windows Server 2016 guest, and while that will actually install via UEFI, it is noticeably slow - over a minute to boot to the login screen, and everything crawls along.
> 
> I'm at a loss at the moment. My test machine for years has been an old i3-2100 system, and that has always booted FreeBSD & Windows guests fairly well.
> 
> I'd be grateful if anyone has any ideas or insight, or can tell me whether I just need to scrap the whole idea and either switch to a completely different hypervisor (which I'd really rather not do, as I know FreeBSD very well and was planning on making use of ZFS for backups/migration/etc.) or find some different systems.
> 
> 
> **Regarding the disks, I didn't really want a RAID controller and have ended up creating a JBOD (RAID0) volume for each disk, each of which appears as mfidX. This seems to work fine, although at first I tried to create a single pool of 2 mirrored pairs (4 disks). This failed to boot due to ZFS block I/O errors on boot. In the end I had to create a separate boot mirror across 2 partitions, then create a second pool for actual data storage.
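For reference, the layout you describe would look something like the sketch below.
The device numbers and partition indices are assumptions on my part - adjust them to
whatever mfiutil(8) and gpart(8) actually report on your box:

```shell
# Small boot mirror on dedicated partitions of the first disk pair
# (assuming the freebsd-zfs partitions are mfid0p2 and mfid1p2):
zpool create bootpool mirror mfid0p2 mfid1p2

# Data pool striped across two mirrored pairs, using the remaining
# space on the first pair and the whole second pair of JBOD volumes:
zpool create tank mirror mfid0p3 mfid1p3 mirror mfid2 mfid3
```

Splitting boot and data this way keeps the loader reading from a small, simple
mirror, which is often enough to sidestep boot-time block I/O errors on these
MegaRAID-backed mfid devices.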
> 
> Regards,
> Matt Churchyard
> 
> _______________________________________________
> freebsd-virtualization at freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-virtualization
> To unsubscribe, send any mail to "freebsd-virtualization-unsubscribe at freebsd.org"
> 

-- 
Rod Grimes                                                 rgrimes at freebsd.org
