Re: GPU Passthrough with FreeBSD 14.2 bhyve and NVidia Quadro RTX 6000/8000

From: Mario Marietto <marietto2008_at_gmail.com>
Date: Mon, 24 Mar 2025 15:48:35 UTC
Ohhh man.

I forgot to say that you need to apply Corvin's patches if you want to pass
your GPU to a Linux VM, and a different patch if you want to pass it to a
Windows VM. It seems that the necessary patches have never been merged
upstream.

On Mon, Mar 24, 2025 at 4:38 PM Mario Marietto <marietto2008@gmail.com>
wrote:

> Hello Shamim.
>
> You don't have any bhyve-win; you can use plain bhyve. I use bhyve-win to
> pass my GPU through to a Windows VM and bhyve-lin for a Linux VM. They are
> two different executables, because for some reason my bhyve-lin executable
> is not able to pass my GPU through to a Windows VM.
>
> On Mon, Mar 24, 2025 at 4:29 PM Shamim Shahriar <shamim.shahriar@gmail.com>
> wrote:
>
>> Hi Mario
>>
>> I will try the options you have used later and let you know how it goes.
>> Not sure if this is intentional or a typo, but I don't recall any bhyve-win
>> binary on my system.
>>
>> Best regards
>> SS
>>
>> On Mon, 24 Mar 2025 at 14:59, Mario Marietto <marietto2008@gmail.com>
>> wrote:
>>
>>> Oh sorry. I forgot to add the following bhyve parameters, which are needed
>>> to pass my GPU through :D
>>>
>>> -s 8:0,passthru,2/0/0 \
>>> -s 8:1,passthru,2/0/1 \
>>> -s 8:2,passthru,2/0/2 \
>>> -s 8:3,passthru,2/0/3 \
>>>
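>>> Those four functions are my 2080 Ti's VGA controller plus its companion
>>> devices (HDA audio, USB xHCI and UCSI on these Turing boards), which is why
>>> 2/0/0 through 2/0/3 all get passed together. A quick way to see what a card
>>> exposes (my card sits at bus 2, adjust the selector for yours):
>>>
>>> for f in 0 1 2 3; do pciconf -lv pci0:2:0:$f | head -3; done
>>>
>>> From your pciconf output below, each Quadro seems to show up as a single
>>> 3D-controller function, so one passthru slot per card should be enough there.
>>>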
>>> On Mon, Mar 24, 2025 at 3:57 PM Mario Marietto <marietto2008@gmail.com>
>>> wrote:
>>>
>>>> Usually I use this kind of script to launch a bhyve vm :
>>>>
>>>> #!/bin/sh
>>>>
>>>> setxkbmap it
>>>> kldload vmm.ko
>>>> vms="$(ls /dev/vmm/* 2>/dev/null)"
>>>> vncs="$(ps ax | awk '/vncviewer [0]/{print $6}')"
>>>>
>>>> if ! pciconf -l pci0:2:0:0 | grep -q "^ppt"; then
>>>> echo "rtx 2080ti slot 2/0/0 is not attached to ppt, attaching..."
>>>> kldload nvidia-modeset
>>>> devctl clear driver -f pci0:2:0:0
>>>> devctl set driver -f pci0:2:0:0 ppt
>>>> else
>>>> echo "rtx 2080ti slot 2/0/0 is already attached to ppt"
>>>> fi
>>>>
>>>> if ! pciconf -l pci0:2:0:1 | grep -q "^ppt"; then
>>>> echo "rtx 2080ti slot 2/0/1 is not attached to ppt, attaching..."
>>>> devctl clear driver -f pci0:2:0:1
>>>> devctl set driver -f pci0:2:0:1 ppt
>>>> else
>>>> echo "rtx 2080ti slot 2/0/1 is already attached to ppt"
>>>> fi
>>>>
>>>> if ! pciconf -l pci0:2:0:2 | grep -q "^ppt"; then
>>>> echo "rtx 2080ti slot 2/0/2 is not attached to ppt, attaching..."
>>>> devctl clear driver -f pci0:2:0:2
>>>> devctl set driver -f pci0:2:0:2 ppt
>>>> else
>>>> echo "rtx 2080ti slot 2/0/2 is already attached to ppt"
>>>> fi
>>>>
>>>> if ! pciconf -l pci0:2:0:3 | grep -q "^ppt"; then
>>>> echo "rtx 2080ti slot 2/0/3 is not attached to ppt, attaching..."
>>>> devctl clear driver -f pci0:2:0:3
>>>> devctl set driver -f pci0:2:0:3 ppt
>>>> else
>>>> echo "rtx 2080ti slot 2/0/3 is already attached to ppt"
>>>> fi
>>>>
>>>> echo "rtx 2080ti is fully attached to ppt"
>>>>
>>>> for vm in $vms; do
>>>>         session="${vm##*/}"
>>>>         echo "bhyve session = $session"
>>>>         echo "vnc session = $vncs"
>>>>         if ! printf '%s\n' "${vncs}" | grep -q "${session#vm}"; then
>>>>                 printf 'VNC session not found, destroying ghost vm\n'
>>>>                 bhyvectl --vm="$session" --destroy
>>>>         else
>>>>                 printf 'Found VNC session for %s, no ghost vm, not destroying it\n' "${session}"
>>>>         fi
>>>> done
>>>>
>>>> vmdisk0=`geom disk list | awk '/^Geom name: /{d=$NF} /^ *ident: (AM13N4CZ)/ && d{print d}'`
>>>> echo "Seagate M3 Portable 1.8T ; $vmdisk0"
>>>>
>>>> vmdisk1=`geom disk list | awk '/^Geom name: /{d=$NF} /^ *ident: (2015040204055E)/ && d{print d}'`
>>>> echo "TOSHIBA External USB 3.0 1.8T ; $vmdisk1"
>>>>
>>>> vmdisk2=`geom disk list | awk '/^Geom name: /{d=$NF} /^ *ident: (2027285F1175)/ && d{print d}'`
>>>> echo "CT1000P1SSD8 ; $vmdisk2"
>>>>
>>>> vmdisk3=`geom disk list | awk '/^Geom name: /{d=$NF} /^ *ident: (20130508005976F)/ && d{print d}'`
>>>> echo "TOSHIBA External USB 3.0 932 GB ; $vmdisk3"
>>>>
>>>> vmdisk4=`geom disk list | awk '/^Geom name: /{d=$NF} /^ *ident: (BE0191510218)/ && d{print d}'`
>>>> echo "G-DRIVE USB ; $vmdisk4"
>>>>
>>>> vmdisk5=`geom disk list | awk '/^Geom name: /{d=$NF} /^ *ident: (38234B4237354B45)/ && d{print d}'`
>>>> echo "Elements 25A3 ; $vmdisk5"
>>>>
>>>> vmdisk6=`geom disk list | awk '/^Geom name: /{d=$NF} /^ *ident: (WD-WCAV2X797309)/ && d{print d}'`
>>>> echo "WDC WD3200AAJS-00L7A0 ; 298 GB ; $vmdisk6"
>>>>
>>>> vmdisk7=`geom disk list | awk '/^Geom name: /{d=$NF} /^ *ident: (20140108006C)/ && d{print d}'`
>>>> echo "Corsair Force 3 SSD ; $vmdisk7"
>>>>
>>>> vmdisk8=`geom disk list | awk '/^Geom name: /{d=$NF} /^ *ident: (1924E50B2AE5)/ && d{print d}'`
>>>> echo "CT500MX500SSD4 ; $vmdisk8"
>>>>
>>>> vmdisk9=`geom disk list | awk '/^Geom name: /{d=$NF} /^ *ident: (0774921DDC4200A6)/ && d{print d}'`
>>>> echo "SanDisk Cruzer-15GB ; $vmdisk9"
>>>>
>>>> vmdisk10=`geom disk list | awk '/^Geom name: /{d=$NF} /^ *ident: (60A44D4138D8F311190A0149)/ && d{print d}'`
>>>> echo "Kingston DataTraveler 2.0 ; $vmdisk10"
>>>>
>>>> vmdisk11=`geom disk list | awk '/^Geom name: /{d=$NF} /^ *ident: (575845483038524844323238)/ && d{print d}'`
>>>> echo "WD 2500BMV External ; $vmdisk11"
>>>>
>>>> vmdisk12=`geom disk list | awk '/^Geom name: /{d=$NF} /^ *ident: (57442E575845323039544433303334)/ && d{print d}'`
>>>> echo "WD 3200BMV External ; $vmdisk12"
>>>>
>>>> vmdisk13=`geom disk list | awk '/^Geom name: /{d=$NF} /^ *ident: (2414E989076B)/ && d{print d}'`
>>>> echo "CT500BX500SSD1 ; $vmdisk13"
>>>>
>>>> /usr/sbin/bhyve-win -S -c sockets=4,cores=1,threads=1 -m 4G -w -H -A \
>>>> -s 0,hostbridge \
>>>> -s 1,ahci-hd,/dev/$vmdisk13 \
>>>> -s 10,virtio-net,tap6 \
>>>> -s 11,virtio-9p,sharename=/ \
>>>> -s 29,fbuf,tcp=0.0.0.0:5906,w=1600,h=950 \
>>>> -s 31,lpc \
>>>> -l bootrom,/usr/local/share/uefi-firmware/BHYVE_BHF_CODE.fd \
>>>> vm0:6 < /dev/null & sleep 5 && vncviewer 0:6 &
>>>>
>>>> As you can see, each vm has a unique ID (tied to its vncviewer instance),
>>>> the vncviewer window is launched as soon as the bhyve vm starts... and my
>>>> GeForce RTX 2080 Ti is passed through into the vm.
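>>>>
>>>> The numbers line up because of the usual VNC convention: display N is TCP
>>>> port 5900 + N, so the fbuf device listening on 0.0.0.0:5906 is exactly what
>>>> vncviewer 0:6 connects to. If you template this for several vms, a minimal
>>>> sketch (variable names are mine):
>>>>
>>>> id=6                      # this vm's display number
>>>> port=$((5900 + id))       # fbuf will listen on 5906
>>>> # ... -s 29,fbuf,tcp=0.0.0.0:${port},w=1600,h=950 ... vm0:${id}
>>>> vncviewer 0:${id} &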
>>>>
>>>>
>>>> On Mon, Mar 24, 2025 at 3:47 PM Mario Marietto <marietto2008@gmail.com>
>>>> wrote:
>>>>
>>>>> First of all, you could use a more flexible technique than setting
>>>>> pptdevs in /boot/loader.conf. I would use a script like this (a looped
>>>>> version for all four of your devices follows point 2):
>>>>>
>>>>> 1)
>>>>>
>>>>> if ! pciconf -l pci0:18:0:0 | grep -q "^ppt"; then
>>>>> echo "18/0/0 is not attached to ppt, attaching..."
>>>>> devctl clear driver -f pci0:18:0:0
>>>>> devctl set driver -f pci0:18:0:0 ppt
>>>>> else
>>>>> echo "18/0/0 is already attached to ppt"
>>>>> fi
>>>>>
>>>>> 2) I would avoid vm-bhyve; it only adds confusion...
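>>>>>
>>>>> For your box, the same thing looped over the four devices from your
>>>>> loader.conf would look roughly like this (untested here, so double-check
>>>>> the selectors):
>>>>>
>>>>> for dev in pci0:18:0:0 pci0:19:0:0 pci0:72:0:0 pci0:73:0:0; do
>>>>>     if ! pciconf -l "$dev" | grep -q "^ppt"; then
>>>>>         echo "$dev is not attached to ppt, attaching..."
>>>>>         devctl clear driver -f "$dev"
>>>>>         devctl set driver -f "$dev" ppt
>>>>>     else
>>>>>         echo "$dev is already attached to ppt"
>>>>>     fi
>>>>> done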
>>>>>
>>>>> On Mon, Mar 24, 2025 at 3:22 PM Shamim Shahriar <
>>>>> shamim.shahriar@gmail.com> wrote:
>>>>>
>>>>>> Good afternoon everyone.
>>>>>>
>>>>>> I am trying to have VMs with GPU passthrough. The setup is a Dell
>>>>>> server with NVidia Quadro RTX 6000/8000 installed already. I have checked
>>>>>> the device IDs and put pptdevs in place
>>>>>>
>>>>>> # cat /boot/loader.conf
>>>>>> pptdevs="18/0/0 19/0/0"
>>>>>> pptdevs2="72/0/0 73/0/0"
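>>>>>>
>>>>>> (pptdevs2 is just a continuation of pptdevs, since a single loader
>>>>>> variable is length-limited; after a reboot, something like the following
>>>>>> confirms the claim took effect:)
>>>>>>
>>>>>> pciconf -l | grep '^ppt'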
>>>>>>
>>>>>> This shows the GPUs attached to the ppt driver in pciconf:
>>>>>>
>>>>>> ppt0@pci0:18:0:0:       class=0x030200 rev=0xa1 hdr=0x00
>>>>>> vendor=0x10de device=0x1e78 subvendor=0x10de subdevice=0x13d8
>>>>>>     vendor     = 'NVIDIA Corporation'
>>>>>>     device     = 'TU102GL [Quadro RTX 6000/8000]'
>>>>>>     class      = display
>>>>>>     subclass   = 3D
>>>>>> ppt1@pci0:19:0:0:       class=0x030200 rev=0xa1 hdr=0x00
>>>>>> vendor=0x10de device=0x1e78 subvendor=0x10de subdevice=0x13d8
>>>>>>     vendor     = 'NVIDIA Corporation'
>>>>>>     device     = 'TU102GL [Quadro RTX 6000/8000]'
>>>>>>     class      = display
>>>>>>     subclass   = 3D
>>>>>>
>>>>>> As I am using vm-bhyve, I have set the configuration up as below:
>>>>>>
>>>>>> # cat /mnt/VMs/jagadish/jagadish.conf
>>>>>> loader="uefi"
>>>>>> cpu=16
>>>>>> memory=128G
>>>>>> xhci_mouse="yes"
>>>>>> debug="true"
>>>>>>
>>>>>> graphics="yes"
>>>>>> graphics_listen="127.0.0.1"
>>>>>> graphics_port="5920"
>>>>>> graphics_res="1024x768"
>>>>>> graphics_wait="no"
>>>>>> #graphics_vga="io"
>>>>>>
>>>>>> network0_type="virtio-net"
>>>>>> network0_switch="swUNI"
>>>>>> network0_mac="58:9c:fc:06:3f:af"
>>>>>>
>>>>>> disk0_type="nvme"
>>>>>> disk0_name="jagadish-disk0.img"
>>>>>> #disk0_size="128G"
>>>>>>
>>>>>> uuid="966e909b-1293-11ef-a9a4-e4434bfe34de"
>>>>>>
>>>>>> passthru0="19/0/0=6:0"
>>>>>>
>>>>>> bhyve_options="-A -H -P"
>>>>>> #END
>>>>>>
>>>>>> However, when I start the vm (for the OS installation, to start with), it
>>>>>> shows as running, but I am unable to access the VNC console for a while;
>>>>>> when I finally manage to connect via VNC, there is nothing on the screen,
>>>>>> just a blank dark screen.
>>>>>>
>>>>>> the vm-bhyve.log shows
>>>>>>
>>>>>> Mar 24 13:59:26: initialising
>>>>>> Mar 24 13:59:26:  [loader: uefi]
>>>>>> Mar 24 13:59:26:  [cpu: 16]
>>>>>> Mar 24 13:59:26:  [memory: 128G]
>>>>>> Mar 24 13:59:26:  [hostbridge: standard]
>>>>>> Mar 24 13:59:26:  [com ports: com1]
>>>>>> Mar 24 13:59:26:  [uuid: 966e909b-1293-11ef-a9a4-e4434bfe34de]
>>>>>> Mar 24 13:59:26:  [debug mode: true]
>>>>>> Mar 24 13:59:26:  [primary disk: jagadish-disk0.img]
>>>>>> Mar 24 13:59:26:  [primary disk dev: file]
>>>>>> Mar 24 13:59:26: initialising network device tap0
>>>>>> Mar 24 13:59:26: adding tap0 -> vm-swUNI (swUNI addm)
>>>>>> Mar 24 13:59:26: bring up tap0 -> vm-swUNI (swUNI addm)
>>>>>> Mar 24 13:59:26: booting
>>>>>> Mar 24 13:59:26:  [bhyve options: -c 16 -m 128G -AHPw -l
>>>>>> bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd -A -H -P -U
>>>>>> 966e909b-1293-11ef-a9a4-e4434bfe34de -u -S]
>>>>>> Mar 24 13:59:26:  [bhyve devices: -s 0,hostbridge -s 31,lpc -s
>>>>>> 4:0,nvme,/mnt/VMs/jagadish/jagadish-disk0.img -s
>>>>>> 5:0,virtio-net,tap0,mac=58:9c:fc:06:3f:af -s 6:0,passthru,19/0/0 -s
>>>>>> 7:0,fbuf,tcp=127.0.0.1:5920,w=1024,h=768 -s 8:0,xhci,tablet]
>>>>>> Mar 24 13:59:26:  [bhyve console: -l com1,/dev/nmdm-jagadish.1A]
>>>>>> Mar 24 13:59:26:  [bhyve iso device: -s
>>>>>> 3:0,ahci-cd,/mnt/VMs/.iso/lubuntu-24.04.1-desktop-amd64.iso,ro]
>>>>>> Mar 24 13:59:26: starting bhyve (run 1)
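>>>>>>
>>>>>> For completeness: vm-bhyve also wires up a com1 serial console (the nmdm
>>>>>> line above), which can be attached to while the VNC screen stays blank,
>>>>>> e.g.:
>>>>>>
>>>>>> vm console jagadish        # or: cu -l /dev/nmdm-jagadish.1B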
>>>>>>
>>>>>> Based on what I can see, and a little out of desperation, I decided
>>>>>> to run the installer in the foreground; below is what I got:
>>>>>>
>>>>>> # vm install -f jagadish FreeBSD-14.2-RELEASE-amd64-disc1.iso
>>>>>> Starting jagadish
>>>>>>   * found guest in /mnt/VMs/jagadish
>>>>>>   * booting...
>>>>>> fbuf frame buffer base: 0x112245400000 [sz 16777216]
>>>>>>
>>>>>> it stays there for as long as I wait until I poweroff the vm.
>>>>>>
>>>>>> I also tried the Debian installer:
>>>>>> # vm install -f jagadish debian-12.5.0-amd64-netinst.iso
>>>>>> Starting jagadish
>>>>>>   * found guest in /mnt/VMs/jagadish
>>>>>>   * booting...
>>>>>> fbuf frame buffer base: 0x2747e2400000 [sz 16777216]
>>>>>>
>>>>>>
>>>>>> Ideally I need to install Debian for my user base on these GPU-based
>>>>>> systems, but even that is proving impossible, since the installer does
>>>>>> not get any further than this.
>>>>>>
>>>>>> Any thoughts/ideas/suggestions on what else I can try to make this
>>>>>> work? Anything that you have tried that worked? Have I missed something?
>>>>>>
>>>>>> Would appreciate any and all thoughts/suggestions
>>>>>>
>>>>>> best regards
>>>>>> SS
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Mario.
>>>>>
>>>>
>>>>
>>>> --
>>>> Mario.
>>>>
>>>
>>>
>>> --
>>> Mario.
>>>
>>
>
> --
> Mario.
>


-- 
Mario.