question on Bhyve

Julian Elischer julian at freebsd.org
Wed Sep 28 04:21:37 UTC 2011


On 9/27/11 1:35 PM, Neel Natu wrote:
> Hi Julian,
>
> On Mon, Sep 26, 2011 at 11:10 PM, Julian Elischer<julian at freebsd.org>  wrote:
>> does anyone know what is needed for a hypervisor to support PCI pass
>> through?
>>
> I can speak about the bhyve implementation of pci passthru, but I
> suspect that other hypervisors do it similarly. BHyVe requires that
> the platform support nested page tables and have an IOMMU to be able
> to support pci passthru.
>
> There are two parts to it.
>
> The simpler part is "hiding" the pci device that is to be used for
> passthru. In bhyve we do this by setting the tunable "pptdevs" to a
> list of pci devices that we may want to assign as passthru devices.
>
> This tunable is used by the "blackhole" pci device driver to claim
> each device in "pptdevs" during probe but then reject it in attach,
> as sketched below. The net effect of this is to leave the pci device
> unattached from the host's perspective.
>
> For example, in /boot/loader.conf:
>
> pptdevs="1/0/0 2/0/0 3/0/0"
> blackhole_load="YES"
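>
> To give a rough idea of how the "blackhole" trick works, a driver
> along these lines would do it. This is only a sketch, not the actual
> blackhole source, and pptdev_is_listed() is a stand-in for parsing
> the "pptdevs" tunable:
>
> #include <sys/param.h>
> #include <sys/errno.h>
> #include <sys/kernel.h>
> #include <sys/module.h>
> #include <sys/bus.h>
> #include <dev/pci/pcivar.h>
>
> /* Stand-in for matching a device against the "pptdevs" list. */
> static int
> pptdev_is_listed(int bus, int slot, int func)
> {
>         return (bus == 3 && slot == 0 && func == 0);    /* e.g. "3/0/0" */
> }
>
> static int
> blackhole_probe(device_t dev)
> {
>         /* Only claim devices named in the pptdevs tunable. */
>         if (!pptdev_is_listed(pci_get_bus(dev), pci_get_slot(dev),
>             pci_get_function(dev)))
>                 return (ENXIO);
>         device_set_desc(dev, "pci passthru placeholder");
>         /* Highest probe priority, so the regular driver loses. */
>         return (BUS_PROBE_SPECIFIC);
> }
>
> static int
> blackhole_attach(device_t dev)
> {
>         /* Deliberately fail attach; the device stays driverless. */
>         return (ENXIO);
> }
>
> static device_method_t blackhole_methods[] = {
>         DEVMETHOD(device_probe,  blackhole_probe),
>         DEVMETHOD(device_attach, blackhole_attach),
>         DEVMETHOD_END
> };
>
> static driver_t blackhole_driver = {
>         "blackhole", blackhole_methods, 0
> };
>
> static devclass_t blackhole_devclass;
> DRIVER_MODULE(blackhole, pci, blackhole_driver, blackhole_devclass, 0, 0);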
>
> The second part is a bit more work for the hypervisor.
>
> We assign the pci device to a guest with the following command line
> option to /usr/sbin/bhyve: -s 1,passthru,3/0/0. This says to make the
> pci device identified by "3/0/0" appear as a passthru pci device on
> slot 1 of the virtual pci bus.
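>
> Just to make that concrete: with a current bhyve(8) a full invocation
> might look roughly like the following, where everything besides the
> passthru slot is incidental and has changed between versions:
>
> bhyve -c 1 -m 1G -S -H \
>     -s 0,hostbridge \
>     -s 1,passthru,3/0/0 \
>     -s 31,lpc -l com1,stdio \
>     guest0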
>
> When we assign the device to the guest we need to provide it with
> passthru access to the device's config space, MMIO space, and I/O
> space. In the other direction we need to provide the pci device with
> access to the guest's address space.
>
> The pci config space is a bit tricky because we need to intercept
> accesses to the PCI BARs and also to the MSI capability. But beyond
> those registers we simply intercept config space accesses and fulfill
> them by issuing PCIIOCREAD and PCIIOCWRITE ioctls on /dev/pci.
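>
> As a rough userland sketch (not the bhyve sources), a single config
> space read through /dev/pci looks something like this:
>
> #include <sys/types.h>
> #include <sys/ioctl.h>
> #include <sys/pciio.h>
>
> #include <err.h>
> #include <fcntl.h>
> #include <stdio.h>
> #include <string.h>
>
> int
> main(void)
> {
>         struct pci_io pi;
>         int fd;
>
>         fd = open("/dev/pci", O_RDWR);
>         if (fd < 0)
>                 err(1, "open /dev/pci");
>
>         /* Select device 3/0/0 and read the 32-bit vendor/device id. */
>         memset(&pi, 0, sizeof(pi));
>         pi.pi_sel.pc_bus = 3;
>         pi.pi_sel.pc_dev = 0;
>         pi.pi_sel.pc_func = 0;
>         pi.pi_reg = 0x00;
>         pi.pi_width = 4;
>
>         if (ioctl(fd, PCIIOCREAD, &pi) < 0)
>                 err(1, "PCIIOCREAD");
>         printf("cfg[0x00] = 0x%08x\n", pi.pi_data);
>
>         /* A write is symmetric: set pi_data and issue PCIIOCWRITE. */
>         return (0);
> }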
>
> Accesses to the MMIO space are straightforward and require no
> hypervisor intervention after setting up the nested page tables.
>
> Accesses to the I/O space allocated to the device are trapped by the
> hypervisor and handled by opening /dev/io and issuing the IODEV_PIO
> ioctl on it.
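>
> Again only as a sketch, a single port read on the guest's behalf via
> /dev/io would be along these lines (struct iodev_pio_req and the
> IODEV_PIO ioctl come from iodev.h, whose location has moved around
> between releases):
>
> #include <sys/types.h>
> #include <sys/ioctl.h>
> #include <sys/iodev.h>
>
> #include <err.h>
> #include <fcntl.h>
> #include <stdio.h>
>
> int
> main(void)
> {
>         struct iodev_pio_req req;
>         int fd;
>
>         fd = open("/dev/io", O_RDWR);
>         if (fd < 0)
>                 err(1, "open /dev/io");
>
>         /* Emulate an "inb" from port 0x70 for the guest. */
>         req.access = IODEV_PIO_READ;
>         req.port = 0x70;
>         req.width = 1;
>         req.val = 0;
>
>         if (ioctl(fd, IODEV_PIO, &req) < 0)
>                 err(1, "IODEV_PIO");
>         printf("inb(0x%x) = 0x%02x\n", req.port, req.val);
>         return (0);
> }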
>
> And finally, DMA transfers generated by the pci device into the
> guest's address space are transparent to the hypervisor once it has
> set up the IOMMU.
>
> Hope that helps.

Thanks for your explanation.  I will see if I can use it to
pass through a Fusion-io flash card as soon as I get a chance
to set it all up.


Julian
> best
> Neel
>


