VFIO / Passthrough support¶
Thanks to the LS1088's support for IOMMU and GICv3, it is possible to pass PCIe and DPAA2 devices directly through to VMs.
PCIe devices¶
VFIO of PCIe devices is supported, but there is one main limitation:
When passing through miniPCIe cards, both cards must be passed through to the same VM
This also means you cannot pass through just one of the miniPCIe cards (e.g. leaving the other to the host).
This is because the miniPCIe cards are behind a PCIe switch. The LS1088 cannot set up an IOMMU group for the individual devices behind the switch, only for the devices directly attached to it.
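You can confirm this grouping from the host: the kernel exposes IOMMU groups through sysfs. The commands below are a minimal sketch (the PCI address matches the example lspci output further down; substitute your own):
# List every IOMMU group and the devices it contains
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group $(basename "$group"):"
    ls "$group/devices"
done
# Or query a single device's group directly
readlink /sys/bus/pci/devices/0001:03:00.0/iommu_group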
How to passthrough a VFIO device¶
vfio-pci works on LS1088/Ten64 almost exactly the same as it does on other systems (such as x86-64).
- First, identify the PCI bus reference for the device, with lspci:
0000:00:00.0 PCI bridge: Freescale Semiconductor Inc Device 80c0 (rev 10)
0001:00:00.0 PCI bridge: Freescale Semiconductor Inc Device 80c0 (rev 10)
0001:01:00.0 PCI bridge: Pericom Semiconductor Device b304 (rev 01)
0001:02:01.0 PCI bridge: Pericom Semiconductor Device b304 (rev 01)
0001:02:02.0 PCI bridge: Pericom Semiconductor Device b304 (rev 01)
0001:03:00.0 Network controller: Qualcomm Atheros QCA6174 802.11ac Wireless Network Adapter (rev 32)
0001:04:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU
0002:00:00.0 PCI bridge: Freescale Semiconductor Inc Device 80c0 (rev 10)
0002:01:00.0 Non-Volatile memory controller: Micron/Crucial Technology P1 NVMe PCIe SSD (rev 03)
In this case, we will pass through two miniPCIe cards - the QCA6174 wireless controller (0001:03:00.0) and the Coral Edge TPU (0001:04:00.0).
- Then, check if there is a driver that owns the device:
# QCA6174
$ basename $( readlink -f /sys/bus/pci/devices/0001\:03\:00.0/driver )
ath10k_pci
# Coral TPU (no driver in our current kernel)
$ basename $( readlink -f /sys/bus/pci/devices/0001\:04\:00.0/driver )
driver
- If the device has a driver loaded, unbind the existing one:
echo 0001\:03\:00.0 > /sys/bus/pci/drivers/ath10k_pci/unbind
- Load the vfio-pci driver for the devices (a combined sketch of the unbind and bind steps follows this list):
echo "vfio-pci" > /sys/bus/pci/devices/0001\:03\:00.0/driver_override
echo "vfio-pci" > /sys/bus/pci/devices/0001\:04\:00.0/driver_override
echo 0001\:03\:00.0 > /sys/bus/pci/drivers/vfio-pci/bind
echo 0001\:04\:00.0 > /sys/bus/pci/drivers/vfio-pci/bind
- Specify the devices on the QEMU command line:
qemu-system-aarch64 ... -device vfio-pci,host=0001\:03\:00.0 -device vfio-pci,host=0001\:04\:00.0
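The unbind, driver_override and bind steps above can be rolled into a small helper script. This is only a sketch (the vfio_bind function and the device list are illustrative, not part of any Ten64 tooling):
#!/bin/sh
# Rebind each listed PCI device to vfio-pci
vfio_bind() {
    dev="$1"
    # Unbind from the current driver, if one is attached
    if [ -e "/sys/bus/pci/devices/$dev/driver" ]; then
        echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"
    fi
    # Force vfio-pci to claim the device on the next bind
    echo "vfio-pci" > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > /sys/bus/pci/drivers/vfio-pci/bind
}
vfio_bind 0001:03:00.0
vfio_bind 0001:04:00.0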
Things to check if VFIO doesn't work: check that the vfio-pci module is loaded (if it is not built into the kernel).
You may also need to disable D3 sleep states on the kernel command line: vfio-pci.disable_idle_d3=1
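A few quick checks, assuming a typical modular kernel build (adjust for your distribution):
# Is vfio-pci built/loaded?
lsmod | grep vfio
modprobe vfio-pci
# After binding, a VFIO group node should appear here for QEMU to open
ls -l /dev/vfio/
# Look for SMMU/VFIO errors from the kernel
dmesg | grep -iE 'vfio|smmu'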
Devices known to work/not-work under VFIO passthrough¶
Working:
- Atheros chipset wireless cards (ath9k/ath10k)
- Google Coral TPU
Known issues:
- Marvell 92xx SATA controllers (as used by Innodisk mPCIe/M.2 cards).
In some VMs, this causes a kernel panic at boot. We believe this issue is related to the firmware (EFI) reinitializing the card inside the VM rather than to VFIO, but have not debugged it yet.
DPAA2 devices¶
DPAA2 containers (DPRCs) can be passed through to VMs; this is how use cases such as DPDMUX and service chaining are achieved.
DPDK's DPAA2 driver also uses VFIO to directly access the hardware without going through the kernel.
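As an aside, DPDK's dpaa2 bus driver locates its container through the DPRC environment variable; a typical host-side sequence (illustrative only, assuming a DPDK application such as testpmd built for DPAA2) is:
# Bind the container reserved for DPDK to vfio-fsl-mc
echo dprc.2 > /sys/bus/fsl-mc/drivers/vfio-fsl-mc/bind
# Tell DPDK which container to use
export DPRC=dprc.2
./dpdk-testpmd ...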
The child VM must have DPAA2 drivers, and currently (2020-09) you need to either boot the kernel directly under QEMU (-kernel) or use U-Boot as the VM's "BIOS", as there is currently no UEFI firmware that can generate ACPI descriptions for DPAA2 devices under VMs.
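To illustrate the direct-kernel approach, a QEMU invocation might look like the sketch below. The kernel/initrd paths, memory size and console arguments are placeholders; the key points are -kernel in place of a UEFI firmware image, and the vfio-fsl-mc device described in the following sections:
qemu-system-aarch64 -M virt -cpu host -enable-kvm -smp 2 -m 1024 -nographic \
    -kernel Image -initrd rootfs.cpio.gz \
    -append "console=ttyAMA0 root=/dev/ram0" \
    -device vfio-fsl-mc,host=dprc.2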
Software requirements to pass through DPAA2 devices¶
- VFIO passthrough of DPAA2 devices is currently only available in kernels with NXP's LSDK patchset. As of 2023-11, while there is support for fsl-mc VFIO in the mainline kernel, it is not complete. We have added the required patches into the Traverse lts-6-1 tree.
- Patches to QEMU are required; see NXP's qemu repository or the squashed patchset in μVirt.
- DPRC containers that are passed through to VMs must be configured with DPRC_CFG_OPT_IRQ_CFG_ALLOWED so the management complex firmware can direct the IRQs to the VMs. DPRC containers used for DPDK do not need this option.
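For example, a child container intended for VM passthrough might be created with restool along these lines (the exact option set depends on your use case; treat this as a sketch rather than a recommended configuration):
# Create a child container under the root container (dprc.1),
# allowing the MC firmware to reconfigure its IRQs for a VM
restool dprc create dprc.1 \
    --options=DPRC_CFG_OPT_SPAWN_ALLOWED,DPRC_CFG_OPT_ALLOC_ALLOWED,DPRC_CFG_OPT_IRQ_CFG_ALLOWED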
Passing through DPAA2 containers¶
This is conceptually similar to the VFIO-PCI process described above.
- Bind the DPRC to the vfio-fsl-mc driver:
echo dprc.2 > /sys/bus/fsl-mc/drivers/vfio-fsl-mc/bind
Usually DPRCs created at runtime (with restool) are not bound to the host operating system. If the DPRC is bound to the host OS, you can unbind it from its existing driver first:
echo dprc.2 > /sys/bus/fsl-mc/devices/dprc.2/driver/unbind
- Supply the DPRC to QEMU as a vfio-fsl-mc device:
qemu-system-aarch64 ... -device vfio-fsl-mc,host=dprc.2
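Putting the two steps together, a host-side sketch (using the example container dprc.2 from above; the /dev/vfio listing is just a sanity check that the bind succeeded) looks like:
DPRC=dprc.2
# Bind the container to the vfio-fsl-mc driver
echo "$DPRC" > /sys/bus/fsl-mc/drivers/vfio-fsl-mc/bind
# The container's IOMMU group should now be visible to userspace
ls -l /dev/vfio/
# Launch the VM with the container attached
qemu-system-aarch64 ... -device vfio-fsl-mc,host=$DPRC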
Further reading¶
- See these merge requests that added VFIO support to μVirt:
- There is preview support for automated setup of DPDMUX objects in μVirt, which simplifies the DPAA2 passthrough process: