Will GPU passthrough work if I’m also running Home Assistant on the same Proxmox host?
It depends on whether the host can keep a display adapter of its own — either integrated graphics or a second GPU. If it can, other VMs such as Home Assistant run independently on the Proxmox host and are unaffected by the GPU being passed through to a different VM. Security note: IOMMU configuration and kernel parameters affect system-level isolation. Review any changes in the context of your threat model — especially allow_unsafe_interrupts, which weakens IOMMU isolation. Security best practices in this area continue to evolve.
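Before committing to passthrough, it helps to confirm that the kernel actually enabled the IOMMU and to see how your devices are grouped. A minimal sketch, run as root on the Proxmox host (output is entirely hardware-dependent; the sysfs paths are standard):

```shell
#!/bin/sh
# Check that the kernel brought up the IOMMU (Intel reports DMAR, AMD reports AMD-Vi).
dmesg | grep -i -e DMAR -e IOMMU -e AMD-Vi | head

# List every PCI device by IOMMU group. A GPU that shares a group with
# unrelated devices cannot be passed through cleanly on its own.
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}   # strip the prefix ...
    g=${g%%/*}                         # ... leaving only the group number
    printf 'IOMMU group %s: ' "$g"
    lspci -nns "${d##*/}"
done
```

If the loop prints nothing, the IOMMU is not enabled (missing `intel_iommu=on`/`amd_iommu=on` or disabled in firmware) and passthrough will not work regardless of the GPU.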
We’ll be configuring a QDevice in this tutorial for quorum. Quorum ensures that a majority of the nodes in a cluster are online; in a two-node cluster the goal is to reach three total votes. The QDevice we’ll be setting up acts as that third vote without actually running the Proxmox OS. The steps below work on any Debian-based device, but we’ll use a Raspberry Pi specifically because it’s one of the cheapest options available. Remember, you don’t need a Raspberry Pi — any Linux system should work — but it does need to be a separate device from the two Proxmox nodes. This ensures that the cluster keeps quorum (two of three votes) even if one Proxmox node goes offline.
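At a high level, the setup boils down to three commands. A sketch, assuming the QDevice host is reachable at 192.168.1.50 (a placeholder IP) and that root SSH access to it is available from the cluster:

```shell
# On the external QDevice host (the Raspberry Pi or other Debian-based box):
apt install corosync-qnetd

# On EACH of the two Proxmox nodes:
apt install corosync-qdevice

# On one Proxmox node, register the QDevice with the cluster
# (192.168.1.50 is an assumed address — substitute your device's IP):
pvecm qdevice setup 192.168.1.50
```

Afterwards, `pvecm status` on either node should show `Expected votes: 3` with the QDevice listed as a vote source.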
Most discrete PCIe GPUs work with passthrough on Proxmox, but success is heavily hardware-dependent. NVIDIA consumer cards have historically had driver-level restrictions (the “Error 43” issue on GeForce cards in VMs), though this has become less of a problem on newer drivers.
pcie_acs_override=downstream,multifunction forces the kernel to treat PCIe devices as if they have ACS (Access Control Services) support, which allows them to be placed in separate IOMMU groups. You need it when your GPU ends up grouped with unrelated devices, which is determined by your motherboard's PCIe topology and ACS support rather than by the kernel version. Be aware that it weakens DMA isolation between the devices in the overridden groups, so only use it if your groups are genuinely unworkable.
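For context, the override is just another kernel command-line parameter. A sketch of where it goes on a GRUB-based Proxmox host with an Intel CPU (use amd_iommu=on on AMD; the surrounding flags shown are common passthrough settings, not requirements):

```shell
# In /etc/default/grub — append pcie_acs_override only after confirming
# your IOMMU groups are actually merged:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction"

# Apply the change and reboot for the new command line to take effect:
update-grub
reboot
```

Hosts booting via systemd-boot (ZFS-on-root installs) take the same parameters in /etc/kernel/cmdline followed by `proxmox-boot-tool refresh` instead.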