The challenge: run Windows 10 in one or more virtual machines on a Linux host platform without compromising performance of the (Windows) guest system.
For some benchmarks of my current system, see Windows 10 Virtual Machine Benchmarks.
The Solution
In the tutorial below I describe how to install and run Windows 10 as a KVM virtual machine on a Linux Mint or Ubuntu host. The tutorial uses a technology called VGA passthrough (also referred to as “GPU passthrough” or “vfio” for the vfio driver used) which provides near-native graphics performance in the VM. I’ve been doing VGA passthrough since summer 2012, first running Windows 7 on a Xen hypervisor, switching to KVM and Windows 10 in December 2015. The performance – both graphics and computing – under Xen and KVM has been nothing less than stellar!
The tutorial below will only work with suitable hardware!
If your computer does not fulfill the basic hardware requirements outlined below, you won’t be able to make it work.
The tutorial is not written for the beginner! I assume that you do have some Linux background, at least enough to be able to restore your system when things go wrong.
I am also providing links to other, similar tutorials that might help. See the References section below. Last but not least, you will find links to different forums and communities where you can find further information and help.
Note: The tutorial was originally posted on the Linux Mint forum.
Disclaimer
All information and data provided in this tutorial is for informational purposes only. I make no representations as to accuracy, completeness, currentness, suitability, or validity of any information in this tutorial and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its use. All information is provided on an as-is basis.
You are aware that by following this tutorial you may risk the loss of data, or may render your computer inoperable. Backup your computer! Make sure that important documents/data are accessible elsewhere in case your computer becomes inoperable.
For a glossary of terms used in the tutorial, see Glossary of Virtualization Terms.
Tutorial
Note for Ubuntu users: My tutorial uses the “xed” command found in Linux Mint Mate to edit documents. You will have to replace it with “gedit” or whatever editor you use in Ubuntu/Xubuntu/Lubuntu…
Note 2: I’ve just published a new tutorial for a Ryzen based system running Pop!_OS. See “Creating a Windows 10 VM on the AMD Ryzen 9 3900X using Qemu 4.0 and VGA Passthrough”.
Important Note: This tutorial was written several years ago and has been updated to Linux Mint 19 and Ubuntu 18.04 syntax. It uses QEMU 2.11 or QEMU 2.12. Today I use libvirt / virt-manager and either QEMU 4.2 (on Linux Mint 20 / Ubuntu 20.04), or QEMU 5.0 on Manjaro (see the link above).
If you are following this tutorial when running a newer version of the OS and QEMU (e.g. QEMU 4.2 or 5.0), some of the QEMU syntax has changed. For the changes see the QEMU User Documentation.
I simply haven’t found the time to revise the tutorial. That said, you should still be able to use it and find valuable information.
Part 1 – Hardware Requirements
For this tutorial to succeed, your computer hardware must fulfill all of the following requirements:
IOMMU support
In Intel jargon it’s called VT-d. AMD calls it variously AMD Virtualisation, AMD-Vi, or Secure Virtual Machine (SVM); sometimes the term IOMMU is used directly. If you plan to purchase a new PC/CPU, check the following websites for more information:
- Reddit VFIO group – Hardware configuration for successful VFIO
- Arch Linux wiki – PCI passthrough via OVMF examples
- Community reports on working systems – https://passthroughpo.st/vfio-increments/
- The Reddit VFIO community – the most active GPU passthrough community around
- Intel – http://ark.intel.com/Search/FeatureFilter?productType=processors&VTD=true
- Intel processors with ACS support – http://vfio.blogspot.com/2015/10/intel-processors-with-acs-support.html
- Wikipedia – https://en.wikipedia.org/wiki/List_of_IOMMU-supporting_hardware
- AMD – http://products.amd.com/en-us and check the processor specs
- AMD – B350 motherboards are reported to NOT support IOMMU.
Both Intel and AMD have improved their IOMMU support in recent years. There are still differences between CPUs – specifically ACS (Access Control Services) may vary between CPU models. Generally speaking, the high-end Intel or AMD CPUs provide better ACS or device isolation capabilities. That is not to say that the more down-to-earth CPUs won’t work, as long as they support IOMMU.
The first link above provides a non-comprehensive list of CPU/motherboard/GPU configurations where users were successful with GPU passthrough. When building a new PC, make sure you purchase components that support GPU passthrough.
Most PC / motherboard manufacturers disable IOMMU by default. You will have to enable IOMMU in the BIOS. To check your current CPU / motherboard IOMMU support and enable it, do the following:
- Reboot your PC and enter the BIOS setup menu (usually you press F2, DEL, or similar during boot to enter the BIOS setup).
- Search for IOMMU, VT-d, SVM, or “virtualisation technology for directed IO” or whatever it may be called on your system. Turn on VT-d / IOMMU.
- Save and Exit BIOS and boot into Linux.
- Edit the /etc/default/grub file (you need root permission to do so). Open a terminal window (Ctrl+Alt+T) and enter (copy/paste):
xed admin:///etc/default/grub
(use gksudo gedit /etc/default/grub for older Linux Mint/Ubuntu releases)
Here is my /etc/default/grub file before the edit:
GRUB_DEFAULT=0
#GRUB_HIDDEN_TIMEOUT=10
#GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT_STYLE=countdown
GRUB_TIMEOUT=0
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""
Look for the line that starts with GRUB_CMDLINE_LINUX_DEFAULT="...". You need to add one of the following options to this line, depending on your hardware:
Intel CPU:
intel_iommu=on
AMD CPU:
amd_iommu=on
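For example, on an Intel system the edited line would then read:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"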
Save the file and exit. Then type:
sudo update-grub
- Now check that IOMMU is actually supported. Reboot the PC. Open a terminal window.
On AMD machines use:
dmesg | grep AMD-Vi
The output should be similar to this:
…
AMD-Vi: Enabling IOMMU at 0000:00:00.2 cap 0x40
AMD-Vi: Lazy IO/TLB flushing enabled
AMD-Vi: Initialized for Passthrough Mode
…
Or use:
cat /proc/cpuinfo | grep svm
On Intel machines use:
dmesg | grep "Virtualization Technology for Directed I/O"
The output should be this:
[ 0.902214] DMAR: Intel(R) Virtualization Technology for Directed I/O
If you do not get this output, then VT-d or AMD-Vi is not working – you need to fix that before you continue!
Most likely it means that your hardware (CPU or motherboard) doesn’t support IOMMU, in which case there is no point continuing this tutorial. Check again to make sure your CPU supports IOMMU. If yes, the cause may be a faulty motherboard BIOS. See the troubleshooting section further below. You may need to update your motherboard BIOS (be careful, flashing the BIOS can potentially brick your motherboard).
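As an additional sanity check, you can count the CPU virtualization flags (vmx for Intel VT-x, svm for AMD-V – a prerequisite, though not identical to VT-d/IOMMU support):

egrep -c '(vmx|svm)' /proc/cpuinfo

A result of 0 means hardware virtualization is either unsupported or disabled in the BIOS.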
Two graphics processors
In addition to a CPU and motherboard that supports IOMMU, you need two graphics processors (GPU):
- One GPU for your Linux host (the OS you are currently running, I hope);
- One GPU (graphics card) for your Windows guest.
We are building a system that runs two operating systems at the same time. Many resources like disk space, memory, etc. can be switched back and forth between the host and the guest, as needed. Unfortunately the GPU cannot be switched or shared between the two operating systems, at least not in an easy way. (There are ways to reset the graphics card as well as the X server in Linux so you could get away with one graphics card, but I personally believe it’s not ideal. See for example here and here for more on that.)
If, like me, you use Linux for the everyday stuff such as emails, web browsing, documents, etc., and Windows for gaming, photo or video editing, you’ll have to give Windows a more powerful GPU, while Linux will run happily with an inexpensive GPU, or the integrated graphics processor (IGP). (You can also create a Linux VM with GPU passthru if you need Linux for gaming or graphics intensive applications.)
The graphics card to be passed through to Windows (or Linux) must be able to reset properly after VM shutdown. I’ve written a separate post on AMD vs. Nvidia graphics cards but the bottom line is this: avoid AMD GPUs! See also passthroughpo.st and open “Guest GPUs” towards the bottom.
UEFI support in the GPU used with Windows
In this tutorial I use UEFI to boot the Windows VM. That means that the graphics card you are going to use for the Windows guest must support UEFI – most newer cards do. You can check here if your video card and BIOS support UEFI. If you run Windows, download and run GPU-Z and see if there is a check mark next to UEFI. (For more information, see here.)
There are several advantages to UEFI: it starts faster and overcomes some issues associated with legacy boot (Seabios).
If you plan to use the Intel IGD (integrated graphics device) for your Linux host, UEFI boot is the way to go. UEFI overcomes the VGA arbitration problem associated with the IGD and the use of the legacy Seabios.
If, for some reason, you cannot boot the VM using UEFI, and you want to use the Intel IGD for the host, you need to compile the i915 VGA arbiter patch into the kernel. Before you do, check the note below. For more on VGA arbitration, see here. For the i915 VGA arbiter patch, look here or under Part 15 – References.
Note: If your GPU does NOT support UEFI, there is still hope. You might be able to find a UEFI BIOS for your card at TechPowerUp Video BIOS Collection. A Youtube blogger calling himself Spaceinvader has produced a very helpful video on using a VBIOS.
If there is no UEFI video BIOS for your Windows graphics card, you will have to look for a tutorial using the Seabios method. It’s not much different from this one, but there are some things to consider.
Laptop users with Nvidia Optimus technology: Misairu_G (username) published an in-depth guide to VGA passthrough on laptops using Nvidia Optimus technology – see GUIDE to VGA passthrough on Nvidia Optimus laptops. (For reference, here some older posts on the subject: https://forums.linuxmint.com/viewtopic.php?f=231&t=212692&p=1300764#p1300634.)
Note: In recent years AMD graphics cards have suffered from a bug that is termed “reset bug”. Modern AMD graphics cards are often not capable of performing a proper “function level reset” (FLR, in short). They will boot fine, but when you shut down the VM and boot it again, you’ll get an “internal error: Unknown PCI header type ‘127’“.
There are some workarounds for this error. See the Troubleshooting section below.
Part 2 – Installing Qemu / KVM
The Qemu release shipped with Linux Mint 19 is version 2.11 and supports the latest KVM features.
In order to have Linux Mint “remember” the installed packages, use the Software Manager to install the following packages:
qemu-kvm
qemu-utils
seabios
ovmf
hugepages
cpu-checker
bridge-utils
Screenshot: Linux Mint Software Manager
For AMD Ryzen, see also here (note that Linux Mint 19/Ubuntu 18.04 only require the BIOS update). Generally, AMD has had a range of issues with VFIO/GPU passthrough support. Read through the troubleshooting section further below and check the links under hardware compatibility for further information.
Alternatively, use
sudo apt install qemu-kvm qemu-utils seabios ovmf hugepages cpu-checker bridge-utils
to install the required packages.
Part 3 – Determining the Devices to Pass Through to Windows
We need to find the PCI ID(s) of the graphics card and perhaps other devices we want to pass through to the Windows VM. Normally the IGP (the GPU inside the processor) will be used for Linux, and the discrete graphics card for the Windows guest. My CPU does not have an integrated GPU, so I use 2 graphics cards. Here is my hardware setup:
GPU for Linux: Nvidia Quadro 2000 residing in the first PCIe graphics card slot.
GPU for Windows: Nvidia GTX 970 residing in the second PCIe graphics card slot.
To determine the PCI bus number and PCI IDs, enter:

lspci | grep VGA

Here is the output on my system:
01:00.0 VGA compatible controller: NVIDIA Corporation GF106GL [Quadro 2000] (rev a1)
02:00.0 VGA compatible controller: NVIDIA Corporation Device 13c2 (rev a1)
The first card under 01:00.0 is the Quadro 2000 I want to use for the Linux host. The other card under 02:00.0 I want to pass to Windows.
Modern graphics cards usually come with an on-board audio controller, which we need to pass through as well. To find its ID, enter:
lspci -nn | grep 02:00.
Substitute “02:00.” with the bus number of the graphics card you wish to pass to Windows, without the trailing “0“. Here is the output on my computer:
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:13c2] (rev a1)
02:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fbb] (rev a1)
Write down the bus numbers (02:00.0 and 02:00.1 above), as well as the PCI IDs (10de:13c2 and 10de:0fbb in the example above). We need them in the next part.
Now check to see that the graphics card resides within its own IOMMU group. To list all IOMMU groups and their devices, enter:

find /sys/kernel/iommu_groups/ -type l

For a sorted list, use:

find /sys/kernel/iommu_groups/ -type l | sort --version-sort
Look for the bus number of the graphics card you want to pass through. Here is the (shortened) output on my system:
…
/sys/kernel/iommu_groups/19/devices/0000:00:1f.3
/sys/kernel/iommu_groups/20/devices/0000:01:00.0
/sys/kernel/iommu_groups/20/devices/0000:01:00.1
/sys/kernel/iommu_groups/21/devices/0000:02:00.0
/sys/kernel/iommu_groups/21/devices/0000:02:00.1
/sys/kernel/iommu_groups/22/devices/0000:05:00.0
/sys/kernel/iommu_groups/22/devices/0000:06:04.0
…
Make sure the GPU and perhaps other PCI devices you wish to pass through reside within their own IOMMU group. In my case the graphics card and its audio controller designated for passthrough both reside in IOMMU group 21. No other PCI devices reside in this group, so all is well.
If your VGA card shares an IOMMU group with other PCI devices, see IOMMU GROUP CONTAINS ADDITIONAL DEVICES for a solution!
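If you prefer a more readable overview, the following small helper script (my own sketch, not part of the original commands) prints each IOMMU group together with the lspci description of its devices:

#!/bin/bash
# List every IOMMU group and the devices it contains,
# with human-readable lspci descriptions.
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for device in "$group"/devices/*; do
        echo -n "    "
        lspci -nns "${device##*/}"
    done
done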
Next step is to find the mouse and keyboard (USB devices) that we want to assign to the Windows guest. Remember, we are going to run 2 independent operating systems side by side, and we control them via mouse and keyboard.
About keyboard and mouse
Depending on whether and how much control you want to have over each system, there are different approaches:
1. Get a USB-KVM (Keyboard/VGA/Mouse) switch. This is a small hardware device with usually 2 USB ports for keyboard and mouse as well as a VGA or (the more expensive) DVI or HDMI graphics output. In addition, the USB-KVM switch has two USB cables and 2 VGA/DVI/HDMI cables to connect to two different PCs. Since we run 2 virtual PCs on one single system, this is a viable solution. See also my Virtualization Hardware Accessories post.
Advantages:
– Works without special software in the OS, just the usual mouse and keyboard drivers;
– Best in performance – no software overhead.
Disadvantages:
– Requires extra (though inexpensive) hardware;
– More cable clutter and another box with cables on your desk;
– Requires you to press a button to switch between host and guest and vice versa;
– Many low-cost KVM are unreliable and do not initialize the keyboard or mouse properly when switching USB ports;
– Need to pass through a USB port or controller – see below on IOMMU groups.
2. Without spending a nickel, you can simply pass through your mouse and keyboard when the VM starts.
Advantages:
– Easy to implement;
– No money to invest;
– Good solution for setting up Windows.
There are at least two ways to accomplish this task. I will describe both options.
3. Synergy (http://symless.com/synergy/) is a commercial software solution that, once installed and configured, allows you to interact with two PCs or virtual machines.
Advantages:
– Most versatile solution, especially with dual screens;
– Software only, easy to configure;
– No hardware purchase required.
Disadvantages:
– Requires the installation of software on both the host and the guest;
– Doesn’t work during Windows installation (see option 2);
– Costs $10 for a Basic, lifetime license;
– May produce lag, although I doubt you’ll notice unless there is something wrong with the bridge configuration.
4. A “multi-device” Bluetooth keyboard and mouse that can connect to two different devices and switch between them at the press of a button (see for example here):
Advantages:
– Most convenient solution;
– Same performance as option 1.
Disadvantages:
– Price.
– Make sure the device supports Linux, or that you can return it if it doesn’t!
I first went with option 1 for simplicity and universality, but have since replaced it with option 4: the USB-KVM soon started to malfunction and gave me lots of trouble.
I’m now using a Logitech MX master 2S mouse and a Logitech K780 BT keyboard. See here for how to pair these devices to the USB dongles.
Both options 1 and 4 usually require passing a USB PCI device through to the Windows guest. I needed both USB2 and USB3 ports in my Windows VM and was able to pass through two USB controllers to my Windows guest, using PCI passthrough.
For the VM installation we choose option 2 (see above), that is, we pass our keyboard and mouse through to the Windows VM. To do so, we need to identify their USB IDs:

lsusb

Here is my system output (truncated):
…
Bus 010 Device 006: ID 045e:076c Microsoft Corp. Comfort Mouse 4500
Bus 010 Device 005: ID 045e:0750 Microsoft Corp. Wired Keyboard 600
…
Note down the IDs: 045e:076c and 045e:0750 in my case.
Part 4 – Prepare for Passthrough
In order to make the graphics card available to the Windows VM, we will assign a “dummy” driver as a placeholder: vfio-pci. To do that, we first have to prevent the default driver from binding to the graphics card. This can sometimes be tricky, as some drivers load early in the boot process and prevent binding to vfio-pci.
(One way to accomplish that is by blacklisting driver modules, or by using Kernel Mode Settings. For more on Kernel Mode Setting, see https://wiki.archlinux.org/index.php/kernel_mode_setting.)
Note: If you have two identical graphics cards for both the host and the VM, the method below won’t work. In that case see Using the driver_override feature.
The method I describe below uses module alias (thanks to this post). Another promising method is described in this tutorial.
Run the following command:

cat /sys/bus/pci/devices/0000:02:00.0/modalias

where 0000:02:00.0 is the PCI bus number of your graphics card obtained in Part 3 above. The output will look something like:
pci:v000010DEd000013C2sv00001458sd00003679bc03sc00i00
Repeat the above command with the PCI bus number of the audio part:

cat /sys/bus/pci/devices/0000:02:00.1/modalias

where 0000:02:00.1 is the PCI bus number of your graphics card’s audio device.
In the terminal window, enter the following:

sudo -i

followed by your password to get a root terminal.
Open or create /etc/modprobe.d/local.conf:

xed /etc/modprobe.d/local.conf
and copy and paste the results from the two cat /sys/… commands above. Then precede the lines with “alias” and append the lines with “vfio-pci”, as shown below:
alias pci:v000010DEd000013C2sv00001458sd00003679bc03sc00i00 vfio-pci
alias pci:v000010DEd00000FBBsv00001458sd00003679bc04sc03i00 vfio-pci
At the end of that file, add the following line:

options vfio-pci ids=10de:13c2,10de:0fbb

where 10de:13c2 and 10de:0fbb are the PCI IDs for your graphics card’s VGA and audio part, as determined in the previous paragraph.
You can also add the following option below the options vfio-pci entry:

options vfio-pci disable_vga=1

(The above entry is only valid for 4.1 and newer kernels and UEFI guests. It helps prevent VGA arbitration from interfering with host devices.)
Save the file and exit the editor.
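For reference, after these edits my complete /etc/modprobe.d/local.conf looks as follows (your alias strings and PCI IDs will differ):

alias pci:v000010DEd000013C2sv00001458sd00003679bc03sc00i00 vfio-pci
alias pci:v000010DEd00000FBBsv00001458sd00003679bc04sc03i00 vfio-pci
options vfio-pci ids=10de:13c2,10de:0fbb
options vfio-pci disable_vga=1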
Some applications like Passmark and Windows 10 releases 1803 and newer require the following option, which you can add to /etc/modprobe.d/kvm.conf (see also the Troubleshooting section below):

options kvm ignore_msrs=1
To load vfio and other required modules at boot, edit the /etc/initramfs-tools/modules file:

xed admin:///etc/initramfs-tools/modules

At the end of the file, add the modules in the order listed below:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
vhost-net
Save and close the file.
Any changes in /etc/modprobe.d require you to update the initramfs. Enter at the command line:

sudo update-initramfs -u
Part 5 – Network Settings
For performance reasons it is best to create a virtual network bridge that connects the VM with the host. In a separate post I have written a detailed tutorial on how to set up a bridge using Network Manager.
Note: Bridging only works for wired networks. If your PC is connected to a router via a wireless link (WiFi), you won’t be able to use a bridge. The easiest way to get networking inside the Windows VM is then not to configure any network at all: delete the network configuration from the qemu command (script) and qemu will fall back to its default user-mode networking. If you still want to use a bridged network, there are workarounds such as routing or ebtables (see https://wiki.debian.org/BridgeNetworkConnections#Bridging_with_a_wireless_NIC).
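As a rough sketch of the bridge setup using Network Manager’s command line tool (assuming a wired interface named enp3s0 and a default connection called "Wired connection 1" – both names will differ on your system):

# Create bridge br0 and attach the wired NIC to it
nmcli connection add type bridge ifname br0 con-name br0
nmcli connection add type bridge-slave ifname enp3s0 master br0
# Deactivate the plain wired connection and bring up the bridge
nmcli connection down "Wired connection 1"
nmcli connection up br0

See my bridge tutorial linked above for the full procedure, including IP configuration.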
Once you’ve set up the network, reboot the computer and test your network configuration – open your browser and see if you have Internet access.
Part 6 – Setting up Hugepages
Moved to Part 18 – Performance Tuning. This is a performance tuning measure and not required to run Windows on Linux. See Configure Hugepages under Part 18 – Performance Tuning.
Part 7 – Download the VFIO drivers
Download the VFIO driver ISO to be used with the Windows installation from https://docs.fedoraproject.org/en-US/quick-docs/creating-windows-virtual-machines-using-virtio-drivers/index.html. Below are the direct links to the ISO images:
Latest VIRTIO drivers: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/virtio-win.iso
Stable VIRTIO drivers: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso
I chose the latest driver ISO.
Part 8 – Prepare Windows VM Storage Space
We need some storage space on which to install the Windows VM. There are several choices:
- Create a raw image file.
Advantages:
– Easy to implement;
– Flexible – the file can grow with your requirements;
– Snapshots;
– Easy migration;
– Good performance.
Disadvantages:
– Takes up the entire space you specify.
- Create a dedicated LVM volume.
Advantages:
– Familiar technology (at least to me);
– Excellent performance, like bare-metal;
– Flexible – you can add physical drives to increase the volume size;
– Snapshots;
– Mountable within Linux host using kpartx.
Disadvantages:
– Takes up the entire space specified;
– Migration isn’t that easy.
- Pass through a PCI SATA controller / disk.
Advantages:
– Excellent performance, using original Windows disk drivers;
– Allows the use of Windows virtual drive features;
– Can use an existing bare-metal installation of Windows in a VM;
– Possibility to boot Windows directly, i.e. not as VM;
– Possible to add more drives.
Disadvantages:
– The PC needs at least two discrete SATA controllers;
– Host has no access to disk while VM is running;
– Requires a dedicated SATA controller and drive(s);
– SATA controller must have its own IOMMU group;
– Possible conflicts in Windows between bare-metal and VM operation.
For further information on these and other image options, see here: https://en.wikibooks.org/wiki/QEMU/Images
Although I’m using an LVM volume, I suggest you start with the raw image. Let’s create a raw disk image, for example:

qemu-img create -f raw -o preallocation=full /media/user/win.img 100G

for performance, or simply:

qemu-img create -f raw /media/user/win.img 100G
Note: Adjust size (100G) and path to match your needs or resources.
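You can verify the result with:

qemu-img info /media/user/win.img

which reports the image’s format, virtual size, and the disk space actually allocated.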
See also my post on Tuning VM disk performance.
Part 9 – Check Configuration
It’s best to check that we got everything:
KVM: kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
KVM module: lsmod | grep kvm
kvm_intel 200704 0
kvm 593920 1 kvm_intel
irqbypass 16384 2 kvm,vfio_pci
Above is the output for the Intel module.
VFIO: lsmod | grep vfio
vfio_pci 45056 0
vfio_virqfd 16384 1 vfio_pci
irqbypass 16384 2 kvm,vfio_pci
vfio_iommu_type1 24576 0
vfio 32768 2 vfio_iommu_type1,vfio_pci
QEMU: qemu-system-x86_64 --version
You need QEMU emulator version 2.5.0 or newer. On Linux Mint 19 / Ubuntu 18.04 the QEMU version is 2.11.
Did vfio load and bind to the graphics card?
lspci -kn | grep -A 2 02:00
where 02:00 is the bus number of the graphics card to pass to Windows. Here the output on my PC:
02:00.0 0300: 10de:13c2 (rev a1)
Subsystem: 1458:3679
Kernel driver in use: vfio-pci
02:00.1 0403: 10de:0fbb (rev a1)
Subsystem: 1458:3679
Kernel driver in use: vfio-pci
Kernel driver in use is vfio-pci. It worked!
Interrupt remapping: dmesg | grep VFIO
[ 3.288843] VFIO – User Level meta-driver version: 0.3
All good!
If you get this message:
vfio_iommu_type1_attach_group: No interrupt remapping support. Use the module param “allow_unsafe_interrupts” to enable VFIO IOMMU support on this platform
enter the following command in a root terminal (or use sudo -i):
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/vfio_iommu_type1.conf
followed by:

update-initramfs -u
In this case you need to reboot once more.
Part 10 – Create Script to Start Windows
To create and start the Windows VM, copy the script below and save it as windows10vm.sh (or whatever name you like, just keep the .sh extension):
#!/bin/bash

vmname="windows10vm"

if ps -ef | grep qemu-system-x86_64 | grep -q multifunction=on; then
    echo "A passthrough VM is already running." &
    exit 1
else

# use pulseaudio
export QEMU_AUDIO_DRV=pa
export QEMU_PA_SAMPLES=8192
export QEMU_AUDIO_TIMER_PERIOD=99
export QEMU_PA_SERVER=/run/user/1000/pulse/native

cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd

qemu-system-x86_64 \
  -name $vmname,process=$vmname \
  -machine type=q35,accel=kvm \
  -cpu host,kvm=off \
  -smp 4,sockets=1,cores=2,threads=2 \
  -m 8G \
  -balloon none \
  -rtc clock=host,base=localtime \
  -vga none \
  -nographic \
  -serial none \
  -parallel none \
  -soundhw hda \
  -usb \
  -device usb-host,vendorid=0x045e,productid=0x076c \
  -device usb-host,vendorid=0x045e,productid=0x0750 \
  -device vfio-pci,host=02:00.0,multifunction=on \
  -device vfio-pci,host=02:00.1 \
  -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
  -boot order=dc \
  -drive id=disk0,if=virtio,cache=none,format=raw,file=/media/user/win.img \
  -drive file=/home/user/ISOs/win10.iso,index=1,media=cdrom \
  -drive file=/home/user/Downloads/virtio-win-0.1.140.iso,index=2,media=cdrom \
  -netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
  -device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01

exit 0
fi
Make the file executable:

chmod +x windows10vm.sh
You need to edit the file and change the settings and paths to match your CPU and configuration. See below for explanations on the qemu-system-x86 options:
-name $vmname,process=$vmname
Name and process name of the VM. The process name is displayed when using ps -A to show all processes, and used in the script to determine if the VM is already running. Don’t use win10 as process name, for some inexplicable reason it doesn’t work!
-machine type=q35,accel=kvm
This specifies a machine to emulate. The accel=kvm option tells qemu to use the KVM acceleration – without it the Windows guest will run in qemu emulation mode, that is it’ll run real slow.
I have chosen the type=q35 option, as it improved my SSD read and write speeds. See https://wiki.archlinux.org/index.php/QEMU#Virtual_machine_runs_too_slowly. In some cases type=q35 will prevent you from installing Windows; instead you may need to use type=pc,accel=kvm. See the post here. To see all options for type=…, enter the following command:

qemu-system-x86_64 -machine help
Important: Several users passing through Radeon RX 480 and Radeon RX 470 cards have reported reboot loops after updating and installing the Radeon drivers. If you pass through a Radeon graphics card, it is better to replace the -machine line in the startup script with the following line:

-machine type=pc,accel=kvm

to use the default i440fx emulation.
Note for IGD users: If you have an Intel CPU with internal graphics (IGD), and want to use the Intel IGD for Windows, there is a new option to enable passthrough:
igd-passthru=on|off controls IGD GFX passthrough support (default=off).
In most cases you will want to use a discrete graphics card with Windows.
-cpu host,kvm=off
This tells qemu to emulate the host’s exact CPU. There are more options, but it’s best to stay with host.
The kvm=off option is only needed for Nvidia graphics cards – if you have an AMD/Radeon card for your Windows guest, you can remove that option and specify -cpu host.
-smp 4,sockets=1,cores=2,threads=2
This specifies multiprocessing. -smp 4 tells the system to use 4 (virtual) processors. My CPU has 6 cores, each supporting 2 threads, which makes a total of 12 threads. It’s probably best not to assign all CPU resources to the Windows VM – the host also needs some resources (remember that some of the processing and I/O coming from the guest takes up CPU resources in the host). In the above example I gave Windows 4 virtual processors. sockets=1 specifies the number of actual CPU sockets qemu should assign, cores=2 tells qemu to assign 2 processor cores to the VM, and threads=2 specifies 2 threads per core. It may be enough to simply specify -smp 4, but I’m not sure about the performance consequences (if any).
If you have a 4-core Intel CPU with hyper-threading, you can specify -smp 6,sockets=1,cores=3,threads=2 to assign 75% of your CPU resources to the Windows VM. This should usually be enough even for demanding games and applications.
Note: If your CPU doesn’t support hyper-threading, specify threads=1.
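To see how many sockets, cores, and threads your host actually offers before choosing the -smp values, run:

lscpu | grep -E '^(CPU\(s\)|Thread|Core|Socket)'

On my CPU this reports 12 CPUs, 2 threads per core, 6 cores per socket, and 1 socket – hence -smp 4,sockets=1,cores=2,threads=2 leaves plenty of resources for the host.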
-m 8G
The -m option assigns memory (RAM) to the VM, in this case 8 GByte. Same as -m 8192. You can increase or decrease it, depending on your resources and needs. With modern Windows releases it doesn’t make sense to give it less than 4G, unless you are really stretched with RAM. If you use hugepages, make sure your hugepage size matches this!
-mem-path /dev/hugepages
This tells qemu where to find the hugepages we reserved. If you haven’t configured hugepages, you need to remove this option.
-mem-prealloc
Preallocates the memory we assigned to the VM.
-balloon none
We don’t want memory ballooning.
-rtc clock=host,base=localtime
-rtc clock=host tells qemu to use the host clock for synchronization. base=localtime allows the Windows guest to use the local time from the host system. Another option is utc.
-vga none
Disables the built in graphics card emulation. You can remove this option for debugging.
-nographic
Totally disables SDL graphical output. For debugging purposes, remove this option if you don’t get to the Tiano Core screen.
-serial none
-parallel none
Disable serial and parallel interfaces. Who needs them anyway?
-soundhw hda
Together with the export QEMU_AUDIO_DRV=pa shell command, this option enables sound through PulseAudio.
If you want to pass through a physical audio card or audio device and stream audio from your Linux host to your Windows guest, see here: Streaming Audio from Linux to Windows.
-usb
-device usb-host,vendorid=0x045e,productid=0x076c
-device usb-host,vendorid=0x045e,productid=0x0750
-usb enables USB support and -device usb-host… assigns the USB host devices mouse (045e:076c) and keyboard (045e:0750) to the guest. Replace the device IDs with the ones you found using the lsusb command in Part 3 above!
Note the new syntax. There are also many more options that you can find here: file:///usr/share/doc/qemu-system-common/qemu-doc.html.
There are three options to assign host devices to guests. Here the syntax:
-usb \
-device usb-kbd \
-device usb-mouse \
passes through the keyboard and mouse to the VM. When using this option, remove the -vga none and -nographic options from the script to enable switching back and forth between Windows VM and Linux host using CTRL+ALT.
-usb \
-device usb-host,hostbus=bus,hostaddr=addr \
passes through the host device identified by bus and addr.
-usb \
-device usb-host,vendorid=vendor,productid=product \
passes through the host device identified by vendor and product ID.
-device vfio-pci,host=02:00.0,multifunction=on \
-device vfio-pci,host=02:00.1 \
Here we specify the graphics card to pass through to the guest, using vfio-pci. Fill in the PCI IDs you found under Part 3 above. It is a multifunction device (graphics and sound). Make sure to pass through both the video and the sound part (02:00.0 and 02:00.1 in my case).
-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd
Specifies the location and format of the bootable OVMF UEFI file. This file doesn’t contain the variables, which are loaded separately (see right below).
-drive if=pflash,format=raw,file=/tmp/my_vars.fd
These are the variables for the UEFI boot file, which were copied by the script to /tmp/my_vars.fd.
-boot order=dc
Start boot from CD (d), then first hard disk (c). After installation of Windows you can remove the “d” to boot straight from disk.
-drive id=disk0,if=virtio,cache=none,format=raw,file=/media/user/win.img
Defines the first hard disk. With the options above it will be accessed as a paravirtualized (if=virtio) drive in raw format (format=raw).
Important: file=/… enter the path to your previously created win.img file.
Other possible drive options are file=/dev/mapper/group-vol for LVM volumes, or file=/dev/sdx1 for entire disks or partitions.
For some basic -drive options, see my post here. For the new Qemu syntax and drive performance tuning, see Tuning VM Disk Performance.
-drive file=/home/user/ISOs/win10.iso,index=1,media=cdrom
This attaches the Windows win10.iso as CD or DVD. The driver used is the ide-cd driver.
Important: file=/… enter the path to your Windows ISO image.
Note: This option is only needed during installation. Afterwards, copy the line to the end of the file and comment it out with #.
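For example, after installation the commented-out line at the end of the script would look like this (using my path):

# -drive file=/home/user/ISOs/win10.iso,index=1,media=cdrom \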
-drive file=/home/user/Downloads/virtio-win-0.1.140.iso,index=2,media=cdrom
This attaches the virtio ISO image as CD. Note the different index.
Important: file=/… enter the path to your virtio ISO image. If you downloaded it to the default location, it should be in your Downloads directory.
Note 1: There are many ways to attach ISO images or drives and invoke drivers. My system didn’t want to take a second scsi-cd device, so this option did the job. Unless this doesn’t work for you, don’t change it.
Note 2: This option is only needed during installation. Afterwards, copy the line to the end of the file and comment it out with #.
-netdev type=tap,id=net0,ifname=vmtap0,vhost=on
-device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01
Defines the network interface and network driver. It’s best to define a MAC address, here 00:16:3e:00:01:01. The MAC is specified in Hex and you can change the last :01:01 to your liking. Make sure no two MAC addresses are the same!
vhost=on is optional – some people reported problems with this option. It is for network performance improvement.
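If you run several VMs, a quick way to generate a random MAC in the same locally administered 00:16:3e range (the Xen OUI used above) is this bash one-liner – a convenience sketch, not part of the original script:

printf '00:16:3e:%02x:%02x:%02x\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))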
For more information: https://wiki.archlinux.org/index.php/QEMU#Network and https://wiki.archlinux.org/index.php/QEMU#Networking.
Important: Documentation on the installed QEMU can be found here: file:///usr/share/doc/qemu-system-common/qemu-doc.html.
For syntax changes in newer versions, see https://wiki.qemu.org/Features/RemovedFeatures.
Linux Mint 19.2 and Ubuntu 18.04 come with QEMU 2.11, Ubuntu 18.10 with 2.12. Ubuntu 19.04 uses QEMU 3.1. The latest stable version of QEMU is 4.1.0. For additional documentation on QEMU, see https://www.qemu.org/documentation/. Some configuration examples can be found in the following directory:
/usr/share/doc/qemu-system-common/config
Part 11 – Install Windows
Start the VM by running the script as root:

sudo ./windows10vm.sh
(Make sure you specify the correct path.)
You should get a Tiano Core splash screen with the memory test result.
You might land in an EFI shell. Type exit and you should be getting a menu. Enter the “Boot Manager” menu, select your boot disk and hit Enter. (See below.)
Screenshots: UEFI shell, UEFI menu, and UEFI boot manager menu (OVMF)
Now the Windows ISO boots and asks you to:
Press any key to start the CD / DVD…
Press a key!
Windows will then ask you to:
Select the driver to install
Click “Browse”, then select your VFIO ISO image and go to “viostor“, open and select your Windows version (w10 for Windows 10), then select the “AMD64” version for 64 bit systems, click OK.
Note: Instead of the viostor driver, you can also install the vioscsi driver. See qemu documentation for proper syntax in the qemu command – make sure to change the startup script before you choose this driver. The vioscsi driver supports trim for SSD drives.
Windows will ask for the license key, and you need to specify how to install – choose “Custom”. Then select your drive (there should be only disk0) and install.
Windows may reboot several times. When done rebooting, open Device Manager and select the Network interface. Right-click and select update. Then browse to the VFIO disk and install NetKVM.
Windows should be looking for a display driver by itself. If not, install it manually.
Note: In my case, Windows did not correctly detect my drives being SSD drives. Not only will Windows 10 perform unnecessary disk optimization tasks, but these “optimizations” can actually lead to reduced SSD life and performance issues. To make Windows 10 determine the correct disk drive type, do the following:
- Inside Windows 10, right-click the Start menu.
- Select “Command prompt (admin)”.
- At the command prompt, run:

winsat formal
- It will run a while and then print the Windows Experience Index (WEI).
- Please share your WEI in a comment below!
To check that Windows correctly identified your SSD:
- Open Explorer
- Click “This PC” in the left tab.
- Right-click your drive (e.g. C:) and select “Properties”.
- Select the “Tools” tab.
- Click “Optimize”
You should see something similar to this:
Screenshot: Use Optimize Drives to optimize for SSD
In my case, I have drive C: (my Windows 10 system partition) and a “Recovery” partition located on an SSD; the other two partitions (“photos” and “raw_photos”) are on regular hard drives (HDD). Notice the “Optimization not available”.
Turn off hibernation and suspend! Having either of them enabled can cause your Windows VM to hang, or may even affect the host. To turn off hibernation and suspend, follow the instructions for hibernation and suspend.
Turn off fast startup! When you shut down the Windows VM, fast startup leaves the file system in a state that is unmountable by Linux. If something goes wrong, you’re screwed. NEVER EVER let proprietary technology have control over your data. Follow these instructions to turn off fast startup.
By now you should have a working Windows VM with VGA passthrough.
Part 12 – Troubleshooting
Below are a number of common issues when trying to install/run Windows in a VGA passthrough environment.
Kernel 5.1 through 5.3 Issues with VFIO – solved with latest kernel update
It’s been reported that kernel branches 5.1, 5.2 and 5.3 introduced issues with VFIO passthrough, including outright crashes. If you stick to the good old Linux Mint 19.2 system and the kernels provided in the LTS repository, you do not need to worry.
Important: The problem has been solved with the latest 5.3 kernels! I’m currently running kernel 5.3.0-40 without issues on my system.
System Thread Exception Not Handled
When you receive a Windows blue screen with the error:
System Thread Exception Not Handled
enter the following command in a root terminal:

echo "options kvm ignore_msrs=1" >> /etc/modprobe.d/kvm.conf
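If you are working from a normal user shell instead, note that sudo does not apply to the >> redirection; an equivalent that works with sudo is:

echo "options kvm ignore_msrs=1" | sudo tee -a /etc/modprobe.d/kvm.conf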
See also Part 4 above.
No Video Output after GPU Passthrough (black screen)
A number of people report that they can’t get video output on their passed-through GPU. This is quite a common issue.
When booting the PC, the host UEFI initializes the GPU and makes a somewhat modified “shadow copy” of the GPU’s vBIOS. Later, when you start the VM, Linux exposes this crippled shadow BIOS to the guest’s UEFI loader. The same happens when you try to pass through your primary (and only) GPU to the guest. A telltale sign is the following error when running the VM start script:
qemu-system-x86_64: -device vfio-pci,host=02:00.0,multifunction=on: Failed to mmap 0000:02:00.0 BAR 3. Performance may be slow
If you use 2 graphics cards (one for the host and one for the VM), before trying any of the following steps, check the following:
- Your BIOS may allow you to select the PCIE slot where your (host) graphics card is located. Boot into the BIOS and check to see if the host graphics card is selected.
- In any other case, you would normally want to place your host GPU in slot 1 and your passthrough GPU in another slot (for example PCIE slot 2). If that is not the case, you need to follow the steps below to provide a proper vBIOS to the passthrough GPU.
There are several possible solutions, all described in Explaining CSM, efifb=off, and Setting the Boot GPU Manually. The first solution to try is to enable CSM (Compatibility Support Module) in the motherboard BIOS. This may set another GPU as the primary GPU, leaving the passthrough GPU untouched. Run the VM and check whether you now get video output.
If the above CSM solution doesn’t work, the probably best solution is to:
- Put the passthrough GPU in a secondary PCI slot;
- Temporarily install a graphics card into the primary PCI slot;
- Create a BIOS dump file of the passthrough GPU;
- Remove the temporary primary GPU, install the passthrough GPU and boot the VM using the romfile=/…/GPU_BIOS.dump file we created.
Turn off the PC and unplug from mains. Remove the passthrough GPU from its slot, place it in another slot, and put in another GPU into the primary GPU slot.
Now turn on the PC, open a terminal window, and enter:

lspci -v | grep VGA

You should see your two VGA cards, for example:
01:00.0 VGA compatible controller: NVIDIA Corporation GF106GL [Quadro 2000] (rev a1) (prog-if 00 [VGA controller])
02:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1) (prog-if 00 [VGA controller])
Note the PCI bus of the GPU you want to pass through (in the above example: 02:00.0). To unbind this card from the vfio-pci driver (in case it is bound), use these commands:
sudo -i
echo "0000:02:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind
Should you get an error, it just means the card wasn’t bound to the vfio-pci driver.
Now enter:

cd /sys/bus/pci/devices/0000:02:00.0/

(replace 02:00.0 with your PCI bus number), followed by:

echo 1 > rom
cat rom > /path/to/GPU_BIOS.dump
echo 0 > rom
In case your graphics card was bound to vfio-pci (no error message when unbinding), enter the following:
echo "0000:02:00.0" > /sys/bus/pci/drivers/vfio-pci/bind
(replace 02:00.0 with your PCI bus).
Turn off the PC, disconnect from mains, and replace the temporary GPU with the passthrough GPU.
(Note: Instead of the above procedure, you can download your video BIOS from TechPowerUp and modify the vBIOS file as shown here.)
After booting the host once again, edit the VM start script and add the romfile option to the qemu -device command for your GPU:
-device vfio-pci,host=02:00.0,multifunction=on,romfile=/path/to/GPU_BIOS.dump \
AMD Motherboard BIOS Update Breaks VFIO
It’s been reported that updating the AMD X370 and X470 motherboard BIOS to support the latest Ryzen CPUs breaks VFIO passthrough. See Upgrading my Bios ready for Nextgen Ryzen Broke my Passthrough Twice! and Attention amd vfio users do not update your bios. This issue has already been reported some time ago, and work was done to produce a kernel patch. But as long as AMD and/or the motherboard/BIOS vendors don’t fix their bugs, you should NOT upgrade the BIOS, unless you must because you upgraded to a CPU that isn’t supported by the current BIOS.
Make sure to check the VFIO Reddit forum to see what’s working and what not.
Youtuber Spaceinvador One has created a video on how to downgrade an updated BIOS: How to Downgrade an AMD Bios & Agesa. Proceed with caution, if you must.
AMD Ryzen 3000 Considerations
After seeing some success stories and Zen 2 / Ryzen based tutorials, I decided to give it a try and bought a X570 motherboard and an AMD Ryzen 9 3900X CPU. This new hardware needed a more up-to-date QEMU release so I used a Ubuntu 19.04 based distro and wrote a new how-to – see Creating a Windows 10 VM on the AMD Ryzen 9 3900X using Qemu 4.0 and VGA Passthrough.
Linux Mint is expected to see a new release coming out in June 2020, which will be based on Ubuntu 20.04 LTS. If you bought an AMD Ryzen or Threadripper system, I strongly recommend that you sit this out and use the new Linux Mint 20 when it becomes available.
AMD GPU doesn’t reset after VM shutdown
Many modern AMD graphics cards don’t reset properly after the VM is shut down. This requires a host reboot before the Windows VM can be run again.
Youtube blogger Spaceinvader has published a short script that resets the graphics card. Here it is:
#!/bin/bash
#
# replace xx\:xx.x with the number of your gpu and sound counterpart
#
echo "disconnecting amd graphics"
echo "1" | tee -a /sys/bus/pci/devices/0000\:xx\:xx.x/remove
echo "disconnecting amd sound counterpart"
echo "1" | tee -a /sys/bus/pci/devices/0000\:xx\:xx.x/remove
echo "entered suspended state, press power button to continue"
echo -n mem > /sys/power/state
echo "reconnecting amd gpu and sound counterpart"
echo "1" | tee -a /sys/bus/pci/rescan
echo "AMD graphics card successfully reset"
See his video for more explanations.
A programmer nicknamed @gnif has written a patch for the Linux kernel that resets the AMD graphics card. See the post on Level1Techs, as well as the video.
Yet another solution (one of the first workarounds) is the GPU passthrough reinitialization fix which uses a Windows command line utility called devcon64.exe that essentially enables or disables hardware. Two Windows batch files are created, one that is run at Windows startup, the other when Windows shuts down. At the VM startup the graphics card video and audio are enabled, at shutdown they are disabled.
For more on AMD graphics cards and how to make them work, see further below.
Audio – Crackling Sound
If you experience crackling sound in your VM, you should enable MSI in your Windows 10 VM. See Turn on MSI Message Signaled Interrupts in your VM.
If the above step does not solve the crackling sound issue, have a look at Mathias Hueber’s Virtual machine audio setup – or how to get pulse audio working.
Reddit user spheenik wrote a patch to get rid of the audio crackling. This patch is incorporated in QEMU 4.0 and later – see Qemu 4.0 Audio Improvements below. (Note: my latest tutorial already incorporates the audio patch.)
If the above steps don’t solve the issue, and you are running a QEMU version prior to 4.0, you may want to read How To: Patch Qemu and Fix VM Audio.
Developer Geoff, aka gnif, wrote a patch to bring native JACK audio support to QEMU, which is supposed to solve all audio issues.
VM not starting – graphics driver
A common issue is the binding of a driver to the graphics card we want to pass through. As I was writing this how-to and made changes to my (previously working) system, I suddenly couldn’t start the VM anymore. The first thing to check, if you don’t get a black Tianocore screen, is whether or not the graphics card you try to pass through is bound to the vfio-pci driver:

dmesg | grep -i vfio

The output should be similar to this:
[ 2.735931] VFIO – User Level meta-driver version: 0.3
[ 2.757208] vfio_pci: add [10de:13c2[ffff:ffff]] class 0x000000/00000000
[ 2.773223] vfio_pci: add [10de:0fbb[ffff:ffff]] class 0x000000/00000000
[ 8.437128] vfio-pci 0000:02:00.0: enabling device (0000 -> 0003)
The above example shows that the graphics card is bound to the vfio-pci driver (see last line), which is what we want. If the command doesn’t produce any output, or a very different one from above, something is wrong. To check further, enter:

lspci -k | grep -i -A 3 vga

Here is what I got when my VM wouldn’t start anymore:
01:00.0 VGA compatible controller: NVIDIA Corporation GF106GL [Quadro 2000] (rev a1)
Subsystem: NVIDIA Corporation GF106GL [Quadro 2000]
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_361
—
02:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)
Subsystem: Gigabyte Technology Co., Ltd GM204 [GeForce GTX 970]
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_361
Graphics card 01:00.0 (Quadro 2000) uses the Nvidia driver – just what I want.
Graphics card 02:00.0 (GTX 970) also uses the Nvidia driver – that is NOT what I was hoping for. This card should be bound to the vfio-pci driver. So what do we do?
In Linux Mint, click the menu button, click “Control Center”, then click “Driver Manager” in the Administration section. Enter your password. You will then see the drivers associated with the graphics cards. Change the driver of the graphics card so it will use the open-source driver (in this example “Nouveau”) and press “Apply Changes”. After the change, it should look similar to the photo below:
Screenshot: Driver Manager in Linux Mint
If the above doesn’t help, or if you can’t get rid of the nouveau driver, see if the nvidia-fallback.service is running. If yes, it will load the open-source nouveau driver whenever it can’t find the Nvidia proprietary driver. You need to disable it by running the following command:

sudo systemctl disable nvidia-fallback.service
BSOD when installing AMD Crimson drivers under Windows
Several users on the Redhat VFIO mailing list have reported problems with the installation of AMD Crimson drivers under Windows. This seems to affect a number of AMD graphics cards, as well as a number of different AMD Crimson driver releases. A workaround is described here: https://www.redhat.com/archives/vfio-users/2016-April/msg00153.html
In this workaround the following line is added to the startup script, right above the definition of the graphics device:
-device ioh3420,bus=pci,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1
Should the above configuration give you a “Bus ‘pci’ not found” error, change the line as follows:
-device ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1
Then you change the graphics card passthrough options as follows:
-device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on \
-device vfio-pci,host=02:00.1,bus=root.1,addr=00.1 \
Identical graphics cards for host and guest
If you use two identical graphics cards for both the Linux host and the Windows guest, follow these instructions:
Modify the /etc/modprobe.d/local.conf file so that vfio-pci is bound via an override script rather than by PCI ID (the ids= line won’t work with two identical cards):

install vfio-pci /sbin/vfio-pci-override-vga.sh
Create a /sbin/vfio-pci-override-vga.sh file with the following content:
#!/bin/sh

DEVS="0000:02:00.0 0000:02:00.1"

for DEV in $DEVS; do
    echo "vfio-pci" > /sys/bus/pci/devices/$DEV/driver_override
done

modprobe -i vfio-pci
Make the vfio-pci-override-vga.sh file executable:

sudo chmod +x /sbin/vfio-pci-override-vga.sh
Windows ISO won’t boot – 1
If you can’t start the Windows ISO, it may be necessary to run a more recent version of Qemu to get features or work-arounds that solve problems. If you require a more updated version of Qemu (version 2.12 as of this update), add the following PPA (warning: this is not an official repository – use at your own risk). At the terminal prompt, enter:
sudo add-apt-repository ppa:jacob/virtualisation
The latest stable QEMU version as of April, 2019 is QEMU 4.0.0. See also above under “Audio – Crackling Sound“.
Windows ISO won’t boot – 2
Sometimes the OVMF BIOS files from the official Ubuntu repository don’t work with your hardware and the VM won’t boot. In that case you can download alternative OVMF files from here: http://www.ubuntuupdates.org/package/core/wily/multiverse/base/ovmf, or get the most updated version from here:
https://www.kraxel.org/repos/jenkins/edk2/
Download the latest edk2.git-ovmf-x64 file, as of this update it is “edk2.git-ovmf-x64-0-20180807.221.g1aa9314e3a.noarch.rpm” for a 64bit installation. Open the downloaded .rpm file with root privileges and unpack to /.
Copy the following files:

sudo cp /usr/share/edk2.git/ovmf-x64/OVMF-pure-efi.fd /usr/share/ovmf/OVMF.fd
sudo cp /usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd /usr/share/OVMF/OVMF_CODE.fd
sudo cp /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd /usr/share/OVMF/OVMF_VARS.fd
Check to see if the /usr/share/qemu/OVMF.fd link exists; if not, create it:

sudo ln -s '/usr/share/ovmf/OVMF.fd' '/usr/share/qemu/OVMF.fd'
Windows ISO won’t boot – 3
Sometimes the Windows ISO image is corrupted or simply an old version that doesn’t work with passthrough. Go to https://www.microsoft.com/en-us/software-download/windows10ISO and download the ISO you need (see your software license). Then try again.
Motherboard BIOS bugs
Some motherboard BIOSes have bugs and prevent passthrough. Use “dmesg” and look for entries like these:
[ 0.297481] [Firmware Bug]: AMD-Vi: IOAPIC[7] not in IVRS table
[ 0.297485] [Firmware Bug]: AMD-Vi: IOAPIC[8] not in IVRS table
[ 0.297487] [Firmware Bug]: AMD-Vi: No southbridge IOAPIC found in IVRS table
[ 0.297490] AMD-Vi: Disabling interrupt remapping due to BIOS Bug(s)
If you find entries that point to a faulty BIOS or problems with interrupt remapping, go to Easy solution to get IOMMU working on mobos with broken BIOSes. (All credits go to leonmaxx on the Ubuntu forum!)
Intel IGD and arbitration bug
For users of Intel CPUs with IGD (Intel graphics device): The Intel i915 driver has a bug, which has necessitated a kernel patch named i915 vga arbiter patch. According to developer Alex Williamson, this patch is needed any time you have host Intel graphics and make use of the x-vga=on option. This tutorial, however, does NOT use the x-vga option; the tutorial is based on UEFI boot and doesn’t use VGA. That means you do NOT need the i915 vga arbiter patch! See http://vfio.blogspot.com/2014/08/primary-graphics-assignment-without-vga.html.
In some cases you may need to stop the i915 driver from loading by adding nomodeset to the following line in /etc/default/grub, as shown:

GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset intel_iommu=on"
Then run:
sudo update-grub
nomodeset prevents the kernel from loading video drivers and tells it to use BIOS modes instead, until X is loaded.
IOMMU group contains additional devices
When checking the IOMMU groups, your graphics card’s video and audio part should be the only 2 entries under the respective IOMMU group (some graphics cards have more, see “AMD Vega” below). The same goes for any other PCI device you want to pass through, as you must pass through all devices within an IOMMU group, or none. If – aside from the PCI device(s) you wish to pass to the guest – there are other devices within the same IOMMU group, see What if there are other devices in my IOMMU group for a solution.
AMD Vega / RX 5700 XT / Navi Graphics Cards
Numerous AMD graphics cards users have reported issues that either prevented VFIO passthrough or complained about other bugs (most infamously the “Reset bug” – see above for workarounds). github user/developer “gnif” wrote a patch (a kernel quirk, to be precise) that provides a workaround. It needs to be compiled into the kernel, though.
Note: I can’t recommend buying any AMD Navi based products – the kernel patch is a workaround, not a bug fix. It’s the manufacturer’s job (i.e. AMD’s) to fix their bugs, especially since this one has been around for such a long time.
In addition to or instead of the kernel patch mentioned above, there are some more suggestions by users (follow the links to see the original posts):
- When passed through as primary GPU, pass through the VBIOS. See AMD RX 5700XT – Reviewing Passthrough to KVM Virtual Machines.
- Use the q35 option (see above link).
- Some AMD Vega graphics cards actually have 3 devices. Check it with:

lspci -nn
In addition to the graphics device and the audio device, you’ll find a PCI bridge. In order to successfully pass through a Vega card, you need to replicate the topology in the QEMU command:
-device ioh3420,id=root_port1,chassis=1,slot=2,bus=pcie.0 \
-device x3130-upstream,id=upstream_port1,bus=root_port1 \
-device xio3130-downstream,id=downstream_port1,chassis=11,slot=21,bus=upstream_port1 \
-device vfio-pci,host=05:00.0,bus=downstream_port1,multifunction=on \
-device vfio-pci,host=05:00.1,bus=downstream_port1
Change the above PCI-IDs to reflect your system. See Reddit VFIO forum.
AMD Vega users, see Primary slot VEGA passthrough now works, VEGA reset issue fixed for good for details on how to get it working.
I don’t own any Navi-based AMD graphics card and cannot test the solutions suggested above.
Dual-graphics laptops (e.g. Optimus technology)
Misairu_G (username on forum) published a Guide to VGA passthrough on Optimus laptops. You may want to consult that guide if you use a laptop with Nvidia graphics.
User bash64 on the Linux Mint forum has reported success with only minor modifications to this tutorial. The main deviations were:
- The nomodeset option in the /etc/default/grub file (see “Intel IGD and arbitration bug” above)
- Seabios instead of UEFI / ovmf
- Minor modifications to the qemu script file
Issues with Skylake CPUs
Another issue has come up with Intel Skylake CPUs. This problem is likely solved by now. Update to a recent kernel (e.g. 4.18 or newer), as described above.
In case the kernel upgrade doesn’t solve the issue, see https://lkml.org/lkml/2016/3/31/1112 for an available patch. Another possible solution can be found here: https://teksyndicate.com/2015/09/13/wendells-skylake-pc-build-i7-6700k/.
Issues with AMD Threadripper CPUs
For some time the AMD Threadripper family of CPUs had been plagued with a bug that prevented proper GPU passthrough. The issue is described in this Reddit thread. Motherboard manufacturers have recently issued BIOS updates that solve the problem. Install the latest motherboard BIOS update.
Be aware that with some other AMD CPU and motherboard combinations, a BIOS upgrade may actually prevent passthrough from working.
AMD Ryzen freeze
Aside from poor IOMMU groupings which may be solved by upgrading the BIOS, AMD Ryzen CPUs have also been reported to freeze occasionally. This video shows how to fix it with a simple grub startup option:
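The option is rcu_nocbs (see the links below); added to the existing kernel options it looks like this for an 8-thread Ryzen:

GRUB_CMDLINE_LINUX_DEFAULT="quiet rcu_nocbs=0-7"

Run sudo update-grub afterwards.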
where 0-7 spans the total number of threads your Ryzen CPU has (an 8-thread CPU in this example). See also What the Linux rcu_nocbs kernel argument does (and my Ryzen issues again) and Fix Ryzen lockups related to low system usage for more information.
Note: I'm running QEMU 4.0 on an AMD Ryzen 9 3900X CPU and an X570 board without any issues. See my tutorial for details.
AMD Ryzen Hyperthreading (SMT) Not Supported
Qemu versions prior to Qemu 3.0 do not recognize hyperthreading in AMD Ryzen CPUs, which leads to performance degradation. To fix this, you may want to upgrade to Qemu 3.0 or later. The instructions under How do I install QEMU 3.0 on ubuntu 18.04? should also work in Linux Mint 19.
qemu: hardware error: vfio: DMA mapping failed, unable to continue
When running the start script for the VM, the VM crashes with the error above. You may have to unplug and replug the mouse and keyboard USB cables to regain control over the PC. This happens when the user’s locked memory limit is too small.
Open a terminal and enter the following:
ulimit -a | grep locked
If you get
max locked memory (kbytes, -l) 16384
your locked memory limit is too low. See Run Windows VM in user mode (non-root) below for how to increase the locked memory.
No Solution Found
If you haven’t found a solution to your problem, check the References. You are also welcome to leave a comment and I or someone else will try to help.
Part 13 – Run Windows VM in user mode (non-root)
Running your Windows VM in user mode has become easy.
- Add your user to the kvm group:
sudo usermod -a -G kvm myusername
Note: Always replace "myusername" with your user name.
- Reboot (or log out and log back in) to see your user in the kvm group.
- If you use hugepages, make sure they are properly configured. Open your /etc/fstab file and compare your hugetlbfs configuration with the following:
hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=129 0 0
Note that the gid=129 might be different for your system. The rest should be identical!
Now enter the following command:
getent group kvm
This should return something like:
kvm:x:129:myusername
The group ID (gid) number matches. If not, edit the fstab file so the gid= entry matches the gid number you got using getent.
- Edit the file /etc/security/limits.conf and add the following lines:
@kvm soft memlock unlimited
@kvm hard memlock unlimited
- Edit the file /etc/security/limits.d/hugepages.conf and add the following lines (if not present):
@hugepages hard memlock unlimited
@hugepages soft memlock unlimited
- Edit the Windows VM start script and add the following line below the entry "cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd":
chown myusername:kvm /tmp/my_vars.fd
Next add the following entry right under the qemu-system-x86_64 \ entry, on a separate line:
-runas myusername \
Save the file and start your Windows VM. You will still need sudo to run the script, since it performs some privileged tasks, but the guest will run in user mode with your user privileges.
- After booting into Windows, switch to the Linux host and run in a terminal:
top
Your output should be similar to this:
[Screenshot: top output showing the Windows 10 VM process running with user privileges]
Notice the win10vm entry associated with my user name “heiko” instead of “root”.
Please report if you encounter problems (use comment section below).
For other references, see the following tutorial: https://www.evonide.com/non-root-gpu-passthrough-setup/.
Part 14 – Passing more PCI devices to guest
If you wish to pass additional PCI devices through to your Windows guest, you must make sure that you pass through all PCI devices residing under the same IOMMU group. Moreover, DO NOT PASS root devices to your guest. To check which PCI devices reside under the same group, use the following command:
find /sys/kernel/iommu_groups/ -type l
The output on my system is:
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/3/devices/0000:00:03.0
/sys/kernel/iommu_groups/4/devices/0000:00:05.0
/sys/kernel/iommu_groups/4/devices/0000:00:05.2
/sys/kernel/iommu_groups/4/devices/0000:00:05.4
/sys/kernel/iommu_groups/5/devices/0000:00:11.0
/sys/kernel/iommu_groups/6/devices/0000:00:16.0
/sys/kernel/iommu_groups/7/devices/0000:00:19.0
/sys/kernel/iommu_groups/8/devices/0000:00:1a.0
/sys/kernel/iommu_groups/9/devices/0000:00:1c.0
/sys/kernel/iommu_groups/10/devices/0000:00:1c.1
/sys/kernel/iommu_groups/11/devices/0000:00:1c.2
/sys/kernel/iommu_groups/12/devices/0000:00:1c.3
/sys/kernel/iommu_groups/13/devices/0000:00:1c.4
/sys/kernel/iommu_groups/14/devices/0000:00:1c.7
/sys/kernel/iommu_groups/15/devices/0000:00:1d.0
/sys/kernel/iommu_groups/16/devices/0000:00:1e.0
/sys/kernel/iommu_groups/17/devices/0000:00:1f.0
/sys/kernel/iommu_groups/17/devices/0000:00:1f.2
/sys/kernel/iommu_groups/17/devices/0000:00:1f.3
/sys/kernel/iommu_groups/18/devices/0000:01:00.0
/sys/kernel/iommu_groups/18/devices/0000:01:00.1
/sys/kernel/iommu_groups/19/devices/0000:02:00.0
/sys/kernel/iommu_groups/19/devices/0000:02:00.1
/sys/kernel/iommu_groups/20/devices/0000:05:00.0
/sys/kernel/iommu_groups/20/devices/0000:06:04.0
…
As you can see in the above list, some IOMMU groups contain multiple devices on the PCI bus. I wanted to see which devices are in IOMMU group 17 and used the PCI bus ID:
lspci -nn | grep 00:1f.
Here is what I got:
00:1f.0 ISA bridge [0601]: Intel Corporation C600/X79 series chipset LPC Controller [8086:1d41] (rev 05)
00:1f.2 SATA controller [0106]: Intel Corporation C600/X79 series chipset 6-Port SATA AHCI Controller [8086:1d02] (rev 05)
00:1f.3 SMBus [0c05]: Intel Corporation C600/X79 series chipset SMBus Host Controller [8086:1d22] (rev 05)
All of the listed devices are used by my Linux host:
– The ISA bridge is a standard device used by the host. You do not pass it through to a guest!
– All my drives are controlled by the host, so passing through a SATA controller would be a very bad idea!
– Do NOT pass through a host controller, such as the C600/X79 series chipset SMBus Host Controller!
In order to pass through individual PCI devices, edit the VM startup script and insert the following code underneath the vmname=… line:
configfile=/etc/vfio-pci.cfg

vfiobind() {
    dev="$1"
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    # Unbind the device from its current driver, if one is bound
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    # Register the vendor/device ID pair with vfio-pci so it claims the device
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
}
Underneath the line containing “else”, insert:
cat $configfile | while read line;do
    echo $line | grep ^# >/dev/null 2>&1 && continue
    vfiobind $line
done
You need to create a vfio-pci.cfg file in /etc containing the PCI bus numbers as follows:
0000:00:1a.0
0000:08:00.0
Make sure the file does NOT contain any blank line(s). Replace the PCI IDs with the ones you found!
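After the VM starts, you can verify that a device listed in /etc/vfio-pci.cfg was actually bound to vfio-pci (the address below is one of the examples above; adjust it):

lspci -nnk -s 00:1a.0 | grep "Kernel driver"

It should report "Kernel driver in use: vfio-pci".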
Part 15 – References
For documentation on QEMU, see the following directory on your Linux machine: /usr/share/doc/qemu-system-common. For partial documentation on QEMU and its various releases, see https://qemu.weilnetz.de/doc/. (Note: I've been looking for a complete, comprehensive documentation on QEMU and kvm for about 9 years and never found one.)
PCI EXPRESS GUIDELINES – a brief explanation of PCI device configuration in QEMU – some syntax (ioh3420 is now pcie-root-port) has changed in the meantime
Hyper-V Enlightenments – a brief explanation of the Hyper-V enlightenments that are supported
Virtual machines with PCI passthrough on Ubuntu 20.04, straightforward guide for gaming on a virtual machine – one of the most thorough and up-to-date tutorials on the web
https://github.com/bryansteiner/gpu-passthrough-tutorial/ – a Pop!_OS 20.04 based, AMD Ryzen 3900X specific tutorial using libvirt hooks
Creating a Windows 10 VM on the AMD Ryzen 9 3900X using Qemu 4.0 and VGA Passthrough – my own version of a VFIO passthrough guide with tips on audio etc.
VFIO in 2019 – Pop!_OS How-To (General Guide though) [DRAFT] – Wendell's VFIO passthrough guide
https://passthroughpo.st/ – an "online news publication with a razor focus on virtualization and linux gaming, as well as developments in open source technology"
https://www.reddit.com/r/VFIO/ – the Reddit r/VFIO subreddit to discuss all things related to VFIO and gaming on virtual machines in general
https://github.com/joeknock90/Single-GPU-Passthrough – as the name says, it explains GPU passthrough when you have only one GPU (for both host and VM)
Passthrough a Nvidia GPU as primary – excellent Youtube video by “Spaceinvader One” on preparing a GPU for primary passthrough
Advanced GPU passthrough techniques on Unraid – another great video on GPU passthrough by “Spaceinvader One” – worth watching
https://www.mankier.com/1/qemu – updated QEMU man page (note that this tutorial is based on Qemu 2.11 and much has changed since)
https://davidyat.es/2016/09/08/gpu-passthrough/ – a well written tutorial offering qemu script and virt-manager as options
https://blog.zerosector.io/2018/07/28/kvm-qemu-windows-10-gpu-passthrough/ – a straightforward tutorial for Ubuntu 18.04 using virt-manager
https://clayfreeman.github.io/gpu-passthrough/ – a libvirt based tutorial for an Intel machine
https://ycnrg.org/vga-passthrough-with-ovmf-vfio/ – a Ubuntu 16.04 tutorial using virt-manager
https://qemu.weilnetz.de/doc/qemu-doc.html – QEMU user manual
http://ubuntuforums.org/showthread.php?t=2266916 – Ubuntu tutorial
https://wiki.archlinux.org/index.php/QEMU – Arch Linux documentation on QEMU – by far the best
https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF – PCI passthrough via OVMF tutorial for Arch Linux – provides excellent information
https://aur.archlinux.org/cgit/aur.git/tree/?h=linux-vfio – a source for the ACS and i915 arbiter patches
http://vfio.blogspot.com/2014/08/vfiovga-faq.html – VFIO developer Alex Williamson provides invaluable information and advice
http://vfio.blogspot.com/2014/08/primary-graphics-assignment-without-vga.html
http://www.linux-kvm.org/page/Tuning_KVM – Red Hat is the key developer of kvm; their website has lots of information, but is often a little outdated
https://wiki.archlinux.org/index.php/KVM – Arch Linux KVM page
https://www.suse.com/documentation/sles11/book_kvm/data/part_2_book_book_kvm.html – Suse Linux documentation on KVM – good reference
https://www.evonide.com/non-root-gpu-passthrough-setup/ – haven’t tried it, but looks like a good tutorial
https://forum.level1techs.com/t/gta-v-on-linux-skylake-build-hardware-vm-passthrough/87440 – tutorial with Youtube video to go along, very useful and up-to-date, including how to apply ACS override patch
https://gitlab.com/YuriAlek/vfio – single GPU passthrough with QEMU and VFIO
https://null-src.com/posts/qemu-optimization/post.php – methods to optimize and fine tune qemu (I’ve tried only a few)
https://libvirt.org/format.html and https://libvirt.org/formatdomain.html – if you want to play with virt-manager, you’ll need to dabble in libvirt.
Below is the VM startup script I use, for reference only.
Note: The script is specific for my hardware. Don’t use it without modifying it!
#!/bin/bash

configfile=/etc/vfio-pci.cfg
vmname="win10vm"

vfiobind() {
    dev="$1"
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
}

if ps -ef | grep qemu-system-x86_64 | grep -q multifunction=on; then
    zenity --info --window-icon=info --timeout=15 --text="A VM is already running." &
    exit 1
else

    #modprobe vfio-pci

    cat $configfile | while read line;do
        echo $line | grep ^# >/dev/null 2>&1 && continue
        vfiobind $line
    done

    # use pulseaudio
    #export QEMU_AUDIO_DRV=pa
    #export QEMU_PA_SAMPLES=8192
    #export QEMU_AUDIO_TIMER_PERIOD=100
    #export QEMU_PA_SERVER=/run/user/1000/pulse/native
    #export QEMU_PA_SINK=alsa_output.pci-0000_06_04.0.analog-stereo
    #export QEMU_PA_SOURCE=input

    #use ALSA
    export QEMU_AUDIO_DRV=alsa
    export QEMU_ALSA_ADC_BUFFER_SIZE=1024 QEMU_ALSA_ADC_PERIOD_SIZE=256
    export QEMU_ALSA_DAC_BUFFER_SIZE=1024 QEMU_ALSA_DAC_PERIOD_SIZE=256
    export QEMU_AUDIO_DAC_FIXED_SETTINGS=1
    export QEMU_AUDIO_DAC_FIXED_FREQ=44100 QEMU_AUDIO_DAC_FIXED_FMT=S16 QEMU_AUDIO_ADC_FIXED_FREQ=44100 QEMU_AUDIO_ADC_FIXED_FMT=S16
    export QEMU_AUDIO_DAC_TRY_POLL=1 QEMU_AUDIO_ADC_TRY_POLL=1
    export QEMU_AUDIO_TIMER_PERIOD=50

    cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd
    chown heiko:kvm /tmp/my_vars.fd

    #taskset -c 0-9
    qemu-system-x86_64 \
      -enable-kvm \
      -runas heiko \
      -monitor stdio \
      -serial none \
      -parallel none \
      -nodefaults \
      -nodefconfig \
      -name $vmname,process=$vmname \
      -machine q35,accel=kvm,kernel_irqchip=on \
      -cpu host,kvm=off,hv_vendor_id=1234567890ab,hv_vapic,hv_time,hv_relaxed,hv_spinlocks=0x1fff,l3-cache=on,-hypervisor,migratable=no,+invtsc \
      -smp 12,sockets=1,cores=6,threads=2 \
      -m 16G \
      -balloon none \
      -mem-path /dev/hugepages \
      -mem-prealloc \
      -rtc base=localtime,clock=host \
      -soundhw hda \
      -vga none \
      -nographic \
      -device ioh3420,id=root_port1,chassis=0,slot=0,bus=pcie.0 \
      -device vfio-pci,host=02:00.0,id=hostdev1,bus=root_port1,addr=0x00,multifunction=on \
      -device vfio-pci,host=02:00.1,id=hostdev2,bus=root_port1,addr=0x00.1 \
      -device vfio-pci,host=00:1a.0 \
      -device vfio-pci,host=08:00.0 \
      -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
      -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
      -boot order=c \
      -object iothread,id=io1 \
      -device virtio-blk-pci,drive=disk0,iothread=io1 \
      -drive if=none,id=disk0,cache=none,format=raw,aio=threads,cache.direct=on,discard=unmap,detect-zeroes=unmap,file=/dev/mapper/lm13-win10 \
      -device virtio-blk-pci,drive=disk1,iothread=io1 \
      -drive if=none,id=disk1,cache=none,format=raw,aio=native,file=/dev/mapper/photos-photo_stripe \
      -device virtio-blk-pci,drive=disk2,iothread=io1 \
      -drive if=none,id=disk2,cache=none,format=raw,aio=native,file=/dev/mapper/media-photo_raw \
      -netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
      -device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01

    #EOF

    # -drive file=/media/heiko/tmp_stripe/OS-backup/ISOs/win10.iso,id=isocd,format=raw,if=none -device scsi-cd,drive=isocd \
    # -drive file=/media/heiko/tmp_stripe/OS-backup/ISOs/virtio.iso,id=virtiocd,format=raw,if=none -device ide-cd,bus=ide.1,drive=virtiocd \

    exit 0
fi
The command
taskset -c 0-9 qemu-system-x86_64 ...
(commented out in the script above) pins the vCPUs of the guest to processor threads 0-9 (I have a 6-core CPU with 2 threads per core = 12 threads). Here I assign 10 out of 12 threads to the guest. While the guest is running, the host has to make do with only 1 core (2 threads). CPU pinning may improve performance of the guest, although I don't use it on my PC.
Note: I am currently passing through all cores and threads, without CPU pinning. This seems to give me the best results in the benchmarks, as well as real-life performance.
Part 16 – Related Posts
Here is a list of related posts:
Developments in Virtualization
Virtual Machines on Userbenchmark
Tuning VM Disk Performance
Blacklisting Graphics Driver
GPU Passthrough with Low-End Intel i3-6100 CPU
Linux Virtual Gaming Machine
Low 2D Graphics Benchmark with Windows 10 (1803) KVM VM
Part 17 – Benchmarks
I have a separate post showing Passmark benchmarks of my system.
Here are the UserBenchmark results for my configuration:
UserBenchmarks: Game 60%, Desk 71%, Work 64%
CPU: Intel Core i7-3930K – 79.7%
GPU: Nvidia GTX 970 – 60.4%
SSD: Red Hat VirtIO 140GB – 74.6%
HDD: Red Hat VirtIO 2TB – 64.6%
HDD: Red Hat VirtIO 2TB – 66.1%
RAM: QEMU 20GB – 98.2%
MBD: QEMU Standard PC (Q35 + ICH9, 2009)
Part 18 – Performance Tuning
I keep updating this chapter, so expect more tips to be added here in the future.
Enable Hyper-V Enlightenments
As funny as this sounds, this is another way to improve Windows performance under kvm. Hyper-V enlightenments are easy to implement: In the script that starts the VM, change the following line (shown here as in the reference script in Part 15):
-cpu host,kvm=off \
to:
-cpu host,kvm=off,hv_vendor_id=1234567890ab,hv_vapic,hv_time,hv_relaxed,hv_spinlocks=0x1fff \
The above is one line! To check that it actually works, start your Windows VM and switch to Linux. Open a terminal window and enter (in one line):
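Something like the following does the job (a reconstructed example, not necessarily the exact original command; depending on your -cpu line it will also print hv_vendor_id):

ps -ef | grep qemu-system-x86_64 | grep -o 'hv_[^, ]*'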
You should get the following output:
hv_vapic
hv_time
hv_relaxed
hv_spinlocks=0x1fff
For more on Hyper-V enlightenments, see here.
Configure hugepages
This step is not required to run the Windows VM, but helps improve performance. First we need to decide how much memory we want to give to Windows. Here are my suggestions:
- No less than 4GB. Use 8GB or more for a Windows gaming VM.
- If you have 16GB total and aren't running multiple VMs, give Windows 8GB-12GB, depending on what you plan to do with Windows.
For this tutorial I use 8GB. Hugepages are enabled by default in the latest releases of Linux Mint (since 18) and Ubuntu (since 16.04). For more information or if you are running an older release, see KVM – Using Hugepages.
Let’s see what we got:
hugeadm --explain
Total System Memory: 24108 MB
Mount Point Options
/dev/hugepages rw,relatime,pagesize=2M
Huge page pools:
Size Minimum Current Maximum Default
2097152 0 0 0 *
1073741824 0 0 0
…
As you can see, hugepages are mounted to /dev/hugepages, and the default hugepage size is 2097152 Bytes/(1024*1024)=2MB.
Another way to get information about hugepages:
grep "Huge" /proc/meminfo
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Here's the math:
We want to reserve 8GB for Windows:
8GB = 8x1024MB = 8192MB
Our hugepage size is 2MB, so we need to reserve:
8192MB/2MB = 4096 hugepages
We configure the hugepage pool during boot. Open the file /etc/default/grub as root:
xed admin:///etc/default/grub
Look for the GRUB_CMDLINE_LINUX_DEFAULT=”…” line we edited before and add:
hugepages=4096
This is what I have:
GRUB_CMDLINE_LINUX_DEFAULT="modprobe.blacklist=nouveau quiet intel_iommu=on hugepages=4096"
Save and close. Then run:
sudo update-grub
Now reboot for our hugepages configuration to take effect.
After the reboot, run in a terminal:
hugeadm --explain
Total System Memory: 24108 MB
Mount Point Options
/dev/hugepages rw,relatime,pagesize=2M
Huge page pools:
Size Minimum Current Maximum Default
2097152 4096 4096 4096 *
1073741824 0 0 0
Huge page sizes with configured pools:
2097152
The /proc/sys/vm/min_free_kbytes of 67584 is too small. To maximise efficiency of fragmentation avoidance, there should be at least one huge page free per zone in the system which minimally requires a min_free_kbytes value of 112640
A /proc/sys/kernel/shmmax value of 17179869184 bytes may be sub-optimal. To maximise shared memory usage, this should be set to the size of the largest shared memory segment size you want to be able to use. Alternatively, set it to a size matching the maximum possible allocation size of all huge pages. This can be done automatically, using the --set-recommended-shmmax option.
The recommended shmmax for your currently allocated huge pages is 8589934592 bytes.
To make shmmax settings persistent, add the following line to /etc/sysctl.conf:
kernel.shmmax = 8589934592
To make your hugetlb_shm_group settings persistent, add the following line to /etc/sysctl.conf:
vm.hugetlb_shm_group = 129
Note: Permanent swap space should be preferred when dynamic huge page pools are used.
Note the sub-optimal shmmax value. We fix it permanently by editing /etc/sysctl.conf:
xed admin:///etc/sysctl.conf
and adding the following lines:
kernel.shmmax = 8589934592
vm.hugetlb_shm_group = 129
vm.min_free_kbytes = 112640
Note: Use the values recommended by hugeadm --explain, but double-check kernel.shmmax! The recommended value is often unrealistic. As a sane value, try 1/2 the amount of your RAM in multiples of 1024.
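If you'd rather compute that than guess, a one-liner like this (my illustration, not from the original tutorial) prints half of the installed RAM in bytes:

echo $(( $(free -b | awk '/^Mem:/{print $2}') / 2 ))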
Regarding vm.hugetlb_shm_group = 129: "129" is the GID of the group kvm. Check with:
getent group kvm
Run sudo sysctl -p to put the new settings into effect. Then edit the /etc/fstab file to configure the hugepages mount point with permissions and group ID (GID):
xed admin:///etc/fstab
Add the following line to the end of the file and save:
hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=129 0 0
where gid=129 is the group ID of the kvm group, determined with getent group kvm.
It’s best to add your user to the kvm group, so you’ll have permission to access the hugepages:
sudo usermod -a -G kvm user
where “user” is your user name.
Check the results with getent group kvm.
Now we need to edit the windows10vm.sh script file that contains the qemu command and add the following lines under -m 8G :
-mem-path /dev/hugepages \
-mem-prealloc \
Reboot the PC for the fstab changes to take effect.
Turn on MSI Message Signaled Interrupts in your VM
Developer Alex Williamson argues that MSI Message Signaled Interrupts may provide a more efficient way to handle interrupts. A detailed description on how to turn on MSI in a Windows VM can be found here: Line-Based vs. Message Signaled-Based Interrupts. For those that find this procedure a little challenging, there is a Windows utility that makes it easier: MSI-utility (scroll down towards the end of the post). As always, be careful with unverified 3rd-party software – you have been warned.
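You can check from the Linux host whether the passed-through GPU currently uses MSI (02:00.0 is the GPU address from my start script; adjust it to yours):

sudo lspci -vs 02:00.0 | grep 'MSI:'

"Enable+" means MSI is active; "Enable-" means the device still uses line-based interrupts.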
Make sure to backup your entire Windows installation, or at least define a restore point for Windows.
In my case it improved sound quality (no more crackle), others have reported similar results – see these comments.
Important: With every major Windows 10 update (e.g. 1803 to 1809), Microsoft's gifted software engineers manage to reverse your MSI settings. Nvidia is also busy reverting to line-based interrupts with every driver update. So after an update, you have to do this step all over again.
Tuning VM Disk Performance
I’ve written a separate post on tuning VM disk performance. Under kvm, disk performance tuning can offer a dramatic read/write speed boost. My post describes different scenarios and which configuration might work best. As with every tuning step, take benchmarks to verify it actually works for you!
Low 2D Graphics Performance
Windows release 1803 added protection against the Spectre vulnerability, by means of an Intel microcode update. This update has caused a significant drop in 2D graphics performance inside the Windows VM.
Recent versions of Windows 10 more or less solve that problem. With Windows 10 release 1903 there is no need to disable Spectre and Meltdown protection. Make sure your Linux host is up-to-date, as it’s not clear to me which update fixed the issue.
SR-IOV and IOMMU Pass Through
Some devices support a feature called SR-IOV or Single Root Input/Output Virtualisation. This allows multiple virtual machines to access PCIe hardware using virtual functions (vf), thus improving performance. See Understanding the iommu Linux grub File Configuration.
The SR-IOV feature needs to be enabled in the BIOS, as well as the drivers. See here for an example.
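As an illustration of how virtual functions are created via sysfs (the PCI address and interface name below are assumptions; your device must support SR-IOV):

# Does the device advertise SR-IOV?
sudo lspci -vs 03:00.0 | grep -i 'Single Root'
# Create 4 virtual functions on NIC enp3s0:
echo 4 | sudo tee /sys/class/net/enp3s0/device/sriov_numvfs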
In some cases performance can be further improved by adding the “pass through” option iommu=pt to the /etc/default/grub file:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
Followed by sudo update-grub
CPU pinning
Many modern CPUs offer hardware multitasking, known as “hyper-threading” in Intel jargon or “SMT” in AMD talk. Those CPUs run two threads on each core, switching between the threads in an efficient way. It’s almost like having twice the number of cores, but not quite so. Each core can only process one thread at a time. Hyper-threading still helps CPU performance because our PC needs to run multiple tasks simultaneously, and task-switching goes faster with hardware support.
Some tasks, however, can be negatively affected by this switching back and forth. One example is high-speed input/output (IO) tasks. Linux enables us to dedicate ("pin") a core to such tasks, so that the task won't have to share CPU resources with other tasks.
To discover the CPU topology on your PC, use the following command:
lscpu -e
The output on my Intel i7 3930K CPU is this:
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ
0 0 0 0 0:0:0:0 yes 5700.0000 1200.0000
1 0 0 1 1:1:1:0 yes 5700.0000 1200.0000
2 0 0 2 2:2:2:0 yes 5700.0000 1200.0000
3 0 0 3 3:3:3:0 yes 5700.0000 1200.0000
4 0 0 4 4:4:4:0 yes 5700.0000 1200.0000
5 0 0 5 5:5:5:0 yes 5700.0000 1200.0000
6 0 0 0 0:0:0:0 yes 5700.0000 1200.0000
7 0 0 1 1:1:1:0 yes 5700.0000 1200.0000
8 0 0 2 2:2:2:0 yes 5700.0000 1200.0000
9 0 0 3 3:3:3:0 yes 5700.0000 1200.0000
10 0 0 4 4:4:4:0 yes 5700.0000 1200.0000
11 0 0 5 5:5:5:0 yes 5700.0000 1200.0000
Note: My CPU is overclocked, hence the high MAXMHZ value.
Note column 4 (CORE): it shows which physical core each logical CPU (column 1 – CPU) actually runs on. With this Intel processor, CPU 0 and CPU 6 share core 0.
A more straightforward way to get the thread pairs of the CPU cores is this:
cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | sort | uniq
0,6
1,7
2,8
3,9
4,10
5,11
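Putting this to use: to reserve core 0 (threads 0 and 6) for the host and pin the guest to the remaining five cores, the qemu command in the start script could be prefixed like this (a sketch based on the topology above, not my actual configuration):

taskset -c 1-5,7-11 qemu-system-x86_64 \
    -smp 10,sockets=1,cores=5,threads=2 \
    ...

The -smp topology should match the number of pinned threads.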
The performance gain (or loss) of CPU pinning depends on your hardware and on what you are doing with the Windows VM. A good benchmark of different tasks can be found here: CPU Pinning Benchmarks.
Some users report that CPU pinning helped improve latency, but sometimes at the cost of performance. Here is another useful post: Best pinning strategy for latency / performance trade-off.
A good explanation on CPU pinning and other performance improvements can be found here: Performance tuning.
On my PC I do NOT use CPU pinning. It is tricky at best, and whatever I tried did not improve but rather reduced performance. Important: The effects of CPU pinning are highly individual and depend on what you want to achieve. For a gaming VM, it might help improve performance (see CPU Pinning Benchmarks).
Kernel 4.17 and Qemu 3.0 Improvements
If you encounter frame drops, latency issues, or have issues with your VR, check your interrupts:
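One way to watch them (my suggestion, not necessarily the command from the linked post; RES is the "Rescheduling interrupts" line in /proc/interrupts):

watch -n1 "grep RES /proc/interrupts"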
If you see high RES numbers (like in the millions), see the Interrupt tuning and issue with high rescheduling interrupt counts post. If you follow it, you’ll see that upgrading the kernel to 4.17 and using Qemu 3.0 may help. How to set up QEMU 3.0 on Ubuntu 18.04 provides further instructions.
Qemu 4.0 Audio Improvements
One major issue that has plagued passthrough users is crackling audio. In some cases performance tuning has helped overcome the problem, but Reddit user spheenik has written a patch that might render crackling audio a thing of the past. This audio patch has been merged with Qemu starting with release 4.0. See the Reddit thread for more on this patch and how to properly configure your VM.
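Note that with QEMU 4.x the audio configuration itself changed: the QEMU_AUDIO_* environment variables used in my start script are deprecated in favor of the -audiodev option. A minimal sketch for PulseAudio under QEMU 4.2 (untested with this script; adjust to your setup):

-audiodev pa,id=snd0 \
-device ich9-intel-hda \
-device hda-duplex,audiodev=snd0 \

These lines would replace the export QEMU_AUDIO_... block and the -soundhw hda option in the start script.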