this post was submitted on 10 Aug 2024
Linux
I’ll have to try that. What I’ve tried so far is running a different kernel version and making sure my driver blacklists are correct (I found that the GPU shouldn’t ever bind to snd_hda_intel. It briefly was again, but even after fixing that, I still had the problem.).
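For reference, the blacklist side of this usually comes down to a modprobe.d snippet along these lines — a sketch only, and the `ids` values are placeholders for the passthrough GPU’s video and audio functions (get the real ones from `lspci -nn`):

```
# /etc/modprobe.d/vfio.conf
# Placeholder vendor:device ids for the GPU and its HDMI audio function
options vfio-pci ids=1002:731f,1002:ab38

# Make sure vfio-pci claims the devices before the normal drivers can
softdep amdgpu pre: vfio-pci
softdep snd_hda_intel pre: vfio-pci
```

The `softdep` lines are gentler than outright blacklisting snd_hda_intel, which would also kill the integrated audio.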
For me, I have intel integrated + amd discrete. When I tried to set DRI_PRIME to 0 it complained that 0 was invalid, and when I set it to 2 it said it had to be less than the number of GPUs detected (2). After digging in I noticed my cards in `/dev/dri/by-path` were card1 and card2 rather than card0 and card1 like everyone online said they should be. Searching for that I found a few threads like this one that mentioned simpledrm was enabled by default in 6.4.8, which apparently broke some kind of enumeration with amd GPUs. I don't really understand why, but setting that param made my cards number correctly, and prime selection works again.

Huh. My issue seems different, but I’ll still test that flag to see if it changes anything. My problem looks like the device doesn’t return to the host after VM shutdown, possibly because of the reset bug (based on what I see in dmesg), which I hadn’t encountered in about a year of running a GPU passthrough VM.
Ahh, yeah if it's specifically when coming back from a VM, that sounds different. Maybe the vfio_pci driver isn't getting swapped back to the real one? I barely know how it works, I'm sure you've checked everything.
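If you want to rule that out, you could check which driver owns the device after VM shutdown and try handing it back by hand. A rough sketch, needs root, and the PCI address is a placeholder for your GPU (find yours with `lspci -nn`):

```shell
#!/bin/sh
# Placeholder PCI address for the discrete GPU -- substitute your own
GPU=0000:03:00.0

# Which driver currently owns the device? (vfio-pci vs amdgpu)
readlink "/sys/bus/pci/devices/$GPU/driver"

# Hand it back: unbind from vfio-pci, clear the driver override,
# then ask the kernel to reprobe with the default driver
echo "$GPU"  | sudo tee /sys/bus/pci/drivers/vfio-pci/unbind
echo ""      | sudo tee "/sys/bus/pci/devices/$GPU/driver_override"
echo "$GPU"  | sudo tee /sys/bus/pci/drivers_probe
```

If the reprobe hangs or dmesg shows reset errors at that point, that would line up with the reset bug rather than a binding problem.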