01:03Frogging101: buffer set read
09:30X512: Haiku build is broken because of #include <sys/inotify.h> in src/util/fossilize_db.c.
09:31X512: <sys/inotify.h> is not a part of POSIX.
12:14Hi-Angel: Am I doing something wrong, or do the Vulkan env. variables for choosing the default device not work? Running `MESA_VK_DEVICE_SELECT=8086:5916 MESA_VK_DEVICE_SELECT_FORCE_DEFAULT_DEVICE=1 gamescope -- glxgears` results in it using the AMD GPU (even though the env. variables are supposed to hide it completely), whereas running `gamescope --prefer-vk-device=8086:5916 -- glxgears` launches it on the Intel GPU
12:59X512: Hi-Angel: Why use Vulkan settings for a GLX OpenGL program?
13:05Hi-Angel: X512, the gamescope in my comment is a Vulkan program, not OpenGL. Besides, I'm using `gamescope` just as an example. I actually want to make Skype stop using my discrete card (nonsense, why would they do that 🤷♂️), which it apparently does by means of Vulkan.
13:07emersion: Hi-Angel: have you tried VK_ICD_FILENAMES?
13:11Hi-Angel: emersion, yay, that worked! So: `VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/intel_icd.x86_64.json gamescope -- glxgears` launches on the Intel card, thanks!
13:11emersion: np
13:12emersion: i think MESA_VK_DEVICE_SELECT only works if the mesa device select layer is enabled
13:12emersion: which it isn't by default in general
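With the device-select layer installed (e.g. the mesa-vulkan-layers package on most distros), the variables can be used roughly like this; `vulkaninfo` and `vkcube` here merely stand in for any Vulkan application:

```
# Assumes the Mesa device-select layer is present (mesa-vulkan-layers, or a
# Mesa build with -Dvulkan-layers=device-select).
# List the devices the layer sees:
MESA_VK_DEVICE_SELECT=list vulkaninfo
# Prefer a device by vendorID:deviceID and hide the others:
MESA_VK_DEVICE_SELECT=8086:5916 MESA_VK_DEVICE_SELECT_FORCE_DEFAULT_DEVICE=1 vkcube
```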
13:18Hi-Angel: As a side note, skypeforlinux still uses the discrete card even with the variable 🤷♂️ Not sure how else it could do that; from my googling there's no way to choose the device with OpenGL or EGL, so Vulkan was the only way… Weird.
13:20emersion: there are ways to select the device with EGL
13:20emersion: e.g. by using EGL_EXT_platform_device
13:21emersion: there are also other APIs with other conventions
13:21emersion: VA-API for instance…
13:21Hi-Angel: Huh. Actually, even when running `VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/intel_icd.x86_64.json gamescope -- glxgears`, the AMD GPU wakes up, as can be seen from the `[drm] PCIE GART of 256M enabled (table at 0x000000F400000000)` entry in the log. And that's even though `gamescope` definitely does *not* use the AMD GPU, because it shows visual artifacts on glxgears (a known bug in combination with the Intel GPU) :/
13:25Hi-Angel: emersion, thanks for mentioning `EXT_platform_device`! I'll leave a comment on two StackOverflow questions about choosing a GPU by means of OpenGL, as nobody has mentioned it there yet
13:25emersion: another way would be GBM then using the GBM platform
13:26emersion: in both of these cases, i don't think there is an env var to override
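A minimal sketch of the EGL_EXT_platform_device approach mentioned above, assuming the EGL_EXT_device_enumeration and EGL_EXT_platform_device extensions are available; error handling and the actual device-picking logic are omitted, and devices[0] is only an example:

```c
/* Sketch: enumerate EGL devices, then create a display on a chosen one.
 * Build with something like: cc pick_egl_device.c -lEGL */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <stdio.h>

int main(void)
{
    PFNEGLQUERYDEVICESEXTPROC queryDevices =
        (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
    PFNEGLGETPLATFORMDISPLAYEXTPROC getPlatformDisplay =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");

    EGLDeviceEXT devices[8];
    EGLint count = 0;
    queryDevices(8, devices, &count);

    /* A real program would inspect each device (e.g. via eglQueryDeviceStringEXT
     * and EGL_EXT_device_drm) to decide which GPU to use. */
    EGLDisplay dpy = getPlatformDisplay(EGL_PLATFORM_DEVICE_EXT, devices[0], NULL);

    EGLint major, minor;
    eglInitialize(dpy, &major, &minor);
    printf("EGL %d.%d on device 0 of %d\n", major, minor, count);
    return 0;
}
```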
13:36FireBurn: I usually use DRI_PRIME to select which GPU, and there's an MR to make selecting by PCI address work too
13:37FireBurn: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/19101
13:52emersion: DRI_PRIME works with regular EGL apps only, not in the other cases mentioned above
13:59FireBurn: VA-API lets me use DRI_PRIME
14:01Hi-Angel: emersion, the MR FireBurn mentioned makes DRI_PRIME work with Vulkan as well
14:02Hi-Angel: If I'm not misreading it of course
14:03emersion: that surprises me, i don't see any mention of DRI_PRIME in libva
14:04FireBurn: It already works with Vulkan, the MR just improves it
14:04FireBurn: I've been using DRI_PRIME since its support was added many moons ago
14:06FireBurn: emersion: You can test it with DRI_PRIME=0 vainfo and DRI_PRIME=1 vainfo; if the second card uses a different driver you might also have to specify it using LIBVA_DRIVER_NAME=
14:07FireBurn: So if you're on an Intel/AMD setup DRI_PRIME=1 LIBVA_DRIVER_NAME=radeonsi vainfo should be enough
14:07emersion: ah, then DRI_PRIME=1 doesn't actually do anything
14:07Hi-Angel: FireBurn, does it already work with Vulkan? Can't seem to confirm: when I run `DRI_PRIME=8086:5916 gamescope -- glxgears`, it uses the AMD GPU instead of the Intel one that has VID:PID 8086:5916
14:07FireBurn: On an AMD/AMD setup LIBVA_DRIVER_NAME shouldn't be necessary
14:07emersion: LIBVA_DRIVER_NAME does all of the work
14:08FireBurn: Try DRI_PRIME=0 or 1
14:08Hi-Angel: DRI_PRIME=0 has the same effect; however, according to the docs https://docs.mesa3d.org/envvars.html#envvar-DRI_PRIME, `0` actually does nothing. `1` means "run on the AMD GPU", which is the default on my system (for some reason, I don't know why)
14:09FireBurn: Do you have the vulkan layer enabled?
14:09Hi-Angel: (I meant, default for Vulkan apps, not OpenGL 🤷♂️)
14:10FireBurn: This: -Dvulkan-layers=device-select enabled in Mesa
14:10Hi-Angel: Oh, wow!
14:10Hi-Angel: FireBurn, I installed mesa-vulkan-layers, and now it does work!
14:10Hi-Angel: Cool
14:11FireBurn: Great news
14:11Hi-Angel: Yeah, it's quite a nuance that the Vulkan layers are needed for that to work. Probably worth adding that to the DRI_PRIME docs
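For reference, the build option FireBurn names is a regular Mesa meson option; this is roughly how a build with the device-select layer looks (a sketch, build directory name illustrative):

```
# Build Mesa with the device-select layer (what mesa-vulkan-layers packages ship):
meson setup build -Dvulkan-layers=device-select
ninja -C build
```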
14:13X512: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/20793 seems to fix Haiku Zink crashes on window resize.
14:14Hi-Angel: Btw, installing the Vulkan layers actually improved the default behavior on my system, because now `gamescope` defaults to the integrated card as it's supposed to (and DRI_PRIME does work, I tested the VID:PID of the AMD GPU as well)
18:46DemiMarie: What are the advantages of userspace direct-to-FW submission? Seems like it massively increases FW-side attack surface. I guess it makes sense on Windows, where the GPU FW and kernel driver are made by the same company and likely of similar quality, but on Linux the GPU driver gets a lot of review while the FW gets none.
19:17jekstrand: DemiMarie: In theory, it should be faster and more power-efficient since the kernel doesn't have to burn as much main CPU time on juggling GPU work.
19:17DemiMarie: jekstrand: in practice?
19:17jekstrand: DemiMarie: In practice, it's still not incredibly well-proven.
19:18jekstrand: What is pretty well-proven, though, is that non-firmware-based HW interfaces suck.
19:18jekstrand: But can we get 99% of the efficiency with the kernel still in the loop? Maybe.
19:19jekstrand: As far as security goes, it shouldn't be that much more of an attack surface than your GPU as a whole.
19:20jekstrand: Depends on the HW and FW design, of course.
19:21javierm: jekstrand, DemiMarie: I always thought that was more about protecting the GPU companies' IP while still allowing open-source kernel and GL/Vulkan drivers
19:31bnieuwenhuizen: jekstrand: benefit is that we finally have an impetus to get the fairly slow BO LRU handling out of the hot path. Submission performance has been pretty brutal
19:31DemiMarie: jekstrand: why do you state that the attack surface should not be much more?
19:46bluetail: Hello. I am on archlinux. I installed mesa-git, but when I want to update I get ":: installing llvm-libs (15.0.7-1) breaks dependency 'llvm-libs=14.0.6' required by mesa-git"
19:47psykose: this sounds like an issue for whoever maintains the aur package you are installing
19:48bluetail: in this case, I can just try the immediate git package right?
19:49bluetail: wait, there are comments on how to use it from the AUR page... I have to read up on it
20:09sravn: just spammed dri-devel with an 86-patch series of trivial header changes. Let's see if the bots hate the patches
20:39eric_engestrom: bluetail: you need to update both llvm and mesa-git in the same transaction
20:39bluetail: eric_engestrom my PKGBUILD was out of date. I removed mesa-git with pacman -Rdd mesa-git and then reinstalled from git
20:41eric_engestrom: or that, uninstall+reinstall works too, but for mesa I wouldn't recommend it, as you might end up without a GUI, which a lot of users are not going to know how to fix ^^
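A sketch of the "same transaction" approach eric_engestrom describes, assuming an up-to-date mesa-git PKGBUILD; package file names are illustrative:

```
# Build the new mesa-git first (a clean chroot via devtools sidesteps the
# chicken-and-egg problem with the LLVM build dependency):
cd mesa-git && makepkg -s        # or: extra-x86_64-build
# Then install it with pacman -U: the rebuilt package depends on the new
# llvm-libs, so pacman upgrades it in the same transaction.
sudo pacman -U mesa-git-*.pkg.tar.zst
```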
22:14jekstrand: bnieuwenhuizen: Yeah, I'm not convinced Intel's GuC is really any faster, either.
22:15jekstrand: bnieuwenhuizen: It may be faster doing it kernel-side on Windows where there's more abstractions to punch through to handle an interrupt. On Linux, I'm unconvinced that it should be any faster.
22:15jekstrand: But the execlist submission ports are a pretty bad HW design so not having to mess about with those is pretty nice.
22:21jekstrand: I'm personally inclined to think that FW submission isn't what really matters. One core on your desktop CPU is going to be way faster than whatever that firmware is running on.
22:53bnieuwenhuizen: jekstrand: I dunno, on AMD the direct-to-fw path seems to be pretty much the same as the kernel-to-fw path, so to me it looks like we save kernel costs while not really adding any FW cost
23:34jekstrand: bnieuwenhuizen: Yup. The question is how much cost is being saved. IDK that it's much once we get kernel submit to be non-stupid.
23:36bnieuwenhuizen: jekstrand: once we add in some virtualization?
23:36bnieuwenhuizen: I hear those VM calls are expensive
23:38jekstrand: bnieuwenhuizen: Yeah, for virtualization it's a pretty clear win
23:38jekstrand: Though I think we can get kernel submit a lot better there with work
23:38bnieuwenhuizen: granted I dunno how we end up doing sync primitives
23:38jekstrand: UMF
23:39bnieuwenhuizen: yeah, somehow UMF seems a lot farther away than direct-to-fw submissions
23:39bnieuwenhuizen: though if it were me we'd just do the timeout thing and allow making sync files out of them
23:40jekstrand: I'm hoping UMF is in the 2-4 years timeframe.
23:40bnieuwenhuizen: yeah, some of the stuff I've been seeing from the amdgpu team have me hoping/dreaming for direct-to-fw this year :)
23:41bnieuwenhuizen: which leaves the question what happens for sync in the interim
23:45LaserEyess: https://gitlab.freedesktop.org/drm/amd/-/issues/476 consensus from AMD in this issue is that adding a patch to default to RGB would not be acceptable and they want to do things "properly" by adding a DRM parameter, exposing it to userspace via libdrm, and then having compositors handle the details of setting it
23:45LaserEyess: however, there doesn't seem to be any information if people are working on this or not
23:46LaserEyess: is there an active mailing list thread about this issue and/or a patch that implements some sort of solution to this?
23:47LaserEyess: valve seems to have used the patch in that thread (or something similar) to make this work on the steam deck, so presumably someone has done some work to figure this out "properly", but I can't really find it
23:47jekstrand: bnieuwenhuizen: Without UMF, they're going to be disappointed. :)
23:48bnieuwenhuizen: note that in amdgpu we kinda have UMF already
23:48bnieuwenhuizen: or really (context, queue, submission seq id) tuples that get signalled to memory if we want to
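For readers unfamiliar with the term: a user-mode fence (UMF) in this sense is a sequence number the GPU or firmware writes directly to memory, which the CPU (or another queue) waits on without a kernel round-trip. A purely illustrative wait loop, all names hypothetical:

```c
/* Sketch of a user-mode fence wait: the GPU/FW advances a 64-bit seqno in
 * shared memory; the CPU waits for it to reach a target value. */
#include <stdatomic.h>
#include <stdint.h>
#include <sched.h>

static void umf_wait(const _Atomic uint64_t *seqno, uint64_t target)
{
    while (atomic_load_explicit(seqno, memory_order_acquire) < target)
        sched_yield(); /* a real implementation would sleep/park, not spin */
}
```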
23:57airlied: I saw some userspace submit patches from amd for gfx recently? but didn't dig in
23:58airlied: bnieuwenhuizen: do they just unmap the doorbell page when the VM invalidates?
23:58bnieuwenhuizen: yeah looked like
23:58bnieuwenhuizen: not sure I understand how that works exactly
23:58airlied: would be fun if you have two 51% VRAM users
23:59airlied: I wonder how you'd guarantee any forward progress there
23:59airlied: like by the time you get to submit a command stream from userspace the other process could have unmapped your page, and around it goes
23:59bnieuwenhuizen: note the kernel can still spill to gtt and then enable the VM again