00:06 jenatali: gfxstrand: Not sure I have much of an opinion on the unstructured nir patch in general, but I'll try to find some time to review
00:09 jenatali: DXIL is technically unstructured though I don't know of anything that emits anything complicated
07:46 Ermine: Big kudos to people who put up a list of talks and slides in drm docs!
08:49 pq: Ermine, awesome to hear they helped :-) (not taking credit here, I don't remember who collected them)
08:51 daniels: I think it was sima
08:52 javierm: pq: you should take some credit because it was your idea :) I posted the patch but you suggested it
08:52 sima: daniels, I think I only provided some encouragement ...
08:53 javierm: Ermine: glad to know that it was useful for you!
11:16 karolherbst: jenatali: fyi, you might want to take another look at https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26800
11:16 karolherbst: it became quite the nice cleanup now
12:23 Hazematman: Hello all, I've had this MR sitting in the queue for a bit now and recently got all the CI issues with it resolved. I would appreciate it if anyone could give it a review. I've highlighted two sections I'm not 100% confident about and would appreciate feedback on them: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27805
13:16 tonyk: emersion: could you review this series? :) https://lore.kernel.org/dri-devel/20240308145553.194165-1-andrealmeid@igalia.com/
13:27 JoshuaAshton: Slowly going insane over the Wayland explicit sync requirement that release point can be signalled before acquire point if the compositor doesn't want your buffer aaAAaaAAaaaAAaaAAaaAAA
13:36 MrCooper: the compositor having to wait for the acquire point to signal first isn't great either, so let's not rush into that but wait a bit more for a reasonable client-side solution
13:39 MrCooper: offhand not seeing why threads would be required with eventfds
13:39 JoshuaAshton: We need two things
13:39 JoshuaAshton: 1) The GPU work for the acquire point to actually be done
13:39 JoshuaAshton: 2) The release point to be signalled
13:40 MrCooper: you can get an eventfd which becomes readable when a fence is available for the acquire point, and can poll for that along with the other fds
13:40 JoshuaAshton: So essentially we need to merge two drmSyncObj points into one and then use that for our wait list
13:41 JoshuaAshton: Yeah, but we need to essentially make that wait an `if ((a1 && r1) || (a2 && r2) || ...)`
13:41 pq: what's the problem with that?
13:41 JoshuaAshton: drmSyncobjWait is any or all
13:42 MrCooper: it's just polling for a bunch of fds until the minimum requirements are met for acquiring a VkImage, not really seeing the big issue
13:42 pq: use eventfd instead?
13:43 JoshuaAshton: I guess we could wait on all of them, accept any wakeup and track stuff, then remove from the wait. That still kinda sucks but I guess it works.
13:44 MrCooper: sounds like a run-of-the-mill event loop to me
13:44 JoshuaAshton: Sure
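(For reference, a minimal sketch of the eventfd-based event loop discussed above, assuming libdrm's drmSyncobjEventfd() wrapper for DRM_IOCTL_SYNCOBJ_EVENTFD is available; the per-image struct and the readiness bookkeeping are purely illustrative, not Mesa or Wayland code. Passing 0 instead of DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE would make an eventfd fire when the point actually signals rather than when a fence materializes, which is the other variant being debated.)

```c
#include <poll.h>
#include <stdbool.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>
#include <xf86drm.h>

struct image_points {
   uint32_t syncobj;        /* timeline syncobj shared with the compositor */
   uint64_t acquire_point;
   uint64_t release_point;
   int acquire_efd;         /* readable once a fence exists for the acquire point */
   int release_efd;         /* readable once a fence exists for the release point */
   bool acquire_ready;
   bool release_ready;
};

/* Register eventfds that become readable when a fence materializes for each
 * point (DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE = "fence available", not
 * "fence signaled"). */
static int
watch_image(int drm_fd, struct image_points *img)
{
   img->acquire_efd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
   img->release_efd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);

   if (drmSyncobjEventfd(drm_fd, img->syncobj, img->acquire_point,
                         img->acquire_efd,
                         DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE) ||
       drmSyncobjEventfd(drm_fd, img->syncobj, img->release_point,
                         img->release_efd,
                         DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE))
      return -1;
   return 0;
}

/* Event-loop style wait: poll all the eventfds, note which points have
 * fences, and stop at the first image whose acquire *and* release points
 * both do. */
static int
wait_for_any_image(struct image_points *imgs, int count)
{
   for (;;) {
      for (int i = 0; i < count; i++)
         if (imgs[i].acquire_ready && imgs[i].release_ready)
            return i;

      struct pollfd pfds[2 * count];
      for (int i = 0; i < count; i++) {
         pfds[2 * i]     = (struct pollfd){ .fd = imgs[i].acquire_efd, .events = POLLIN };
         pfds[2 * i + 1] = (struct pollfd){ .fd = imgs[i].release_efd, .events = POLLIN };
      }
      if (poll(pfds, 2 * count, -1) < 0)
         return -1;

      uint64_t tmp;
      for (int i = 0; i < count; i++) {
         if (pfds[2 * i].revents & POLLIN) {
            (void)read(pfds[2 * i].fd, &tmp, sizeof(tmp)); /* drain the counter */
            imgs[i].acquire_ready = true;
         }
         if (pfds[2 * i + 1].revents & POLLIN) {
            (void)read(pfds[2 * i + 1].fd, &tmp, sizeof(tmp));
            imgs[i].release_ready = true;
         }
      }
   }
}
```

This gives the `(a1 && r1) || (a2 && r2) || ...` wakeup semantics that a single drmSyncobjWait, which only offers any-or-all, can't express directly.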
13:44 JoshuaAshton: Oh
13:44 JoshuaAshton: but we also need to handle that for the GPU side semaphore
13:44 JoshuaAshton: That was the other part
13:45 JoshuaAshton: vkAcquireNextImage can signal a VkSemaphore (the temporary part in Mesa) with that
13:45 JoshuaAshton: which also needs to be r && a
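(For context on the "GPU side semaphore": a sketch of the standard application-side pattern, using only core Vulkan/WSI entry points; all handles are assumed to be created elsewhere and the helper name is made up. Whatever fence the driver ends up putting behind the semaphore passed to vkAcquireNextImageKHR is what the first GPU write to the image waits on, hence the requirement above that it cover both the acquire and release conditions.)

```c
#include <vulkan/vulkan.h>

/* One pre-recorded command buffer per swapchain image is assumed. */
static VkResult
draw_next_image(VkDevice device, VkSwapchainKHR swapchain, VkQueue queue,
                const VkCommandBuffer *cmdbufs, /* indexed by image */
                VkSemaphore acquire_sem, VkSemaphore render_done_sem)
{
   uint32_t image_index;
   VkResult res = vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                                        acquire_sem, VK_NULL_HANDLE,
                                        &image_index);
   if (res < 0)
      return res;

   const VkPipelineStageFlags wait_stage =
      VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
   const VkSubmitInfo submit = {
      .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
      .waitSemaphoreCount = 1,
      .pWaitSemaphores = &acquire_sem,        /* rendering waits on the acquire */
      .pWaitDstStageMask = &wait_stage,
      .commandBufferCount = 1,
      .pCommandBuffers = &cmdbufs[image_index],
      .signalSemaphoreCount = 1,
      .pSignalSemaphores = &render_done_sem,  /* later handed to vkQueuePresentKHR */
   };
   return vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
}
```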
13:46 dj-death: anybody knows the right meson options to have mesa generate a libGL.so.1 ?
13:46 dj-death: somehow it only creates a libGLX_mesa.so
13:46 pq: dj-death, a wild guess: disable glvnd support
13:47 JoshuaAshton: So yes, we still need to actually merge them into a drmSyncObj if we want to keep non-linear timelines
13:47 MrCooper: dj-death: -Dglvnd=false
13:48 dj-death: thanks a lot
13:48 dj-death: I thought I had it built
13:48 pq: dj-death, usually it comes from the glvnd project rather than Mesa.
13:49 MrCooper: JoshuaAshton: can't vkAcquireNextImage just wait for the acquire point to signal, then use the release point fence for semaphore?
13:50 JoshuaAshton: MrCooper: That's really interesting to think about actually
13:50 MrCooper: I guess waiting for the acquire point to signal isn't ideal, should only need to wait for a fence in principle
13:51 JoshuaAshton: I think you would end up just getting the same image over and over again
13:51 JoshuaAshton: If we did that naively
13:51 JoshuaAshton: wait
13:51 JoshuaAshton: no
13:51 MrCooper: no, also need to wait for a fence for the release point
13:51 JoshuaAshton: that'd be fine
13:51 JoshuaAshton: You might not want the same image over and over again though, since it could be held by the compositor
13:52 JoshuaAshton: but we could keep pushing forward I guess
13:52 JoshuaAshton: but you could end up stalling on GPU with an image held by the compositor
13:52 JoshuaAshton: rather than skipping over it if it did that
13:52 MrCooper: no, because there's no fence for the release point in that case
13:53 zamundaaa[m]: You can make that work more nicely if you first check if any images exist where acquire and release points have already been signaled
13:54 zamundaaa[m]: If so, use that one. If not, use the one that's been committed to the compositor the earliest to maximize the chance of it being signaled soon
13:55 JoshuaAshton: MrCooper: You mean wait for the release point to be signalled rather than materialise in Acquire
13:55 JoshuaAshton: ?
13:55 JoshuaAshton: Seems to defeat the benefit of explicit sync
13:55 MrCooper: no, I mean what I wrote :)
13:55 MrCooper: if the compositor holds a buffer, there's no fence for its release point
13:55 JoshuaAshton: Then I don't get what you mean :frog:
13:56 JoshuaAshton: oh
13:56 MrCooper: since the client must wait for a fence for the release point before re-using the buffer, it won't re-use a held buffer
13:56 JoshuaAshton: got it
13:59 JoshuaAshton: I don't think that works though: if there is no fence materialised for its release point, we have nothing to export on the VkSemaphore
13:59 zamundaaa[m]: JoshuaAshton: if there's no fence materialized yet, you just have to wait for that to happen
13:59 zamundaaa[m]: There's no way around that
13:59 JoshuaAshton: Now we are back to the a && r problem :D
14:00 zamundaaa[m]: I don't really follow
14:00 MrCooper: JoshuaAshton: vkAcquireNextImage can't acquire an image before a fence has materialized for the release point
14:01 zamundaaa[m]: When you pick an image to give to the application, afaiu you'd want to... (full message at <https://matrix.org/_matrix/media/v3/download/matrix.org/aYSarfELexXmrZTshVmbyjyB>)
14:01 zamundaaa[m]: I hope that message is somewhat readable on IRC
14:02 MrCooper: not really, it's mostly a URL
14:03 MrCooper: anyway, I'd say that's it more or less
14:04 MrCooper: might also want to prefer images where the acquire point is signaled / has a fence, everything else being equal
14:05 zamundaaa[m]: yeah
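(A sketch of the selection heuristic zamundaaa and MrCooper describe, written against a hypothetical per-image bookkeeping struct rather than Mesa's actual WSI state: prefer an idle image whose acquire and release points have both signaled, then one where fences exist for both points, and otherwise the image that was committed to the compositor the earliest.)

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-image state tracked by the client/WSI. */
struct swapchain_image {
   bool acquire_signaled;   /* acquire point has signaled */
   bool release_signaled;   /* release point has signaled */
   bool acquire_has_fence;  /* a fence has materialized for the acquire point */
   bool release_has_fence;  /* a fence has materialized for the release point */
   bool busy;               /* currently acquired by the application */
   uint64_t commit_seq;     /* when the image was last committed to the compositor */
};

/* Rank 3: both points already signaled.
 * Rank 2: fences exist for both points, so the GPU can wait on them.
 * Rank 1: fall back to the earliest-committed image, whose points are the
 *         most likely to get fences / signal soon. */
static int
pick_image(const struct swapchain_image *imgs, int count)
{
   int best = -1, best_rank = 0;
   uint64_t best_seq = UINT64_MAX;

   for (int i = 0; i < count; i++) {
      if (imgs[i].busy)
         continue;

      int rank = 1;
      if (imgs[i].acquire_has_fence && imgs[i].release_has_fence)
         rank = 2;
      if (imgs[i].acquire_signaled && imgs[i].release_signaled)
         rank = 3;

      if (rank > best_rank ||
          (rank == best_rank && imgs[i].commit_seq < best_seq)) {
         best = i;
         best_rank = rank;
         best_seq = imgs[i].commit_seq;
      }
   }
   return best; /* -1 if every image is still acquired by the application */
}
```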
14:31 zmike: is there a nir pass that vectorizes shader_in/shader_out loads and stores with lowered io?
15:16 mareko: zmike: I couldn't find one
15:17 mareko: it should be easy though because it would just be an instruction merging pass
15:17 zmike: me neither
15:17 zmike: yeah I'm maybe sorta giving it a shot
15:18 zmike: I have some test failures that are too hard to figure out with all the scalarized io
15:34 tleydxdy: where can I find some docs for handling gpu reset in vulkan?
15:34 tleydxdy: not sure what's the keyword to search for
15:35 mareko: device lost
15:39 tleydxdy: yeah, google is not giving me much, just some reddit posts and people reporting issues with GPU crashes
15:42 tleydxdy: is the expectation that the UMD would be able to recover from a reset? or does each application (maybe the game engine) need to handle it itself?
15:52 DemiMarie: tleydxdy: You need to handle it yourself. The driver will give `VK_ERROR_DEVICE_LOST`.
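(A minimal sketch of what handling it yourself looks like with core Vulkan only: treat VK_ERROR_DEVICE_LOST from queue-level calls as fatal for that VkDevice and rebuild from the instance/physical-device level; recreate_device_and_resources() stands in for the application's own re-initialization path.)

```c
#include <stdbool.h>
#include <vulkan/vulkan.h>

/* Placeholder for the application's own teardown/re-init path; Vulkan has no
 * way to repair a lost device, you can only build a new one. */
extern void recreate_device_and_resources(void);

/* Queue-level calls (vkQueueSubmit, vkQueuePresentKHR, vkWaitForFences, ...)
 * can report a GPU reset as VK_ERROR_DEVICE_LOST; once one does, the VkDevice
 * and everything created from it is gone for good. */
static bool
submit_and_handle_reset(VkQueue queue, const VkSubmitInfo *submit, VkFence fence)
{
   VkResult res = vkQueueSubmit(queue, 1, submit, fence);
   if (res == VK_ERROR_DEVICE_LOST) {
      /* Drop command buffers, pipelines, swapchain, memory, etc. and rebuild
       * from the VkInstance / VkPhysicalDevice level; in-flight work is lost. */
      recreate_device_and_resources();
      return false;
   }
   return res == VK_SUCCESS;
}
```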
15:56 pq: tleydxdy, https://dri.freedesktop.org/docs/drm/gpu/drm-uapi.html#device-reset has some general hand-waving.
16:14 DemiMarie: pq: does that mean that debugging a hung GPU context is generally only possible for compute workloads?
17:01 Hazematman: <Hazematman> "Hello all, I've had this MR..." <- This MR adds the extensions to import/export dma bufs to both lavapipe and llvmpipe FYI
17:42 JoshuaAshton: freedesktop dead?
17:43 karolherbst: mhh?