02:46 Lynne: airlied: know if you'll have time to look at supporting multiple queues or the av1 decode issues?
02:58 airlied: Lynne: just back from travelling, so it'll probably end up next week at this stage
02:58 airlied: Lynne: are there issues filed?
03:05 Lynne: nope, not yet
03:19 airlied: Lynne: what's the av1 decode issue btw?
03:24 Lynne: can't figure out the reference frames and their count, and there's an assert that seems to always require all 8 refs
03:27 airlied: Lynne: so the thing that was different from the mesa implementation, if I remember right, is that if all the refs point at one ref we should just pass one slot
03:27 airlied: and there is a remap table
03:27 airlied: I also remember trying to make that work in ffmpeg was hard
04:01 airlied: Lynne: the problem is I don't think ffmpeg tracks them differently, might be possible to dedup it though
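(For illustration, a minimal Python sketch of the dedup idea airlied describes: collapse the 8 AV1 reference entries down to unique DPB slots and keep a remap table from ref index to slot. The function and names are hypothetical — this is not ffmpeg or Mesa code, just the algorithm in isolation.)

```python
# Hypothetical sketch: collapse AV1's 8 reference-frame entries into the
# minimal set of unique DPB slots, plus a remap table from AV1 ref index
# to slot index. Illustration of the idea only, not ffmpeg/Mesa code.

def dedup_av1_refs(ref_frames):
    """ref_frames: list of 8 frame identifiers (e.g. surface handles),
    possibly with duplicates. Returns (slots, remap) where slots holds
    each unique frame once and remap[i] is the slot index for AV1 ref i."""
    slots = []      # unique frames, one DPB slot each
    slot_of = {}    # frame identifier -> slot index
    remap = []      # AV1 ref index -> slot index
    for frame in ref_frames:
        if frame not in slot_of:
            slot_of[frame] = len(slots)
            slots.append(frame)
        remap.append(slot_of[frame])
    return slots, remap

# If every ref points at the same frame, only one slot gets passed:
slots, remap = dedup_av1_refs(["keyframe"] * 8)
assert slots == ["keyframe"] and remap == [0] * 8
```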
08:20 dolphin: airlied, sima: drm-intel-fixes was rebased on top of drm-fixes after last week's missed PR was pulled, so the log will show more than what actually ends up being pulled into drm-fixes when you grab it
09:16 MoeIcenowy: sorry for any disturbance, I remember hearing that the DRI loader interface is going to be deprecated and made private to Mesa, but I can't find a reference now
09:16 MoeIcenowy: is there any document for it?
09:16 MoeIcenowy: (and has the interface ever been properly documented at all?)
09:17 emersion: that's correct yeah
09:20 MoeIcenowy: is there any document about the deprecation?
09:20 MoeIcenowy: I recently got a driver package for a Zhaoxin iGPU, and it comes with a zx_dri.so
09:21 MoeIcenowy: (well, technically it comes with a GLVND variant too, as another driver package)
09:22 MoeIcenowy: (and I remember some drivers, incl. swrast and zink, require hacks on the driver loader side, so simply dropping in a _dri.so won't work)
14:07 koike: hello o/ after drm-misc migrated to gitlab, how should I proceed with the dim tool? I pointed it to the new URL but dim still looks for a remote on the old URL
14:16 koike: hmm, I guess this process hasn't been migrated yet
14:52 mripard: koike: yeah, we haven't migrated drm-misc yet
14:52 mripard: current plan is that it will happen next tuesday
15:25 cmarcelo: marge bot question / idea: could we make marge "give up" (unassign itself) as soon as the first failing result in the run happens? once there's a failing result marge won't land the patch anyway, so that empties the queue and lets other MRs enter it. even if the previous MR is still being worked on by later tests (using CI resources), it's likely that some progress can be made on a new MR in the early compile stages etc.
15:38 jenatali: cmarcelo: if the failure is a flake, it can be retried (maybe manually after the one auto-retry) and if that passes within the one hour time limit it can still merge
15:38 jenatali: Flakes have been less common these days but I still hit 2 yesterday...
15:40 jenatali: I would personally love some kind of notification when the first failure happens so I can take a look and cancel the pipeline if it's a real failure, or retry if it's a flake
15:44 cmarcelo: jenatali: I did that a few times too in the past but assumed it was less common. I don't quite get the 1hr limit; I'm assuming it won't actually work once marge has "moved on" to the next MR...? I agree that early notification would be nice to have. The upside of my original request is that it gives benefits even when no one is looking closely.
15:45 jenatali: Yeah Marge gives up after 1hr
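(A rough sketch of the first-failure notification jenatali describes, assuming a plain poll of GitLab's pipeline-jobs REST endpoint; the project and pipeline IDs below are placeholders, and this is not something Marge or the Mesa CI currently provides.)

```python
# Hypothetical "tell me about the first failed job" watcher for a GitLab
# pipeline, using only the public REST API (GET .../pipelines/:id/jobs).
import time
import requests

API = "https://gitlab.freedesktop.org/api/v4"
PROJECT_ID = 176        # placeholder: numeric project id
PIPELINE_ID = 123456    # placeholder: the pipeline Marge is waiting on

def first_failed_job(project_id, pipeline_id):
    """Return the first job reported as failed in the pipeline, or None."""
    url = f"{API}/projects/{project_id}/pipelines/{pipeline_id}/jobs"
    jobs = requests.get(url, params={"scope[]": "failed"}, timeout=30).json()
    return jobs[0] if jobs else None

while True:
    job = first_failed_job(PROJECT_ID, PIPELINE_ID)
    if job:
        # At this point a human can decide: retry a flake, or cancel the
        # pipeline and unassign Marge if it's a real failure.
        print(f"first failure: {job['name']} -> {job['web_url']}")
        break
    time.sleep(60)
```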
17:13 DavidHeidelberg: cmarcelo: there is an ongoing discussion within the Mesa CI team :)
17:14 DavidHeidelberg: we are well... well.. well aware of the problem 😉
17:21 cmarcelo: DavidHeidelberg: Ok cool!
17:26 DavidHeidelberg: in the future there will be public (likely bi-weekly) meetings, so if anyone is interested, it will be possible to join and discuss stuff
17:51 Hazematman: Hey, I tried asking this on #virgil3d but didn't get a response. I have a draft MR for lavapipe and it causes a few tests in venus-lavapipe to crash, so I'm trying to debug what's going on. I've been trying to get venus running (including following the steps outlined at https://docs.mesa3d.org/drivers/venus.html , https://www.collabora.com/news-and-blog/blog/2021/11/26/venus-on-qemu-enabling-new-virtual-vulkan-driver/ , and
17:51 Hazematman: https://gitlab.freedesktop.org/virgl/virglrenderer/-/wikis/dev_qemu_crosvm) and the only place where I got it to work was with vtest, where I couldn't reproduce the issue. Both QEMU and crosvm would crash for me on startup. Are there any instructions for how I can get venus set up like it is on the venus-lavapipe runner so I can better debug the issue?
19:32 zmike: Hazematman: ping @zzyiwei
19:32 zmike: on gitlab
20:08 DemiMarie: What would it take to use the fallible dmabuf import request?
21:47 DavidHeidelberg: eric_engestrom: if you plan to patch GL-CTS, think of me, I have one patch to throw in 😉
21:48 DavidHeidelberg: s/patch/patch or uprev/
22:05 eric_engestrom: DavidHeidelberg: I think I've flushed my work on that corner, nothing more in progress
22:05 eric_engestrom: but you should just post an MR with your commit
22:06 eric_engestrom: it's not a big deal if there's another rebuild, especially if we generate the image during low-load hours
22:21 DavidHeidelberg: eric_engestrom: okay, thank you :) I'll prepare a draft, and if you hit something feel free to squash :)