01:34 zzyiwei: anonymix007[m]: link to the ci job? aborting at that line suggests the host side renderer has hit a fatal error, and there's an error log.
01:36 zzyiwei: this line to be specific: https://gitlab.freedesktop.org/virgl/virglrenderer/-/blob/main/src/venus/vkr_ring.c#L330
03:55 zzyiwei: anonymix007[m]: link to the ci job? aborting at that line suggests the host side renderer has hit a fatal error, and there's an error log.
03:58 zzyiwei: oops, sorry for the dup msg...unscheduled interaction with my keyboard
04:24 airlied: glehmann: I just implemented the extra instr type for the fun of it
07:40 glehmann: airlied: sorry for kind of ignoring the cmat MRs, I'm not quite sure what to do myself.
07:40 glehmann: maybe the extra instruction type is a good way to solve this without stepping on anyone's feet
07:41 glehmann: I kind of hoped that more people would be interested in NIR decisions like this
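For context, a minimal sketch of what the "extra instruction type" being discussed could look like, modeled on NIR's existing nir_call_instr; the nir_cmat_call_instr name and its fields are assumptions for illustration, not what the MR actually adds:

    #include "nir.h"

    /* Hypothetical cooperative-matrix call instruction, shaped like
     * nir_call_instr: it names a decode/combine function to invoke and
     * carries its operands as SSA sources. */
    typedef struct {
       nir_instr instr;

       /* Function to call, analogous to nir_call_instr::callee. */
       struct nir_function *callee;

       /* Cooperative-matrix operands passed to the callee. */
       unsigned num_params;
       nir_src params[];
    } nir_cmat_call_instr;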
07:43 anonymix007[m]: zzyiwei: there's no CI job, I'm running it locally. I wonder how I should debug this further... There don't seem to be any logs from virglrenderer
08:28 zzyiwei: anonymix007[m]: odd, I just double-checked that all error paths towards that failure have proper logging. Might wanna check if it logs to syslog instead in your env. Feel free to file a mesa issue and let's follow up there with more details. Gotta sleep now. Cheers!
09:10 Lynne: hakzsam: https://gitlab.freedesktop.org/mesa/mesa/-/issues/14276
09:11 hakzsam: thanks
11:59 rcv11x: Hello, is this the mesa development channel?
12:01 rcv11x: I rarely use IRC and I have a question. I don't know if anyone is online right now, as I'm more used to using Discord.
12:02 feaneron: this channel is used by mesa developers yes
12:03 feaneron: you can ask away, but please remember that irc doesn't log messages when you're disconnected
12:05 rcv11x: Oh, OK, well, first of all, I want to say that I love Mesa and all the work you guys are doing is amazing. It's improving little by little. I wanted to ask you about Mesa 25.3. I saw on the calendar that it should be released today if all goes well. How is the release going? I'm very happy because I have an RDNA4 graphics card, which is the RX 9070 XT, and I think there will be many improvements.
12:15 feaneron: you can keep an eye on https://gitlab.freedesktop.org/mesa/mesa/-/tags for new release tags
12:16 feaneron: not much else i can say other than to have a little patience :)
12:21 rcv11x: Okay, so we'll have to wait (can't wait! ☺️). Is there a changelog we can see? Or will that be available once it's released?
12:29 K900: The changelog probably won't contain much that you care about as an end user
12:34 rcv11x: Yes, I'm referring to this: https://docs.mesa3d.org/relnotes/25.2.0.html
12:34 rcv11x: Even though I'm just a user, I'm a bit of a geek and I like to see new things and improvements.
12:44 K900: The final release notes will be compiled once the actual release is done
12:45 K900: You can look at the commit history at https://gitlab.freedesktop.org/mesa/mesa/-/commits/staging%2F25.3
13:03 rcv11x: okay, thanks
18:17 Lynne: hakzsam: thanks, that fixed the issue, though I'm wondering if we're also doing something wrong
18:18 Lynne: the push data is 160 bytes, which is already a multiple of 4
18:18 hakzsam: maybe it's the push constant size set by the pipeline layout, not the one from the shaders?
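A minimal sketch of the pipeline-layout side being discussed, assuming a hypothetical compute pipeline with a 160-byte push-constant block: Vulkan requires both the offset and size of a VkPushConstantRange to be multiples of 4, and the size the driver sees is the one declared in the pipeline layout, not whatever the shader happens to read.

    #include <stddef.h>
    #include <vulkan/vulkan.h>

    /* Hypothetical layout with a 160-byte push range; 160 is already a
     * multiple of 4, matching the size mentioned above. */
    static VkPipelineLayout
    create_layout(VkDevice device, VkDescriptorSetLayout set_layout)
    {
       const VkPushConstantRange range = {
          .stageFlags = VK_SHADER_STAGE_COMPUTE_BIT,
          .offset = 0,   /* must be a multiple of 4 */
          .size = 160,   /* must be a multiple of 4 */
       };
       const VkPipelineLayoutCreateInfo info = {
          .sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO,
          .setLayoutCount = 1,
          .pSetLayouts = &set_layout,
          .pushConstantRangeCount = 1,
          .pPushConstantRanges = &range,
       };
       VkPipelineLayout layout = VK_NULL_HANDLE;
       vkCreatePipelineLayout(device, &info, NULL, &layout);
       return layout;
    }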
21:09 airlied: glehmann: yeah I think the extra instruction type is the cleanest, and if someone disagrees we can clean it up then, but I'd like to start landing this stuff soon
21:10 glehmann: I will look at it tomorrow
21:10 glehmann: I hope this is going to work well for all cmat2 instructions?
21:11 glehmann: including the weird tensor loads?
21:21 airlied: glehmann: the tensor load decode function is a bit funky, I've implemented it and it passes tests, but it involves a bit of playing around
21:22 airlied: glehmann: https://gitlab.freedesktop.org/airlied/mesa/-/tree/radv-coopmat2-block-loads-cmat-call?ref_type=heads
21:23 airlied: https://gitlab.freedesktop.org/airlied/mesa/-/commit/de73e50ef59a2163dfae6c63027004e6de92b38f#a9dbd75df12cb79a88cf9786295bb13e80447339_391_392 is the funky bit
21:23 airlied: but I just have workgroup scope left to do, and then optimisation to see if it actually makes things faster :-P
21:24 airlied: workgroup scope is hard, and I'm not sure I really understand the exact mechanics of how to do it
21:25 glehmann: I really dislike it because it's too much hidden driver magic
21:26 airlied: well I was going to try and make it NIR hidden magic
21:26 airlied: so it would at least be consistent across drivers
21:27 airlied: but yeah it does rely on passing the same parameter to two NIR instructions and matching them after the fact
21:28 airlied: the cmat_tensor_load and decode_func both need to take the same dst->def, I could add nir_validate support to ensure it
21:30 airlied: well one needs to have dst->def as the src for the other
21:31 airlied: just not sure of a better way to bind the tensor_load and the call together
21:39 airlied: something like that cleans it up a bit https://gitlab.freedesktop.org/airlied/mesa/-/commit/a9e7e4c9069d34e903d6e1a194b1849a01d1d442
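As a rough illustration of the nir_validate-style check mentioned above, assuming the hypothetical nir_cmat_call_instr sketched earlier and a hypothetical nir_intrinsic_cmat_tensor_load intrinsic (names taken loosely from the WIP branch, not upstream NIR):

    #include <assert.h>
    #include "nir.h"

    /* Check that the value a cmat_call decodes is the def written by a
     * cmat_tensor_load, i.e. the two halves of the pairing haven't been
     * separated; params[0] is assumed to be the loaded tensor value. */
    static void
    validate_cmat_call(nir_cmat_call_instr *call)
    {
       nir_instr *parent = call->params[0].ssa->parent_instr;

       assert(parent->type == nir_instr_type_intrinsic);
       assert(nir_instr_as_intrinsic(parent)->intrinsic ==
              nir_intrinsic_cmat_tensor_load);
    }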
21:50 glehmann: why can't it all be one instruction?
21:52 airlied: because I can't pass callee to it
21:52 airlied: unless I bring back function ptrs
21:52 airlied: or rather load_function_id or whatever I called it
21:53 airlied: I suppose I could spin tensor_load into a call type if it has a decode function
21:53 airlied: though I think that might get ugly
21:54 airlied: since tensor load doesn't have to take a decode fn
21:56 glehmann: so the issue is that the nir instruction currently only supports one callee function?
21:56 austriancoder: airlied: if you have time, it would be great if you could take another look at !36487
21:56 glehmann: but if we already have our own instruction type, what's stopping us from having two?
21:57 airlied: glehmann: no, the problem is that basic tensor_load isn't a call at all, it's an intrinsic
21:57 airlied: tensor_load with a decode function is an intrinsic + cmat_call now
21:58 glehmann: ah, so I guess the question is why do we need the intrinsic
21:58 airlied: well tensor_load without a decode function would just be a cmat_call with no callee