03:27 Lynne: I hate how YUV formats are treated in Vulkan and by driver devs
03:28 Lynne: the whole committee pretty much agreed "why on earth would anyone want to do anything to YUV images that doesn't involve immediately converting them to RGB?"
03:30 Lynne: the spec outright REQUIRES each YUV image to be accompanied by a YUV sampler, even if it's the identity
03:31 Lynne: transferring? okay, fine, here, don't complain, you have flags that let you upload to each plane
03:32 Lynne: sampling? well I guess you may need it in your toy video player which copied YUV->RGB formulae from wikipedia
03:32 Lynne: storage? STORAGE? are you nuts? no, most definitely not!
03:39 Company: Lynne: what do you want to do with them?
03:40 Lynne: I want to have them usable as storage images
03:40 Lynne: to do so, I need the extended usage flag
03:40 Lynne: and the storage image flag
03:40 Company: the YUV image or the individual plane?
03:40 Lynne: but most drivers don't signal they allow this
03:41 Lynne: individual planes of course
03:41 Lynne: I had to go vendor by vendor to ask them to support this
03:41 Company: I think that's fair
03:42 Lynne: then when vulkan decoding came about, this broke due to images needing to be usable as both decode sources and everything else, so I did it again
03:42 Lynne: and now the same exact situation happens with the encode extension
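[editor's note] The per-plane storage setup Lynne describes comes down to a specific combination of Vulkan flags. A minimal sketch, with the flag values copied from the Vulkan spec; the dict "views" are purely illustrative stand-ins for VkImageViewCreateInfo, not a real API:

```python
# Bitmask sketch of the image-creation flags under discussion.
# Flag values copied from the Vulkan spec; the "views" dicts are
# illustrative only.

VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT = 0x00000008  # views may use another format
VK_IMAGE_CREATE_EXTENDED_USAGE_BIT = 0x00000100  # usage need only be valid for some view format
VK_IMAGE_USAGE_SAMPLED_BIT         = 0x00000004
VK_IMAGE_USAGE_STORAGE_BIT         = 0x00000008  # the bit drivers often don't advertise for YUV
VK_IMAGE_ASPECT_PLANE_0_BIT        = 0x00000010
VK_IMAGE_ASPECT_PLANE_1_BIT        = 0x00000020

# STORAGE isn't supported for a 2-plane YUV format directly, so
# EXTENDED_USAGE defers usage validation to the per-plane view formats
# (e.g. R8 / R8G8 for an NV12-style layout).
image_create_flags = (VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT |
                      VK_IMAGE_CREATE_EXTENDED_USAGE_BIT)
image_usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_STORAGE_BIT

# One storage-capable view per plane, selected via the PLANE_n aspect:
plane_views = [
    {"aspect": VK_IMAGE_ASPECT_PLANE_0_BIT, "usage": VK_IMAGE_USAGE_STORAGE_BIT},
    {"aspect": VK_IMAGE_ASPECT_PLANE_1_BIT, "usage": VK_IMAGE_USAGE_STORAGE_BIT},
]
```

Whether a driver actually accepts this combination for its YUV formats is exactly the per-vendor support question being complained about.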
03:42 Company: it's not what application devs would like, but the specs are so huge that devs try to limit the scope they need to deal with
03:43 Company: I'm running into that from time to time, too
03:45 Company: currently with dmabuf export from GL
03:45 Company: previously with import of various dmabuf fourccs (usually video ones)
03:45 Company: some simple GL extensions were missing
03:47 Company: you should try not to do weird new stuff - write a game engine maybe ;)
03:58 Lynne: I wrote an entire lossy comms channel simulator just to generate a few one-off LDPC codes in Vulkan, weird is my modus operandi
06:56 tzimmermann: javierm, hi
07:42 sima: agd5f_, hwentlan_ https://lore.kernel.org/dri-devel/CABXGCsNgx6gQCqBq-L2P15ydaN_66sM9CgGa9GQYNzQsaa6Dkg@mail.gmail.com/ maybe just a redirect to bugzilla needed?
08:31 sima: MrCooper_, apologies for filling the dri-devel mod queue ...
08:41 javierm: tzimmermann: hi
08:41 tzimmermann: hi javier. i haven't seen you here in a while. are you still with us?
08:42 javierm: tzimmermann: I am, yes. Just busy with other non-graphics work
08:43 javierm: I was, in fact, planning to review your "[PATCH v2 00/10] drm/bochs: Modernize driver" today :)
08:43 tzimmermann: i was talking with matthias about the QR panic code. do you have plans for that in fedora?
08:46 javierm: tzimmermann: yes, with jfalempe we will enable DRM_PANIC for F42 https://fedoraproject.org/wiki/Changes/EnableDrmPanic
08:46 javierm: QR panic is the next step
08:46 tzimmermann: i see
08:47 tzimmermann: i thought it was your decision
08:48 tzimmermann: not sure why
08:48 jfalempe: tzimmermann: but for it to be really useful, we need drm_panic support for i915 / amdgpu / nvidia.
08:48 tzimmermann: jfalempe, ah right.
08:48 tzimmermann: i thought this was coming along well
08:49 javierm: tzimmermann: I only added DRM panic support to the tidss driver to test jfalempe's patches. But he has been leading this effort
08:49 jfalempe: there is one attempt for amdgpu, but it's only for older cards https://patchwork.freedesktop.org/series/136832/
08:51 jfalempe: I've also done a PoC for nouveau, but I need more documentation on the tiling support; otherwise it only works on my testing card.
08:51 tzimmermann: sounds like a bigger issue
08:54 javierm: tzimmermann: hopefully by F42 release time support will be added to those drivers
08:55 javierm: but as jfalempe mentions in the fedora wiki page, if the drivers are not supported, it just means that the panic won't be shown there, so it's not a blocker for DRM_PANIC to be enabled in fedora
08:55 jfalempe: in fact un-tiling is not that hard; the hard part is knowing the tiling format, as drm currently doesn't have this info (only hw/firmware knows).
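[editor's note] To illustrate why the un-tiling itself is the easy part: a toy de-tiler for a simple row-major 4x4 tile-linear layout. Real GPU tilings (the swizzles nouveau or amdgpu hardware uses) are more complex, and the 4x4 layout and `untile` helper here are made up for illustration; the point stands that the mechanics are mechanical once the layout is known.

```python
# Toy de-tiler: converts a buffer stored as row-major 4x4 tiles back to
# a linear (scanline) layout. The layout is hypothetical; knowing WHICH
# layout the hw picked is the hard part drm currently can't answer.
TILE = 4

def untile(tiled, width, height):
    """tiled: flat list laid out tile-by-tile; returns scanline order."""
    assert width % TILE == 0 and height % TILE == 0
    linear = [0] * (width * height)
    tiles_per_row = width // TILE
    for i, px in enumerate(tiled):
        tile_idx, offset = divmod(i, TILE * TILE)   # which tile, where in it
        ty, tx = divmod(tile_idx, tiles_per_row)    # tile row/column
        oy, ox = divmod(offset, TILE)               # pixel offset inside tile
        linear[(ty * TILE + oy) * width + (tx * TILE + ox)] = px
    return linear
```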
08:55 tzimmermann: javierm, about reviews. if you have the time (and energy :) could you go through the patches for generic client support? some of them are still missing a review and i think that all of the drivers are really maintained. my hope is that i can at least resolve the majority of patches and get everything merged for shmem, dma and ttm fbdev. https://patchwork.freedesktop.org/series/137391/#rev3
08:55 tzimmermann: jfalempe, urghh. sounds like larger driver changes to me
08:56 javierm: tzimmermann: Sure, I only reviewed the patches for the drivers I co-maintain, since I thought that pinchartl had already reviewed the core changes
08:57 tzimmermann: jfalempe, BTW, was it a mistake to put .get_scanout_buffer into plane_helper_funcs? IIRC i suggested this, but reading the code now it looks like a drm_plane_funcs thing to me
08:58 tzimmermann: javierm, thanks a lot. no need to look at all of this. but some drivers have no review at all. but it's usually a mechanical change to each
08:59 jfalempe: tzimmermann: I'm also wondering if we should limit it to the primary plane only. some drivers have to duplicate the plane helper struct for that.
08:59 tzimmermann: jfalempe, i mean that if you decide at some point that the func should rather go into drm_plane_funcs, i'd ack that
08:59 javierm: tzimmermann: yeah, I figured, due to the changes for ssd130x, simpledrm and ofdrm that I reviewed
09:00 jfalempe: tzimmermann: ok I will look into it. It's better to do it now while there are only a few drivers that support it.
09:01 tzimmermann: i don't think it's a problem per se. just saying
09:10 javierm: tzimmermann: can you remind me of the semantics of the drm_$object_funcs vs drm_$object_helper_funcs separation?
09:11 javierm: it's not clear by reading https://docs.kernel.org/gpu/drm-kms-helpers.html
09:15 sima: javierm, so drm_$obj_funcs are the official driver entry points for uapi, giving the driver full control of what's up
09:15 sima: the helper ones are all optional if you hand roll your implementation completely
09:16 sima: I think it's a judgement call where you want to put the panic stuff, since it's not directly uapi
09:19 javierm: sima: thanks for the explanation. Then IMO it makes sense to leave it in the helper_funcs vtable
09:21 javierm: tzimmermann, jfalempe ^ ?
09:22 jfalempe: Yes, I will leave it there then :)
12:00 sima: mripard, for the dsi discussion, see my latest reply, I got a bit confused
12:00 sima: that should be a lot more solid ...
12:03 DavidHeidelberg: eric_engestrom: did the MesaCI meeting time change, or is it still today around now?
12:07 sergi: DavidHeidelberg: It's at the same place, same time. I see you there, but you're not responding
12:23 javierm: tzimmermann: I've reviewed all the patches that were missing tags from your series. Patches 74-81 were less trivial but I couldn't spot anything wrong in those patches
12:23 javierm: tzimmermann: I'm not that familiar with those drivers though so I just provided an a-b
13:12 alyssa: wonders if we want a copysign alu
13:12 alyssa: there are at least 4 "reasonable" ways it might be written, depending on $details
13:12 alyssa: and which is optimal probably depends on ISA
13:13 alyssa: it isn't in glsl, though, which makes me wonder if there are shaders that open code it
13:13 alyssa: (in which case it makes sense to add algebraic rules to canonicalize all the different open codings, and then backends can do what's best)
13:16 alyssa: oh and ugh, signed zeroes :melt:
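[editor's note] Some of the "reasonable" open-codings of copysign for float32, and where signed zeroes bite: the sign-bit-transplant and sign-bit-select forms are exact, while the compare-based and multiply-by-sign forms go wrong on -0.0. All function names here are ad hoc.

```python
import math
import struct

def f32_bits(x):
    """Raw IEEE-754 float32 bit pattern of x."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def from_bits(b):
    return struct.unpack('<f', struct.pack('<I', b))[0]

def copysign_bitwise(x, y):
    """Sign-bit transplant: the canonical lowering, exact for +-0."""
    return from_bits((f32_bits(x) & 0x7fffffff) | (f32_bits(y) & 0x80000000))

def copysign_signbit(x, y):
    """Select on the sign *bit* rather than a compare; also exact."""
    return -abs(x) if f32_bits(y) & 0x80000000 else abs(x)

def copysign_select(x, y):
    """Compare-and-select: -0.0 >= 0 is true, so -0.0 yields +abs(x)."""
    return abs(x) if y >= 0 else -abs(x)

def copysign_mul(x, y):
    """abs(x) * sign(y), GLSL-style sign(+-0) == 0, so magnitude is lost too."""
    sign = (y > 0) - (y < 0)
    return abs(x) * sign
```

Which variant is cheapest really does depend on the ISA (bitfield insert vs. select vs. multiply), which is the argument for canonicalizing in NIR and letting backends pick.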
13:16 tzimmermann: thanks, javierm
13:40 karolherbst: is there an API in vulkan to explicitly assign addresses to device memory allocations?
13:41 alyssa: karolherbst: bufferdeviceaddress capture/replay
13:41 alyssa: though it's not intended for that..
13:41 karolherbst: mhhhhh
13:41 karolherbst: not sure if that's helpful
13:42 karolherbst: I need something where the client comes up with all the addresses (and maybe even reserves a VM range the driver doesn't use internally)
13:43 karolherbst: though I guess with enough wishful thinking it should work (tm)
13:44 glehmann: that sounds like new vk extension territory
13:46 karolherbst: yeah...
13:48 karolherbst: anyway.. I kinda need something like that from gallium drivers (including zink): https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/28785/diffs?commit_id=0ff2d1fed9cecbf42579da64e748e5e092f19856
14:20 alyssa: karolherbst: I assume this is for rusticl-on-zink?
14:20 alyssa: because I'd rather not support this in honeykrisp if we have native CL, lolz
14:21 karolherbst: alyssa: rusticl on any gallium driver, but yeah
14:21 karolherbst: this will be needed for cl_ext_bda and SVM
14:21 karolherbst: or something like this
14:21 karolherbst: I'm still playing around with what ideas are actually working out here
14:22 alyssa: karolherbst: I meant the vulkan ext part
14:22 karolherbst: ahh
14:23 karolherbst: yeah.. not sure
14:23 karolherbst: it's probably not very important for cl_bda
14:23 karolherbst: single device contexts don't need more than plain bda
14:23 karolherbst: but the cl_bda extension can also be used where addresses are per device
14:24 karolherbst: it's just the "same address across allocations across different devices" which is the pain use case here
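[editor's note] A minimal sketch of the allocation scheme karolherbst describes: the client reserves a VA range up front (which the driver would promise not to use internally) and assigns addresses itself, so the same address can be bound on every device in a multi-device SVM context. The `ClientVaAllocator` class and its interface are entirely hypothetical; no such Vulkan API exists today, hence "new vk extension territory".

```python
# Hypothetical client-side VA allocator: the client owns a reserved
# range and hands out fixed addresses, which it would then ask each
# device's driver to map allocations at. Bump allocation only; a real
# implementation would track frees.
class ClientVaAllocator:
    def __init__(self, base, size, align=4096):
        self.base, self.end, self.align = base, base + size, align
        self.next = base

    def alloc(self, size):
        # round the cursor up to the alignment, then bump past the allocation
        addr = (self.next + self.align - 1) & ~(self.align - 1)
        if addr + size > self.end:
            raise MemoryError("reserved VA range exhausted")
        self.next = addr + size
        return addr
```

The same address returned by `alloc` would be used for the binding on every device, which is exactly the "same address across allocations across different devices" pain point; plain bufferDeviceAddress capture/replay only lets you replay addresses the driver chose, not choose them up front.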
15:55 edolnx: Greetings! I've been testing Mesa 24.2 on RISC-V specifically for finding compatibility issues with the new ORCJIT for the llvmpipe backend. I have found a fair number of issues, and I wanted to see what are the expectations for this and what is the best way to report these issues. I'm also happy to help setting up automated testing since I have access to some hardware resources as well. Any feedback would be appreciated!
16:07 DavidHeidelberg: edolnx: connecting the TronCI/LAVA farm to MesaCI is most welcome :) though we'll need to wait for the next Debian freeze (at least the soft freeze) to bump our CI. The current Debian release doesn't have risc-v support
16:09 edolnx: Good to know, and I'll try to make that a priority within the RISC-V DevBoards group. In the meantime, are there any tests and results I can provide semi-manually? For example, glmark2 fails on the third spinning box test with a JIT error
16:09 alyssa: dEQP
16:10 DavidHeidelberg: - piglit ^
16:36 edolnx: One last question DavidHeidelberg - what is the best way to engage with the TronCI/LAVA team?
16:38 DavidHeidelberg: edolnx: #ci-tron channel for TronCI, for LAVA probably just here. sergi or gallo
16:39 edolnx: Thank you!
18:06 airlied: edolnx: testing orcjit on x86 might also be valuable if there are regressions there, but also file tickets for the risc-v issues. there may also be LLVM risc-v backend things we can't fix that have to be taken up with llvm
18:51 edolnx: Good to know, airlied, and I will start doing some regression testing. The good news is that it's all in softpipe/llvmpipe, so it's much easier to test in x86_64 VMs :D
18:53 edolnx: Right now I'm just getting some very generic JIT error messages, is there some way via a compile time flag or environment variable to get a more verbose one for bug reporting purposes?
18:54 airlied: don't think there is on the llvmpipe side; compiling llvm with debug/asserts can often produce more