06:15 songafear: If no one wants to talk to me, I may have problems sending things out for review, so in the longer run I would prefer to give a small presentation on the web. If XDC is not desired or possible, that could be a net-meeting conference, or just offline slides and/or an offline video presentation; that way it would be possible to steer development onto the correct path. I wasn't kidding earlier: I would love to talk with the xorg crew about
06:15 songafear: how to move the systems toward the more modern end. Oh yeah, I have been mentally stable for some time already; a lot of mental work went into getting there.
09:20 hch12907: gentle ping on issue #10185 (dev access to mesa/demos)?
09:22 hch12907: btw, I think giving all mesa/mesa developers dev access to every other mesa repo (piglit, demos, ...) is a better solution overall.
13:02 karolherbst: jenatali: ever looked into subgroup support for clon12?
13:21 jenatali: karolherbst: not yet
14:00 karolherbst: jenatali: sad... I was looking into openvino and apparently disabling subgroup support makes it work :')
14:01 jenatali: Oof
14:02 jenatali: karolherbst: does CL subgroup support require independent forward progress? Or is that something else?
14:02 karolherbst: only for cl_khr_subgroups
14:02 karolherbst: however.. Intel came up with cl_intel_subgroups, which is cl_khr_subgroups without independent forward progress (for pre-CL 3.0), plus a bunch of additional subgroup ops
14:02 karolherbst: they emulate the other ops
14:02 karolherbst: so that might be broken as well
14:03 jenatali: I see
14:03 jenatali: D3D doesn't have IFP guarantees. And I know that WARP can't do it for example
14:03 karolherbst: yeah.. you don't need IFP support to advertise subgroups in CL 3.0
14:04 karolherbst: `CL_DEVICE_SUB_GROUP_INDEPENDENT_FORWARD_PROGRESS`
14:04 jenatali: Any GPU that can run Nanite can do it though
14:04 karolherbst: it's only required for cl_khr_subgroups, which is pre-CL 3.0
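For reference, querying that property looks roughly like this; a minimal sketch, assuming `dev` is a valid `cl_device_id` obtained elsewhere (the constant and `clGetDeviceInfo` call are standard CL 2.1+, the helper function is hypothetical):

```c
#define CL_TARGET_OPENCL_VERSION 300
#include <CL/cl.h>
#include <stdio.h>

/* Query whether the device guarantees independent forward progress
 * for sub-groups. Optional in CL 3.0: a device may report CL_FALSE
 * and still expose sub-groups. */
static void check_subgroup_ifp(cl_device_id dev)
{
    cl_bool ifp = CL_FALSE;
    cl_int err = clGetDeviceInfo(dev,
        CL_DEVICE_SUB_GROUP_INDEPENDENT_FORWARD_PROGRESS,
        sizeof(ifp), &ifp, NULL);
    if (err == CL_SUCCESS)
        printf("sub-group IFP: %s\n", ifp ? "yes" : "no");
}
```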
14:04 jenatali: Oh cool. I need to do a full CL3.0 run and actually flip on that switch
14:04 jenatali: It's been a few years
14:04 karolherbst: I don't think the CL CTS actually tests it
14:04 karolherbst: just the API consistency bits
14:04 karolherbst: I think...
14:05 karolherbst: dunno :D
14:05 karolherbst: nvidia doesn't claim support for subgroups apparently....
14:06 karolherbst: mhh maybe opencl.gpuinfo is just weird
14:10 karolherbst: mhh.. but on ROCm (which only advertises cl_khr_subgroups) it works... maybe my implementation of subgroups is indeed a bit broken...
14:12 karolherbst: it also crashes the GPU with radeonsi... and disabling subgroups also makes it work there.. *sigh*
14:13 karolherbst: jenatali: do you know anything important which uses subgroups? Kinda want to test subgroup support with something else besides multi layer AI/ML stuff :D
14:13 jenatali: 🤷
14:40 karolherbst: sad
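In lieu of a real workload, one option is a tiny hand-written smoke test; the kernel below is a hypothetical example (not from the discussion) exercising a couple of `cl_khr_subgroups` builtins whose results can be checked against a host-side scalar reference:

```c
// Hypothetical sub-group smoke test.
// Needs cl_khr_subgroups, or OpenCL C 3.0 with sub-group support.
#pragma OPENCL EXTENSION cl_khr_subgroups : enable

__kernel void subgroup_smoke(__global const int *in, __global int *out)
{
    int gid = (int)get_global_id(0);
    int v = in[gid];

    // Reduce across the sub-group, then broadcast lane 0's value;
    // both results are uniform within a sub-group, so a mismatch
    // against a host-side reference points at broken lowering.
    int sum = sub_group_reduce_add(v);
    int lane0 = sub_group_broadcast(v, 0);

    out[gid] = sum + lane0 + (int)get_sub_group_local_id();
}
```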
14:49 alyssa: 14:03 jenatali | Any GPU that can run Nanite can do it though
14:49 alyssa: cries in m1
14:50 alyssa: karolherbst: Is it expected to have to spill for large block sizes?
14:51 alyssa: => is it expected that launch_grid might trigger a shader variant for variable block size?
15:02 karolherbst: sadly yes
15:02 alyssa: ugh
15:03 alyssa: thanks
15:03 karolherbst: however
15:03 karolherbst: well.. not however
15:03 karolherbst: but you can pin the block size in CL
15:04 karolherbst: reqd_work_group_size
15:05 karolherbst: and if that's set on the kernel, it's illegal to launch it with a different local size
15:05 karolherbst: alyssa: so you can e.g. compile all variants with specific `reqd_work_group_size` ahead of time and just use those...
15:06 karolherbst: could even have them all in the same source file
15:06 karolherbst: and they just call into a common function
15:06 alyssa: whee.
15:06 karolherbst: `__attribute__((reqd_work_group_size(X, Y, Z))) ` on the kernel
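A sketch of that variant approach, with hypothetical kernel names and body: each entry point pins its block size with the attribute and calls into a shared function, so every variant is compiled for exactly one shape:

```c
// Shared body; each pinned entry point calls into it.
void scale_body(__global float *buf)
{
    size_t gid = get_global_id(0);
    buf[gid] *= 2.0f;
}

// With reqd_work_group_size set, enqueueing the kernel with any other
// local size must fail, so the compiler can allocate registers and
// shared memory for exactly this block shape instead of spilling.
__kernel __attribute__((reqd_work_group_size(64, 1, 1)))
void scale_wg64(__global float *buf)  { scale_body(buf); }

__kernel __attribute__((reqd_work_group_size(256, 1, 1)))
void scale_wg256(__global float *buf) { scale_body(buf); }
```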
15:35 pinchartl: is anyone working on DP MST support for an ARM-based platform?
15:39 HdkR: pinchartl: Which ARM platform? I'm sure Tegras already support MST
15:41 HdkR: The question is very broad
15:41 pinchartl: indeed
15:41 pinchartl: any platform that would use drm_bridge
15:41 pinchartl: so not tegra :-)
15:41 pinchartl: I was trying to find prior art and didn't see any in mainline or on the list
15:42 pinchartl: as far as I can see, only i915, nouveau and amdgpu have MST support
15:42 songafear: So only x86
15:43 songafear: Weird. It's a fun thing, that MST
15:43 pinchartl: HdkR: I don't see any mention of MST in drivers/gpu/drm/tegra/
15:44 songafear: You can easily control the sync of the display mux by inserting such control/sync packets into the stream
15:45 HdkR: pinchartl: Oh sorry, I meant newer Tegra, which should just use the nouveau bits
15:45 HdkR: Tegra the SoC rather than the drm API :)
15:45 pinchartl: :-)
15:45 songafear: Yeah, wow, so Tegra has MST docks
15:46 HdkR: Slap a Radeon GPU into an ARM device and we could technically claim that one is an ARM platform as well :P
15:48 songafear: But how does the wire get split from the shared port? Into additional separate DP ports, right?
15:49 songafear: So it's analogous to stream aggregation
15:50 songafear: And the packets flow through the data channel
15:50 songafear: So the monitor or TV displays the stream of media
15:51 songafear: The data is AV: audio and video streams
15:53 jenatali: karolherbst: Kernels that came from CL C 1.2 (or are annotated a certain way) need to use a fixed grid size too, just not necessarily statically defined
15:53 karolherbst: there is no restriction on the grid size afaik
15:53 jenatali: Just that it's fixed
15:53 jenatali: You can't have some work groups using different sizes than others. Varying work group size wasn't a thing until CL2
15:53 karolherbst: you mean block or grid?
15:54 jenatali: Er, block I guess
15:54 karolherbst: yeah.. we were talking about the block size
15:54 jenatali: Sorry this terminology is foreign to me still
15:54 karolherbst: yeah...
15:54 karolherbst: doesn't help that every vendor has different names either
15:54 karolherbst: I think block/grid is what nvidia uses and where it comes from, but not sure
15:55 karolherbst: maybe in the r600 days AMD used the same terms?
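(For anyone else tripping over the names: CUDA's block/grid correspond to OpenCL's work-group/NDRange and D3D12's thread group/dispatch, and a warp is a sub-group/wave. This mapping is an editorial aside, not part of the discussion.)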
15:55 jenatali: Anyway, block size has to be uniform for CL1
15:55 karolherbst: dunno :)
15:55 karolherbst: yes
15:55 karolherbst: but it was about the block size specified through clEnqueueNDRangeKernel
15:55 karolherbst: or I guess what the runtime picks for a given grid size...
15:56 jenatali: Right, you just can't use a different size for the last block
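Concretely, CL 1.x enforces this by requiring the global size to be evenly divisible by the local size in every dimension; a minimal sketch, with the helper and sizes being hypothetical:

```c
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>

/* CL 1.x rule: global_work_size must be a multiple of local_work_size
 * in each dimension, or the call fails with CL_INVALID_WORK_GROUP_SIZE
 * -- there is never a ragged last work-group. */
static cl_int enqueue_1d(cl_command_queue queue, cl_kernel kernel)
{
    size_t global_size = 1024;  /* 1024 % 64 == 0, so this is legal; */
    size_t local_size  = 64;    /* a global size of 1000 would not be */
    return clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                                  &global_size, &local_size,
                                  0, NULL, NULL);
}
```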
15:57 songafear: It actually wasn't that you couldn't do that; it's just that the hw wasn't being controlled to run kernels on different CUs without restrictions. They could only run copies of the same kernel, with no sync
15:58 jenatali: Which now that I re-read it, isn't what alyssa was asking about :)
15:58 karolherbst: :)
15:58 alyssa: (:
16:03 songafear: There were no synchronization primitives, nor scheduling to control the CUs under various loads. You could, though, vary the block size, in which case two streams run one kernel and five run some other kernel, and they would graduate together, or something like this
16:04 songafear: But that's just one developer's unimportant detail; it still works well for modern compute and graphics
16:06 songafear: But the best I had on all my chips was CL 2.1
16:08 songafear: Those are very good; CUDA is also very good, single source
17:57 illwieckz: can some kind person properly tag my issue here: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10224 😊
18:13 anarsoul: illwieckz: I added intel and iris labels
18:44 illwieckz: anarsoul, thanks a lot!
21:42 robclark: pinchartl: some qc things support MST.. but not sure if anyone is working on that yet
21:44 pinchartl: robclark: so someone will need to be the first to interface this with drm_bridge, and everybody is hoping someone else will do the work? :-)
21:47 robclark: that is plausible
21:47 pinchartl: it wouldn't be a first
21:48 pinchartl: when that happens to me, if I wait long enough and the frustration builds up too much, it usually explodes in a desire to rewrite the whole subsystem
21:49 pinchartl: (I'm not very frustrated with DRM/KMS if anyone is wondering ;-))
21:49 robclark: tbh I'm not 100% sure why we need a bridge in that case (MST is only with external dp, no physical bridges involved.. but I've managed to avoid the dp code)
21:49 pinchartl: I've managed to avoid the MST code so far
21:49 robclark: maybe abhinav__ is aware of some plans on the mst side of things
21:50 pinchartl: wouldn't it in theory be feasible to have a DSI-to-DP bridge with MST support?
21:50 pinchartl: DSI has virtual channels
21:51 pinchartl: (not that I wish anyone would make such hardware)
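(For context: DSI virtual channels are identified by a two-bit field, so a single DSI link can address up to four peripherals; that is what would make such a hypothetical DSI-to-DP MST bridge at least conceivable.)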
21:51 robclark: idk.. probably.. but I've not seen it
21:54 abhinav__: pinchartl robclark we do have plans to add DP MST support, perhaps in the next 1-2 months. I can shed more light on this at that time
21:54 pinchartl: abhinav__: nice
21:55 pinchartl: well, before saying nice, I should wait and see what it looks like, and whether it's one of those cases where the hardware designers should have stayed in bed on that fateful day
21:55 * pinchartl should go to bed
21:56 abhinav__: pinchartl I will remember to CC you on the changes for DP MST when we post them
21:56 pinchartl: thank you