00:11 zmike: DavidHeidelberg[m]: is there any way to see the log for trace jobs? I wanted to look at https://gitlab.freedesktop.org/mesa/mesa/-/jobs/47941379 but there's no info
00:57 anholt: zmike: you mean other than the problems html from the URL it prints to look at?
00:57 anholt: the python doesn't print useful info, it's all in the generated results.
07:40 tnt: Does anyone know what Intel means by Q Pitch?
08:01 austriancoder: karolherbst: thanks - vec8/vec16 are gone now \o/
08:28 karolherbst: nice
08:28 karolherbst: I've updated my patch, because there were a few corner cases, not sure if you tried the latest version
08:28 karolherbst: austriancoder: ^^
08:29 karolherbst: ohh, I guess you did
08:29 karolherbst: but yeah, they are also all gone on my end
08:29 austriancoder: karolherbst: pulled your branch about 40 min ago
08:30 karolherbst: I kinda hate how it's done as it relies on copy_prop not being smart enough to reverse the things...
08:31 karolherbst: kinda have to sit down and rethink the entire process, but if you are e.g. before io lowering, you still have vec8/16 derefs and we can't split them up... or if the vec4 is based on anything else which is still a vec8, copy_prop might reverse it
08:32 karolherbst: like imagine you have hardware which can do a vec16 load (like if your loads are actually byte based and you can do a 16 byte load, vec1x16)
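The lowering being discussed can be pictured with a tiny sketch (plain Python, not the actual NIR API; the helper name is made up): a wide vec8/vec16 value gets split into sub-vectors of at most 4 components.

```python
# Hypothetical illustration of vec8/vec16 -> vec4 lowering: split the
# list of scalar components into chunks of at most 4. Real NIR lowering
# passes work on SSA defs, not Python lists; this only shows the shape
# of the transformation.
def split_into_vec4(components):
    """Split scalar components into sub-vectors of <= 4 components."""
    return [components[i:i + 4] for i in range(0, len(components), 4)]
```

For example, a 16-component value becomes four vec4 chunks, which is exactly why a later copy-propagation pass can sometimes stitch them back together into the wide vector.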
10:10 DavidHeidelberg[m]: zmike: it seems like the downloaded trace didn't match its checksum; pretty happy to see that the check works sometimes. Otherwise it would run the trace and probably generate an incorrect screenshot due to the damaged trace. But yes - no logging for piglit.
11:11 zmike: DavidHeidelberg[m]: not sure exactly what that means?
11:11 zmike: why did it crash
11:21 DavidHeidelberg[m]: zmike: the downloaded file doesn't match the checksum provided by S3, so the file was (probably) damaged
11:21 DavidHeidelberg[m]: this shouldn't usually happen, maybe some corruption of filesystem on runner
11:22 DavidHeidelberg[m]: zmike: the "crash" here is only a way to communicate it, because it's not a fail, pass or timeout
11:23 zmike: ahh ok
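The checksum comparison being described amounts to something like this (a sketch, not the actual CI script; the file name and digest in the usage are invented):

```python
import hashlib

# Hedged sketch: verify a downloaded trace against the checksum the
# storage service reports. Algorithm choice ("md5") is an assumption;
# S3 ETags are MD5 for simple uploads but not for multipart ones.
def file_matches_checksum(path, expected_hex, algo="md5"):
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex
```

A mismatch here would be reported before replaying, which is what turned the job result into the "crash" status above.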
11:23 zmike: also would it be possible to change the output to not have the ' at the end of the problems.html link
11:24 DavidHeidelberg[m]: zmike: I have it in not-yet-merged MR :D
11:24 DavidHeidelberg[m]: I got pissed by it a few times already :P
11:25 DavidHeidelberg[m]: sneaked the commit into the 6.4 kernel uprev, but there was some fighting with Cheza boards in the last minute before merge :)
11:26 zmike: it's always bothered me
11:26 zmike: but not enough to do anything about it
11:36 zmike: DavidHeidelberg[m]: also have you had time to look at that blender trace?
11:44 DavidHeidelberg[m]: zmike: would you mind making the trace performance-testing compatible (3x2 frames)? I checked, it works for me, but it crashes iris when I try to replay it in perf mode
11:44 DavidHeidelberg[m]: if I dropped the last frame (which probably does some cleanup) I think it would work
11:44 DavidHeidelberg[m]: s/3x2/initial frame + 3x2/
11:45 zmike: so...7 frames?
11:45 DavidHeidelberg[m]: yup
11:46 DavidHeidelberg[m]: previous Blender traces didn't work reliably in performance testing, but maybe recent Blender builds will be better
11:46 zmike: ok updated
11:48 DavidHeidelberg[m]: nice, looks good. loop=1500, no extra memory consumption, 33 fps on Intel.. so far so good
11:49 DavidHeidelberg[m]: hehe, 150 runs, 60 fps.. I think I need to improve my laptop cooling :D
13:50 karolherbst: gfxstrand: I was told that for Vulkan SPIR-V it's technically valid to use vec8 and vec16? I was kinda under the impression it's all vec4 at most.
14:00 zmike: I'm not sure the first part of that is accurate
14:00 zmike: vec16 is only legal with the Kernel cap, and that cap is not legal in vulkan afaik
14:01 zmike: vec8 is a bit more nebulous, but I imagine if you try to use a vec8 somewhere then vvl will tell you why you can't
16:07 karolherbst: nah, it's actually in the spir-v spec, it's just a bit hidden :D
16:07 karolherbst: "Vector types must be parameterized only with 2, 3, or 4 components, plus any additional sizes enabled by capabilities."
16:07 zmike: yes
16:07 zmike: that's not hidden
16:07 zmike: and Vector16 requires the Kernel cap
16:08 zmike: Vector8 doesn't seem to exist in the base spirv spec
16:08 karolherbst: well.. you won't find it if you search for "OpTypeVector" :D
16:08 karolherbst: well..
16:08 zmike: sounds like someone needs to improve their spec-fu
16:08 karolherbst: Vector16: Uses OpTypeVector to declare 8 component or 16 component vectors.
16:08 zmike: mm fair enough
16:08 zmike: still not useful
16:08 karolherbst: yeah..
16:08 karolherbst: it's kernel only :)
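The rule being quoted can be condensed into a small sketch (hypothetical helper; it just restates the spec wording above: 2/3/4 components are always allowed, while 8 and 16 need the Vector16 capability, which in turn requires the Kernel capability, i.e. OpenCL-flavoured SPIR-V):

```python
# Hedged sketch of the OpTypeVector component-count rule from the
# SPIR-V spec: "Vector types must be parameterized only with 2, 3, or 4
# components, plus any additional sizes enabled by capabilities."
def vector_width_ok(n, capabilities):
    if n in (2, 3, 4):
        return True
    if n in (8, 16):
        # Vector16 is only declarable alongside the Kernel capability,
        # which is why it never shows up in Vulkan SPIR-V.
        return "Vector16" in capabilities
    return False
```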
16:11 alyssa: DavidHeidelberg[m]: panfrost-g52-vk job seems slow
16:11 alyssa: given that we've sunsetted the g52 vk experiment (and would remove the code from tree if not for $politics), it really shouldn't be a premerge job imho
16:11 alyssa: (It has 0 users and, unless I'm very mistaken, is not being developed.)
16:12 zmike: careful, those sound like the words of a panfrost developer
16:12 DavidHeidelberg[m]: Give me numbers or links :) I'll try to look at it today :)
16:12 alyssa: https://gitlab.freedesktop.org/mesa/mesa/-/jobs/47991585
16:13 alyssa: I don't understand why the job is running in pre-merge at all, the driver is not shipped and not intended to be shipped, it's served its purpose
16:13 zmike: karolherbst: there's a few more in https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/24839 that seem like they could easily be merged
16:13 DavidHeidelberg[m]: 20min is still +- in 15min range
16:13 karolherbst: probably
16:13 alyssa: DavidHeidelberg[m]: Um, wasn't it a 10 minute limit?
16:13 alyssa: Since when was 20 minutes ok?
16:13 DavidHeidelberg[m]: I guess because it's supported HW?
16:14 DavidHeidelberg[m]: Nah, we have 15min, but some jobs are ranging 10 - 20
16:14 alyssa: Ok, it should be 10 minutes
16:14 alyssa: Also, the job is super flaky because panvk on g52 is broken
16:14 karolherbst: zmike: you are free to rb any of those patches and I might extract more of them
16:14 alyssa: and again, nobody is going to be fixing it because it's not a developed project
16:14 DavidHeidelberg[m]: Hmmm..... and now back to reality ...
16:14 DavidHeidelberg[m]: Well, then we should remove it from mesa if it has no users
16:15 zmike: karolherbst: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/24839#note_2054681 ?
16:15 alyssa: DavidHeidelberg[m]: Yes, at least when I was still at Collabora, that was in the cards
16:15 alyssa: It's being kept in tree because deleting it would cause $problems, but the driver as upstreamed is unused and should not be in CI
16:16 karolherbst: zmike: if you think the global_binding one is in order, then yeah, I guess
16:16 karolherbst: _but_
16:16 karolherbst: I think it's causing issues with anv
16:16 DavidHeidelberg[m]: daniels: ^^ Alyssa message
16:16 karolherbst: or maybe that was something else
16:16 alyssa: and again, 20 minutes is unacceptable for the job
16:17 alyssa: that's not 20 minutes due to a retry, that's 20 minutes of actual deqp-runner time
16:17 alyssa: if we bumped the expectation from 10 minute limit, that was a mistake.
16:17 zmike: I think 10 has always been aspirational
16:17 DavidHeidelberg[m]: kk, we'll talk about it tomorrow, anyway in the worst case I'll cut it to 15
16:18 alyssa: so now that job, having flaked after 20 minutes (because panvk is known broken), is being retried
16:18 alyssa: so that will be 45 minutes for a job that is providing no value
16:18 alyssa: also, for some reason t860-traces is running too?
16:19 alyssa: since when is that not manual?
16:19 alyssa: does marge run manual jobs now?
16:19 alyssa: I see it listed as panfrost-midgard-manual-rules but it was triggered in https://gitlab.freedesktop.org/mesa/mesa/-/jobs/47991577
16:21 alyssa: frankly i don't know why we have these jobs at all, they're not providing value
16:22 alyssa: but zmike is right, i'm sounding like a panfrost developer
16:23 alyssa: I thought we had a simple rule: jobs go into premerge if we expect them to provide significant value, to take 10 minutes or less of execution time not including setup/teardown overhead, and to be robust against flakes
16:24 alyssa: something like panfrost-g52-vk fails all 3 principles. I don't know why it's there. Adding testing for the sake of adding testing is actively counterproductive.
16:24 alyssa: and we have a lot of that.
16:24 alyssa: but i should shut up before i get yelled at again for acknowledging the problems that lots of people are thinking
16:24 alyssa: so... never mind
16:25 karolherbst: could send an MR to disable it and then we merge it (or something)
16:25 alyssa: mesa ci is perfect, please don't punish me again like last time.
16:26 zmike: karolherbst: I did some other reviews
16:28 karolherbst: cool, thanks
16:52 DemiMarie: alyssa: what is the problem with panvk?
16:52 Lynne: does any hardware offer real 8-component vectors anyway?
16:55 jenatali: Fwiw I think the dzn jobs are closer to a 15 minute average, which I was told was fine when deciding what level of fraction to use
16:58 gfxstrand: Lynne: Some Mali hardware does (-:
16:58 gfxstrand: vec4, f16vec8, and u8vec16 (-:
17:12 Lynne: what about 16-component vectors?
17:16 DemiMarie: https://github.com/gpuweb/gpuweb/wiki/implementation-status Is there something wrong with GPU drivers on Linux that causes Chrome to disable WebGPU there?
17:17 DemiMarie: And if so, is this specifically the Nvidia problem?
17:22 robclark: DemiMarie: given vk's lack of guarantees about undefined behavior, do you _really_ want webgpu enabled without separate sandboxes for the usermode driver?
17:23 DemiMarie: robclark: what makes desktop Linux different than ChromeOS in this regard, especially with lacros?
17:25 DemiMarie: the answer to your question is of course “no”, but desktop Linux being inconsistent with ChromeOS is confusing.
17:26 robclark: I guess the main difference w/ CrOS is we know what drivers we ship and are in control of uprev'ing them... I'm also not sure what the status of gpu-process sandbox is w/ chrome/ium on desktop linux
17:27 robclark: (but even in the CrOS case I think we need more hardening)
17:29 DemiMarie: robclark: If you decided “we don’t support WebGPU with X11 or with non-Mesa drivers” I would support that. It’s the inconsistency that is confusing.
17:31 robclark: I don't think x11 really changes much.. as much as the random unknown driver thing.. even with mesa drivers we know what versions we ship and can push out an update... that said, I wasn't involved in this decision wrt. linux vs cros, just speculating
17:31 robclark: w/ distro linux, it could be some ancient version of mesa, for ex..
17:34 DemiMarie: robclark: _insert rant about LTS distros here_
17:37 robclark: ;-)
17:37 DemiMarie: Simplest solution for the user-space drivers would be to ship Chromium as a Flatpak and use the up-to-date version of Mesa in the relevant runtime.
17:38 DavidHeidelberg[m]: Demi: that will haunt me in my dreams :D
17:38 DemiMarie: David Heidelberg: what will?
17:39 robclark: webgpu haunts me in my dreams
17:39 DavidHeidelberg[m]: but if I get it right, you can give Chromium flatpak some beta runtime with recent Mesa
17:40 DemiMarie: the stable ones do not have recent Mesa? That’s a problem.
17:41 DemiMarie: Anyway, I’ll stop so that this does not take away people’s time any more than it has.
17:41 DavidHeidelberg[m]: I've been told you can somehow use the system Mesa3D, just haven't taken notes how :D
17:42 DavidHeidelberg[m]: GPU-Viewer reports 23.1.4
17:44 robclark: The real thing would be to have webgl/webgpu canvases spin off their own private sandboxed gpu-process.. we don't really want the usermode part of the gpu stack to need to be a security barrier
17:44 robclark: (ofc the sandbox thing itself takes a bit of maintenance and sometimes needs to change with mesa versions)
17:45 DemiMarie: Should Mesa have its own built-in sandbox?
17:48 robclark: I'm not _entirely_ sure how that could work.. I mean if dri_foo.so is dynamically linked against something that hasn't been allowed then we can't even load mesa in the first place.. but there are plenty others who know the mechanics of deploying the sandbox better than I do
17:50 robclark: usually that sort of thing doesn't change _too_ often so it hasn't been enough of a pain point for CrOS to try to come up with something better.. but as they say, patches welcome ;-)
18:06 austriancoder: robclark: have you had time to look at the isaspec doc PoC !23763? If we could define what information should be shown and how, I can spend some time on it
18:13 robclark: austriancoder: idk if we could generate a table, and then example syntax (which might just be dumping the display str w/ the instruction name plugged in??).. I think that would be easier to read.. ie:
18:13 robclark: https://usercontent.irccloud-cdn.com/file/NGoTZHS7/image.png
18:15 austriancoder: robclark: images .. hmm .. lets see
18:15 robclark: (that is from the Arm ARM, fwiw)
19:40 apteryx: are there free drivers for the likes of Matrox M9128 GPUs?
19:40 apteryx: c.f.: https://video.matrox.com/en/products/graphics-cards/m-series/m9128-lp-pcie-x16
19:43 Lynne: airlied: ping on the scaling list PR
19:47 gfxstrand: apteryx: Given that that's obviously a server card, it almost certainly works on Linux (they wouldn't be able to sell it otherwise) and probably out-of-the-box.
19:48 gfxstrand: apteryx: I wouldn't call it a GPU, though.
19:49 gfxstrand: It looks like pretty much just a display card.
19:49 airlied: yeah I've no idea, matrox kinda fail
19:49 airlied: but a lot of their gpus are now just rebadged other people's gpus
19:49 karolherbst: apparently it supports GL 2.0 :D
19:50 apteryx: karolherbst: if it works without proprietary binary firmware blobs, that's already better than AMD!
19:50 karolherbst: ehhhh....
19:50 karolherbst: no
19:50 apteryx: (for my needs)
19:50 karolherbst: yeah, if all you need is a display then yeah.. probably
19:51 gfxstrand: And D3D9!
19:51 karolherbst: though I suspect not on linux
19:51 karolherbst: but also kinda depends on what GPU that actually is
19:51 gfxstrand: But only Windows 7 and earlier because no WDDM2 (-:
19:52 karolherbst: it's probably some old AMD gpu
19:52 karolherbst: or something
19:52 gfxstrand: Could be a GeForce2 or similar
19:52 karolherbst: DDR3 128bit ehhhh
19:52 karolherbst: *DDR2
19:54 apteryx: gfxstrand: vista support is advertised for what it's worth: https://video.matrox.com/en/media/957/download
19:54 karolherbst: I kinda hate that they just don't say what it is
19:54 apteryx: could still be their own ASIC
19:55 karolherbst: mhhh... maybe
19:55 airlied: could be a g450 in disguise :-P
19:55 karolherbst: maybe it's also just software gl
19:55 airlied: probably one of their P series
19:55 airlied: which they never supported
19:57 apteryx: is giving them a call
19:59 karolherbst: it would be cursed if they have actual GL, it is their own ASIC, and somebody writes a mesa driver for it
20:06 milek7: >High-resolution two DisplayPort monitor support: Support resolutions up to 2560x1600 per output
20:06 milek7: high resolution, yeah...
20:09 karolherbst: well.. that's a 2012 card
20:09 karolherbst: or something
20:09 ids1024[m]: When they advertise "Native PCI express x16 performance" and Windows XP support, would that be PCIe... gen 1?
20:10 karolherbst: I wonder what they mean by "native PCI express" though there were a bunch of GPUs which just had a PCIe to PCI bridge on the board
20:13 apteryx: their 2008 line looks very similar in terms of supported resolutions and memory, and was made of their own ASICs: https://www.techpowerup.com/64033/matrox-introduces-five-new-quadhead-graphics-cards
20:13 ids1024[m]: My interpretation of "Native PCI express x16 performance" is that it actually uses 16 PCIe lanes?
20:17 apteryx: couldn't get them on the phone
20:23 apteryx: otherwise, from what year did the mainstream GPUs (AMD, nVIDIA) start requiring signed firmware?
20:30 apteryx: seems their latest offering is powered by Intel ARC: https://www.phoronix.com/news/Matrox-Intel-Arc-Graphics
20:33 glennk: apteryx, https://vgamuseum.info/images/demiurge/m9128/img0062.jpg looks like one of the parhelia variants
20:33 apteryx: and this message suggests their previous GPUs were using AMD ones: https://www.phoronix.com/forums/forum/hardware/graphics-cards/1384974-matrox-announces-luma-graphics-cards-powered-by-intel-arc-graphics?p=1385583#post1385583
20:35 apteryx: how hard would it be to make a crappy 2D video card using an FPGA?
20:35 glennk: anything newer than that card from matrox is probably rebranded radeons or geforce
20:35 karolherbst: apteryx: probably easier to use the CPU for that
20:35 karolherbst: (and more power efficient)
20:36 glennk: apteryx, https://github.com/Wren6991/PicoDVI
20:38 apteryx: karolherbst: I guess that stops being true the minute I'd want to implement video acceleration?
20:38 karolherbst: depends
20:38 karolherbst: you'd have to compete with modern CPUs or whatever CPU you have with all their SIMD units
20:39 karolherbst: it's probably not hard to be smart about all of it and make it power efficient
20:39 karolherbst: but these days we also don't really have any 2D APIs
20:39 karolherbst: and it's all going through 3D _anyway_
20:39 apteryx: oh!
20:39 karolherbst: I think nvidia is the only GPU vendor still having a native 2D interface
20:40 karolherbst: and it hasn't been updated for 10+ years
20:40 karolherbst: so if you want acceleration you have to think about 3D and potentially shaders and....
20:41 apteryx: hm, and that raises the bar for entry
20:41 karolherbst: at which point it's a hell of a project and you'd have to consider if it's worth spending time on :D
20:41 karolherbst: though
20:41 karolherbst: with X you still can get 2D
20:41 karolherbst: but then you need to write your own X driver
20:41 airlied: and it's kinda pointless
20:42 airlied: since most modern stuff uses paths that really need a 3d accel path
20:42 airlied: not seeing anyone implementing Xrender in hw :-P
20:43 karolherbst: I wonder if we could implement Xrender on top of nvidia's 2d stuff :D
20:43 karolherbst: I'm sure nvidia has done it
20:43 airlied: no I don't think their 2d engine is that featureful
20:44 karolherbst: it even has polylines
20:44 airlied: that's ancient X core rendering, not X render
20:44 karolherbst: ahh, fair enough
20:44 karolherbst: so more blending stuff?
20:44 airlied: alpha blending and compositing
20:44 karolherbst: let's see...
20:44 glennk: trapezoids with compositing and masking
20:45 karolherbst: yeah, it supports blending
20:45 karolherbst: it even has two blend modes, but I never figured out how to actually use it
20:46 glennk: i think all those methods are firmware emulation
20:46 glennk: shader turtles all the way down
20:46 karolherbst: maybe
20:46 karolherbst: but also not likely
20:46 karolherbst: or maybe it would be
20:46 karolherbst: dunno
20:46 glennk: silicon validation is pricy
20:47 karolherbst: sure, but if you layer it on shaders, why even keep it in hardware?
20:48 glennk: backwards compat for old OSes
20:48 karolherbst: on newer GPUs?
20:48 karolherbst: also.. nothing talks with it directly, it all goes through drivers
20:49 karolherbst: it also doesn't invalidate any of the 3D or compute state using that stuff
20:50 glennk: host visible state
20:51 karolherbst: given that all state generally lives in buffers, that's hard to believe
20:51 apteryx: seems one approach is going straight to vulkan: https://www.phoronix.com/news/Libre-RISC-V-February-Designing; would that be usable for a general purpose video card?
20:52 karolherbst: anyway, it makes more sense for it to be their dedicated stuff for fast-pathing certain operations
20:52 karolherbst: generally in memory
20:52 airlied: apteryx: yeah those guys don't really know what's going on
20:53 apteryx: back to the boring real world: I'm recommended this for a cheap, free software friendly GPU: https://www.phoronix.com/review/asus-50-gpu
20:53 apteryx: It seems an AMD RX 580X would also be a fine choice, running the radeon driver
20:53 apteryx: according to https://h-node.org/videocards/view/en/2024/Advanced-Micro-Devices--Inc---AMD-ATI--Ellesmere--Radeon-RX-470-480-570-570X-580-580X-/1/1/undef/2017/works_with_3D/undef/video-card-works/undef
20:53 karolherbst: Intel burned a lot of money on making their CPU ISA viable for 3D
20:53 karolherbst: the conclusion was: don't do it
20:56 karolherbst: anyway
20:56 karolherbst: are they still doing this RISC-V GPU thing or is that abandoned?
20:56 karolherbst: ahh, looks like it's dead
20:57 apteryx: this must be keeping Luke busy: https://redsemiconductor.com/
21:06 agd5f: in the R600 days we actually had a set of shaders that emulated the old 2D engine. You could actually use the old 2D pm4 packets if you loaded the right state and shaders. not sure if it ever got productized.
21:19 Lynne: karolherbst: there's some EU funded project for a custom from scratch GPU using the PPC ISA
21:20 airlied: yeah they got distracted into some sort of network accelerator sidetrack as well
21:20 Lynne: as for risc-v, it's still young, give it time, right now there are no CPUs out that you can buy with the vector extension
21:21 Lynne: though I do feel like the vector ISA may be a bit too flexible/rigid for a GPU
21:22 Lynne: they'd have to noop every instruction to set the vector size, and swizzles are afaik not supported
21:22 airlied: yeah like doing a risc-v gpu should really just be more around the effort of an open isa than reusing the risc-v isa
21:23 airlied: and creating a gpu/compute isa
21:23 airlied: that is scalar
21:36 Lynne: pretty much all popular RISC ISAs are unsuitable as a GPU ISA base I think, they all use 32-bit instructions which leaves no room for immediates to allow for swizzles
21:38 Lynne: x86 may still be the most optimal general purpose ISA to build a GPU ISA around, avx 512 has the right ideas about swizzles via k-registers which most instructions support
21:39 Lynne: as long as each wavefront has a decently sized uop cache the decoder footprint wouldn't be larger than a CPU's
21:40 karolherbst: no
21:41 karolherbst: it's not
21:41 karolherbst: the best thing about GPUs ISAs are that they are scalar
21:42 airlied: yeah scalar with subgroup ops seems to be the winner
21:42 karolherbst: you don't need swizzles
21:42 karolherbst: so that's a pointless argument against RISC
21:43 Lynne: really? what about vectors?
21:43 karolherbst: they don't exist
21:43 karolherbst: only in memory load/stores
21:43 karolherbst: thats all
21:43 HdkR: Vectors are a figment of your imagination~ They can't hurt you anymore~
21:43 ccr: "there is no spoon."
21:43 karolherbst: nvidia is purely scalar since nearly forever
21:43 Lynne: huh, I was under the impression GPUs had vector units for 4-component float vectors
21:43 karolherbst: I think pre nv50 is vectorized?
21:44 karolherbst: Lynne: silly GPUs do
21:44 karolherbst: but scalar GPU ISAs are always the winner
21:44 karolherbst: because vectorized ISA are just evidence of wrong mindset
21:44 Lynne: ah, alright, I stand corrected then, RISC-V is a good base, especially with compressed instructions
21:44 karolherbst: well
21:44 karolherbst: no
21:45 karolherbst: the thing is that most of the "let's use CPU ISAs on GPUs" projects miss the point on what makes GPUs fast
21:45 karolherbst: and it's not the ISA
21:45 karolherbst: it's the programming model
21:45 karolherbst: it's a moot point, because running CPU code on GPUs is a lost battle
21:46 karolherbst: GPU programming model is fast on GPUs, because parallelism is _implicit_
21:46 airlied: wish someone would tell that to luxcore :-P
21:46 karolherbst: like you run a shader per primitive and only think of each primitive as a scalar program
21:46 karolherbst: and the parallelism happens under the hood
21:48 karolherbst: in theory you can do great things even with x86 SIMD units, but you won't achieve it if you are using C as your language
21:48 karolherbst: because C's programming model maps poorly to GPUs and compilers rely on auto vectorization (which GPUs don't even need)
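The implicit-parallelism point can be shown with a toy sketch (plain Python stand-in for a GPU runtime; the names are invented): the user writes a scalar per-element kernel, and the launch machinery supplies the parallelism, so no auto-vectorization of user code is ever needed.

```python
# Hedged SPMD illustration: the "kernel" is a scalar program for one
# invocation; parallelism lives entirely in the launcher, which is what
# real GPU hardware/drivers provide under the hood.
def saxpy_kernel(i, a, x, y):
    # each invocation handles exactly one element
    return a * x[i] + y[i]

def launch(kernel, n, *args):
    # stand-in for the parallel hardware: one invocation per index
    # (a real GPU runs these in lock-stepped groups, concurrently)
    return [kernel(i, *args) for i in range(n)]
```

Compare this with C on a CPU, where the compiler has to reverse-engineer this structure from a serial loop before it can vectorize anything.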
21:49 Lynne: so what happened to VLIW GPUs?
21:49 karolherbst: they are dead
21:49 gfxstrand: They're a bad idea
21:49 Lynne: they explicitly specify parallelism
21:49 Lynne: what replaced them?
21:49 karolherbst: scalar ISAs
21:50 Lynne: oh, I read implicit as explicit
21:50 karolherbst: ahh, yeah
21:52 karolherbst: Lynne: anyway, I can recommend reading this series of blog posts: https://pharr.org/matt/blog/2018/04/18/ispc-origins it's not _that_ technical, but it explains the problems pretty well and what's the _actual_ problem
21:52 karolherbst: in theory you can also make SIMD ISAs work, they are just pointless and have drawbacks you can't really fix
21:52 karolherbst: at least for GPUs
21:59 airlied: yeah the auto vectorise story is great
22:01 gfxstrand:looks at Intel's ISA
22:01 gfxstrand: The "best" SIMD...
22:01 Lynne: thanks, that looks very interesting
22:02 Lynne: why did they split SPIR-V into kernel and non-kernel mode btw? did it have something to do with opencl and silly vector GPUs?
22:03 karolherbst: llvmpipe even uses the masked SIMD instruction thing described later in the series as well
22:03 karolherbst: yeah
22:04 karolherbst: CL is... weird
22:04 karolherbst: so
22:04 karolherbst: the main differences are that CL has vec8/vec16 types
22:04 karolherbst: and that CL doesn't require a structured CFG
22:04 karolherbst: everything else being different are just details
22:04 karolherbst: those are the two major things
22:05 karolherbst: but you can implement CL just fine on Vulkan SPIR-V
22:05 karolherbst: it's just more work
22:06 HdkR: gfxstrand: The best SIMD because you have effectively infinite encoding space to fix past mistakes :P
22:07 Lynne: I still think vectors have a place in GPUs, if you're the type of maniac who hand-writes assembly :)
22:07 karolherbst: no
22:08 karolherbst: it just doesn't make sense for GPUs
22:08 karolherbst: it makes sense for a couple of instructions, but not for general ALU stuff
22:09 karolherbst: the problem is really, what if you can't make use of the full vector hardware? then it's just wasted silicon with no inherent benefit, as everything is already threaded anyway
22:10 karolherbst: high end GPUs run like 10k+ threads at once
22:10 HdkR: Just scale the scalar GPU hardware wider if you need a wider "SIMD" unit :)
22:10 karolherbst: at which point SIMD just doesn't make a difference
22:11 karolherbst: the entire GPU is already a huge "SIMD" unit, you just describe what each of those threads do
22:11 karolherbst: (divergence is bad on GPUs though, for this reason)
22:13 Company: so you're saying the maniacs should code stuff with more threads instead of with more SIMD?
22:14 airlied: I did wonder if we should just do an enable "some kernel SPIRV" in vulkan, instead of trying to do all the CL stuff
22:14 Company: like, one thread for each color channel instead of using SIMD for rgba?
22:14 karolherbst: yeah... but...
22:14 airlied: find the most interesting bits, though I suspect the CL extended instruction set is probably a lot of it
22:14 karolherbst: airlied: though maybe the middle ground is to allow the CL extended instruction set
22:14 karolherbst: but...
22:15 airlied: karolherbst: yes that + vec8/16 but definitely structured cfg :-P
22:15 karolherbst: yeah.. thing is... we can also just handle it in the driver for everybody instead :D
22:15 karolherbst: not that they would do anything differently
22:15 karolherbst: GPU optimized libclc might be a good argument
22:16 karolherbst: but then you can also just write a vk extension adding that instruction set
22:18 karolherbst: but yeah... we'll see how it turns out
22:19 karolherbst: I really should upstream rusticl on zink and file for official conformance :D
22:19 karolherbst: radv is just being a bit annoying with more crashes than anv or lvp
22:19 karolherbst: and running on nvidia is kinda slow for whatever reason
22:27 alyssa: karolherbst: ruzticl conformance .. i hope that deprecates clvk >:)
22:28 karolherbst: heh
22:28 karolherbst: I'll just try to be conformant before them
22:28 karolherbst: I think I ignored it for long enough
22:28 karolherbst: conformant rusticl on zink by this XDC?
22:28 alyssa: >:)
22:28 alyssa: 30 day window, yeah you could make it
22:28 karolherbst: I don't have _that_ much time
22:28 karolherbst: but also
22:29 karolherbst: I think I need to fix like one or two bugs?
22:29 karolherbst: I'm mostly there
22:30 karolherbst: there are just annoying things to figure out
22:30 karolherbst: "Pass 2374 Fails 69 Crashes 11 Timeouts 0" on anv, where 60 fails are CL_FILTER_NEAREST fails, which the conformance test doesn't check at all
22:30 alyssa: nice
22:31 karolherbst: so 9 fails and 11 crashes out of 2400
22:31 alyssa: karolherbst: when you get a chance btw I want to talk CL on M1 >:)
22:31 karolherbst: radv is causing more issues.. and I manage to crash Nvidia
22:31 karolherbst: alyssa: I should upstream my branch :D
22:31 alyssa: we now have ES3.1 class compute/images + working 8-bit/16-bit/64-bit tested against dEQP-VK
22:31 karolherbst: yeah...
22:32 alyssa: right now the thing I'm most worried about are float controls
22:32 karolherbst: I have it all working
22:32 alyssa: I have no idea if there's flush-to-zero in hardware, if it's always on, never on, if we can control in the driver..
22:32 alyssa: I'm nervous about those contractions tests
22:32 karolherbst: ohh, that's fine
22:32 alyssa: can't.
22:32 alyssa: won't.
22:32 alyssa: shouldn't.
22:32 karolherbst: you don't need flush-to-zero supported
22:32 karolherbst: it's entirely optional
22:33 alyssa: the problem is that our libclc build assumes ftz
22:33 alyssa: and I don't want to sign up for building a denorm-aware libclc
22:33 karolherbst: ohhh...
22:33 alyssa: for Mali, I had to enable the hardware ftz to pass the contractions tests
22:33 karolherbst: uhhh...
22:33 karolherbst: I don't think I had issues with that?
22:33 alyssa: I know it's not an opencl requirement but I think it effectively is a rusticl one ..
22:34 karolherbst: yeah.. at this point I don't support denorms yet
22:34 alyssa: karolherbst: See b261a185508 ("panfrost: Honour flush-to-zero controls on Valhall")
22:35 bnieuwenhuizen: dcbaker: what happened to 23.2? I see rc2 happened more than 3 weeks ago. Any issues we can help with?
22:35 alyssa: karolherbst: Presumably, you don't see issues because the float_controls_execution_mode is honoured by the underlying drivers
22:35 karolherbst: alyssa: mhh.. I can certainly take another look, now that I have working vec8/16 to vec4 lowering
22:35 alyssa: karolherbst: what's that got to do with anything?
22:35 alyssa: AGX is scalar
22:35 alyssa: vec8 stuff gets lowered via alu_width + mem_width
22:36 karolherbst: not all of it
22:36 karolherbst: uhh
22:36 karolherbst: maybe it does now
22:36 karolherbst: but you were left with some leftover vec8/16 stuff
22:36 karolherbst: I never bothered checking actually
22:37 karolherbst: anyway
22:37 karolherbst: I have some leftover patches I can submit an MR for
22:37 DavidHeidelberg[m]: btw. the Tellusim StarWars engine (gles + vk) crashes on recent mesa on Intel (but I have a nightly build without debug). It worked on stable mesa
22:37 DavidHeidelberg[m]: I'll compile with debug and drop logs
22:38 karolherbst: alyssa: though what I really need is a proper agx_get_compute_state_info implementation
22:38 karolherbst: specifically that `max_threads` property
22:39 kisak: DavidHeidelberg[m]: file an issue report on gitlab if it hasn't been already so that your findings aren't buried in the backscroll.
22:41 DavidHeidelberg[m]: kisak: sure sure, I also asked if there is a more updated version from Tellusim (this is 20221109), anyway it should still work, let me compile one beautiful mesa with intel vk and iris :P
22:46 alyssa: karolherbst: should be easy
22:46 alyssa: In the compiler agx_occupancy_for_register_count gives you the max_threads (grep for it in the src)
22:47 karolherbst: nice
22:47 alyssa: so just need to add that to agx_shader_info and then you'll get the value as part of the agx_compiled_shader
22:47 karolherbst: yeah, that should be good enough
22:47 alyssa: can't 1000% guarantee correctness but it should be a good enough approximation
22:47 karolherbst: the CTS will run into issues if that value is too high
22:48 alyssa: good, I want to hear about those since if I got this wrong, perf will suffer
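The max_threads-from-registers idea can be sketched like this (all constants are invented for illustration, not real AGX parameters; the real logic is in agx_occupancy_for_register_count as mentioned above):

```python
# Hedged occupancy sketch: the register file is shared, so the more
# registers one thread uses, the fewer threads can be resident, rounded
# down to the hardware's scheduling granularity. Numbers are made up.
REGFILE_PER_CORE = 4096   # invented: registers available per core
THREAD_GRANULE = 32       # invented: threads scheduled in groups of 32

def max_threads_for_register_count(regs_per_thread):
    if regs_per_thread <= 0:
        return REGFILE_PER_CORE  # degenerate input; clamp elsewhere
    threads = REGFILE_PER_CORE // regs_per_thread
    # round down to a whole scheduling group, but never below one group
    return max(THREAD_GRANULE, (threads // THREAD_GRANULE) * THREAD_GRANULE)
```

Reporting a value that's too high here is what the CTS would trip over, and too low a value is the perf hit alyssa mentions.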
22:48 illwieckz: speaking about rusticl on zink, once terakan works, would it be possible to use rusticl on zink to get opencl on terascale?
22:49 illwieckz: or the vulkan version/features supported would not be enough?
22:49 karolherbst: uhhh
22:49 karolherbst: if it supports bda
22:49 illwieckz: what's bda?
22:49 karolherbst: real pointers
22:49 illwieckz: ah ok
22:49 karolherbst: which I doubt terascale could support
22:50 karolherbst: not sure
22:50 karolherbst: you kinda need an MMU for that and all that
22:50 illwieckz: now that you say it I feel like I already asked and we already had this conversation 🤷‍♀️️
22:50 karolherbst: because with CL you can use arbitrary pointers
22:50 karolherbst: probably
22:51 illwieckz: the information flows in my brain as if that's not the first time it happened
23:16 karolherbst: do gallium drivers expect the frontend to call nir_lower_pack?
23:16 karolherbst: at least the glsl linker does it mhhh...
23:18 karolherbst: at least zink seems to expect that..
23:56 Lynne: airlied: tests passed, but I think someone needs to press the rebase button