00:12 mareko: gfxstrand: what is NAK?
00:13 airlied: nvidia kompiler
00:13 mareko: so not NCO
00:15 mareko: maybe the NVPTX LLVM backend is the answer, who knows
00:16 karolherbst: the answer to what? ptx is a high level language
00:18 airlied: where's my ptx to spir-v translator :-P
00:19 airlied: https://github.com/gthparch/NVPTX-SPIRV-Translator oh someone wrote it :-P
00:19 karolherbst: I wonder how often we can translate in circles until something crashes
00:20 airlied: or someone could write a tcg-like layer for nvidia binaries :-P
00:21 karolherbst: uhhh
00:35 alyssa: karolherbst: dozen + vkd3d all the things
00:36 karolherbst: mhhh
00:37 alyssa: or vkd3d + dozen if you prefer
01:20 idr: karolherbst: It's like that game of translating some bit of text through various human languages until you get total gibberish.
01:27 alyssa: Es como ese juego de traducir un poco de texto entre varios idiomas humanos hasta que no tenga sentido
01:50 idr: https://media.tenor.com/ts_UxTASGroAAAAC/cant-understand-your-accent-spongebob.gif
03:03 tuxayo: Hi, hola, saluton :) Does anyone know what could be missing in an AppImage to have Vulkan support? Someone worked on an AppImage for the 0ad game, and when enabling the Vulkan renderer it doesn't detect support (it probes for VK_KHR_surface) on seemingly any Intel or AMD GPU (so it seems to have something to do with Mesa).
03:03 tuxayo: And it falls back on OpenGL.
03:03 tuxayo: On an NVIDIA GPU it works (I'm assuming it was the non-libre driver).
03:03 tuxayo: So it can find the right mesa stuff when using OpenGL but when using Vulkan it doesn't find it. But it does find the non-libre NVIDIA Vulkan driver...
03:03 tuxayo: Any clue? Here is the main build script, and it seems to do nothing in particular to give us good Mesa OpenGL support: https://github.com/0ad-matters/0ad-appimage/blob/trunk/workflow.sh
03:03 tuxayo: And here is the head-scratching so far:
03:03 tuxayo: https://discourse.appimage.org/t/vulkan-disabled-when-running-0ad-appimage-with-intel-or-amd-chipsets/2908
03:03 tuxayo: https://github.com/0ad-matters/0ad-appimage/issues/19
03:23 airlied: tuxayo: probably missing the vulkan loader
03:27 airlied: but probably also need the mesa vulkan drivers
03:27 airlied: not sure how appimage works there
03:31 airlied: tuxayo: maybe also the headers to build against, not sure how NVIDIA works
03:44 tuxayo: airlied: thanks for the hints. So likely linuxdeploy/AppRun, which builds the AppImage, takes care of the basic Mesa stuff but lacks the Vulkan loader / Mesa Vulkan drivers / headers
03:50 airlied: yeah if I had to guess
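For context: the Vulkan loader (libvulkan.so.1) discovers drivers through ICD manifest JSON files, normally under /usr/share/vulkan/icd.d/, so an AppImage needs to bundle the loader, the Mesa Vulkan driver libraries, and manifests whose library_path resolves inside the image. A sketch of what such a manifest looks like (path and api_version are illustrative, not taken from the 0ad AppImage):

    {
        "file_format_version": "1.0.0",
        "ICD": {
            "library_path": "/usr/lib/x86_64-linux-gnu/libvulkan_intel.so",
            "api_version": "1.3.230"
        }
    }

That would also be consistent with the NVIDIA case working: the proprietary driver's Vulkan manifest and libraries presumably live on the host, outside the image.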
04:03 marcan: looks like gitlab is unhappy...
04:34 Nefsen402: It's an issue for me as well so it isn't localized
07:49 javierm: tzimmermann: hi, I haven't reviewed your optional fbdev series yet, but wondered what you did differently from what I attempted in https://lore.kernel.org/lkml/20210827100027.1577561-1-javierm@redhat.com/t/
07:51 javierm: tzimmermann: ah, I see. You want to hide all the fbdev uAPI (/dev/fb?, sysfs, etc) while I tried to only disable the "real" fbdev drivers (but keeping emulated fbdev uAPI)
07:51 javierm: tzimmermann: so you plan to only keep the bare minimum to support fbcon, makes sense
07:53 tzimmermann: javierm, it occurred to me that we spoke about that change at some point. but i didn't remember that you even sent a patchset. i'll give you credit in the next iteration of the patchset.
07:53 tzimmermann: javierm, i'm not sure what the difference is. but i was just reading the old discussion and I left a comment about the existence of the fb device
07:54 tzimmermann: in my patches i remove all of that. everything in devfs, sysfs and procfs is gone
07:54 javierm: tzimmermann: yeah, I wasn't sure about the difference, but after reading your cover letter I understand the difference in approach now
07:54 javierm: tzimmermann: I tried to keep the emulated DRM fbdev while you are also getting rid of that
07:54 tzimmermann: fb_info will only be a data structure that connects the framebuffer device with fbcon
07:55 javierm: tzimmermann: I think I did that because something still depended on it (maybe plymouth?) but that has been fixed already
07:55 javierm: so I agree that your approach is better: get rid of all the uAPI for fbdev and just keep fbcon for now
07:55 tzimmermann: there's not much in userspace that requires fbdev. i guess most of it doesn't even support it
07:56 javierm: tzimmermann: yeah
07:56 javierm: tzimmermann: I see that you will post a v2. I'll review that then
07:56 tzimmermann: two thirds of these patches are actually bugfixes :)
07:56 javierm: :)
07:57 tzimmermann: javierm, your review is very welcome. i'll keep the current version up a bit longer.
08:19 javierm: tzimmermann: sure, I'll review v1 then
08:22 MrCooper: DavidHeidelberg[m]: my main point was that the commit logs don't accurately reflect the situation and trade-off being made
08:36 tzimmermann: thanks, javierm
09:01 siddh: Hello, can anyone merge the revert commit [1] I had sent some time ago regarding drm macros? IIRC, the author had blindly used coccinelle and did not consider the unintended change. It is part of the drm macro series, but even if the series is not considered for merge, the revert should be since the change was incorrect.
09:01 siddh: [1] https://lore.kernel.org/dri-devel/e427dcb5cff953ace36df3225b8444da5cd83f8b.1677574322.git.code@siddh.me/
09:36 jani: siddh: it no longer applies, needs a rebase
09:37 dj-death: gfxstrand: do you remember what led you to disable compression for Anv's attachment feedback loop?
09:38 siddh: @jani: oh okay... will send after doing a rebase
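For reference, a typical respin flow for a single patch like this (branch name and version are illustrative; the commands themselves are standard git):

    git fetch origin
    git rebase origin/drm-misc-next
    git format-patch -v2 -1
    git send-email --to=dri-devel@lists.freedesktop.org v2-0001-*.patch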
09:38 dj-death: gfxstrand: there is a comment about aux data being handled separately
09:38 dj-death: gfxstrand: but that makes no sense to me
09:39 dj-death: gfxstrand: texturing & rendering having different caches does, but I'm failing to see where the compressed data fits in there
09:58 javierm: tzimmermann: not sure I got your comment about the page_size on the ssd130x driver; those aren't system memory pages but the ssd130x controller's "pages", which is how it divides the screen
09:59 javierm: tzimmermann: https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/solomon/ssd130x.c#L442
10:00 javierm: tzimmermann: also, the GEM shmem allocation is only done for the shadow buffer, and that's bigger than the actual screen, since it's DRM_SHADOW_PLANE_MAX_{WIDTH,HEIGHT}
10:00 javierm: or am I wrong on that?
10:31 tzimmermann: javierm, what i mean is: userspace allocates a GEM buffer, say 800 x 600. those sizes are aligned to a multiple of 64, so you'd allocate a memory block of 832 x 640 bytes. if these sizes are not divisible by 'page_size' and you do a DIV_ROUND_UP, you might end up with values that refer to areas outside the memory, for example during a pageflip's memcpy(). i don't know if that can actually happen in the driver. i was just concerned that the page_size might interfere here
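A minimal sketch of the overrun being described, assuming ssd130x-style 8-row pages (DIV_ROUND_UP is the standard kernel macro; the numbers are illustrative):

    #define DIV_ROUND_UP(n, d)  (((n) + (d) - 1) / (d))

    unsigned int page_size = 8;    /* controller pages are 8 rows tall */
    unsigned int height    = 601;  /* not a multiple of page_size */
    unsigned int pages     = DIV_ROUND_UP(height, page_size); /* 76 pages = 608 rows */

    /* copying pages * page_size * pitch bytes would touch 7 rows past the
     * end of a buffer sized for 601 rows, unless the allocation is already
     * padded up to the page boundary */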
10:36 Hazematman: Hey, I'm working on a driver that doesn't have native support for PIPE_FORMAT_R32G32B32_FLOAT. If an OpenGL app requests that format as an RB, gallium seems to convert it to PIPE_FORMAT_R32G32B32A32_FLOAT (which is supported). Does anyone know where this happens? I've been trying to dig through the gallium infrastructure to see where it handles surface conversion, to find whether it's possible to access the natively requested format. Any guidance on where I should look would be appreciated
11:11 danylo: Hazematman: I think it chooses the compatible format with `choose_renderbuffer_format`. I guess to see where it handles the mismatch between formats you'd have to search for where `->InternalFormat` is used.
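A hedged sketch of the kind of fallback being discussed, not the actual Mesa code path (pick_rt_format is a hypothetical helper; is_format_supported is the real pipe_screen hook):

    #include "pipe/p_defines.h"
    #include "pipe/p_screen.h"

    static enum pipe_format
    pick_rt_format(struct pipe_screen *screen, enum pipe_format fmt)
    {
       if (screen->is_format_supported(screen, fmt, PIPE_TEXTURE_2D,
                                       0, 0, PIPE_BIND_RENDER_TARGET))
          return fmt;
       /* pad three-component formats out to four; the extra channel is
        * allocated but never read back */
       if (fmt == PIPE_FORMAT_R32G32B32_FLOAT)
          return PIPE_FORMAT_R32G32B32A32_FLOAT;
       return PIPE_FORMAT_NONE;
    }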
11:38 javierm: tzimmermann: ah, got it. Good point, I'll check whether that can happen and, if it's a possibility, fix it on top. Thanks!
12:42 swick[m]: Lyude: I'm looking at https://gitlab.freedesktop.org/drm/intel/-/issues/8425 again. The intel eDP proprietary backlight control has a bunch of unused registers and control bits which sound like they could be the cause.
12:43 swick[m]: jani: ^
12:43 swick[m]: are there more details on them? I don't have the hardware to test any of that...
14:34 mareko: DavidHeidelberg[m]: do any amd CI tests use LLVM < 15?
14:39 DavidHeidelberg[m]: mareko: I don't think so, so far all images are 15
15:43 mareko: great, thanks
16:18 mareko: karolherbst: when do you think we can drop clover support from radeonsi?
16:35 karolherbst: mareko: I want to wait until proper function calling support
16:36 karolherbst: that's more or less the biggest regression compared to clover
16:37 karolherbst: I kinda plan to prototype this with llvmpipe and radeonsi given they both use LLVM, so it shouldn't be too hard to do, but nir needs some fixes here and there
16:38 mareko: NIR->LLVM can't do function calls
16:38 karolherbst: I know
16:39 karolherbst: but without that we sometimes get shaders with like 2 million SSA values and RA eats through RAM and takes hours
16:40 karolherbst: there are still some unknowns on how to do things, but my initial plan was to kinda only have function calls between the kernel and libclc
16:40 karolherbst: and maybe only for functions being of specific size
16:40 karolherbst: some of those libclc functions are massive and even use LUTs
16:41 mareko: I don't know if LLVM supports function calls with the Mesa (non-native) ABI
16:42 mareko: LLVM compiles shaders slightly differently for radeonsi, RADV, PAL, and clover (same as ROCm)
16:44 mareko: there is an LLVM target triple that we set, radeonsi sets amdgcn--, RADV sets amdgcn-mesa-mesa3d, and I don't know what clover sets
16:44 karolherbst: clover has a CAP for it: PIPE_COMPUTE_CAP_IR_TARGET
16:45 karolherbst: it's amdgcn-mesa-mesa3d as it seems
16:45 mareko: ok
16:45 karolherbst: we don't need an ABI because I'm not planning to link GPU binaries, so as long as the final binary works it's all fine
16:45 karolherbst: or rather, not a stable one
16:46 karolherbst: so whatever llvm does internally for function calls doesn't really matter here
16:46 mareko: arsenm on #radeon might know if amdgcn-- supports function calls
16:48 karolherbst: besides that we have a little delete clover tracker here: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/19385
16:48 karolherbst: fp16 is kinda the only other thing missing, but that should be fairly trivial to add
18:07 mareko: karolherbst: given what arsenm said, the only missing thing is function call support in ac_nir_to_llvm and probably adjacent places
18:23 airlied: karolherbst: adding functions to llvmpipe was a bit of a pain; radeonsi might be easier, at least as long as it's using llvm
18:23 airlied: but I think with llvmpipe the overheads of sticking stuff onto the stack were quite noticeable
18:36 mareko: wow the Marge queue has 15 MRs
18:36 karolherbst: airlied: yeah.. that's why I only want to turn calls to huge libclc functions into proper calls
18:37 mareko: for Mesa
18:37 karolherbst: where copying them multiple times would just hurt everything
18:40 karolherbst: airlied: I kinda want to figure out why those luxmark benchmarks explode in size and just do function calls to deal with that problem
18:50 mareko: radeonsi also unrolls aggressively
18:50 mareko: see si_get.c
18:50 mareko: loops with up to 128 iterations are unrolled
18:51 mareko: probably regardless of the loop body size
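The knob in question lives in radeonsi's NIR compiler options; an abbreviated sketch (max_unroll_iterations is the real NIR option field, the surrounding struct is elided; see si_get.c for the actual code):

    static const nir_shader_compiler_options nir_options = {
       /* ... */
       .max_unroll_iterations = 128,
    };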
18:51 karolherbst: mhhhh, that would be.. bad
18:52 karolherbst: anyway, I didn't check why those shaders explode in size, I just know they end up with millions of SSA values
18:52 karolherbst: maybe I should do that
18:53 karolherbst: mareko: seems like opt_loop_unroll checks for 26 instructions
18:53 karolherbst: ehh wait
18:53 karolherbst: iterations
18:54 karolherbst: or is it instructions?
18:54 karolherbst: yeah.. it's instructions
19:16 karolherbst: mareko: btw, will you create the MR for the vectorization stuff?
19:18 HdkR: karolherbst: How does one set up rust in meson's cross files for 32-bit? Or do I just ignore 32-bit rusticl?
19:18 karolherbst: HdkR: good question, but I guess you just set rustc and set a 32 bit target as a compiler flag
19:18 karolherbst: I actually did that...
19:19 HdkR: Currently meson just complains that `rust compiler binary not defined in cross or native file`
19:19 karolherbst: ahh yes.. HdkR: rust = '/home/kherbst/.rustup/toolchains/1.59-i686-unknown-linux-gnu/bin/rustc' 🙃
19:20 HdkR: ah
19:21 karolherbst: I think you can potentially also set the target, but I think just pointing to a toolchain is the proper way.. dunno.. I guess it depends on how your distribution handles it if you are not using rustup
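A minimal sketch of the relevant cross-file section (toolchain path illustrative, mirroring the one quoted above; meson also accepts a command array):

    [binaries]
    rust = '/home/user/.rustup/toolchains/1.59-i686-unknown-linux-gnu/bin/rustc'
    # or, equivalently, pass the target explicitly:
    # rust = ['rustc', '--target', 'i686-unknown-linux-gnu']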
19:22 HdkR: Currently poking around at ArchLinux
19:23 HdkR: ah, blocked by them not supporting spirv-tools for 32-bit and I'm too lazy to build that :)
19:24 karolherbst: :)
19:24 HdkR: Oh well, not too concerned about 32-bit CL anyway
19:25 karolherbst: one user actually filed a bug, because some 32 bit windows app ran into problems with rusticl
19:26 HdkR: I guess they can figure that out if they want it running under FEX :P
19:26 karolherbst: :D
19:26 karolherbst: fair enough
19:26 karolherbst: at some point I also have to check out FEX on my macbook
19:26 HdkR: Finally getting around to creating an Arch image so Asahi users can have a nicer experience
19:26 karolherbst: but CL doesn't run there very well except for llvmpipe
19:27 karolherbst: ahh, cool
19:27 karolherbst: but the new and hot asahi distribution is fedora based :P
19:27 HdkR: Next step Fedora I guess
19:47 DemiMarie: Is the simplest solution to the LLVM problems to stop using LLVM? Walter Bright wrote a C frontend to Digital Mars D in ~5000 lines of D, and I suspect Mesa has far more code than that that just works around LLVM problems. LLVM isn’t magic, and from what I have read it seems that its optimizations don’t really do anything useful. If one needed a C++ frontend that would be another matter, but my understanding is that none is needed.
19:49 karolherbst: 1. we'd still have to maintain it 2. llvmpipe 3. C isn't just the language
20:01 karolherbst: I won't say no if somebody comes around and writes a full C compiler + all the OpenCL API nonsense bits, but I won't do it
20:02 DemiMarie: I see
20:02 DemiMarie: Clang having OpenCL support is not something I expected.
20:02 karolherbst: yeah.. we just use clang's support there
20:02 karolherbst: they deal with most of the extension + header nonsense
20:03 karolherbst: well.. builtins at this point, using the headers is slower than using the new and fancy stuff, which isn't headers
20:03 karolherbst: kinda don't want to replicate all of that
20:05 karolherbst: also.. writing a new C frontend is all cool and everything, but 5k just for parsing/lexing? kinda brutal
20:06 karolherbst: anyway.. the CL bits dealing with LLVM are small, most of it is dealing with spir-v stuff.
20:06 karolherbst: the part where LLVM matters more is on the backend side
20:06 HdkR: Considering lexing is my least favourite part, I'll never do that :D
20:07 karolherbst: llvmpipe and radeonsi do a lot of LLVM backend stuff, none of it is even remotely frontend related
20:07 karolherbst: radeonsi problem will be solved with ACO, probably
20:07 karolherbst: and to replace LLVM's use in llvmpipe we'd have to support _multiple_ CPU architectures with all their nonsense
20:07 karolherbst: no thank you :D
20:09 DemiMarie: Yeah LLVM is awesome at generating CPU code.
20:10 HdkR: LLVM and the CPU side, great
20:10 karolherbst: we even found an auto vectorization issue recently ...
20:10 karolherbst: now radeonsi calls a nir pass to vectorize so LLVM can still mess up and we won't care
20:11 DemiMarie: Not surprising. I imagine llvmpipe generates very easily vectorizable code.
20:11 karolherbst: probably
20:11 karolherbst: airlied: you might want to call nir_opt_load_store_vectorize :D
20:11 karolherbst: in llvmpipe
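A hedged sketch of wiring that pass up (callback signature and option fields recalled from the NIR API of that era; worth double-checking against nir.h):

    #include "nir.h"

    /* return true to let the vectorizer merge the two accesses */
    static bool
    mem_vectorize_cb(unsigned align_mul, unsigned align_offset,
                     unsigned bit_size, unsigned num_components,
                     nir_intrinsic_instr *low, nir_intrinsic_instr *high,
                     void *data)
    {
       return bit_size * num_components <= 128; /* cap at a 128-bit access */
    }

    const nir_load_store_vectorize_options opts = {
       .modes = nir_var_mem_global | nir_var_mem_shared,
       .callback = mem_vectorize_cb,
    };
    NIR_PASS_V(shader, nir_opt_load_store_vectorize, &opts);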
20:13 DemiMarie: Curious: what does llvmpipe-generated code wind up bottlenecking on?
20:16 jenatali: Yeah WARP's JIT backend for multiple CPU architectures is a mess...
20:16 HdkR: llvmpipe is usually bottlenecked on vertex processing isn't it?
20:17 HdkR: Since that was one of the things that SWR targeted as an improvement
20:35 karolherbst: antoniospg____: btw, did you make some progress on fp16 support?
20:35 airlied: for most workloads it bottlenecks on memory bandwidth around fragment shading
20:36 airlied: there are some vertex heavy workloads where binning hits hard
20:36 Lynne: isn't the mess in writing custom jit mostly in the platform ABI differences?
20:36 karolherbst: good thing is: we have no ABI to care about
20:37 airlied: yeah I'd hate to have to write backends for every processor in mesa itself
20:37 karolherbst: anyway...
20:37 airlied: karolherbst: not sure, llvmpipe doesn't do vectors like others do vectors
20:38 karolherbst: airlied: you still want to give nir_opt_load_store_vectorize a go :D somehow llvm is too dumb to merge loads and ditch repeated address calculations in loops
20:38 karolherbst: nah.. it has like _nothing_ to do with vectors
20:38 karolherbst: airlied: https://gitlab.freedesktop.org/mesa/mesa/-/issues/9139#note_1940698 and following comments
20:38 karolherbst: just vectorizing loads ditches some alus on address calculations
20:38 karolherbst: it's very dumb
20:39 karolherbst: might be some amdgpu backend specific issue though
20:39 airlied: yeah as I said, when llvmpipe translates from nir it doesn't do a whole lot of address translation itself
20:39 karolherbst: mhh, fair enough then
20:39 airlied: but yeah I should throw it in at some point
20:39 airlied: but I've no real way to notice it working :-P
20:39 karolherbst: I'm just super surprised it even matters for radeonsi
20:39 airlied: shaderdb someday :-P
20:40 karolherbst: heh
20:40 karolherbst: maybe I should check with luxmark
20:44 DemiMarie: airlied: bottlenecking on memory bandwidth explains why llvmpipe works so well on Apple Silicon.
20:44 DemiMarie: Because they have loads of it.
20:44 karolherbst: kinda, but less on the CPU side sadly
20:45 HdkR: 800GB/s is very much in dGPU territory :)
20:45 DemiMarie: Why can GPUs have so much better memory bandwidth?
20:45 karolherbst: because it needs more
20:45 karolherbst: the CPU seems to have slower access but it might also be because the CPU is too slow
20:46 DemiMarie: I’m more interested in what is different about the GPU memory systems, especially on iGPUs where the DRAM and DRAM controllers are identical.
20:46 HdkR: To note, CPUs tend to have lower latency on their memory accesses
20:47 DemiMarie: Why is there a latency vs throughput tradeoff there?
20:47 karolherbst: CPUs cheap out on memory bandwidth because they still use DIMMs
20:47 airlied: the other things GPUs have is tiling
20:47 karolherbst: and it's all very limiting
20:47 karolherbst: Apple uses 128 bit for memory transfers
20:47 karolherbst: where on x86 you always get 64
20:47 airlied: tiled textures are a big advantage if you are doing all the address translation in hw
20:48 karolherbst: and the "channel" situation with x86 memory is also just silly
20:48 DemiMarie: karolherbst: for iGPUs both the CPU and GPUs have the same DRAM chips, so DIMMs are not relevant here.
20:48 karolherbst: but how do you connect the memory?
20:48 karolherbst: the DIMM spec specifies memory operation latencies + transfer rates
20:49 karolherbst: can't really fix that
20:49 karolherbst: so you are just stuck with whatever that uses
20:49 DemiMarie: How is this relevant? My point is that iGPUs have the same memory the CPU does, so it must be something other than the RAM chips.
20:49 karolherbst: CPU memory is _slow_ on x86 systems
20:49 DemiMarie: what part of the CPU memory is slow?
20:49 karolherbst: you get like 50 GB/s on normal consumer systems
20:49 karolherbst: the DIMM :)
20:50 DemiMarie: then why does i915 not have garbage performance?
20:50 karolherbst: it does have garbage perf
20:50 karolherbst: the M2 is like 8 times that?
20:50 karolherbst: the normal M2
20:50 DemiMarie: Is this Intel-specific or does AMD also have bad memory bandwidth?
20:50 karolherbst: same on AMD
20:51 karolherbst: it's just that consumer systems are dual channel 64 bit at most
20:51 karolherbst: and that's like around 50-60 GB/s
20:51 HdkR: M2 gets 100GB/s
20:51 DemiMarie: Does this mean that the M2 could do faster shading in software than i915 can in hardware?
20:51 airlied: steamdeck does 88GB/s
20:51 DemiMarie: At least on some workloads where fixed function isn’t a bottleneck
20:51 karolherbst: mhhh.. probably not
20:52 karolherbst: airlied: quad channel?
20:52 karolherbst: or what is differnet on the steamdeck apu?
20:52 HdkR: 128-bit bus, technically quad channel because of DDR5
20:52 karolherbst: ahhh
20:52 airlied: karolherbst: yeah
20:52 karolherbst: 128 bit then
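(For reference, the arithmetic behind these numbers: bandwidth ≈ bus width × transfer rate, so dual-channel DDR4-3200 is 2 × 64 bit × 3200 MT/s / 8 = 51.2 GB/s, the M2's 128-bit LPDDR5-6400 gives 102.4 GB/s, and 128-bit LPDDR5-5500 on the Deck gives 88 GB/s.)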
20:52 karolherbst: well.. it's easily fixable on consumer hardware, but no vendor is tackling it
20:52 HdkR: Desktop class would get roughly equivalent on DDR5
20:53 karolherbst: Dell kinda tried that with replacing DIMMs, but that's not going anywhere as it seems
20:53 karolherbst: HdkR: that's so sad
20:53 HdkR: It's a shame that desktop class has been stuck on 128-bit for so long
20:54 HdkR: 192-bit would be a cool upgrade for that segment
20:54 karolherbst: yeah..
20:54 HdkR: Or just go straight 256-bit since most every board supports quad dimms anyway
20:54 karolherbst: but that's not getting you 400 or even 800 GB/s :D
20:54 DemiMarie: Can we start working on some design specs and see of SiFive can actually build a fast chip?
20:54 psykose: people would have riots if you removed replacable dimms
20:55 DemiMarie: What does get one that?
20:55 karolherbst: psykose: well.. dell suggested something better
20:55 karolherbst: but...
20:55 psykose: haha
20:55 karolherbst: but we can also just stick with slow memory :D
20:55 karolherbst: something has to change or it's the end of x86 for real
20:55 HdkR: Does CAMM allow 128-bit per module?
20:55 DemiMarie: karolherbst: x86 needs to die
20:55 karolherbst: HdkR: good question
20:56 psykose: riscv also needs to die but nobody wants to hear it
20:56 HdkR: Four dimms of 128-bit each would get desktops a hell of a lot closer
20:56 airlied: riscv will eat itself
20:56 karolherbst: HdkR: probably if you call it QIMM and bumb it to 128 bit :P
20:56 DemiMarie: airlied: eat itself?
20:56 HdkR: :D
20:56 karolherbst: or do 96 bit first and call them TIMMs
20:56 airlied: it'll just be incompatible fork after incompatible fork, until there is no "risc-v"
20:57 DemiMarie: karolherbst: the task is not keeping x86 alive, but rather ensuring that open platforms do not die with it.
20:57 karolherbst: don't use DIMMs
20:57 karolherbst: that's the way
20:57 karolherbst: just do whatever apple did with memory
20:57 airlied: solder that shit down
20:57 HdkR: karolherbst: Anything more than 64-bit per DIMM would be an upgrade and I'm all for it.
20:57 airlied: or HBM it up
20:57 karolherbst: yeah.. soldering is the solution here
20:57 DemiMarie: Why????
20:57 karolherbst: but "people need to replace it" no
20:57 karolherbst: well
20:58 karolherbst: it's either that or it dies :)
20:58 HdkR: Memory on package is how Apple managed to get those numbers
20:58 karolherbst: yep
20:58 airlied: the DIMM socket is an impediment to speed
20:58 karolherbst: and how GPUs get those numbers for years
20:58 airlied: all sockets are
20:58 HdkR: It's infeasible in a current spec socketed system
20:58 DemiMarie: karolherbst: are you saying that replacable memory simply cannot be anywhere near as fast as soldered memory?
20:58 karolherbst: the Dell thing was interesting, but not sure what peak speeds they have
20:58 psykose: it's pretty much electrically impossible yes
20:58 karolherbst: DemiMarie: correct
20:59 psykose: there's too many wires and length of wire to make it fast
20:59 DemiMarie: Even with active signal regeneration in the sockets?
20:59 karolherbst: the RAM on the M2 is right beside the SoC
20:59 karolherbst: like literally right beside it
20:59 karolherbst: and it's super small
20:59 DemiMarie: Maybe we need optical on-board interconnects
20:59 karolherbst: the entire SoC is smaller than an entire DIMM module
20:59 psykose: IBM was doing some serial memory thing with firmware on the ram modules
20:59 psykose: weren't they
21:00 HdkR: optical would introduce /more/ latencies. Short runs of optical are actually slower than just copper. Ask people that use direct-attached-copper cables in networks
21:00 puck_: psykose: there's also CXL now
21:00 psykose: interesting
21:00 karolherbst: ahh CAMM is the Dell thing.. right
21:00 DemiMarie: HdkR: Signal propagation velocity is _not_ the limiting factor here.
21:01 karolherbst: I actually don't know if it fixes the perf problem
21:01 puck_: i'm reminded of the AMD 4700S
21:01 puck_: which is very distinct and has 16GB of soldered RAM used for both the CPU and what would be the GPU but i think they fused off the APU bits
21:02 DemiMarie: Even in optical fiber light still goes 6cm in a single clock cycle.
21:02 DemiMarie: CPU clock
21:02 puck_: ..but it's 16GB of *GDDR6* as main memory
21:02 DemiMarie: At 3GHz
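(For reference: light in fiber travels at about c/1.5 ≈ 2 × 10^8 m/s, so one 3 GHz cycle, ~0.33 ns, covers roughly 6.7 cm; the cost sits in the electro-optical conversion at each end, not in propagation.)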
21:02 karolherbst: yeah but you also have to translate it into electrical signals and all that
21:02 puck_: which is fast but has higher latency
21:02 DemiMarie: karolherbst: my point is that the signal integrity problems simply vanish
21:02 karolherbst: but it comes with massive latency costs
21:02 DemiMarie: Why
21:02 DemiMarie: ?
21:03 karolherbst: because translating to optical signal isn't for free?
21:03 karolherbst: we talk single digit ns here
21:03 DemiMarie: Let me guess: the real limitation is cost?
21:03 DemiMarie: I know
21:03 karolherbst: maybe?
21:04 karolherbst: but in any case, just soldering it together solves the problem in a simpler way
21:04 DemiMarie: And I am 99.99% certain that e.g. optical modulators have latencies far, far lower than that
21:04 karolherbst: close to nobody actually upgrades RAM
21:04 DemiMarie: fair
21:04 karolherbst: and it needs more space to be replaceable and everything
21:04 DemiMarie: my point was that a high-speed socketed system is possible, not that it is going to be cost-effective
21:04 puck_: i wonder if we'll see an era where there's soldered-on RAM plus CXL if you really need more memory (aka more distinct tiers of RAM)
21:05 karolherbst: soldered RAM even leads to less ewaste on average, because it needs way less space and everything
21:05 DemiMarie: True
21:05 DemiMarie: Honestly what I really want is for Moore’s Law to finally peter out.
21:05 karolherbst: like to match the 800GB/s you need like... 16 DIMM slots I think? :D
21:05 HdkR: Say that optical does solve the signal integrity problem. You now need 16 DIMMs worth of bus width to match M1/2 Ultra bandwidth
21:05 karolherbst: but yeah...
21:05 karolherbst: DIMM is stupid
21:05 HdkR: Sixteen!
21:06 DemiMarie: HdkR: yeah, not practical
21:06 HdkR: Because the M1/2 Ultra has 8 LPDDR5 128-bit packages on it
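(That checks out: 8 packages × 128 bit × 6400 MT/s / 8 bits per byte ≈ 819 GB/s, which is where the ~800 GB/s figure comes from.)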
21:06 karolherbst: maybe CAMM would need less
21:06 karolherbst: but...
21:06 DemiMarie: Serious question: is there room for something that is a GPU/CPU hybrid?
21:06 karolherbst: it's still huge
21:06 karolherbst: the M2 24GB memory is so _tiny_
21:06 dj-death: airlied: what's the current rule to update drm-uapi headers in mesa? take drm-next or drm-tip?
21:06 airlied: dj-death: drm-next usually
21:06 DemiMarie: Something made for those workloads that are easy to parallelize, but are somewhere between hard and impossible to meaningfully vectorize?
21:07 karolherbst: DemiMarie: good question.. intel kinda tries that with AVX512, no?
21:07 karolherbst: but....
21:07 DemiMarie: karolherbst: anti-AVX512
21:07 karolherbst: yeah well.. more threads would help
21:07 karolherbst: but we are already going there
21:07 DemiMarie: I’m thinking of stuff where the hard part is “what the heck do I do next?”
21:07 HdkR: SVE2-512bit :P
21:08 karolherbst: yeah.. more threads if you can parallelize
21:08 karolherbst: more low power ones even to make use of low power consumption at lower clocks
21:08 DemiMarie: Modern compilers are highly parallelizable, but nigh impossible to vectorize
21:08 karolherbst: I think most CPU manufacturers will see that high perf cores give you nothing
21:08 DemiMarie: Same
21:08 karolherbst: and we'll end up with 4+20 core systems, 4 high perf, 20 low perf
21:08 DemiMarie: Except for security holes
21:09 DemiMarie: Yup
21:09 karolherbst: intel kinda moves into having same high/low perf cores :D
21:09 karolherbst: it's kinda funky
21:09 DemiMarie: Xen is having a really hard time with HMP right now
21:09 DemiMarie: Mostly because Xen’s scheduler is not HMP aware
21:09 dj-death: airlied: apparently some amdgpu headers were pulled from neither: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/21986
21:09 karolherbst: it's also funky that the difference between the 12700 and 12900 was not more perf cores, but 4 more efficiency cores
21:10 DemiMarie: Not at all surprised.
21:10 karolherbst: heh
21:10 karolherbst: 13th gen is already there
21:10 karolherbst: 8 high perf, 16 low perf :D
21:10 DemiMarie: is that a comment on Xen?
21:10 karolherbst: https://en.wikipedia.org/wiki/Raptor_Lake#Raptor_Lake-S
21:10 karolherbst: kinda totally forgot about that
21:11 dj-death: airlied: not quite sure what to do since we want to update the intel ones to the next drm-next
21:11 karolherbst: so yeah.. intel is already there
21:12 karolherbst: I wonder when Intel kills hyperthreading
21:13 DemiMarie: To me the problem with big cores is that the stuff they do well on are:
21:13 DemiMarie: 1. Wizard-optimized programs written with lots of SIMD intrinsics or even assembler.
21:13 DemiMarie: 2. Legacy single-threaded programs that cannot be parallelized.
21:13 HdkR: Once duplicating all the register state costs them too much sram die area :P
21:13 DemiMarie: 3. have lots of security holes
21:13 karolherbst: DemiMarie: well... some things are hard to parallelize, like game engines
21:13 DemiMarie: karolherbst: why?
21:13 karolherbst: because things depend on each other
21:14 karolherbst: AI in games is not trivial
21:14 karolherbst: game developers can probably explain it a lot more
21:14 karolherbst: there are things which can happen in parallel, but it's not as trivial as it might sound at first
21:15 karolherbst: also think sound mixing and stuff
21:15 DemiMarie: Sound mixing should happen on another thread.
21:15 karolherbst: yeah
21:15 karolherbst: so that's what I meant with some things can happen in parallel
21:16 karolherbst: but you still need high single thread cores if you want to mix more sources in realtime
21:16 karolherbst: in some games you notice that sound sources get disabled on demand, because of load
21:16 DemiMarie: Should that be handled by a dedicated DSP?
21:16 karolherbst: maybe?
21:16 karolherbst: maybe not
21:17 karolherbst: might be not flexible enough
21:17 DemiMarie: I wonder if graph-reduction machines might help.
21:17 karolherbst: but the point is rather, that there will be need for perf cores
21:17 HdkR: Just throw another E-core at the problem, homogeneous programming model is better here
21:17 DemiMarie: Basically a processor designed for Haskell and other purely functional languages, where everything is safe to run in parallel unless a data dependency says otherwise.
21:18 DemiMarie: Where if there is a hazard that means the program has undefined behavior because someone misused unsafePerformIO or similar.
21:18 psykose: even "in haskell" the above issues apply
21:18 psykose: parallelism is not magic
21:19 karolherbst: also.. caches
21:19 DemiMarie: HdkR: Mobile devices have lots of DSPs IIUC
21:19 psykose: strong '1 person in 12 months, 12 people in 1 month' manager vibes
21:19 HdkR: DemiMarie: And nobody's game uses them directly
21:19 HdkR: Burn a Cortex-A53 to do the sound mixing, let the OS use the DSP for mixing
21:20 DemiMarie: HdkR: maybe we need higher-level sound APIs that have “sound shaders” or similar
21:20 karolherbst: in theory everything can be perfect, but practically we have the best outcome possible :P
21:20 HdkR: DSP also takes up AI and modem responsibilities there...
21:20 karolherbst: DemiMarie: cursed
21:20 HdkR: OpenAL 2.0
21:20 DemiMarie: karolherbst: cursed?
21:20 karolherbst: very
21:20 DemiMarie: I meant, “define cursed”
21:21 karolherbst: it just sounds cursed
21:22 Lynne: don't AMD have some weird GPU sound mixing thing?
21:22 DemiMarie: I mean eBPF and P4 are basically shading languages for network devices.
21:22 karolherbst: yeah, and many think eBPF is very cursed
21:22 karolherbst: not saying I disagree, but...
21:22 DemiMarie: part of that is because of the need to prove termination
21:23 DemiMarie: In hardware that can be solved by having a timer interrupt.
21:23 karolherbst: well.. on the kernel side you can also just kill a thread
21:23 karolherbst: but you don't want to do that
21:23 karolherbst: like never
21:23 DemiMarie: longjmp()?
21:24 karolherbst: so... you can't really do that with random applications, because they have to be aware of getting nuked at random points
21:24 karolherbst: so they have to be written against that
21:24 karolherbst: otherwise you risk inconsistent state
21:24 DemiMarie: the other possibility is that if your program doesn’t finish soon enough, that’s a bug
21:24 karolherbst: if the modules work strictly input/output based, then yeah, might be good enough
21:25 karolherbst: but then it's more of a design thing
21:25 karolherbst: oh sure
21:25 karolherbst: but you still can't kill it if it doesn't know it will be killed randomly
21:25 airlied: dj-death: just pull the intel ones and agd5f can chase down what happened with amd ones maybe
21:25 karolherbst: you kinda have to enforce that in the programming model
21:25 DemiMarie: yeah
21:28 DemiMarie: Also, I hope these conversations are interesting and not wasting people’s time! (Please let me know if either of those is false.)
21:29 karolherbst: nah, it's fine
21:31 psykose: what else would we be discussing
21:35 mattst88: development of dri?
21:41 karolherbst: that X component?
21:41 karolherbst: hell no!
21:41 mattst88: might as well just ramble on about optical interconnects and haskell machines for 90 minutes instead :P
22:37 Lynne: I understand fences are not quite a replacement for mutexes
22:38 Lynne: but damn it, they should've added a wait+unsignal atomic operation on fences in vulkan
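Concretely, the gap Lynne is pointing at: waiting and unsignaling are two separate Vulkan calls, and nothing makes the pair atomic (a sketch; dev and fence are assumed to exist):

    #include <vulkan/vulkan.h>

    /* thread A */
    vkWaitForFences(dev, 1, &fence, VK_TRUE, UINT64_MAX);
    /* another waiter can observe the fence still signaled right here,
     * before it gets reset... */
    vkResetFences(dev, 1, &fence);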
23:40 memleak: Hello, I'm using kernel 6.4-rc5 (patched with PREEMPT_RT) and DRM/KMS works just fine on both AMDGPU and Radeon. I'm using an R9 290 (Hawaii); however, when starting SDDM or LightDM, USB breaks
23:40 memleak: If I use radeon then I get garbage on the screen and USB is dead; if I use AMDGPU, the screen at least looks fine but USB is also dead.
23:41 memleak: This problem does not exist on 6.1.31 (have not tried 6.1.32 yet)
23:43 airlied: memleak: anything in dmesg?
23:43 memleak: I set the panic timeout to -1 (instantly reboot on panic) and enabled panic on oops, the cursor for the login screen keeps blinking and the system stays on.
23:44 memleak: I can't quite check it once the USB is dead lol. I may have to grab a PS/2 keyboard if that works; I don't have serial debug either
23:44 memleak: I'll try and get dmesg output
23:45 memleak: I have to head out, I'll be back later, just wanted to get this down in the channel. airlied nice to see you again btw, it's NTU/Alec from freenode lol
23:49 memleak: Oh, just want to note that USB works indefinitely as long as X doesn't start :)
23:50 airlied: oh hey, have you another machine to ssh in from?