04:24 fdobridge: <samantas5855> I wonder if nvidia will ever open source parts of the vulkan driver like amd did with amdvlk
04:24 fdobridge: <samantas5855> that would probably be very helpful to nvk
04:58 fdobridge: <gfxstrand> I highly doubt it
04:58 fdobridge: <gfxstrand> They have no reason to
11:03 Sid127: can someone with an nvidia gpu tell me what `/proc/driver/nvidia/gpus/0000:01:00.0/numa_status` contains
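A minimal sketch for checking that node yourself, assuming the proprietary NVIDIA driver is loaded; the PCI address is just the example from the question, so substitute your own GPU's (listed under `/proc/driver/nvidia/gpus/`):

```shell
# Read the numa_status node exposed by the proprietary NVIDIA driver.
# Not every GPU/driver combination exposes it, so check readability first.
node=/proc/driver/nvidia/gpus/0000:01:00.0/numa_status
if [ -r "$node" ]; then
    cat "$node"
else
    echo "numa_status not present (older GPU or driver without NUMA support)"
fi
```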
11:04 dwfreed: can't help, don't have that file (of course, having a kepler may have something to do with that >.>)
11:12 Sid127: thanks, got the info from someone else
14:18 juri_: is anyone here working with LLMs on nouveau supported hardware?
15:04 Sid127: juri_: on nouveau supported hardware? not LLMs but I do use CUDA. On nouveau itself? no
15:05 juri_: sure, nouveau supported gear. is there a framework for using cuda + nouveau to perform computation, without having to fire up a dummy X server?
15:13 Sid127: no
15:13 Sid127: nouveau does not support CUDA or OpenCL
15:15 juri_: ok, that's what i was wondering. sad to hear.
18:54 juri_: https://github.com/pierremoreau/mesa/wiki/OpenCL-support-for-Nouveau <-- this never went anywhere?
18:54 fdobridge: <karolherbst🐧🦀> it did
18:54 fdobridge: <karolherbst🐧🦀> the initial approach was just a bad idea
19:25 juri_: oh, so where did it go? :)
19:25 fdobridge: <karolherbst🐧🦀> clover mostly
19:25 fdobridge: <karolherbst🐧🦀> (and now rusticl)
19:42 juri_: wow. i keep finding your fingerprints on this stuff. :P
19:47 juri_: maybe i should just ask you directly: is there a good enough stack for me to start porting code to / debugging openCL + nouveau, and if so, how fussy is it about which hardware?
19:47 juri_: I'm looking at a K80 right now, and pondering.
19:47 karolherbst: mhhh good question
19:47 karolherbst: the issue is that with the nouveau gallium driver the compiler is kinda broken for CL
19:47 karolherbst: soooo
19:48 karolherbst: you kinda want to try rusticl + zink + nvk
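The rusticl + zink + nvk stack mentioned above can be sketched roughly like this, assuming a Mesa build with rusticl enabled (`-Dgallium-rusticl=true`) and NVK installed as the system's Vulkan driver; `RUSTICL_ENABLE` is Mesa's opt-in environment variable naming which gallium drivers rusticl should expose:

```shell
# Expose Mesa's rusticl OpenCL frontend on top of zink, which translates
# gallium to Vulkan and can therefore sit on NVK.
export RUSTICL_ENABLE=zink

# Verify the OpenCL platform/devices show up (clinfo -l prints a short list);
# guarded so this is a no-op where clinfo isn't installed.
if command -v clinfo >/dev/null 2>&1; then
    clinfo -l
fi
```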
20:16 airlied: I did try running llama.cpp on nvk, but I can't remember what happened
20:20 juri_: I'm running llama.cpp on xeon PHIs.
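For context, llama.cpp can target a Vulkan driver such as NVK through its Vulkan backend, so no CUDA stack is needed. A build sketch, assuming current upstream where the CMake switch is `GGML_VULKAN` (older trees used a differently named option), with the Vulkan SDK/headers installed:

```shell
# Sketch: build llama.cpp with the Vulkan backend so inference can run on
# a Vulkan driver like NVK. Flags/paths here are illustrative, not verified
# against any particular llama.cpp revision.
git clone --depth 1 https://github.com/ggerganov/llama.cpp
cmake -S llama.cpp -B llama.cpp/build -DGGML_VULKAN=ON
cmake --build llama.cpp/build -j
```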
22:30 fdobridge: <airlied> lols reproduced the runpm page table issue is what it does 😛
22:31 fdobridge: <airlied> then ooms once I turned off runpm, should try on monster gpu
22:52 fdobridge: <airlied> oh wrong kernel, no gsp