04:24 fdobridge: <s​amantas5855> I wonder if nvidia will ever open source parts of the vulkan driver like amd did with amdvlk
04:24 fdobridge: <s​amantas5855> that would probably be very helpful to nvk
04:58 fdobridge: <g​fxstrand> I highly doubt it
04:58 fdobridge: <g​fxstrand> They have no reason to
11:03 Sid127: can someone with an nvidia gpu tell me what `/proc/driver/nvidia/gpus/0000:01:00.0/numa_status` contains
11:04 dwfreed: can't help, don't have that file (of course, having a kepler may have something to do with that >.>)
11:12 Sid127: thanks, got the info from someone else
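For anyone else wanting to check the same thing, a quick sketch of the lookup (the `0000:01:00.0` bus address comes from the log above; it will differ per machine, and the file only exists with the proprietary NVIDIA driver loaded):

```shell
# List GPUs known to the proprietary NVIDIA driver
ls /proc/driver/nvidia/gpus/
# Print the NUMA status for the GPU at the bus address from the log (adjust the BDF)
cat /proc/driver/nvidia/gpus/0000:01:00.0/numa_status
```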
14:18 juri_: is anyone here working with LLMs on nouveau supported hardware?
15:04 Sid127: juri_: on nouveau supported hardware? not LLMs but I do use CUDA. On nouveau itself? no
15:05 juri_: sure, nouveau supported gear. is there a framework for using cuda + nouveau to perform computation, without having to fire up a dummy X server?
15:13 Sid127: no
15:13 Sid127: nouveau does not support CUDA or OpenCL
15:15 juri_: ok, that's what i was wondering. sad to hear.
18:54 juri_: https://github.com/pierremoreau/mesa/wiki/OpenCL-support-for-Nouveau <-- this never went anywhere?
18:54 fdobridge: <k​arolherbst🐧🦀> it did
18:54 fdobridge: <k​arolherbst🐧🦀> the initial approach was just a bad idea
19:25 juri_: oh, so where did it go? :)
19:25 fdobridge: <k​arolherbst🐧🦀> clover mostly
19:25 fdobridge: <k​arolherbst🐧🦀> (and now rusticl)
19:42 juri_: wow. i keep finding your fingerprints on this stuff. :P
19:47 juri_: maybe i should just ask you directly: is there a good enough stack for me to start porting code to / debugging openCL + nouveau, and if so, how fussy is it about which hardware?
19:47 juri_: I'm looking at a K80 right now, and pondering.
19:47 karolherbst: mhhh good question
19:47 karolherbst: the issue is that with the nouveau gallium driver the compiler is kinda broken for CL
19:47 karolherbst: soooo
19:48 karolherbst: you kinda want to try rusticl + zink + nvk
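A minimal sketch of trying that stack, assuming a Mesa build with rusticl compiled in (the `-Dgallium-rusticl=true` meson option) and NVK providing Vulkan for the GPU; `RUSTICL_ENABLE` is Mesa's documented environment variable for opting drivers into rusticl, and `zink` routes CL through Vulkan:

```shell
# Enable rusticl on top of zink (which in turn sits on the NVK Vulkan driver),
# then check that an OpenCL platform/device shows up:
RUSTICL_ENABLE=zink clinfo
```

If `clinfo` lists a "rusticl" platform with the GPU as a device, the stack is at least enumerating; actual kernel execution is a separate question, as the rest of the log suggests.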
20:16 airlied: I did try running llama.cpp on nvk, but I can't remember what happened
20:20 juri_: I'm running llama.cpp on xeon PHIs.
22:30 fdobridge: <a​irlied> lols, what it does is reproduce the runpm page table issue 😛
22:31 fdobridge: <a​irlied> then ooms once I turned off runpm, should try on monster gpu
22:52 fdobridge: <a​irlied> oh wrong kernel, no gsp
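For reference, "turning off runpm" in nouveau is normally done with the module's `runpm` parameter (a standard nouveau option; the log doesn't say exactly how airlied disabled it):

```shell
# At module load time:
modprobe nouveau runpm=0
# Or on the kernel command line for the next boot:
#   nouveau.runpm=0
```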