13:55cap: Greetings. Currently the graphics card GTX 1650 Ti is missing from the feature matrix list, while the GTX 1650 is listed (NV160 family). I got the Ti version myself through my workplace and would really like to help develop support. Since I'm not a dev but a sysadmin, I'm looking for ways to help. Does anyone have advice for me?
14:30karolherbst: cap: those things don't matter at all. If it's not listed, then somebody just didn't put it there
14:31karolherbst: we don't rely on PCI IDs or marketing names for supporting devices or not
14:32karolherbst: cap: best is to just try it out with an updated kernel and userspace stack and look into what issues you run into
14:32karolherbst: and then we can go from there
16:22cap: Thank you karolherbst. I will try it out.
21:49chelium: Does nouveau work for rtx 3080?
21:52imirkin: is that turing? or ampere?
21:52HdkR: Ampere
21:52chelium: ye Ampere. Is Ampere supported?
21:52chelium: or do I have to download the driver directly from nvidia?
21:53chelium: *instead of using nouveau
21:53imirkin: sorry, no ampere support atm
21:53imirkin: no clue if it's in the works or not, but presumably is for at least modesetting
21:53chelium: oh damn. Is there a timeline for ampere support?
21:53imirkin: depends on what you mean by 'support'
21:54imirkin: either way, i don't have anything concrete. nor do i have concrete information where i can say that work is underway
21:54chelium: ah I see
21:54imirkin: if you want open support, go with amd
21:55RSpliet: (probably nobody does for that matter. nouveau is understaffed)
21:56chelium: Lol that's what I've heard but I also heard people say, unfortunately, that no serious ML work is being done on AMD.
21:56HdkR: There's also no serious ML work being done on Nouveau :)
21:57chelium: Lol fair point
21:57imirkin: i'd say no ML work being done on nouveau, serious or not
21:58RSpliet: Is AMD support really that bad? I thought they were doing a decent job with amdgpu...
21:58RSpliet: Or has that changed for the worse in the past 2-3 years?
21:58chelium: tbh I joined this channel b/c I'm not sure if there's a dedicated nvidia linux driver chat lol
21:58chelium: *channel
21:58RSpliet: there's a closed-source driver channel on here somewhere. Not official I don't think
21:58imirkin: #nvidia
21:58chelium: ah thank you
21:59imirkin: afaik no one's answered a question in that channel
21:59RSpliet: last time I checked there were significantly more people here than there :-P
21:59RSpliet: which reminds me. maybe I should check
21:59imirkin: but it's not like there are other avenues for getting information
21:59imirkin: unless you're a substantial customer
21:59chelium: ah that's unfortunate
22:00RSpliet: yeah we still have the popular vote here :-D
22:01chelium: I heard Nvidia linux support is pretty bad, but I also heard that most ML is being done on Nvidia cards. However, I can't imagine ML servers are running on Windows lol.
22:02imirkin: well
22:02imirkin: depends what you're looking for
22:02imirkin: if you're looking to run a modern system/desktop/etc - it's not great
22:02imirkin: if you're looking for a special-purpose use-case, then it's either supported or not
22:03ericonr: chelium: it's bad in ways that enterprises (and most users, for that matter) can learn to deal with
22:03chelium: ah I see
22:04ericonr: don't use latest released kernel, but also don't use old LTS
22:04ericonr: reboot after driver updates
22:04ericonr: no musl for you, etc etc
22:05RSpliet: Oh, right. ML -> Machine Learning. You had me there, I thought you were talking about mainlining... because the ML programming language seemed so unlikely
22:06imirkin: ML is great.
22:06imirkin: like scheme is great.
22:07imirkin: aka theoretically nice, practically unusable
22:07ericonr: there are whole distros built on top of scheme :P
22:08RSpliet: I consider them academic and/or curro-territory
22:08imirkin: ericonr: big usage base i assume?
22:08RSpliet: Although there's a handful of fintech companies that seem to embrace ML-dialects now or something similar
22:08imirkin: popular?
22:08imirkin: RSpliet: i think Jane Street is known for using like ocaml or something
22:08imirkin: or maybe haskell
22:09RSpliet: Yeah. They're recruiting quite actively in Cambridge, so that must be where I heard it
22:10ericonr: hm, I know Jane Street from Matt Parker videos
22:10ericonr: they sponsor him
22:10ericonr: imirkin: probably not, it's GUIX :P
22:10RSpliet: as for machine learning... do we now have MacGyver'd OpenCL support in upstream mesa? Or still pending merges?
22:11imirkin: ericonr: exactly.
22:36karolherbst: RSpliet: we are more or less CL 1.2 complete
22:36karolherbst: but yeah, random stuff missing
22:37karolherbst: mostly fixes
22:37karolherbst: I even have CL 1.2 images working
22:37karolherbst: full range
23:03RSpliet: \o/
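A minimal C++ sketch (assuming OpenCL headers and an installed ICD) of how to check which CL version and image support the stack actually reports, via standard clGetDeviceInfo queries; error handling is mostly omitted:

    #include <CL/cl.h>
    #include <cstdio>

    int main() {
        cl_platform_id platform;
        cl_device_id device;
        char version[256] = {0};
        cl_bool has_images = CL_FALSE;

        // Grab the first platform and its first GPU device.
        if (clGetPlatformIDs(1, &platform, nullptr) != CL_SUCCESS)
            return 1;
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr) != CL_SUCCESS)
            return 1;

        // CL_DEVICE_VERSION is a string like "OpenCL 1.2 ...";
        // CL_DEVICE_IMAGE_SUPPORT is a cl_bool.
        clGetDeviceInfo(device, CL_DEVICE_VERSION, sizeof(version), version, nullptr);
        clGetDeviceInfo(device, CL_DEVICE_IMAGE_SUPPORT, sizeof(has_images), &has_images, nullptr);

        printf("device reports: %s, image support: %s\n", version, has_images ? "yes" : "no");
        return 0;
    }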
23:09chelium: hmm what does CUDA have that OpenCL doesn't?
23:09imirkin: chelium: different API, a lot more NVIDIA-specific
23:10imirkin: chelium: also afaik CUDA is a single-source program, while CL is separate
23:10imirkin: (i.e. the program which runs on the CPU vs the program which runs on GPU)
23:12chelium: oh I see
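To illustrate the separate-source model imirkin describes: in OpenCL the device code is handed to the driver as a string (or binary) at run time, whereas in single-source CUDA a __global__ kernel can sit in the same .cu file as the host code and be compiled together by nvcc. A rough C++ host-side sketch; the kernel name and contents are made up for the example:

    #include <CL/cl.h>
    #include <cstdio>

    // The kernel source is a plain string compiled by the driver at run time.
    // In single-source CUDA the equivalent __global__ function would live in
    // the same file as this host code.
    static const char *kSource =
        "__kernel void scale(__global float *buf) {\n"
        "    size_t i = get_global_id(0);\n"
        "    buf[i] *= 2.0f;\n"
        "}\n";

    int main() {
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, nullptr);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

        cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
        cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, nullptr);
        clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);   // device code built here
        cl_kernel kernel = clCreateKernel(prog, "scale", nullptr);
        printf("kernel built: %p\n", (void *)kernel);

        clReleaseKernel(kernel);
        clReleaseProgram(prog);
        clReleaseContext(ctx);
        return 0;
    }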
23:30HdkR: imirkin: You can write PTX in a device-specific file and load it in through an API
23:30HdkR: Similar to CL
23:30imirkin: HdkR: but you don't have to, i think?
23:31HdkR: yea
23:31imirkin: anyways, i'm no expert on CUDA
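A hedged sketch of the flow HdkR mentions: loading a precompiled PTX module at run time through the CUDA driver API, much like loading an OpenCL program. The file name kernel.ptx and kernel name my_kernel are hypothetical, and error checking is omitted:

    #include <cuda.h>
    #include <cstdio>

    int main() {
        CUdevice dev;
        CUcontext ctx;
        CUmodule mod;
        CUfunction fn;

        cuInit(0);
        cuDeviceGet(&dev, 0);
        cuCtxCreate(&ctx, 0, dev);

        // Load a PTX module from disk and look up a kernel in it,
        // analogous to building a CL program and creating a kernel.
        cuModuleLoad(&mod, "kernel.ptx");              // hypothetical file
        cuModuleGetFunction(&fn, mod, "my_kernel");    // hypothetical kernel name

        // Launch a single thread; the kernel is assumed to take no arguments,
        // so kernelParams can be NULL.
        cuLaunchKernel(fn, 1, 1, 1, 1, 1, 1, 0, nullptr, nullptr, nullptr);
        cuCtxSynchronize();

        cuModuleUnload(mod);
        cuCtxDestroy(ctx);
        return 0;
    }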
23:31HdkR: I believe it is CL 2.0 or 3.0 that also allows single source?
23:31imirkin: apparently i don't know jack about CL either
23:31imirkin: o well
23:32airlied: HdkR: SYCL
23:32airlied: not CL
23:33airlied: is the C++ single source language
23:38HdkR: ah right
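For comparison, a minimal SYCL 2020 sketch of the C++ single-source model airlied refers to: the lambda passed to parallel_for is the device code and lives in the same file as the host code. This needs a SYCL compiler to build; names here are just for illustration:

    #include <sycl/sycl.hpp>
    #include <vector>
    #include <iostream>

    int main() {
        std::vector<float> data(1024, 1.0f);
        sycl::queue q;  // picks a default device
        {
            sycl::buffer<float> buf(data.data(), sycl::range<1>(data.size()));
            q.submit([&](sycl::handler &h) {
                sycl::accessor acc(buf, h, sycl::read_write);
                h.parallel_for(sycl::range<1>(data.size()), [=](sycl::id<1> i) {
                    acc[i] *= 2.0f;  // device code, same source file as the host code
                });
            });
        }  // buffer destructor waits for the kernel and copies results back
        std::cout << data[0] << "\n";
        return 0;
    }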