00:05DemiMarie: Looks like a P core has maybe 50% more ALU area than the XVEs in the Xe²-LPG core, but there are half as many of them and probably a latency vs throughput tradeoff (higher throughput on the GPU). The GPU almost certainly is much more power efficient at math, and that is before fixed function is considered.
00:12DemiMarie: HdkR: does anyone actually have a use for those NPUs?
00:13HdkR: They certainly have ideas of using them
00:14HdkR: Like the Microsoft Copilot thing
00:14HdkR: My eyes glaze over whenever AI is mentioned so I don't know what these tiny things can even do. LLMs or something?
00:18DemiMarie: Text to speech, speech to text, and object recognition are the obvious ones to me.
00:18DemiMarie: All done on device, which is good for privacy.
00:27karolherbst: alt text generation could be a cool feature as well
00:28karolherbst: I think if it's pre-generated and users just need to fix a few bits, that would nudge people enough to write more alt text for posted images
00:28karolherbst: there are quite a few interesting use cases for accessibility in general
00:30karolherbst: I wonder if AI/ML could also help with some kinds of hearing disabilities, by adjusting audio to account for them. Not every hearing disability is about hearing loss
00:31airlied: the problem for Linux is even if we could access the hw devices (which is a mess), we don't really have the models to deploy
00:31karolherbst: noise suppression apparently is also a great use case, as the non-AI solutions kinda sucked
00:32karolherbst: yeah....
00:32karolherbst: the ecosystem isn't great atm 🙃
00:35DemiMarie: airlied: isn't Intel upstreaming drivers? Or did they leave out the userspace?
00:36karolherbst: it's more about the AI models
00:36karolherbst: mostly ethical and licensing concerns
00:36DemiMarie: Which the proprietary vendors just ignore?
00:37karolherbst: would a linux distro be allowed to ship those?
00:37karolherbst: would they want to take the legal risk?
00:37DemiMarie: Depends on the model, obviously
00:38karolherbst: not really, because there aren't any models free of copyright violations as far as I know
00:38DemiMarie: Even small recognition models?
00:39karolherbst: mhh.. they _might_ be okay
00:39karolherbst: it kinda depends
00:39karolherbst: there is also always fair use
00:39karolherbst: but the original point still stands: it's a legal risk
00:39alyssa: nothing fair about cat i farted
00:40DemiMarie: Will it ever be resolved in any way?
00:40karolherbst: so if linux distros use those models for accessibility features, it might be considered fair use even if there are copyright violations
00:40karolherbst: mhh going to court?
00:40karolherbst: kinda need either legislation or court rulings
00:40airlied: depends on whether the owners of the models provide licenses properly
00:40airlied: or indemnity
00:41karolherbst: yeah...
00:41karolherbst: I think some fair use rulings could help out a lot as well
00:41karolherbst: but yeah...
00:42karolherbst: but then it also differs from country to country
00:42DemiMarie: Fair use is very dependent on both the exact use and on jurisdiction. Lots of places don't have it at all.
00:42DemiMarie: Also distros want stuff that is free for any purpose.
00:43DemiMarie: That means stuff that is properly licensed.
00:43karolherbst: yeah...
00:44DemiMarie: The problem is that training data isn't just something one can produce oneself.
00:44DemiMarie: Public domain works aren't useful either (too old).
00:45DemiMarie: Somebody like Mozilla could do it, though, by being very careful at every step and ensuring they had licenses from everyone involved.
00:46DemiMarie: I think recognition models are going to be much less risky than generative models.
00:47DemiMarie: Generative models don't do well on clients anyway because of the enormous model sizes.
01:18iive: surveillance camera footage of public spaces (roads) is public domain. That's why a lot of captchas use it.
01:19iive: this is about training AI models.
01:21iive: but yeah, open source will have to make its own models from scratch to be sure they are proper.
01:33DemiMarie: Purely a curiosity question: can pvr ever become conformant to Vulkan 1.0 or is it too broken?
01:33DemiMarie: (it = the hardware)
01:48Shibe: soreau: tried in a gentoo chroot with your patches applied to mesa, same issue unfortunately, can't get pipewire streams working on iGPU
01:51Shibe: still not sure if this is a mesa issue
01:55Shibe: because if I run a compositor using DRI_PRIME=1! and then an application inside it using DRI_PRIME=0, there are no graphical issues displaying that application
01:56Shibe: but otoh that means every compositor's (or application's) portal implementation is broken
02:54alyssa: karolherbst: so.. AGX needs some int64 lowering, but it needs to happen late for address mode fusing to work properly
02:54alyssa: my backend calls nir_lower_int64 itself
02:54alyssa: if I comment out the int64 call in rusticl, things work
02:54alyssa: but I don't see any way around this, since lower_int64 takes its options from the options struct, which is const
02:59alyssa: I've added a nir option in my branch, but that seems pretty icky
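(A minimal sketch of the workaround described above, assuming a new compiler-option flag lets the state tracker skip its own int64 lowering so the backend can run nir_lower_int64 after address-mode fusing. The flag name late_lower_int64 is illustrative, not the actual option added in the branch.)

    /* Hypothetical flag in nir_shader_compiler_options; names are illustrative. */

    /* State-tracker side (rusticl NIR finalisation): only lower early if the
     * backend has not asked to do the lowering itself. */
    if (!nir->options->late_lower_int64)
       NIR_PASS(_, nir, nir_lower_int64);

    /* Backend side (AGX), after address arithmetic has been fused: the pass
     * picks up which ops to lower from nir->options->lower_int64_options. */
    NIR_PASS(_, nir, nir_lower_int64);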
07:31airlied: sima: fyi just pushing out a backmerge, I'll merge the trees that needed it tomorrow
08:59sima: airlied, aye
09:11pq: Shibe, importing a dmabuf allocated on one device into another device that does not support modifiers is very likely to break for mismatching modifiers indeed. One could argue that even attempting that might be a bug in the compositor or display server, assuming this is what happens.
09:12pq: Shibe, another possibility is the screenshooting somehow missing the modifier. A modifier problem seems likely to me. I'd report it to the compositor or display server first.
09:30Shibe: pq: but couldn't it equally likely be a bug in the application using the portal, since those would have to import the dmabuf too?
09:31Shibe: but i will report it in kwin regardless
09:32pq: it could indeed
09:34pq: If you know the app is getting a dmabuf to import, then reporting to them could be worthwhile. Especially if some debug tool can tell you that the dmabuf has a pixel format with a non-linear modifier.
09:39pq: A dmabuf with any valid modifier should only be imported to a device/API that supports explicit modifiers.
09:40pq: A dmabuf with the invalid modifier (no modifier) can only be imported to the same device where it was allocated, but one cannot tell the device by looking at the dmabuf AFAIK, so that's possible only if the device is explicitly mentioned along with the dmabuf.
09:44emersion: see https://www.kernel.org/doc/html/next/userspace-api/dma-buf-alloc-exchange.html#formats-and-modifiers
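(A minimal sketch of that import rule, using EGL_EXT_image_dma_buf_import(_modifiers) for a single-plane buffer; all parameter values are placeholders and error handling is omitted.)

    #include <stdint.h>
    #include <drm_fourcc.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    static EGLImageKHR
    import_dmabuf(EGLDisplay dpy, int fd, EGLint fourcc,
                  EGLint width, EGLint height, EGLint stride, uint64_t modifier)
    {
       EGLint attrs[32];
       int i = 0;

       attrs[i++] = EGL_WIDTH;                     attrs[i++] = width;
       attrs[i++] = EGL_HEIGHT;                    attrs[i++] = height;
       attrs[i++] = EGL_LINUX_DRM_FOURCC_EXT;      attrs[i++] = fourcc;
       attrs[i++] = EGL_DMA_BUF_PLANE0_FD_EXT;     attrs[i++] = fd;
       attrs[i++] = EGL_DMA_BUF_PLANE0_OFFSET_EXT; attrs[i++] = 0;
       attrs[i++] = EGL_DMA_BUF_PLANE0_PITCH_EXT;  attrs[i++] = stride;

       if (modifier != DRM_FORMAT_MOD_INVALID) {
          /* Explicit modifier: only valid on a driver that advertises
           * EGL_EXT_image_dma_buf_import_modifiers. */
          attrs[i++] = EGL_DMA_BUF_PLANE0_MODIFIER_LO_EXT;
          attrs[i++] = (EGLint)(modifier & 0xffffffff);
          attrs[i++] = EGL_DMA_BUF_PLANE0_MODIFIER_HI_EXT;
          attrs[i++] = (EGLint)(modifier >> 32);
       }
       /* else: implicit modifier; as noted above, only import this on the
        * same device that allocated the buffer. */
       attrs[i++] = EGL_NONE;

       PFNEGLCREATEIMAGEKHRPROC create_image =
          (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
       return create_image(dpy, EGL_NO_CONTEXT, EGL_LINUX_DMA_BUF_EXT, NULL, attrs);
    }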
10:10Shibe: I guess for compositors using INVALID/implicit modifiers the correct thing to do would be to create a linear buffer when doing screen sharing?
11:34emersion: depends what the destination GPU will be
11:35emersion: if the dest GPU is the same as the source, INVALID/implicit gives better perf
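(A minimal sketch of the linear-buffer fallback for cross-GPU screen sharing, assuming the compositor allocates its screencast buffers through GBM; the helper name and format choice are illustrative.)

    #include <stdbool.h>
    #include <stdint.h>
    #include <gbm.h>

    /* Allocate a buffer the consumer can import: if it may live on another
     * GPU, force a linear (DRM_FORMAT_MOD_LINEAR) layout; otherwise let the
     * driver pick its preferred tiling for better performance. */
    static struct gbm_bo *
    alloc_screencast_bo(struct gbm_device *gbm, uint32_t w, uint32_t h,
                        bool consumer_on_other_gpu)
    {
       uint32_t flags = GBM_BO_USE_RENDERING;
       if (consumer_on_other_gpu)
          flags |= GBM_BO_USE_LINEAR;
       return gbm_bo_create(gbm, w, h, GBM_FORMAT_ARGB8888, flags);
    }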
11:57lumag: narmstrong, robertfoss, jernej et al: I'm not sure how we should proceed with https://lore.kernel.org/dri-devel/20241104-v7-upstream-v7-0-8b71fd0f1d2d@ite.com.tw . Do we have anybody with deep HDCP knowledge to review the remaining patches?
13:28frankbinns: DemiMarie: to answer your question about pvr, the answer is yes: https://www.khronos.org/conformance/adopters/conformant-products#submission_767
15:03Ermine: airlied, mlankhorst, mripard, tzimmermann, sima: (pinging you as maintainers of virtio gpu driver): may I ask you to review https://lore.kernel.org/dri-devel/20241021115210.5439-1-mustela@erminea.space/ ? Thank you in advance
18:11DemiMarie: frankbinns: Nice! Do the PVR maintainers plan to make it fully conformant eventually?