01:43DemiMarie: Why does Xe use GuC submission instead of execlists? Also, what is the right place to ask questions like this?
03:49kode54: aren't execlists slower?
08:46dolphin: DemiMarie: Do you mean the Xe platform or driver?
08:46dolphin: Short answer is that firmware based scheduling is the direction hardware has taken.
09:44karolherbst: gfxstrand: okay.. I think I came up with a plan for the first part of the transition towards multi-entry-point NIRs: we have to split `struct shader_info`, or rather.. attach it to an entry_point nir_function instead so we can have multiple of those
09:45karolherbst: from a quick scan almost all of the fields are only relevant for a given entry point
09:50karolherbst: I think I also want the `create_library` thing to go away. Users of spirv_to_nir should be able to just select an entrypoint in a nir_shader (after shader_info was moved to nir_function) and then it could look the way it did before. Or we go all in and have to change nir->info to nir_shader_get_entrypoint(nir)->info
09:51karolherbst: but nir_shader_get_entrypoint also needs rework I think and we should have a nir->entry_point to mark the selected one, because otherwise that loop inside nir_shader_get_entrypoint will be quite expensive
09:51karolherbst: not sure
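A rough sketch of the layout karolherbst seems to be describing; all type and field names below are hypothetical illustrations, not the current NIR API:

```c
#include <stdbool.h>

/* Hypothetical sketch: shader_info hangs off each entry-point function, and
 * the shader caches the selected entry point so nir_shader_get_entrypoint-style
 * lookups need no walk over the function list. */
struct shader_info;

typedef struct hypothetical_nir_function {
   bool is_entrypoint;
   struct shader_info *info;               /* was nir_shader::info, now one per entry point */
} hypothetical_nir_function;

typedef struct hypothetical_nir_shader {
   hypothetical_nir_function *entry_point; /* selected entry point, set once by the user */
} hypothetical_nir_shader;

/* Code that used nir->info would then look roughly like: */
static inline struct shader_info *
entrypoint_info(hypothetical_nir_shader *nir)
{
   return nir->entry_point->info;
}
```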
09:55ishitatsuyuki: why is __vk_append_struct named with two underscores in front?
09:56bnieuwenhuizen_: to make it invalid C?
09:57karolherbst: it's valid C
09:57karolherbst: two underscores just means: you shouldn't use it
09:57karolherbst: but stronger than just one underscore
09:57karolherbst: one means: you might use it if you think you know what you are doing, two means, even if you think you know, you don't
09:58bnieuwenhuizen_: the compiler is completely free to break it, as identifiers starting with two underscores are reserved for the implementation
09:58karolherbst: sure
09:58karolherbst: but yeah.. probably should have been one underscore
09:58karolherbst: :P
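For reference, the rule being discussed comes from the C standard (C11 7.1.3): identifiers beginning with two underscores, or an underscore followed by an uppercase letter, are reserved for the implementation in all contexts. A small illustration, with made-up names other than the one from the chat:

```c
/* Reserved-identifier rules (C11 7.1.3), illustrated at file scope: */
int __vk_append_struct;     /* "__" prefix: reserved for the implementation everywhere */
int _Vk_append_struct;      /* "_" + uppercase: also reserved everywhere */
static int _vk_internal;    /* "_" + lowercase: reserved at file scope, so still risky here */
int vk_append_struct_impl;  /* a plain "internal" suffix avoids the issue entirely */
```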
11:23melissawen: hey, I'm adding more than 15 KMS driver-specific properties, but although we have DRM hooks for driver-specific properties, the attachment function only considers `#define DRM_OBJECT_MAX_PROPERTY` here https://cgit.freedesktop.org/drm/drm-misc/tree/drivers/gpu/drm/drm_mode_object.c#n248
11:23melissawen: I did it: https://gitlab.freedesktop.org/mwen/linux-amd/-/commit/9d92d6c3063ed53b3fe80edab34f48c28604eb9f
11:24melissawen: but as these are driver-specific properties, increasing it in the DRM interface seems weird to me
11:24emersion: i think it makes sense to increase
11:24emersion: is 47 arbitrary?
11:25melissawen: oh, it should be 41 :) 24 + 17 new KMS properties
11:26emersion: i'd recommend picking something larger, so that this doesn't need to be bumped every time a new prop is introduced
11:26emersion: like, 64 or 128
11:29melissawen: oh, so we don't need to say exactly how many properties we have enabled... okay, I will increase it to a higher number
11:29melissawen: thanks!
11:37emersion: yeah it's just a max, for the array capacity presumably
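The define in question is the compile-time capacity of the per-object property arrays; roughly, the change amounts to something like the sketch below (struct layout from include/drm/drm_mode_object.h, and 64 is emersion's "pick something with headroom" suggestion, not an exact count):

```c
/* Bumping the cap only grows these fixed-size arrays. */
#define DRM_OBJECT_MAX_PROPERTY 64

struct drm_object_properties {
	int count;
	struct drm_property *properties[DRM_OBJECT_MAX_PROPERTY];
	uint64_t values[DRM_OBJECT_MAX_PROPERTY];
};
```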
13:16DemiMarie: dolphin: why is hardware moving in the direction of firmware-based scheduling?
13:31karolherbst: DemiMarie: because it offers lower latencies
13:32karolherbst: for desktop usage it might not matter, but some compute folks are very sensitive about that
13:33agd5f: DemiMarie, leads to user mode submission as well. E.g., user mode drivers can manage their own hardware queues
13:34karolherbst: well, there are kernelspace drivers doing user mode submission as well
13:35karolherbst: or rather, allowing for it
13:36karolherbst: but yeah.. doing scheduling on the hardware removes the need to interrupt the host if the hardware wants a context switch or something.
13:37karolherbst: we see similar things happening for CPUs as well, no?
13:50tjaalton: dcbaker: hi, 7e68cf91 is not in 23.0.x for some reason, though it is in 22.3.x?
13:54robclark: tursulin, sima: any thoughts about moving forward with https://patchwork.freedesktop.org/series/117008/ and https://patchwork.freedesktop.org/series/116217/ ? I think the discussion has settled down on both
13:57tursulin: robclark: syncobj deadline sounded fine to me, I believe you have explained that it is not any random wait that gets "deadline NOW" set but a smaller subset and AFAIR I was satisfied with that explanation. It was on Mesa to ack or not the userspace changes.
13:57tjaalton: dcbaker: okay, found the note on .pick_status.json
13:57tursulin: robclark: fdinfo I planned to revisit this week but ran out of time, promise to do it next week. But I think that too looked acceptable to me.
13:59tursulin: robclark: ah that u32.. I really think it needs to be u64
13:59tursulin: that was possibly my last open but as said, I need to re-read it all one more time
14:00tursulin: u32 IMO I don't see how that works. With i915 I could overflow it in two GEM_CREATE ioctls due to delayed allocation.
14:00tursulin: and I don't know what we are going to do with gputop
14:13robclark: tursulin: oh, yeah, that was meant to be u64
14:14robclark: I think I've read (and rebased) enough of the rest of the gputop series to be happy with it.. I'll reply w/ r-b on list
14:35tursulin: robclark: thanks! I'll see if it needs yet another rebase and merge next week. Will aim to r-b your fdinfo series next week too.
14:37robclark: thx, I'll try and find a few min to re-send w/ s/u32/u64/.. that was just a typo when I addressed your suggestion to not use size_t
15:44mattst88: karolherbst: I'm enabling rusticl in gentoo. can you remind me what hardware it supports vs what hardware clover supports?
15:44mattst88: and also what opencl version each supports?
15:46karolherbst: mattst88: llvmpipe, nouveau, panfrost, iris, radeonsi and up to 3.0
15:46kisak: the joy of OpenCL 3.0 is that the base requirements are much lower than 2.x, so every new driver starts there.
15:47mattst88: karolherbst: awesome, thank you. I think r600 is the only driver that was supported by clover that is not currently supported by rusticl?
15:48karolherbst: correct
15:48karolherbst: though it might just work, dunno
15:48karolherbst: I don't have the hardware to try
15:53mattst88: thanks
16:28jenatali: Anybody want to review/ack a build fix? https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/22994
16:28jenatali: Not sure why none of the Linux release build pipelines caught this, but the MSVC release pipeline we have did
16:34mattst88: done :)
16:34jenatali: Thanks :)
16:51alyssa: jenatali: yo i heard you like reviewing nir patches
16:51alyssa: so i put nir patches in my nir patches
16:51jenatali: I'm reviewing
16:51alyssa: I'm shitposting
16:51alyssa: We're all busy this afternoon it seems
16:51alyssa: ;-D
16:59sima: robclark, if tursulin finds it acceptable I don't think it needs me on top
16:59sima: unless you want me to add an ack
17:05karolherbst: alyssa: you are busy with the wrong stuff
17:06alyssa: karolherbst: gasp
17:08robclark: sima: t'was mostly just to keep you in the loop
17:09robclark: (since uabi things)
17:26alyssa: anholt_: I'm seeing dEQP-GLES2.functional.shaders.algorithm.rgb_to_hsl_vertex fail on asahi with mediump lowering enabled, IIRC you hit something similar with angle?
17:28alyssa: CTS bug maybe?
17:56DemiMarie: karolherbst: why does firmware submission have lower latencies? And I am not sure what you mean regarding CPUs, unless you are talking about coprocessors used just for power management.
18:05karolherbst: DemiMarie: yeah, not quite sure what the CPU thing is; I think I've read something somewhere about moving more of it into CPU firmware instead of the kernel, but it might just be power management so far
18:05karolherbst: anyway, for the GPU you don't want to involve the kernel too much in getting your jobs scheduled by the hardware
18:05karolherbst: usermode command submission already is a big chunk of it
18:06karolherbst: but if scheduling involves the kernel, you still have those round trips
18:06karolherbst: which you kinda don't want
18:07robclark: also microcontrollers are much better at interrupt latency than full on desktop CPUs
18:07karolherbst: context switching is already done in firmware on some GPUs at least, usermode command submission is mostly a kernel feature giving userspace permission to do it (by mapping GPU memory)
18:08karolherbst: on nvidia command submission doesn't require the kernel at all, and from what I know it's performance-critical enough that it's worth it
18:08DemiMarie: karolherbst: is the syscall overhead noticeable?
18:08jenatali: Yes
18:08DemiMarie: robclark: does the firmware ever just busy spin? I imagine that it could afford to.
18:09karolherbst: DemiMarie: yes
18:09robclark: implementation detail I suppose
18:09karolherbst: for Vk/GL it doesn't matter much
18:09karolherbst: for compute it does
18:10karolherbst: usermode submission is also kinda incompatible with graphics anyway
18:10karolherbst: not sure it's feasible at all unless you move more bits out of the kernel
18:12DemiMarie: robclark: my thought was that the microcontroller uses so little power that it could busy spin without anyone caring. Is the lower interrupt latency due to having shallower pipelines and fewer registers?
18:13DemiMarie: karolherbst: why is compute so latency sensitive? Also, my concern with userspace submission is obviously security.
18:13karolherbst: there is no pcie bus in between
18:14robclark: and a lot less state to save/restore, etc/etc.. some u-controllers don't even have to save/restore regs on irq. Also they tend not to be running a full blown operating system ;-)
18:14karolherbst: DemiMarie: ehh.. it's not a security thing really. It's just that you fill your command buffer, enqueue it in a command buffer submission ring buffer and ring a doorbell telling the GPU there is more work
18:14karolherbst: the kernel still sets up all the state and permissions
18:14robclark: uc's range from things like full blown cortex-m or riscv to things that are much smaller and very special purpose
18:15karolherbst: so userspace just gets some memory mapped
18:15karolherbst: and uses that
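A very rough illustration of the flow karolherbst describes: userspace writes its command buffer, queues a pointer to it in a mapped submission ring, and rings a doorbell. All struct and field names below are made up; the real ring and doorbell layout is hardware-specific and is set up by the kernel beforehand.

```c
#include <stdint.h>

/* Made-up ring layout; the kernel has already mapped these pages into the
 * process and granted it permission to use them. */
struct example_ring {
	volatile uint64_t *entries;   /* GPU addresses of command buffers */
	volatile uint32_t *wptr;      /* write pointer shared with the GPU */
	volatile uint32_t *doorbell;  /* MMIO doorbell page */
	uint32_t mask;                /* ring size minus one (power of two) */
};

static void example_submit(struct example_ring *ring, uint64_t cmdbuf_gpu_addr)
{
	uint32_t w = *ring->wptr;
	ring->entries[w & ring->mask] = cmdbuf_gpu_addr;  /* enqueue the job */
	*ring->wptr = w + 1;
	*ring->doorbell = w + 1;   /* "ring the doorbell": tell the GPU there is more work */
}
```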
18:15DemiMarie: karolherbst: I meant that with userspace submit, the firmware is parsing untrusted input, so those parsers had better be secure.
18:15karolherbst: the situation is different on hardware where you could enqueue commands which could be security relevant
18:16karolherbst: I mean.. it's done in hardware anyway
18:16karolherbst: well you can submit broken stuff and the GPU hangs
18:16karolherbst: or something
18:16karolherbst: but.. the kernel can't fully protect you from those things anyway
18:16DemiMarie: karolherbst: oh, I thought there was some C code actually parsing messages from the user.
18:17karolherbst: depends on the driver/hardware
18:17karolherbst: on nvidia there is not much
18:17DemiMarie: What about AMD?
18:17karolherbst: good question
18:18karolherbst: I know that we have some GPUs (*cough* broadcom *cough*) without an MMU and yeah... the kernel is absolutely required here
18:18DemiMarie: Also my understanding is that at least for AGX full isolation should be possible, even if not yet implemented.
18:19karolherbst: yeah, modern GPUs are usually like that
18:19karolherbst: you really do all that stuff in userspace, and the kernel just submits it to the hardware + fencing
18:19karolherbst: which matters for presentation
18:19karolherbst: that's the biggest reason it's more of a compute-only thing
18:22karolherbst: one thing which I think is still happening on the kernel side even with nvidia usermode submission is all that VM stuff and binding buffers, because managing physical addresses in usermode would be... insecure :)
18:28robclark: actual cmdstream "parsing" is some combo of hw and fw.. whether kernel is involved in submit doesn't really change that.. kernel's involvement is more about fencing and residency
18:29DemiMarie: Makes sense
18:30karolherbst: unless you have to do relocations and stuff :')
18:31DemiMarie: Why can a malicious userspace program interfere with other users of the GPU?
18:32robclark: it is still a shared, non-infinite, resource
18:32karolherbst: also.. drivers don't clear VRAM :')
18:32robclark: but different processes should have their own gpu virtual address space, etc (oh, and vram.. but that is one of the 99 problems I don't have :-P)
18:33DemiMarie: karolherbst: report that to oss-security and get a CVE assigned and I suspect that would change
18:33karolherbst: it's a known issue for like 10 years, but I guess
18:34DemiMarie: Also virtio-GPU native contexts needs this to change because cross-VM leaks are an obvious non-starter.
18:35karolherbst: yeah, that's why hypervisors/browsers clear all GPU memory before anybody can read it out
18:35karolherbst: normally
18:35DemiMarie: With native contexts the kernel is the hypervisor 😆
18:36DemiMarie: My understanding is that native contexts expose the standard uAPI to the guest
18:36DemiMarie: robclark: could something like GPU cgroups be made?
18:38DemiMarie: I’m more concerned about e.g. GPU hangs and faults causing other (innocent) jobs to be crashed.
18:38DemiMarie: On the CPU such processes would just be preempted by the kernel and nobody else would care.
18:39karolherbst: depends on the hardware/firmware really
18:39karolherbst: on AMD it's a lost cause
18:39DemiMarie: And my (perhaps naïve) expectation is that the GPU should provide the same level of isolation.
18:39karolherbst: (but apparently it's changing with new gens)
18:39karolherbst: on Nvidia you can kill contexts and move on
18:39karolherbst: yeah.. Nvidia is quite far on that front actually
18:40karolherbst: newest hardware also has native support for partitioning
18:40DemiMarie: karolherbst: why is it a lost cause on AMD, what changes are fixing it, and why is Nvidia better?
18:40karolherbst: so you can assign a certain amount of SMs to each context
18:40karolherbst: or partition VRAM even
18:40karolherbst: on AMD it's either full GPU reset or nothing
18:40karolherbst: and I mean full GPU reset literally
18:41karolherbst: I think they can kinda preserve VRAM content though
18:41DemiMarie: karolherbst: if it is not present on the hardware Qubes users actually have, then for my purposes it does not exist.
18:42karolherbst: yeah.. I don't think sharing the same AMD GPU across VMs is a working use case
18:42DemiMarie: What is changing on newer AMD HW? Dropping the 1-entry implicit TLB that makes TLB invalidation take a good chunk of a second?
18:42karolherbst: I think they started to implement proper GPU recovery
18:42karolherbst: not sure
18:42robclark: DemiMarie: yeah, using cgroups or some sort of "protection domain" to trigger extra state clearing is a thing I've been thinking of
18:43DemiMarie: robclark: protection domain = process?
18:44robclark: it could be a more sensitive process.. or you could set up different domains for different VMs vs the host.. maybe cgroups is the right way to do that, idk.. just more of an idea at this stage than a patchset ;-)
18:45robclark: so far I've mostly cared about iGPUs but I think we'd want something like that for dGPUs with vram..
18:46DemiMarie: robclark: what about using a ptrace check? If one process cannot ptrace another, they are in different protection domains.
18:47robclark: DemiMarie: btw, semi related, I guess qubes would perhaps be interested in a host<->guest wayland proxy, i.e. you could have a VM guest app forward its window surface to the host compositor
18:47DemiMarie: That said, most people will assume protection domain = process, so I recommend that as the default.
18:47robclark: ptrace could work.. for vram, where you might have to clear 16GB, that might still be too much
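A hedged sketch of the ptrace idea: reuse the kernel's existing ptrace access check to decide whether the previous and next users of a GPU resource are in the same protection domain, and scrub state only when they are not. Only ptrace_may_access() is a real kernel function here; the surrounding helper is hypothetical.

```c
#include <linux/ptrace.h>
#include <linux/sched.h>

/* Hypothetical helper: if current cannot ptrace the previous owner of a GPU
 * context or buffer, treat the two as separate protection domains and clear
 * GPU state before reuse. */
static bool gpu_needs_scrub_for(struct task_struct *prev_owner)
{
	return !ptrace_may_access(prev_owner, PTRACE_MODE_READ_REALCREDS);
}
```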
18:47DemiMarie: robclark: why do you need to clear all 16GiB?
18:48DemiMarie: Just clear the buffers userspace actually requests.
18:48robclark: well, that is probably worst case
18:48DemiMarie: And yes, such a proxy would be awesome. I’m aware of two of them.
18:48robclark: but it could still be a lot to clear.. which is (I assume) why it isn't done already
18:49DemiMarie: robclark: have a shader do the clearing? Linux already needs to handle zap shaders for some mobile GPUs.
18:50robclark: crosvm has such a wl proxy.. that is how vm apps work on CrOS.. but I kinda would like to see an upstream soln based on new virtgpu (drm/virtio) context type so we can drop downstream guest driver
18:50robclark: zap only needs to clear ~256kb to 4Mb of gmem (plus shader regs, etc) so that isn't quite as bad
18:52javierm: robclark: we would also need something like sommelier but with proper releases so that distros could package it, right ?
18:52javierm: robclark: since mutter already supports being nested, I wonder if there could be a mutter variant that would do the same as sommelier
18:53robclark: could be sommelier.. which appears to already have some support for cross-domain (which _could_ be the wl proxy virtgpu context type.. but also carries some extra baggage due to how it is used for minigbm/gralloc)
18:54javierm: robclark: yeah, that's the part I'm not sure about: how CrOS-specific sommelier is, or whether it could be used in general Linux distros
18:54qyliss: javierm, robclark: have you both seen https://github.com/talex5/wayland-proxy-virtwl?
18:54qyliss: sommelier is not too bad for distros
18:54qyliss: we have a sommelier package in Nixpkgs
18:55qyliss: although I switched to the above when Sommelier didn't work on Linux 5.19 for a long time
18:55javierm: qyliss: ah, interesting
18:56qyliss: sommelier being useful of course means packaging crosvm, which is less easy
18:56qyliss: although has been getting better
18:59robclark: I guess in theory qemu support could be added.. it would be kind of convenient if it could just live in virglrenderer but then it wouldn't be rust
20:47qyliss: yeah, I've thought about that a bit
20:47qyliss: could Rust be (optionally) added to virglrenderer, like in mesa?
20:50puck_: robclark: hrmm, doesn't modern chromeos already use virtio-gpu cross-domain?
20:54puck_: the issue i had with virglrenderer/qemu is the fact that qemu doesn't support cross-domain yet
20:58robclark: puck_: we do use cross-domain for some things but still have downstream virtio-wl driver.. looks like sommelier has support to _some_ degree for cross-domain but looks like it is missing some things like fence support (not that the virtio-wl fence support is actually correct)
20:59robclark: we use cross-domain for minigbm/gralloc for android vm, for ex.. but I'm not sure how well tested the wayland part of it is
20:59puck_: robclark: yeah, the fence support was something i was having trouble with - i had some fun bugs with that, and i still don't entirely know how the kernel drm fences work :D
21:00puck_: robclark: at one point i got out-of-process crosvm virtio-gpu working with cloud-hypervisor, combined with cross-domain wayland + virgl (with a hacky patch to fix the stride, because virgl doesn't pass in the host stride for buffers, and amdgpu uses a non-standard one afaict)
21:01robclark: well, every gpu uses a non-standard stride ;-)
21:01robclark: but if virgl goes away, so does that problem ;-)
21:01puck_: yeah exactly :p
21:02puck_: i was thinking about how it's funny that they first invented "just run OpenGL commands over a pipe" before going with the seemingly simpler solution of "just pass the kernel API through" -- but then i remembered the latter requires IOMMUs that didn't exist back then
21:03robclark: it doesn't _strictly_ require iommu.. but does require context isolation.. ie. different guest and host processes should have separate gpu address space
21:03robclark: but that is pretty much a given these days
21:03puck_: right, yeah
21:16alyssa: dj-death: Kayden: btw, unified atomics just landed.. looking forward for the Intel code deletion :~)
21:18jenatali: alyssa: ~1hr for me unless you want to rebase + land, CI was clean on my last push I believe
21:22alyssa: jenatali: ?
21:22alyssa: oh
21:22jenatali: You pinged me to rebase+merge my atomics change, just gonna take a few lol
21:22alyssa: oh yes I see
21:22alyssa: sorry I'm context switching too much right now ;p
21:24jenatali: Oh actually it's going to conflict with another change in the queue. I'll wait til that lands first (assuming it does)
21:28alyssa: choo choo
23:02jenatali: alyssa: Assigned :)
23:06jenatali: Side note, I wish Marge was able to move on once there are failing jobs in a pipeline (i.e. the whole pipeline won't succeed). I feel like that'd save a lot of time, where now if a job irreparably fails and needs new changes it'll just sit there waiting for the rest of the jobs to finish
23:18zmike: jenatali: try filing ci ticket?
23:19jenatali: zmike: Just, in the Mesa project with the CI label?
23:19zmike: ci label
23:19zmike: maybe someone will see it and have ideas
23:24dj-death: alyssa: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23004
23:46jenatali: Nice, that's a good negative line count