09:05pq: vsyrjala, the kernel cannot degrade other connectors harder unless their CRTCs were added to the commit, right?
09:35mlankhorst: sima airlied: https://patchwork.freedesktop.org/patch/716063/?series=164264&rev=1 should this be drm-next or drm-xe-next?
09:36sima: mlankhorst, if you have the backmerge already in drm-xe-next then I think best to apply it there (since that's where most xe testing/bisecting happens) and then feed it into drm-next through the usual pr train
09:37sima: also I guess since this looks eerily like the one I've screwed up already, but once again, make sure it's extra highlighted in the pr mail so we don't screw up a 3rd time
09:38sima: or is this a new one and somehow the xe wa table is just extremely prone to mismerges by airlied&me?
09:38sima: a bit confused
09:40mlankhorst: hm I thought it was a new failure, let me find the original kernel builds
09:44mlankhorst: Yeah the most recent failure broke on 30 march 18:44 and 19:20, not sure why exactly.
09:44mlankhorst: Also uncertain what timezone
09:46mlankhorst: Around the time drm-xe-next was merged into drm-next
09:53mlankhorst: Yeah looks like drm-xe-next is the correct place now
10:00vsyrjala: pq: it can with allow_modeset=true
10:23pq: vsyrjala, huh? That's new.
10:25vsyrjala: been like that forever
10:25pq: vsyrjala, that means that userspace needs to inspect the feedback properties of everything on the modeset of anything.
10:25vsyrjala: i suppose
10:26pq: I find that unexpected.
10:27pq: even more reason to get pairs of setting/feedback properties for everything that drivers choose automatically.
10:28vsyrjala: just a small matter of convincing someone to write the code
10:29pq: yeah, this half a piece at a time doesn't seem to work too well
11:41karolherbst: glehmann: well libclc comes with software emulation for fma, so that's not necessarily a problem
11:41karolherbst: just makes things very slow :)
11:42karolherbst: which also means we could emulate fma for games that really really will require it
11:42karolherbst: anyway.. I really should do the proper fma work, because I need it for CL, because atm zink gets the emulated fma :')
11:49glehmann: same for radeonsi before rdna2
12:04zmike: in zink you'd have to hook up the extension anyway
12:13karolherbst: yeah...
12:13karolherbst: it's on my todo list...
12:13karolherbst: glehmann: got fma added that late? I thought there was slow fma before that
12:14karolherbst: gfx6 seems to have fma, but not sure what RDNA that is
12:15karolherbst: ohh looks like that's GCN so yeah.. even with 1/4 of the speed of fmad, this is still faster than emulated ffma :)
12:16karolherbst: anyway, there is a plan to properly model all of this
12:42glehmann: fma is full rate since gfx9, but since we had mad too, it was just easier to use that
12:42glehmann: rdna2 removed mad
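[Aside on why fmad and ffma are not interchangeable numerically: mad rounds the product to the working precision before the add (two roundings), while fma rounds only once at the end. A minimal Python illustration, emulating the fused path with exact rational arithmetic; this is just a sketch of the rounding behavior, not GPU code:

```python
from fractions import Fraction

def mad(a, b, c):
    # "mad": multiply, round to double, then add -- two roundings.
    return (a * b) + c

def fma(a, b, c):
    # Fused multiply-add: compute a*b + c exactly, round once at the end.
    # Emulated here with exact rationals instead of a hardware instruction.
    return float(Fraction(a) * Fraction(b) + Fraction(c))

# Inputs chosen so the intermediate rounding swallows the whole result:
# a*b = 1 - 2**-60, which rounds to exactly 1.0 in double precision.
a = 1.0 + 2**-30
b = 1.0 - 2**-30
c = -1.0

print(mad(a, b, c))  # 0.0 -- the 2**-60 term was rounded away
print(fma(a, b, c))  # -8.673617379884035e-19, i.e. -2**-60, preserved
```

This is the kind of difference that makes some games and CL kernels "really really require" a true fma rather than a mad.]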
13:08anonymix007[m]: Am I supposed to export sync files from DMA buffers when importing into Vulkan? I tried not to and there clearly is tearing. I also tried to do so and the Vulkan driver hangs in drmSyncobjTimelineWait
15:01karolherbst: glehmann: right, the point is just that using ffma on the older hw is better than the emulation, and with the plan I was discussing with gfxstrand we do have a way that drivers can make the correct choices
15:22stefan11111: Ping to get some eyes on this: https://gitlab.freedesktop.org/mesa/demos/-/merge_requests/250
19:13anholt: radv folks: are there any metrics I should consider tracking for gpu memory usage (vram and/or sysmem) from sysfs/debugfs, in the new trace replay tool (and possibly in the future in deqp-runner)? The goal is to make identifying various oom situations easier for CI maintenance.
19:15anholt: (also, in an ideal world I'd be capturing high watermark memory usages of heaps per trace/deqp invocation, but I don't have a concrete plan for that yet because I don't think we have driver support for such reporting on any driver)
20:09Kayden: anholt: ooh. yeah, I was thinking it'd be great if we had an idea of the memory usage of each deqp test, so we could maybe segregate the really memory hungry ones to run at lower parallelism. but, yeah, hadn't thought through how to capture that data yet
20:09Kayden: (I was seeing some machines in the intel CI being underutilized, likely because we reduced concurrency to avoid OOM-killer scenarios)
20:10Kayden: but maybe a bit -too- far
20:10anholt: https://anholt.pages.freedesktop.org/-/mesa/-/jobs/96540683/artifacts/results/graphs.html <-- an example of the monitoring output for turnip.
20:20Kayden: very nice :)
20:29zmike: ooooooo
20:34openglfreak: Where would I make an issue report about amdgpu / AMD hardware (MES) / user queues?
20:36anholt: -j 1 with the monitor running is not a terrible way to evaluate memory usage of traces, I guess.
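[The "-j 1 with the monitor running" idea can be sketched as a poll-while-running high-watermark sampler. Everything below is an assumption-laden sketch: `read_usage` stands in for whatever actually supplies a usage number (e.g. a parser over a driver debugfs file such as `/sys/kernel/debug/dri/1/gem`, which is driver-specific and not a stable ABI), and the demo uses a fake reader:

```python
import itertools
import subprocess
import sys
import threading

def run_with_peak(cmd, read_usage, interval=0.1):
    """Run cmd while polling read_usage(); return (returncode, peak).

    read_usage is any callable returning current GPU memory usage in
    bytes. Its data source is an assumption here (debugfs, fdinfo, ...),
    not something this sketch defines.
    """
    peak = read_usage()  # sample once up front so instant exits count
    done = threading.Event()

    def poll():
        nonlocal peak
        while not done.is_set():
            peak = max(peak, read_usage())
            done.wait(interval)

    t = threading.Thread(target=poll)
    t.start()
    rc = subprocess.call(cmd)
    done.set()
    t.join()
    # One final sample so short-lived commands still get counted.
    return rc, max(peak, read_usage())

# Demo with a fake reader standing in for the real debugfs/fdinfo parse:
fake_usage = itertools.cycle([10, 30, 20]).__next__
rc, peak = run_with_peak([sys.executable, "-c", "pass"], fake_usage)
print(rc, peak)  # 0 30
```

Polling can of course miss short allocation spikes between samples, which is why driver-side high-watermark reporting (as anholt notes) would be the better long-term answer.]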
20:40robclark: anholt: idk if useful for what you are doing but gpu_mem_total ftrace event will give you some global and per-pid mem usage info.. kinda an android-centric tracepoint and not sure I remember how it slipped in to drm. But perfetto used it so I wired it up for drm/msm
20:41robclark: tho I guess fdinfo also gives you some of this
20:46anholt: hmm. getting back from per-pid to which trace is being run might suck. I'm pulling the global out of /sys/kernel/debug/dri/1/gem currently.
20:54robclark: I guess you could get proc name from $debugfs/gem... but I suspect you'll just see a bunch of apitrace/etc processes which isn't too useful
20:55glehmann: openglfreak: https://gitlab.freedesktop.org/drm/amd probably
20:55robclark: if you had the pid's you could, I think, just use fdinfo.. to avoid driver-specific things
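[Given a pid and an fd, the driver-agnostic fdinfo route looks roughly like this. The `drm-*` key/value shape follows the kernel's drm-usage-stats fdinfo format (`drm-driver`, `drm-client-id`, `drm-memory-<region>` in KiB/MiB/GiB); the exact set of keys varies per driver, and the sample blob below is made up for illustration:

```python
def parse_drm_fdinfo(text):
    """Parse drm-* keys from a /proc/<pid>/fdinfo/<fd> blob.

    Memory keys (drm-memory-* / drm-total-*) are normalized to bytes;
    everything else is kept as a string.
    """
    units = {"KiB": 1024, "MiB": 1024**2, "GiB": 1024**3}
    out = {}
    for line in text.splitlines():
        if not line.startswith("drm-"):
            continue
        key, _, value = line.partition(":")
        value = value.strip()
        if key.startswith(("drm-memory-", "drm-total-")):
            amount, _, unit = value.partition(" ")
            out[key] = int(amount) * units.get(unit, 1)
        else:
            out[key] = value
    return out

# Illustrative blob, not real driver output:
sample = (
    "drm-driver:\tamdgpu\n"
    "drm-client-id:\t42\n"
    "drm-memory-vram:\t512 KiB\n"
    "drm-memory-gtt:\t12 MiB\n"
)
info = parse_drm_fdinfo(sample)
print(info["drm-driver"], info["drm-memory-vram"])  # amdgpu 524288
```

Mapping a pid back to the trace being replayed is the part that "might suck", as noted above; the parsing itself is the easy half.]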
21:25openglfreak: glehmann: Thx dad!
21:25openglfreak: :D
21:36anholt: robclark: that won't be pretty, but I also won't feel bad about making claude sort this out for me. thanks for the pointer.
21:36robclark: heheh, yw