00:06DemiMarie: Mesa (and anything else proprietary programs might need) should be statically linked against all dependencies except libc.
00:07DemiMarie: Alternatively, the libc dynamic linkers need to be fixed to perform two-stage symbol lookup, like Solaris/illumos and Windows do.
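For illustration (not from the chat): glibc's closest existing knob is RTLD_DEEPBIND, which puts a dlopen'd object's own dependency scope ahead of the process's global scope. A minimal C sketch; the driver path is purely hypothetical:

    #define _GNU_SOURCE        /* RTLD_DEEPBIND is glibc-specific */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Deep binding: the opened object's own dependency scope is searched
         * before the global scope, so the driver's libraries win over
         * same-named symbols the host binary already carries. */
        void *handle = dlopen("/usr/lib/dri/zink_dri.so",   /* hypothetical path */
                              RTLD_NOW | RTLD_LOCAL | RTLD_DEEPBIND);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }
        dlclose(handle);
        return 0;
    }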
09:15tzimmermann: mripard, hi. i noticed that drm-misc-next has a number of patches that rather belong in drm-misc-fixes. maybe you want to cherry-pick them
10:41mripard: tzimmermann: which ones do you have in mind?
10:45tzimmermann: mripard, the ones i noticed were 7a5115ba1d691bd14db91d2fcc3ce0b056574ce9 and eb4accc5234525e2cb2b720187ccaf6db99b705f and 53bd7c1c0077db533472ae32799157758302ef48
10:45tzimmermann: their Fixes tag refers to a commit in upstream
10:46tzimmermann: but these fixes won't make it there before v6.12-rc1
10:55mripard: I don't think the first two are actually fixes. They just replace the same code with other macros
10:55mripard: it's more of an abuse of Fixes if anything
13:24amber_harmonia: I've been trying to run a ci target with `ci_run_n_monitor.py ` (specifically a750_vk), but it has been stuck on "⏲ for the pipeline to appear in ['mesa/mesa']" for more than an hour by now, does anyone know what could be going on here?
13:27zmike: use --target with the pipeline url
13:37amber_harmonia: trying with that
13:55amber_harmonia: doesn't seem to be working either, in the same way
13:56zmike: is that a manual job?
13:56zmike: you might need --force-manual
14:06valentine: amber_harmonia: I've started your int64 mr with ./bin/ci/ci_run_n_monitor.sh --pipeline-url full_pipeline_link --target "a750_vk" --force-manual
14:08valentine: But keep in mind that you would have to use ./bin/ci/ci_run_n_monitor.sh --pipeline-url full_pipeline_link --target "a750_vk" --exclude-stage '' after a rebase because the script changed
14:19daniels: valentine: --force-manual is a no-op now
14:19valentine: It is, but that's an older branch
14:19daniels: ah :)
14:20valentine: Thanks tho :D
14:21valentine: It fails to start https://gitlab.freedesktop.org/amber/mesa/-/jobs/63267382
14:21valentine: daniels ^^
14:25daniels: mupuf: ^ a750 fails to boot
14:25zmike: this is like 4 weeks running now
14:25mupuf: Thanks, on it!
14:28mupuf: zmike: never the same bug though! That's the problem of having only ONE device at home to test both the next infra AND mesa. Hopefully I can merge my 100-patch-long MR next week and stop this insanity
14:29mupuf: valentine: can you retry the job?
14:29daniels: mupuf: if you've got a moment could you also please review https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/31059
14:36valentine: mupuf: amber_harmonia Well it's running now but literally everything's crashing or timing out
14:36mupuf: daniels: done, thanks!
14:38mupuf: valentine: I see! Well, I reverted to a known good version of the infra (hurray for image-based infras), so it likely is a regression from this branch
14:38mupuf: But we can check on main to make sure
14:39valentine: I'm pretty sure it's just this MR, yeah
15:09amber_harmonia: valentine: huh, i am not getting that locally
15:12valentine: amber_harmonia: IIRC I also saw the same crashes on other gpus when I tried to enable shaderint64 globally on your branch
15:16amber_harmonia: oh yeah i had that disabled here, let's see
15:20amber_harmonia: yeah there are some unsupported nir operations, i guess we probably wanna do those in a separate MR since it's not directly related?
15:25amber_harmonia: hm, on a further look it's probably a better idea to just keep it together
15:30valentine: sounds good :)
19:25Lynne: so what does compute_shader_derivatives do? most documentation says "check out the NV extension, it's the same"
19:26Lynne: as far as I understand, it just divides the total number of workgroups submitted by 4, and I guess it requires that all 4 workgroup groups run in parallel?
19:29glehmann: compute_shader_derivatives just allows you to use dFdx/dFdy and implicit lod in compute shaders. For that it defines which compute shader invocations in a workgroup will form one quad group for derivatives
19:31glehmann: not sure what you refer to with dividing workgroups by 4
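For context, a hedged C sketch of how an application would typically query which derivative-group mapping is supported, assuming the NV feature struct from VK_NV_compute_shader_derivatives:

    #include <stdio.h>
    #include <vulkan/vulkan.h>

    static void query_compute_derivatives(VkPhysicalDevice phys_dev)
    {
        VkPhysicalDeviceComputeShaderDerivativesFeaturesNV deriv = {
            .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_COMPUTE_SHADER_DERIVATIVES_FEATURES_NV,
        };
        VkPhysicalDeviceFeatures2 feats = {
            .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2,
            .pNext = &deriv,
        };
        vkGetPhysicalDeviceFeatures2(phys_dev, &feats);

        /* GroupQuads: local invocations (2x,2y), (2x+1,2y), (2x,2y+1), (2x+1,2y+1)
         * form one quad; GroupLinear: quads are consecutive groups of four
         * gl_LocalInvocationIndex values (0-3, 4-7, ...). */
        printf("quads: %u, linear: %u\n",
               deriv.computeDerivativeGroupQuads, deriv.computeDerivativeGroupLinear);
    }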
19:57airlied: robclark: msm introduced a python dep in the kernel, not sure I saw any discussion on that before it landed
19:58airlied: i think you should consider making it as optional as possible
19:58robclark: iirc there was some discussion
19:59airlied: with core kernel folks and distros?
19:59robclark: and some adjustment/patching to relax python version requirements
19:59robclark: probably just dri-devel
19:59airlied: introducing a new hard build-time dep is not something we can do on our own
19:59robclark: maybe lumag remembers?
20:00airlied: it's outside my authority zone
20:00robclark: hmm
20:00airlied: pretty sure there has been pushback in the past on it
20:00robclark: even documented: https://lore.kernel.org/all/20240509-python-version-v1-1-a7dda3a95b5f@linaro.org/
20:01robclark: "docs: document python version used for compilation"
20:01airlied: the optional is load bearing
20:01robclark: https://www.irccloud.com/pastebin/lahPzugI/
20:01airlied: like, is the option that you don't get msm?
20:01robclark: yeah
20:02airlied: that really isn't optional then :-)
20:03airlied: though I'd have expected to hear from distros about it
20:03airlied: and get more pushback
20:03robclark: idk if it would work as a short-term option, but maybe a distro patch that checks in the generated headers?
20:03robclark: tbh, I hadn't thought py was that exotic of a build dep
20:04airlied: i don't think anything in the core kernel uses it
20:04airlied: it might be used in tools
20:04robclark: checking in massive generated headers isn't awesome.. so it's definitely been a nice developer QoL improvement to generate them at build time
20:05airlied: it might be that all distros already have python in their build chain implicitly
20:05airlied: and if it stays py3 generic nobody will notice
20:06airlied: maybe we should open the floodgates, but with rust deps coming up, it might piss some ppl off to add even more hard deps
20:08robclark: maybe, but I at least would have thought py3 would be an easier dep to swallow.. defn trying to keep it pretty py3 generic
20:09HdkR: Switch the header generator script over to using Rust instead. Drop one dependency for another hard dependency :)
20:10airlied: we do have rust hostprogs but only for rust kernel builds
20:10airlied: so it might be an option :-)
20:11airlied: but i expect pushback will come from some distros eventually but maybe nobody will notice
20:13robclark: do we end up with a py dependency anyways for kerneldoc?
20:23airlied: robclark: yeah it might be that most people build docs now :-)
20:49iive: isn't meson running on python3? It's already hard dependency for mesa.
20:50Sachiel: they are talking about the kernel
20:50iive: oh. i see.
20:51karolherbst_: the kernel isn't using python already?
20:51jannau: the fedora kernel already requires python3
20:52karolherbst_: msm already requires python as far as I can tell
20:57jannau: since v6.10 which is what this is about
21:05karolherbst: ahh
21:09DemiMarie: If Qubes OS winds up needing to do a buffer copy in a compositor, is it better to use OpenGL or Vulkan?
21:10DemiMarie: s/compositor/proxy/
21:10robclark: from a performance standpoint, it wouldn't matter
21:11DemiMarie: What about sync?
21:11DemiMarie: I want to support drm-syncobj
21:11DemiMarie: Can one use OpenGL in an explicit sync world?
21:12robclark: yes, android is explicit sync and always has been
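For illustration, a minimal C sketch of the Android-style explicit-sync path mentioned above, assuming the EGL_ANDROID_native_fence_sync extension (which Mesa exposes on several drivers); checks for missing extensions are omitted:

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>

    /* Returns a sync_file fd (or -1) representing the GL work queued so far. */
    static int export_gl_fence_fd(EGLDisplay dpy)
    {
        PFNEGLCREATESYNCKHRPROC create_sync =
            (PFNEGLCREATESYNCKHRPROC)eglGetProcAddress("eglCreateSyncKHR");
        PFNEGLDUPNATIVEFENCEFDANDROIDPROC dup_fd =
            (PFNEGLDUPNATIVEFENCEFDANDROIDPROC)eglGetProcAddress("eglDupNativeFenceFDANDROID");
        PFNEGLDESTROYSYNCKHRPROC destroy_sync =
            (PFNEGLDESTROYSYNCKHRPROC)eglGetProcAddress("eglDestroySyncKHR");

        EGLSyncKHR sync = create_sync(dpy, EGL_SYNC_NATIVE_FENCE_ANDROID, NULL);
        if (sync == EGL_NO_SYNC_KHR)
            return -1;

        /* The native fence fd only materializes once the commands are flushed. */
        glFlush();

        int fd = dup_fd(dpy, sync);   /* EGL_NO_NATIVE_FENCE_FD_ANDROID (-1) on failure */
        destroy_sync(dpy, sync);
        return fd;
    }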
21:12DemiMarie: What about with Mesa?
21:13DemiMarie: The reason I mention drm-syncobj is that syncobjs are allowed to be long running. The kernel does not provide a guarantee that one will complete in finite time, which means that a syncobj cannot be turned into a sync file and imported into a dmabuf.
21:14robclark: drm-syncobj is just a layer on top of dma_fence
21:14robclark: but pretty sure there is some extension for syncobj.. there is support for it at the gallium layer
21:15DemiMarie: Why is linux-explicit-synchronization deprecated in favor of drm-syncobj?
21:16robclark: I guess syncobj fits better with vk.. other than that I'm not sure there is much advantage.. it is all just dma_fence under the hood
21:18DemiMarie: I thought there were future driver changes (page faults and compute interop IIUC) that only work with syncobj.
21:19DemiMarie: Those lose the "will complete in finite time" guarantee.
21:20robclark: AFAIU that is all still theoretical
21:23alyssa: why does devicecoherent memory result in non-coherent load/store_global with piles of device-scope barriers?
21:23alyssa: for AGX, implementing those semantics would require piles of cache flushes
21:23alyssa: but the original glsl can be done without cache flushes, by setting the "bypass caches" bit on the load/stores
21:23alyssa: is that... unusual? I had the impression AMD was similar with the .glc bit
21:25alyssa: it also seems harder to optimize this than to lower the other way around in NIR
21:27alyssa: oh, that's nir_lower_memory_model isn't it
21:27alyssa: thanks pendingchaos !!
21:28alyssa: :)
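For reference, a hedged sketch of how that pass would typically be run from a Mesa backend, assuming the usual NIR_PASS convention:

    #include "nir.h"

    static bool run_lower_memory_model(nir_shader *nir)
    {
        bool progress = false;
        /* Uses scope/memory-model information to drop coherent/volatile
         * requirements on loads/stores that don't actually need
         * device-scope visibility. */
        NIR_PASS(progress, nir, nir_lower_memory_model);
        return progress;
    }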
21:55airlied: pretty sure syncobjs have mostly the same rules as fences in terms of long-running behaviour
22:41DemiMarie: robclark airlied: Ah, the advantage of syncobjs is that one can create a syncobj for unsubmitted work, which might never complete.
22:43DemiMarie: Is using syncobjs with OpenGL going to trip a bunch of driver bugs?
22:54robclark: idk if it is as well supported on gl side
22:54robclark: if you just need to do a blit, fence fd is more than sufficient
23:33DemiMarie: robclark: can I get that from a syncobj?
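For what it's worth, a hedged C sketch of one way to get a fence fd out of a syncobj via libdrm's drmSyncobjExportSyncFile; the device fd and syncobj handle are assumed to come from elsewhere, and the export is expected to fail if no fence has been attached yet (i.e. unsubmitted work):

    #include <stdint.h>
    #include <xf86drm.h>

    /* Returns a sync_file fd, or -1 if the syncobj has no fence attached yet
     * (or the kernel/driver doesn't support the export). */
    static int syncobj_to_fence_fd(int drm_fd, uint32_t syncobj_handle)
    {
        int sync_file_fd = -1;
        if (drmSyncobjExportSyncFile(drm_fd, syncobj_handle, &sync_file_fd) != 0)
            return -1;
        return sync_file_fd;
    }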