04:24 sumits: danvet: on the dma-buf name vs gem names, I feel the gem bits will need to be updated to use dma-buf name fields / ioctls, since dma-buf must not depend upon its users. thoughts?
06:27 anarsoul: ajax: starting xcompmgr didn't help :(
06:27 anarsoul: it still hits fbPutImage
09:14 MrCooper: anarsoul: FWIW, x11perf -shmput500 hits CopyArea here most of the time, not PutImage
09:20 MrCooper: in fact, it never hits PutImage here
10:28 danvet: sumits, are you taking care of the dma-buf set_name patch?
10:28 danvet: and imo needs the cc: stable as akpm suggested
10:28 danvet: also needs to be applied today so we're not missing the pull request train from mlankhorst_
11:04 tomeu: daniels: not sure what happened at https://gitlab.freedesktop.org/alyssa/mesa/-/jobs/1754950 , but I cannot retry Marge's jobs and if I reassign it to her then she just realizes the CI failed and reassigns herself right away
11:09 MrCooper: tomeu: does force-pushing the same commits to the source branch again trigger another pipeline?
11:09 daniels: i just retried by hand
11:10 MrCooper: clearly your powers exceed ours
11:10 tomeu: MrCooper: not sure, guess the commit ids would be different if the commit data changed
11:10 tomeu: daniels: will the dependent jobs be run automatically afterwards?
11:11 MrCooper: tomeu: I meant literally the same head commit
11:11 tomeu: Marge's commits, or Alyssa's?
11:11 daniels: tomeu: can't remember if they are, but if not then I'll retry them too
11:11 MrCooper: Marge's
11:12 tomeu: MrCooper: don't know, we should try :)
11:12 MrCooper: don't think they are automatically retried
11:12 daniels: hopefully by the time the jobs have re-run, someone else has pushed to master and we get to do it all over again
11:12 tomeu: daniels: thanks a bunch!
11:12 tomeu: hopefully, it would have been Marge :p
11:12 MrCooper: daniels: reassign to Marge once the test jobs are restarted?
11:16 MrCooper: anholt robclark: dEQP-GLES3.functional.fbo.blit.conversion.r16f_to_srgb8_alpha8 seems to be the mesa-cheza flake du jour
11:22 tomeu: daniels: the artifact upload succeeded now, but the jobs aren't retried indeed
12:14 sumits: danvet: yes, I am. will cc:stable as well
12:15 danvet: thx
12:35 sumits: mlankhorst_: pushed the dma_buf set_name patch to drm-misc-fixes, fyi!
13:11 pendingchaos: jekstrand, cmarcelo: is spirv->nir supposed to create nir_intrinsic_memory_barrier_tcs_patch when use_scoped_memory_barrier=false?
13:11 pendingchaos: I don't think it currently does that because SpvMemorySemanticsOutputMemoryMask isn't in vtn_emit_memory_barrier()'s all_memory_semantics
15:07 jekstrand: pendingchaos: It probably should but...
15:08 jekstrand: pendingchaos: SpvMemorySemanticsOutputMemoryMask is only allowed when you support VK_KHR_vulkan_memory_model
15:08 jekstrand: You really shouldn't be claiming support for that extension unless you support scoped memory barriers
15:08 jekstrand: pendingchaos: Also, we should really get RADV and Turnip both moved over to scoped barriers and delete the old path from SPIR-V -> NIR
15:09 jekstrand: anholt, krh, robclark: ^^
15:14 pendingchaos: spirv->nir always adds a SpvMemorySemanticsOutputMemoryMask when handling a TCS SpvOpControlBarrier though
15:15 jekstrand: hrm...
15:15 bnieuwenhuizen: jekstrand: I believe the main blocker for us is keeping the cache-skipping for coherent accesses (i.e. the local visibility/availability stuff)
15:15 jekstrand: bnieuwenhuizen: Not sure what you mean there
15:16 jekstrand: bnieuwenhuizen: If scoped barriers aren't sufficient, I'm open to suggestions. The non-scoped stuff is objectively terrible.
15:18 bnieuwenhuizen: jekstrand: we have loads/stores/atomics that skip the cache (which also flush/invalidate that cacheline). So preferably loads/stores with the visibility/availability bits set would be able to use those. But with the barrier you lose the association with the address
15:20 jekstrand: bnieuwenhuizen: Ah, that makes sense.
15:21 jekstrand: bnieuwenhuizen: Isn't the scoped barrier stuff still strictly better than the old thing though?
15:22 jekstrand: bnieuwenhuizen: One thing I had considered was to add vis/avail bits to the access flags on the load/store/atomic_* intrinsic and then have a pass which turns those into barriers.
15:22 bnieuwenhuizen: jekstrand: thing is you are potentially regressing the performance of SPIR-V coherent buffers (which kinda happens to work because all accesses to the buffer are coherent)
15:22 jekstrand: But short of anyone coming out and stating their requirements when cmarcelo was working on it, doing it in SPIR-V gives correctness even if it's not optimal.
15:23 bnieuwenhuizen: jekstrand: and yes, adding those bits was also my idea, I've just been too lazy as of yet to figure out what all the passes are where I might need to block code movement across it
15:24 jekstrand: fair
16:07 jekstrand: bnieuwenhuizen: About to head out for breakfast but I had a thought about how to solve your over-synchronization problem with descriptor indexing in RADV. Add a bit to radv_device_memory which is "I'm a WSI buffer RADV doesn't own" and set it in vkQueuePresent and unset it in vkAcquireNextImage. Then, when you go to exec, only add it to the kernel list if !bit.
16:07 jekstrand: bnieuwenhuizen: We can add some WSI API for it if that'd help.
16:08 jekstrand: It could also potentially be set implicitly somehow by a transition to PRESENT_SRC but that seems sketchier
16:36 bnieuwenhuizen: jekstrand: there are lots of options that go most of the way, but not the entire way. your solution wouldn't work in all cases because of async compute. I also had the idea of only adding the image to the global list the first time it is added to an update_after_bind descriptor set, which would probably in practice work for pretty much the same set of applications
16:55 cmarcelo: pendingchaos: jekstrand: adding the OutputMemoryMask when handling a TCS ControlBarrier is just a way for us to pipe through the information that we need output synchronization in that case, which is required by the SPIR-V spec (the quote commented in that code). given that quote, it seems to me it is correct to see barrier_tcs_patch generated in the case use_scoped_memory_barrier=false.
17:01 cmarcelo: bnieuwenhuizen: interesting. I'm wondering whether in your case it would be enough to just keep the AV/VIS info on the loads/stores, or to also embed the other bits into them as well (and only later have a NIR pass lower them as needed)
17:08 cmarcelo: anholt: jekstrand: could you take another look at the barrier combining pass https://gitlab.freedesktop.org/mesa/mesa/merge_requests/3224?
17:08 gitbot: Mesa issue (Merge request) 3224 in mesa "intel/fs: Combine adjacent memory barriers" [Anv, Intel-Fs, Opened]
17:47 imirkin_: tomeu: just glancing at the current panfrost caps -- you definitely shouldn't need PIPE_CAP_IMAGE_LOAD_FORMATTED for ES3.1 -- that enables a non-core ext which allows imageLoad() to be used without an explicitly-specified format in the shader.
17:48 imirkin_: tomeu: nor should you need cube arrays -- those are in ES 3.2
17:48 anholt_: krh: you said yesterday that with cross compiles, buildtype didn't apply. it seems to apply fine for me.
17:50 krh: anholt_: it takes the optimization flags from the cross file
17:50 anholt_: krh: I just tried with my aarch64 cross file and -Dbuildtype=release, and it switched the optimization flags from -O2 -g to -O3
17:51 anholt_: it may be that default buildtype is plain in cross mode, but specifying a buildtype does seem to work.
17:51 krh: anholt_: I've not gotten meson to use -O0 by passing --buildtype=debug
17:51 anholt_: (I normally pass -Dbuildtype=debugoptimized, so the O2 -g I was comparing to makes sense. those flags don't appear in my cross file)
17:52 krh: hm
17:53 MrCooper: same meson version?
17:53 anholt_: 0.53.1 here
17:53 krh: 0.49.1
17:55 krh: always seemed like odd behaviour that the cross file would override the buildtype; it could probably have been fixed/changed between 0.49 and 0.53
17:58 krh: and just confirmed that doing a meson configure with --buildtype=release and a cross file that specifies -O0 generates ninja files that build with -O0
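For reference, the kind of cross file being discussed looks roughly like this. This is a hypothetical aarch64 example, not anyone's actual file; the point of the thread is that with older meson (krh's 0.49.1) the `c_args` here won over `--buildtype`, while with 0.53.1 anholt_ sees `-Dbuildtype=release` take effect:

```ini
# aarch64-linux-gnu.txt (hypothetical example)
[binaries]
c = 'aarch64-linux-gnu-gcc'
cpp = 'aarch64-linux-gnu-g++'
ar = 'aarch64-linux-gnu-ar'
strip = 'aarch64-linux-gnu-strip'
pkgconfig = 'aarch64-linux-gnu-pkg-config'

[properties]
c_args = ['-O0']

[host_machine]
system = 'linux'
cpu_family = 'aarch64'
cpu = 'aarch64'
endian = 'little'
```

Invoked as `meson setup build --cross-file aarch64-linux-gnu.txt -Dbuildtype=release`; which `-O` level ends up in the ninja files is the version-dependent behaviour at issue.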
18:00 krh: idr: any more feedback on !3929 ?
18:23 idr: krh: Not yet, but I will get back to it today.
18:49 krh: idr: cool, thanks
18:52 krh: jekstrand: what motivated 9807f502eb7a023be619a14119388b2a43271b0e ?
18:52 krh: jekstrand: the simplified version breaks down when lowered to fp16
18:55 jekstrand: krh: Likely I came across it and went "this could be simpler"
18:55 jekstrand: krh: You'd have to ask 4-year-ago me about it though.
18:55 jekstrand: krh: Be warned, that guy had no clue what he was doing. :-P
18:56 krh: jekstrand: hehe, ok
18:57 krh: for a second there I read "4-year-old me"
18:58 krh: jekstrand: so there's no tanh-using benchmark that triggered this? I'd like to go back to the old version
18:58 jekstrand: 4-year-old me really didn't know what he was doing.
18:59 jekstrand: krh: Goodness, no.
18:59 jekstrand: krh: tanh does not occur in shader-db.
19:00 jekstrand: krh: Knock yourself out
19:00 krh: when does it ever occur
19:00 jekstrand: krh: Just be warned that if we fix one lowering, we also need to fix SPIR-V
19:00 krh: jekstrand: ok
19:00 krh: ah
19:00 krh: sure
19:01 * jekstrand has a PhD in math and still doesn't know what a hyperbolic trig function is good for.
19:02 imirkin_: hyperbole's, of course
19:02 Sachiel: hyperboles and a half
20:00 anholt_: https://gitlab.freedesktop.org/mesa/mesa/-/jobs/1760204 : "ERROR: No files to upload"? sure looks like the artifacts dir is getting populated above.
21:10 mareko: can somebody from Intel please review: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/3591/diffs?commit_id=5e88351bb831848cf5105ad28aafbeaed6e9d3cf
21:16 mareko: tpalli: do you know who can review this? ^^
21:31 idr: mareko: I sent some review.
21:32 idr: It took a while because I was waiting to get results back from the CI for the series up to that point.
21:34 daniels: anholt_: seen the same earlier. looks like 12.8 regression.
22:23 craftyguy: mareko: your intel CI results look OK. the ~1k vk cts failures are not related to your branch tested, and the piglit test failures on 'gen9' were not related either