01:21 tarceri_: mareko: it is first populated by the interstage linking right? In case there are multiple shaders used in the same stage. This is something spirv doesn't have to worry about.
05:56 hakzsam: anholt: yes, still trying to stabilize uprev cts but !25284 is only for rdna3 CI. What's the thing you are working on?
07:17 tjaalton: something weird happened with staging/23.2? it doesn't even have rc3 now
07:27 tzimmermann: javierm, arnd, may i ask you for a review of https://lore.kernel.org/dri-devel/20230912135050.17155-1-tzimmermann@suse.de/ ? i'd need at least an r-b for the first two patches, so that i can add them to fbdev/drm. the powerpc patches can be merged later.
07:33 javierm: tzimmermann: I see that Geert already acked patch #2
07:33 javierm: tzimmermann: let me take a look at #1
07:33 tzimmermann: javierm, geert only acked one architecture :/
07:34 tzimmermann: the powerpc devs appear to be ok with the ppc changes. it's the code that they suggested to me
09:26 arnd: tzimmermann: I took another look at your patches, and they all look sensible to me, including the revised interface names. The one part I haven't figured out so far is the purpose of this cleanup, and it does make it harder to change powerpc over to use the open-flags in phys_mem_access_prot (like arm) if we ever want to do that
09:27 arnd: I think the powerpc method works for them because all framebuffers are behind PCI bridges and they are unlikely to ever have a SoC with built-in GPU in the future
10:23 tzimmermann: arnd, thanks a lot. to answer your question on the purpose: the call to fb_pgprotect() currently happens in the fbdev core code at https://elixir.bootlin.com/linux/v6.6-rc2/source/drivers/video/fbdev/core/fb_chrdev.c#L368 . it's the default mmap code for fbdev framebuffers. as part of a modernization of this code, i want to move it into a helper callback. but our current callback for per-driver mmap does not support
10:23 tzimmermann: the file argument. see https://elixir.bootlin.com/linux/v6.6-rc2/source/include/linux/fb.h#L294 . the solution is to either clean up fb_pgprotect(), or to change the fb_mmap callback and all affected drivers. fixing the arch seems to be the correct thing to do
10:35 javierm: tzimmermann: dropping the struct file argument seems the correct thing to do, what I don't understand is the value of passing vm_{start,end} and offset instead of the vma
10:43 tzimmermann: javierm, "principle of least surprise": the new api is close in spirit to the existing pgprot_() functions. none of them uses vm_area_struct directly. interacting with the vma is left to the caller. the new helper simply blends in
10:47 javierm: tzimmermann: fair, I guess the https://en.wikipedia.org/wiki/Law_of_Demeter also applies
10:48 javierm: tzimmermann: if you add some words on the why, feel free to add my r-b to patch #2 as well
10:48 tzimmermann: javierm, thanks!
14:01 jani: hey all, I wanted to add issue templates to https://gitlab.freedesktop.org/drm/intel. however, in gitlab they need to be in the default branch under .gitlab/issue_templates. this is annoying for kernel repos (even if we don't host the drm-intel.git in that project yet). the alternative is to add a dedicated repo for issue templates at the group level. any opposition to adding an issue template repo under the drm group?
14:02 jani: it could either be a dedicated repo for templates, or we could reuse the maintainer-tools repo for this
14:03 koike: why not a general drm issue repo with tags?
14:05 jani: I think drivers want to manage their issues as they see fit
14:06 jani: within the fdo gitlab instance you can still move issues between repos
14:07 jani: plus I presume most non-intel folks don't want to see our firehose of CI issues
14:09 jani: (afk->)
14:35 robclark: jani: I'd be interested in templates for drm/msm as well
15:05 arnd: tzimmermann: ok, got it. thinking about it some more, I suppose we could still make powerpc work more like arm, in which case the pci sysfs path would use the file flags when passed but fall back to the rewritten code that does not need the file anyway
15:35 anholt: hakzsam: I'd like to get vulkan video tested, but it looks like the cts we're on is missing some significant fixes.
15:39 alyssa: gfxstrand:
15:39 alyssa: gfxstrand: https://oftc.irclog.whitequark.org/asahi-gpu/2023-09-20#32502164;
15:40 alyssa: thoughts on preferred solution to make vk_meta suitable for layer-as-varying drivers?
15:40 alyssa: i509vcb: ^^
16:17 jani: robclark: I wonder if we should just give it a go with setting maintainer-tools as the template repository according to https://docs.gitlab.com/ee/user/project/description_templates.html#set-group-level-description-templates
16:19 jani: robclark: unfortunately, the default template would still have to be common for the group (can't set it per project in the free version afaict) but it could be a template telling folks to choose a suitable template
16:21 robclark: jani: that works for me.. I think we could come up with templates that work reasonably across drivers. Let's just try it and we can improve the templates as we go
16:27 jani: robclark: ack
16:28 jani: airlied: sima: if you agree, could you set maintainer-tools as the template repository for drm group please? instructions at https://docs.gitlab.com/ee/user/project/description_templates.html#set-group-level-description-templates
16:45 mareko: tarceri_: I don't actually know how it's populated, only that UniformBlocks is set before UBO linking for GLSL, but it's NULL for SPIR-V
16:46 mareko: tarceri_: it would also be useful to have dead UBO elimination in the future
16:47 alyssa: idr: you assigned marge as I was typing! =D
16:47 _DOOM_: Are there any deeper explanations for the property types in drm_mode.h, like what is DRM_MODE_PROP_RANGE?
16:47 idr: alyssa: I tried to unassign it. Does that work?
16:48 alyssa: probably?
16:50 idr: I had to cancel the pipeline too.
16:52 alyssa: ah
16:55 alyssa: idr: fwiw I don't particularly object to the MR, just suspect the shaderdb results may look very different once opt_sink'ing alu is enabled and the easy cases with compares happen automatically
16:59 idr: alyssa: After looking at that pass, these are generally doing opposite things. nir_opt_sink is a simplified GCM/GVN kind of thing. The rematerialization pass tries to create more copies of an instruction whose result is used by a comparison.
17:00 idr: The goal there is to reduce register pressure... only x needs to be live instead of perhaps x and bool(x == 0).
17:00 idr: But... it's for sure worth looking at the interaction.
17:05 alyssa: idr: sure :)
17:25 zzag: Do you know why `glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0)` can take 500-800ms to execute?
17:25 zzag: kwin uses a persistently mapped vertex buffer
17:25 zzag: after finishing every frame, it inserts a fence to avoid overwriting vertex buffer that might be used by the gpu
17:26 zzag: but we've noticed that sometimes inserting can take a really long time
17:26 zzag: thus resulting in frame drops
17:26 zzag: is it a bug or do we need to look out for something?
17:27 cmarcelo: eric_engestrom: dcbaker: are meson dependencies transitive? e.g. if I have a library b that depends on idep_a and then declare a dependency idep_b for it, would a library c that depends on both a and b need to depend on both idep_a and idep_b, or is idep_b enough?
17:34 eric_engestrom: cmarcelo: if idep_b had idep_a in its dependencies, then anything that has idep_b in its dependencies also has idep_a
17:34 eric_engestrom: *also gets idep_a (to be clearer)
17:36 idr: zzag: Is this on a specific driver, or have you observed this on multiple platforms?
17:36 eric_engestrom: that's why I had started rewriting a bunch of headers and static libs into ideps, so that we don't have to care about what that header or static lib uses internally; the dependencies get propagated automatically
17:37 eric_engestrom: iirc I posted a few conversions like this, but I never finished
17:37 zzag: idr: so far we've observed it only on machines with intel gpus
17:37 eric_engestrom: cmarcelo: if you're thinking about doing that for your builtin header target, I approve and will review your MR :)
17:39 zzag: https://www.irccloud.com/pastebin/HI8bMhWx/
17:39 zzag: idr: I can try to get a backtrace with debug symbols
17:51 dcbaker: cmarcelo: link_(with|whole) are not guaranteed to be transitive in a situation like `liba = library(... dep : foo); lib = library(..., link_with : liba)`, (though as an implementation detail it may), but as eric_engestrom said, `declare_dependency(dependencies : ...)` does have that guarantee, so bundling generated headers and libraries for linking is a good way to go for that
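A minimal sketch of the guaranteed form eric_engestrom and dcbaker describe; the library names (a, b, c) and source files are made up for illustration. c only lists idep_b, and idep_a is carried along because idep_b declares it as a dependency:

    liba = static_library('a', 'a.c')
    idep_a = declare_dependency(link_with : liba,
                                include_directories : include_directories('.'))

    # idep_a is listed in idep_b's dependencies, so anything that pulls in
    # idep_b also gets idep_a's link args and include paths
    libb = static_library('b', 'b.c', dependencies : idep_a)
    idep_b = declare_dependency(link_with : libb, dependencies : idep_a)

    # c only lists idep_b; idep_a comes along transitively
    libc = static_library('c', 'c.c', dependencies : idep_b)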
17:52 koike: mairacanal: o/ I just submitted vkms support to the ci (https://lists.freedesktop.org/archives/dri-devel/2023-September/423719.html), please try it out when you have some time; it would be great to get your feedback and see how to better integrate it into your workflow
17:52 cmarcelo: eric_engestrom: yes the plan is to add idep_compiler
17:52 koike: mairacanal: there are some tests I couldn't run (please see -skips.txt file in the patch) and some consistent failures, could you please confirm that those were expected?
17:53 koike: mairacanal: I also see the configfs patch, will that change vkms? Would that require more vkms jobs in different configurations?
17:55 cmarcelo: dcbaker: so do I need to explicitly add the dependencies again inside the idep? here's the sequence, the XXX marks the spot where I'm unsure I need to add dependencies again: libcompiler = static( .... generated_sources_h ... ); idep_compiler = ( includes... link_with: libcompiler); nir = static( .... deps: idep_compiler); idep_nir = ( link_with: nir... XXX); vtn = static( deps: idep_nir).
18:06 dcbaker: cmarcelo: yes, or you could do something like `_private_dep = declare_dependency(<everything except the lib itself>); lib = library(..., dependencies : _private_dep); idep = declare_dependency(link_with : lib, dependencies : _private_dep)`
18:09 cmarcelo: dcbaker: ok.
18:09 cmarcelo: tks
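A sketch of dcbaker's suggestion applied to the libcompiler -> nir -> vtn chain cmarcelo outlined above; the targets, file names, and variable names here are illustrative, not the actual mesa build files:

    # generated header, standing in for cmarcelo's generated_sources_h
    gen_builtins_h = custom_target('builtins_h',
                                   input : 'gen_builtins.py',
                                   output : 'builtins.h',
                                   command : ['python3', '@INPUT@', '@OUTPUT@'])

    # everything libcompiler needs except the library itself
    _compiler_priv = declare_dependency(sources : gen_builtins_h,
                                        include_directories : include_directories('.'))

    libcompiler = static_library('compiler', 'compiler.c',
                                 dependencies : _compiler_priv)
    idep_compiler = declare_dependency(link_with : libcompiler,
                                       dependencies : _compiler_priv)

    # nir only lists idep_compiler; the private bits come along with it
    libnir = static_library('nir', 'nir.c', dependencies : idep_compiler)
    idep_nir = declare_dependency(link_with : libnir,
                                  dependencies : idep_compiler)

    # vtn only needs idep_nir
    libvtn = static_library('vtn', 'vtn.c', dependencies : idep_nir)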
18:10 mareko: tarceri_: FYI, there are some fields in gl_shader_program that aren't serialized
18:18 mareko: I don't know if that's expected
18:56 karolherbst: I wonder if I can reduce a uniform loop (each instruction is converged and all outputs are uniform) with n iterations to one with n / subgroup_size iterations... and replace alu operations (e.g. iadd) with subgroup operations...
19:05 pendingchaos: sounds like basically an extreme form of loop vectorization
19:05 pendingchaos: reminds me of https://github.com/sebbbi/perftest#uniform-load-investigation
19:05 pendingchaos: which is a form of it that's limited to just a memory load from the loop
19:05 karolherbst: pendingchaos: it's for shaders/kernels like this: https://gist.github.com/karolherbst/d9cb39f00329014550eacca62536544a
19:09 pendingchaos: does CL allow that? you would be changing the order of additions
19:09 karolherbst: mhhh
19:09 karolherbst: good question
19:10 pendingchaos: the shader might still benefit from vectorizing the load though
19:10 pendingchaos: or maybe even just putting the loop in a "if (get_local_id(0) == 0)"
19:10 daniels: koike: mvlad is writing new igt for vkms+configfs :)
19:12 karolherbst: pendingchaos: OpenCL C has to match C99 expectations btw..
19:12 pendingchaos: VALU might be faster on wave64, and I don't think AMD automatically optimizes LDS loads with a uniform address
19:12 pendingchaos: (that message was about doing get_local_id(0) == 0)
19:14 koike: daniels: nice, thanks for letting me know
19:15 karolherbst: yeah.. I might want to see what Intel is doing with this kernel tbh...
19:15 karolherbst: or maybe I ask nvidia :D
19:16 karolherbst: nvidia just unrolls it
19:17 karolherbst: but they do vectorize the load at least to 128 bit
19:17 karolherbst: yeah....
19:17 karolherbst: I think that would be the safer optimization
19:18 karolherbst: just ditch some iterations and vectorize the load
19:18 mvlad: koike: https://lists.freedesktop.org/archives/igt-dev/2023-September/060717.html
19:18 karolherbst: I think alyssa had some ideas on how to partially unroll loops? or was that somebody else?
19:19 karolherbst: mhh, but nvidia also uses `HFMA2.MMA` :D
19:20 koike: mvlad: awesome
19:20 pendingchaos: there was a loop unrolling issue here: https://gitlab.freedesktop.org/mesa/mesa/-/issues/7161
19:22 karolherbst: ahh
20:16 cmarcelo: is rusticl build broken in main or is it just my env? trying with "meson configure -Dgallium-rusticl=true -Dllvm=enabled -Drust_std=2021"
20:17 cmarcelo: I'm getting: error[E0433]: failed to resolve: maybe a missing crate `mesa_rust_util`?
20:24 cmarcelo: eric_engestrom: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/25314
20:54 karolherbst: cmarcelo: that happens if you update rustc via rustup, or for other random reasons. You can just remove all those *.rlib files and it should fix it
20:57 cmarcelo: karolherbst: trying removing them. shouldn't "ninja clean" be able to remove them though?
20:57 karolherbst: it should...
20:57 karolherbst: but then you need to rebuild everything
20:58 karolherbst: ohh.. you might have to delete librusticl_proc_macros.so as well
20:58 cmarcelo: (I did ninja clean and there were a few *.rlib.p directories inside build)
20:58 karolherbst: yeah.. that's fine
20:59 karolherbst: it's just about the rlib files, because rust wants everything to be built with the same rustc compiler
20:59 karolherbst: and meson doesn't pick up that rustc changed if using rustup
20:59 cmarcelo: trying with a brand new build dir
21:00 karolherbst: normally you'd have some more lines before that failed to resolve which should tell you what's up
21:17 alyssa: karolherbst: that was Marek I think
21:21 cmarcelo: karolherbst: still failing :(, here's a larger log: https://gitlab.freedesktop.org/-/snippets/7687
21:22 cmarcelo: karolherbst: (meson version 1.2.1)
21:30 karolherbst: cmarcelo: ohh.. you might have to set `rust_std` again...
21:30 karolherbst: `extern crate` is something you don't have to do with 2021 anymore
21:30 karolherbst: maybe it was 2018..
21:31 karolherbst: but anyway, meson doesn't store the `rust_std` value if rust wasn't enabled yet
21:31 karolherbst: it's kinda weird...
21:31 karolherbst: but a clean build should do it..
21:31 cmarcelo: karolherbst: I'm already setting it: meson configure -Dgallium-rusticl=true -Dllvm=enabled -Drust_std=2021
21:32 karolherbst: let me try locally with main and 1.2.1
21:32 karolherbst: but I would have noticed if that wouldn't work anymore
21:32 cmarcelo: for some reason rust_std option doesn't appear in "meson configure" output
21:32 cmarcelo: ok
21:32 karolherbst: heh...
21:32 karolherbst: yeah, that's odd
21:33 karolherbst: yeah.. I have a `rust_std 2021 [none, 2015, 2018, 2021] Rust edition to use` line
21:34 cmarcelo: ok. here's something odd
21:34 karolherbst: yeah...
21:34 cmarcelo: if I do meson setup build -D.... I get to see it... if I do meson setup build, and later meson configure -D.... I don't see the rust_std
21:34 karolherbst: strange...
21:35 cmarcelo: trying a build with the first option
21:35 karolherbst: maybe reconfigure after enabling rusticl and set it again?
21:35 karolherbst: btw, it's not a rust specific problem, the same happens with cpp_std if c++ gets added to an existing build...
21:35 cmarcelo: I'm pretty sure after a configure the build was self reconfiguring through ninja
21:35 cmarcelo: (but I'll test)
21:36 cmarcelo: karolherbst: question: could we just hardcode rust_std in our meson for now?
21:37 karolherbst: we do...
21:37 karolherbst: but meson doesn't store it if rust isn't enabled
21:37 karolherbst: the alternative is to always require rust for builds...
21:38 karolherbst: we really should fix this in meson tho
21:38 karolherbst: heh...
21:38 karolherbst: `rust_std none [none, 2015, 2018, 2021] Rust edition to use`
21:38 karolherbst: I get this when I enabled rusticl later :'(
21:38 karolherbst: and yeah..
21:39 karolherbst: I need to run meson configure yet another time to get 2021
21:39 cmarcelo: still unsure what's up with rust_std not appearing for me if I do setup and then configure
21:39 cmarcelo: it seems it does show to you but as 'none'
21:39 karolherbst: do you enable rusticl when running setup?
21:40 cmarcelo: I was enabling it after: meson setup build; cd build; meson configure -D...
21:40 karolherbst: if you enable it later via configure, you have to run configure a second time
21:40 karolherbst: and I think reconfigure has to run between them
21:41 cmarcelo: karolherbst: got it. (still checking with the good meson setup here.)
21:41 cmarcelo: it works
21:44 karolherbst: last time I spoke with dcbaker about it, the issue appeared to be non trivial and a pita to fix
22:15 cmarcelo: dcbaker: (or eric_engestrom): do I need to explicitly include the generated headers in the declare_dependency(sources : ...) for my new compiler dependency? I'd expect no, but the alpine run still seems grumpy.
22:16 cmarcelo: the compiler dependency already depends on the compiler library (which itself needs the generated headers to be built), so I was assuming it was not necessary.
22:24 tarceri_: mareko: i don't think it is set before linking. It's still set in the GLSL IR linker, not in the NIR linker like spirv.
22:30 dcbaker: cmarcelo: If the consumers of that dependency need to be ordered after the headers are generated then yes
22:31 cmarcelo: but aren't they already ordered after the library (compiler)?
22:31 cmarcelo: dcbaker: ^
22:32 cmarcelo: (trying out the CI with that added)
22:38 dcbaker: cmarcelo: if you have two libraries, A and B, and B has a dependency on generated header H, then the object files for A can be compiled before the header is generated, but A cannot be linked until B is linked
22:38 dcbaker: Which means if a generated header is private to B, then by not putting it in A you can parallelize the build better
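A small sketch of that tradeoff; gen_h, libb, liba, and the source files are made-up names:

    gen_h = custom_target('gen_h',
                          input : 'gen.h.in',
                          output : 'gen.h',
                          command : ['cp', '@INPUT@', '@OUTPUT@'])

    # b's own objects always wait for gen.h
    libb = static_library('b', 'b.c', gen_h)

    # if a's sources also include gen.h, list it in the dependency so a's
    # objects are ordered after the header is generated
    idep_b = declare_dependency(link_with : libb, sources : gen_h)

    # if gen.h is private to b, drop 'sources' so a's objects can compile in
    # parallel with the header generation; only the final link waits for libb
    # idep_b = declare_dependency(link_with : libb)

    liba = static_library('a', 'a.c', dependencies : idep_b)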
22:41 mareko: tarceri_: I see