06:49mripard: airlied: sima: could you pull 6.5-rc1 in drm-fixes? we'll need it for drm-misc
06:49sima: oops will do asap
07:30mripard: sima: thanks :)
07:49sima: mripard, I forgot to tell you it's done ...
08:13mripard: I noticed, don't worry
08:58Company: while playing with my stuff, I've been looking at radeontop for too long now - and it seems when I use Vulkan, it never downclocks the memory clock
08:59Company: I originally didn't think too much about it, but it turns out when I use zink, that's also the case
08:59Company: so now I wonder: what is the memory clock and why wouldn't it be downclocked?
09:00Company: side note: zink achieves only half the framerate of GL, and that's visible in radeontop with the shader clock running way lower - but the memory clock is at max
09:07MrCooper: Company: it's the clock for the GPU-local VRAM; one common reason for it not clocking down is having multiple monitors connected with different modes
09:11Company: MrCooper: it does clock down with GL though
09:11Company: it's chilling at 0.1/1.12 now
09:12MrCooper: can't really explain that, FWIW the clocks are generally controlled by an SMU in the GPU
09:12Company: 0.54/1.12 when benchmarking the GL renderer
09:12MrCooper: based on load and possibly other factors
09:12Company: 1.12/1.12 when benchmarking the GL renderer with zink (and having half the framerate)
09:13Company: I was originally wondering if I do something stupid - like putting images in the wrong vram area or linear tiling vs optimal tiling or something like that
09:14Company: but got really curious once zink had the same issue
09:15MrCooper: clocking up isn't necessarily an issue, maximum clocks are required for maximum performance
09:16Company: yeah, but we're pretty obviously not hitting that (this is all CPU limited)
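
A quick way to cross-check what radeontop reports is to read the amdgpu DPM tables directly from sysfs; the currently selected level is the line marked with an asterisk. A minimal sketch, assuming an amdgpu card exposed as card0 (the card index and the availability of pp_dpm_sclk/pp_dpm_mclk vary per system):

```rust
use std::fs;

// Read an amdgpu DPM table and return the currently selected level,
// i.e. the line ending in '*'. Returns None if the file isn't there.
fn current_level(path: &str) -> Option<String> {
    let contents = fs::read_to_string(path).ok()?;
    contents
        .lines()
        .find(|line| line.trim_end().ends_with('*'))
        .map(|line| line.trim().to_string())
}

fn main() {
    // card0 is an assumption; multi-GPU systems will also have card1, card2, ...
    for (name, path) in [
        ("sclk", "/sys/class/drm/card0/device/pp_dpm_sclk"),
        ("mclk", "/sys/class/drm/card0/device/pp_dpm_mclk"),
    ] {
        match current_level(path) {
            Some(level) => println!("{name}: {level}"),
            None => println!("{name}: not available"),
        }
    }
}
```
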
09:48dj-death: someone would have an idea why this pipeline is failing : https://gitlab.freedesktop.org/mesa/mesa/-/pipelines/934092 ?
09:48dj-death: the logs appear to be cut off without an actual error in there
09:51MrCooper: looks like they timed out after 75 minutes, retry?
10:10dj-death: I guess I have to push a new revision
10:10dj-death: the second retry seems to have failed as well
10:14dj-death: yeah I probably have to wait for someone else to push some commits on main
11:17Company: "GPU hung on one of our command buffers" means I put too many commands into a single buffer and should have submit()ed more often?
11:21dj-death: Company: no, it's a hang
11:21pq: sounds to me like it might be "bad" commands rather than too many commands, unless you intend to do several seconds worth of work in one err.. "primitive?"
11:21dj-death: Company: usually it's a driver bug
11:21dj-death: Company: but a while (true); loop in a shader can also do that
11:22Company: well, I was generating random crazy tests
11:22Company: so it might just have generated one drawing 20,000 widgets or so
11:22dj-death: that should be fine
11:22Company: it was something that took a few seconds with a smaller number
11:23dj-death: ah okay
11:23dj-death: then yeah, maybe it's taking too long
11:23Company: I'm trying to anticipate what I need to account for
11:24dj-death: it's odd that 20k widgets would be a problem
11:24Company: because unlike games, which have full control over stuff, GTK gets to deal with whatever application developers throw at it
11:24dj-death: there are apps like GravityMark rendering way more than that per frame
11:25Company: if in doubt, blame my renderer for being slow
11:30Company: hahaha, yeah
11:31Company: rendering the contents of a scrolled window without clipping may do things
11:32Company: so yeah, I'll need to submit smaller buffers to stop things from breaking accidentally
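
The "submit smaller buffers" idea above could look roughly like the sketch below: cap the number of draws recorded per submission so one pathological frame can't occupy the GPU long enough to look like a hang. All types, names, and the threshold here are hypothetical placeholders, not GTK or Vulkan API:

```rust
// Everything here is a placeholder to illustrate capping the amount of work
// per submission; none of it is real GTK or Vulkan API.
const MAX_DRAWS_PER_SUBMIT: usize = 4096; // purely illustrative threshold

struct Draw; // stands in for one recorded widget/primitive

struct CommandBuffer(Vec<Draw>);

impl CommandBuffer {
    fn new() -> Self {
        CommandBuffer(Vec::new())
    }
    fn record(&mut self, draw: Draw) {
        self.0.push(draw);
    }
    fn submit(self) {
        // the recorded batch would be handed to the driver here
        println!("submitting {} draws", self.0.len());
    }
}

fn render(draws: Vec<Draw>) {
    let mut cmdbuf = CommandBuffer::new();
    for draw in draws {
        cmdbuf.record(draw);
        // Flush early instead of letting a single buffer grow without bound,
        // so a pathological scene is split across several submissions.
        if cmdbuf.0.len() >= MAX_DRAWS_PER_SUBMIT {
            std::mem::replace(&mut cmdbuf, CommandBuffer::new()).submit();
        }
    }
    cmdbuf.submit();
}

fn main() {
    render((0..20_000).map(|_| Draw).collect());
}
```
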
12:15mairacanal: sima, are there any future plans or ideas for a generic global GPU stats interface such as we have with fdinfo for processes?
12:16sima: mairacanal, iterate over all fdinfo and add up in userspace?
12:22mairacanal: that could be one possible implementation. do you think it would make sense to create an infrastructure for it in DRM?
12:37sima: mairacanal, don't we have some gputop or so that's trying to be cross-vendor?
12:38sima: mairacanal, but yeah unless it's a perf problem I think the dumb impl should be good enough
12:38sima: top also works like this (but I think it tries to be better with file watches in procfs maybe)
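
A rough sketch of the "iterate over all fdinfo and add up in userspace" approach: walk /proc/<pid>/fdinfo/<fd>, keep only DRM clients, and sum the per-engine busy time. Key names follow the drm-usage-stats documentation ("drm-engine-<name>: <value> ns"); exact keys vary per driver, and fds sharing the same drm-client-id should really be de-duplicated, which this sketch skips:

```rust
use std::collections::HashMap;
use std::fs;

fn main() {
    // engine name -> accumulated busy time in nanoseconds, over all clients
    let mut totals: HashMap<String, u64> = HashMap::new();

    for proc_entry in fs::read_dir("/proc").into_iter().flatten().flatten() {
        let fdinfo_dir = proc_entry.path().join("fdinfo");
        for fd_entry in fs::read_dir(&fdinfo_dir).into_iter().flatten().flatten() {
            let Ok(contents) = fs::read_to_string(fd_entry.path()) else {
                continue;
            };
            // only DRM file descriptors carry the drm-* keys
            if !contents.contains("drm-client-id") {
                continue;
            }
            for line in contents.lines() {
                // "drm-engine-<name>: <value> ns"
                if let Some(rest) = line.strip_prefix("drm-engine-") {
                    if let Some((engine, value)) = rest.split_once(':') {
                        let ns: u64 = value
                            .trim()
                            .trim_end_matches("ns")
                            .trim()
                            .parse()
                            .unwrap_or(0);
                        *totals.entry(engine.to_string()).or_insert(0) += ns;
                    }
                }
            }
        }
    }

    for (engine, ns) in &totals {
        println!("{engine}: {ns} ns busy (summed over all DRM clients)");
    }
}
```
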
13:34zmike: DavidHeidelberg[m]: https://gitlab.freedesktop.org/mesa/mesa/-/issues/7144#note_1998904 can probably be pruned from perf traces too
13:38DavidHeidelberg[m]: zmike: which trace? referring to JS-loaded comments doesn't work well here
13:38zmike: iris-glk-traces-performance paraview/pv-waveletvolume-v2.trace
13:39zmike: it's gotten notifications like 5 times in the past 12h
13:39zmike: going from 10-11 fps
13:48mairacanal: sima, about gputop: currently it only displays the per-process GPU percentage, not the global stats
13:55mairacanal: intel-gpu-tools has a global stats monitor, but AFAIK it uses the intel perf infrastructure
13:55mairacanal: i thought it could be interesting for other drivers to have a similar monitor for global stats
13:56mairacanal: but i don't know the best way to expose this global stats information (sysfs? debugfs?)
13:57penguin42: mairacanal: There's also radeontop
14:24MrCooper: jani: defconfigs enabling CONFIG_WERROR is pretty insane, can break with any previously-untested compiler
14:24zmike: is there a way to get mesa to build/install a libGLX without merging it into libGL?
14:27pq: Doesn't enabling glvnd do roughly that?
14:28zmike: huh so it does
14:29zmike: apparently qtwebengine doesn't work without it 🤔
14:30pq: how do they manage to do that? Link both libGL and libGLX?
14:30zmike: confusing
14:32pq: I think one is supposed to link either only libGL, or libOpenGL+libGLX. If you don't have glvnd providing those libs, then... or if you mix glvnd frontend libs with Mesa standalone libs... boom?
14:37MrCooper: pq: looks like GLVND's libGL.so.1 links libGLX.so.0
14:37pq: MrCooper, right, so if you use glvnd, it works even if you link both yourself.
14:38pq: but if you use stand-alone non-glvnd libGL.so or you even mix glvnd and non-glvnd libs, boom?
14:38MrCooper: right, though it's not clear what the issue was in the first place
14:39pq: Going home is better \o.
14:39MrCooper: in particular, what exactly links libGLX, and whether that also links libGL or libOpenGL
14:40pq: maybe also a confused installation with a local build of non-glvnd Mesa?
14:50penguin42: karolherbst: On the profiling, you say we can't return 0 if the `if let Some(...)` fails and suggest a read_blocking instead of the read, am I ok to define a read_blocking as a wrapper around read() that just unwraps it?
14:53karolherbst: penguin42: yeah, I think that would be fine
14:53penguin42: karolherbst: Great, just fixing those up now
14:53karolherbst: might make sense to point to the gallium docs though, but yeah... I'd just like to formalize whatever the doc is saying there
15:01penguin42: oh hang on
15:01penguin42: karolherbst: There's another problem there: we're expecting a U64, and had better get a U64 back, so we still need to unpack
15:01penguin42: karolherbst: So, I'm not sure how to do that and not get your 0
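
For reference, the read_blocking wrapper and the "if let as a destructure" discussed above might look like the following; QueryResult, Query and read() here are stand-ins for illustration, not the actual rusticl/gallium types:

```rust
// Stand-in types mirroring the discussion, not the real rusticl API.
#[allow(dead_code)]
enum QueryResult {
    U64(u64),
    U32(u32), // a second variant keeps the `if let` below refutable
}

struct Query;

impl Query {
    /// Non-blocking read: None if the result isn't ready yet.
    fn read(&self, _wait: bool) -> Option<QueryResult> {
        Some(QueryResult::U64(42)) // placeholder result
    }

    /// Blocking read: with wait == true the result is always available,
    /// so unwrapping is fine and callers never need to invent a fake 0.
    fn read_blocking(&self) -> QueryResult {
        self.read(true).unwrap()
    }
}

fn timestamp_ns(q: &Query) -> u64 {
    // read_blocking always succeeds and, for this query type, always hands
    // back the U64 variant, so the `if let` is really just a destructure.
    if let QueryResult::U64(value) = q.read_blocking() {
        value
    } else {
        unreachable!("timestamp queries always produce a u64 result")
    }
}

fn main() {
    println!("{}", timestamp_ns(&Query));
}
```
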
15:02eric_engestrom: mesa group owners/maintainers, could you add mairacanal (`@mairacanal` on gitlab) to the CI-OK group?
15:06karolherbst: penguin42: I wonder if that can be encoded at query creation time via a generic argument or something. We could potentially wrap around pipe_query_type with an enum, and add a `create_query` method on that enum. And then we can specialize the return value per query type
15:09penguin42: ewww hairy :-)
15:10penguin42: karolherbst: I think it can for some of the queries, but the general case allows groups of queries and driver specific stuff where you have to query the driver to see what size it will return
15:12karolherbst: mhhhhh
15:12penguin42: damn, and if I take just the read_blocked out I hit an 'irrefutable if let pattern' warning because I've only got U64s for now
15:13karolherbst: not sure we'll ever have to use that, so that might be something somebody has to figure out if they want to use the wrapper for non-compute stuff
15:16penguin42: let me see if I can wrangle that type magic
15:28penguin42: karolherbst: Actually I think it's always safe as is, if the read_blocked succeeds (which it always will) then it will be the right type, so we're only using the if let as a destructure
15:29karolherbst: right
15:29karolherbst: you don't need an if then, right?
15:30karolherbst: uhh.. or I guess you would
15:30penguin42: can you destructure without the if?
15:30karolherbst: I think that only works on tuples, not enums
15:31penguin42: karolherbst: It swats you either way....
15:31penguin42: karolherbst: I added a _U32 so the 'if let' didn't complain; but the let without the if complains I didn't cover the other case
15:31karolherbst: right...
15:32karolherbst: I think the generic thing would help in that case, just requires it to be defined at creation time
15:32penguin42: karolherbst: So I could remove the _U32, but as soon as someone adds it for real then those lets will fail to compile, which would be mean
15:32karolherbst: but groups are weird then...
15:32karolherbst: how are groups queried?
15:32penguin42: we gently ignore them for now
15:32karolherbst: just a raw buffer with the values after each other?
15:33penguin42: karolherbst: I think it's a variant array in the pipe_query_result
15:33karolherbst: ohh right
15:33karolherbst: it's just an array of values
15:34penguin42: it's a bit of a disaster of an interface
15:34karolherbst: it is
15:36penguin42: karolherbst: Especially when you see some drivers just gently ignore the request made and return one magic 64 bit value :-)
15:36karolherbst: but anyway, I don't think having the enum on the read makes much sense as the return type isn't variable. It's already decided when you create the query
15:37penguin42: blech that's going to be a bit of a rework; ok, let me think about that when I come back later - I guess it's going to be some fun with associated types and things, I'm not sure what the compiler will let me do, I think it's pretty flexible these days
15:38penguin42: karolherbst: My understanding is that for some of the driver queries, you have to do a request to the driver to ask it what size you're going to get
15:38karolherbst: ehh, I can write down my idea here, it's not _that_ complicated I think.. give me a sec
15:40penguin42: karolherbst: I think you're saying parameterise a type based on query value, and have it have an associated type which is the result
15:40penguin42: or at least have it define its read() individually
15:40penguin42: (I've just pushed a version with everything else fixed)
15:41karolherbst: ehh wait... I don't think it works the way I wanted it to work....
15:46karolherbst: okay, got something
15:48karolherbst: penguin42: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=386712d1dab1eeb9d6245edd6003834c
15:48karolherbst: something like that or so
15:49karolherbst: not sure if QueryBuilder can be skipped here
15:49karolherbst: but... might be fine
15:50karolherbst: could maybe also generalize with blanket impls? mhhh
15:50karolherbst: maybe not
15:51karolherbst: could also add a macro to need less typing
15:51penguin42: this is turning into a bit of a rabbit hole! But yeah, I'll have a look at that
15:52penguin42: karolherbst: I think you might end up with code duplication in the compilation for all of the identical queries but I'm not sure
15:52penguin42: karolherbst: Anyway, I've done worse things with the type system, so I'll have a look after I go and get rained on
15:52karolherbst: I'm sure the compiler is smart enough :D or maybe not
15:53karolherbst: mhhh....
15:55penguin42: karolherbst: Anyway, I'm sure it's doable like that, not too sure if it'll like the enum as the query parameter, but we'll see, anyway, time for a walk!
15:56karolherbst: penguin42: PIPE_CAP_TIMER_RESOLUTION
15:56karolherbst: ...
15:56karolherbst: penguin42: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=adb52841715ad7f45399e63c8dc8065a
15:56karolherbst: didn't even know one can do that :D
15:57karolherbst: and have fun
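
The playground links aren't reproduced here, but the idea described above (fix the result type at query creation time so the read doesn't need a variant enum) could be sketched along these lines; all names are illustrative, not the actual rusticl API:

```rust
use std::marker::PhantomData;

// Illustrative only: the point is that the result type is decided by the
// query type chosen at creation, so read_blocking() returns it directly.
trait QueryType {
    type Output;
    fn decode(raw: u64) -> Self::Output;
}

struct Timestamp;
impl QueryType for Timestamp {
    type Output = u64;
    fn decode(raw: u64) -> u64 {
        raw
    }
}

struct PipelineStatistics;
impl QueryType for PipelineStatistics {
    // real group queries would need a richer decode step (or a Vec)
    type Output = u64;
    fn decode(raw: u64) -> u64 {
        raw
    }
}

struct Query<T: QueryType> {
    _marker: PhantomData<T>,
}

impl<T: QueryType> Query<T> {
    fn new() -> Self {
        // the call into the driver's create_query would live here
        Query { _marker: PhantomData }
    }

    fn read_blocking(&self) -> T::Output {
        // a blocking driver read would live here; 0 stands in for whatever
        // the driver wrote into pipe_query_result
        T::decode(0)
    }
}

fn main() {
    let q: Query<Timestamp> = Query::new();
    let _ns: u64 = q.read_blocking(); // the result type is known statically
    let _stats: u64 = Query::<PipelineStatistics>::new().read_blocking();
}
```
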
16:26jani: MrCooper: agreed, yet I think that pretty much still tips the scales for __diag_ignore_all. I was thinking if a user intentionally sets -Werror, they get to keep both pieces, but they just didn't know why the build started failing for them
16:27MrCooper: yep, I consider enabling CONFIG_WERROR for unsuspecting users cruel
16:29jani: :)
16:35MrCooper: mind-boggling that Linus et al don't understand this
16:47emersion: -Werror has a positive outcome in my experience
16:47emersion: people just send fixes
16:48zmike: -Werror is a cancer if every single person isn't using the exact same compiler
16:48emersion: there will always be errors when someone compiles your stuff on their platform, because their platform isn't exactly the same as yours
16:48emersion: and it's a good thing if they send fixes
16:51daniels: emersion: breaks bisection tho
16:52emersion: it is very easy to disable in the cases that you need to disable it
16:52emersion: but i find -Werror a good default
16:52daniels: amazing in CI, less amazing to ship to users
16:53emersion: users who build from source have a level of technical expertise
16:53zmike: the ability to follow random instructions someone posted on reddit?
16:54emersion: are you really interested in supporting users who don't want to learn and only want to blindly copy-paste commands found on reddit?
16:54Sachiel: yes
16:54Sachiel: we kinda have to
16:54Sachiel: it's the job
16:54emersion: if they do want to learn, they will just come and ask me on IRC
16:54emersion: and i will explain
16:55emersion: ideally guiding them to post a patch
16:56Sachiel: and as someone with the technical expertise to deal with it: when I need to test something in an older version of the cts that's building against an older version of amber, and suddenly a bunch of new gcc warnings cause the build to fail, I don't care how easily I can disable it, every single time is a huge pain in the ass
16:56zmike: ^this
16:59pixelcluster: tbh I think the average person needing help with -Werror issues today doesn't have IRC
17:02emersion: pixelcluster: just need to click on the webclient link in the readme
17:03zmike: I don't even read the readmes for projects I work on
17:04emersion: maybe I should consider renaming it to MEMES.md to spark interest
17:05zmike: memes don't end with .md
17:05Sachiel: MEMES.phd
17:06zmike: now THAT'S a meme
17:18Company: people who build from source are often people who just want to help
17:18alyssa:is looking forward to nir_def *
17:18Company: like me when I was asked about compiling git to test
17:18Company: I don't want to deal with -Werror because my compiler isn't blessed
17:20penguin42: karolherbst: You can do pretty much anything with the type system; it's very Turing complete
17:20alyssa: stop doing generics
17:20alyssa: types were not meant to be Turing complete
17:21penguin42: alyssa: I wrote this http://www.treblig.org/daveG/rust-mand.html a few years back; you might want to get a bucket
17:21alyssa: wanted to mess with types anyway for a laugh? we had a tool for that: void*
17:21alyssa: penguin42: nice, closing the tab now ;)
17:22penguin42: alyssa: Since then Rust lets you use constant values in the type system and the equivalent is all very boring
17:26penguin42: hell, that was 6 years ago
21:37DemiMarie: alyssa: Tell that to the dependent type people, where type-checking requires evaluating arbitrary runtime programs (albeit without I/O).
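
For the "constant values in the type system" point, a small const-generics example (purely illustrative):

```rust
// A fixed-point type where the scale factor is a value in the type itself;
// Fixed<100> and Fixed<1000> are distinct types, so mixing them up is a
// compile-time error rather than a silent scaling bug.
struct Fixed<const SCALE: u32>(i64);

impl<const SCALE: u32> Fixed<SCALE> {
    fn from_int(v: i64) -> Self {
        Fixed(v * SCALE as i64)
    }
    fn to_f64(&self) -> f64 {
        self.0 as f64 / f64::from(SCALE)
    }
}

fn main() {
    let cents: Fixed<100> = Fixed::from_int(3);
    let millis: Fixed<1000> = Fixed::from_int(3);
    println!("{} {}", cents.to_f64(), millis.to_f64());
}
```
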
23:13airlied: do we have a nir pass that moves global loads closer to their first use?
23:14airlied: I thought I could persuade opt move but it doesn't seem to be doing what I want even after I modified it
23:16jenatali: airlied: nir_opt_sink?
23:16jenatali: I don't see load_global handled though
23:18airlied: yeah I've been adding load global support
23:18airlied: opt sink seems more block focused
23:18airlied: I've only got a single block
23:19jenatali: Ah I see
23:22airlied: just have a shader with a lot of spilling and I think just pushing the global loads down would alleviate some of it
23:41Kayden: airlied: sounds kind of like nir_schedule.c (which also doesn't do load_global)
23:41Kayden: I think that nir_schedule is for ordering instructions within basic blocks; nir_opt_sink and nir_opt_gcm (including GVN) are for moving things across blocks
23:42Kayden: not a lot of drivers are using it today
23:42Kayden: but maybe it'd be helpful
23:52airlied: okay, if I hack in can_reorder and global support it moves things, but then it looks worse :-P
23:53airlied: reasons I'll never be a compiler engineer