17:21 fdobridge: <g​fxstrand> I've verified this, sort-of. Running the compositor and old GL driver under GDB, I can see that it gets `tile_mode = 0` and `tile_flags = 0` from nouveau.ko so it's seeing a linear buffer. I don't know where those flags are getting thrown away yet.
17:24 fdobridge: <g​fxstrand> When I run gears from the same GPU, I get the same flags.
17:25 fdobridge: <g​fxstrand> Just so we're all clear, I'm fine with the multi-GPU case not working correctly with old-school nouveau GL.
18:37 fdobridge: <d​wlsalmeida> hey, could somebody please educate me on the differences between the method types, i.e.: ninc vs 0inc vs 1inc etc?
18:39 fdobridge: <d​wlsalmeida> is this just about how the command is laid out in the push, or are there other implications I should be aware of?
18:42 fdobridge: <k​arolherbst🐧🦀> the inc just means how often the "method" gets increased: 1inc means the first value goes against the initial method and the second and subsequent values against the next one (useful for those `INLINE_DATA` methods, where the first value is e.g. the size and the rest the data), ninc increases with each value, and 0inc never increases
18:45 fdobridge: <d​wlsalmeida> I have a simple dump here:
18:45 fdobridge: <d​wlsalmeida>
18:45 fdobridge: <d​wlsalmeida> [0x00000000] HDR 20016000 subch 3 NINC
18:45 fdobridge: <d​wlsalmeida> mthd 0000 NV906F_SET_OBJECT
18:45 fdobridge: <d​wlsalmeida> .NVCLASS = (0x902d)
18:45 fdobridge: <d​wlsalmeida> .ENGINE = 0x0
18:45 fdobridge: <d​wlsalmeida>
18:45 fdobridge: <d​wlsalmeida> [0x00000002] HDR 20010000 subch 0 NINC
18:45 fdobridge: <d​wlsalmeida> mthd 0000 NV906F_SET_OBJECT
18:45 fdobridge: <d​wlsalmeida> .NVCLASS = (0xc797)
18:45 fdobridge: <d​wlsalmeida> .ENGINE = 0x0
18:46 fdobridge: <d​wlsalmeida> so NINC here means "0000" remains "0000" for both values?
18:56 fdobridge: <g​fxstrand> No, we check if it's the same device. drm_prime.c:921
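For reference, the check being pointed at is the self-import path in the kernel's PRIME code: if the dma-buf being imported was exported by the same drm_device, the existing GEM object is reused instead of being treated as a foreign buffer, so metadata like tile_mode/tile_flags stays attached to it. A paraphrased sketch (not a verbatim copy of drm_gem_prime_import_dev(), and with the real cross-device import path omitted):

```c
#include <linux/dma-buf.h>
#include <drm/drm_device.h>
#include <drm/drm_gem.h>
#include <drm/drm_prime.h>

/* Sketch of the same-device check; the cross-device fallback (attach, map
 * sg-table, driver import) is left out. */
static struct drm_gem_object *
prime_import_sketch(struct drm_device *dev, struct dma_buf *dma_buf)
{
	if (dma_buf->ops == &drm_gem_prime_dmabuf_ops) {
		struct drm_gem_object *obj = dma_buf->priv;

		if (obj->dev == dev) {
			/* Self-import: just take a reference on our own GEM
			 * object instead of importing it as a foreign buffer. */
			drm_gem_object_get(obj);
			return obj;
		}
	}

	return NULL; /* a real implementation falls back to a full import here */
}
```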
18:56 fdobridge: <g​fxstrand> Okay, that mystery is solved.
18:56 fdobridge: <g​fxstrand> Or at least partly solved.
18:58 fdobridge: <g​fxstrand> And... verified that running the compositor with NVK+Zink fixes the tiling corruption right up until we get that fence timeout
18:58 fdobridge: <g​fxstrand> Good. So I'm not crazy. That's always a good starting point.
19:00 fdobridge: <k​arolherbst🐧🦀> no, it would increase per pushed value, but both of those only push one value in the first place
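To illustrate the explanation above, here is a minimal C sketch of how the three header types distribute payload dwords onto methods. The 4-byte method stride and the `emit()` helper are assumptions for illustration only, not Mesa's actual pushbuf encoder:

```c
#include <stdint.h>
#include <stdio.h>

enum hdr_type { HDR_0INC, HDR_1INC, HDR_NINC };

/* Hypothetical stand-in for whatever consumes the (method, value) pairs. */
static void emit(unsigned mthd, uint32_t value)
{
   printf("mthd %04x <- 0x%08x\n", mthd, (unsigned)value);
}

/* Which method each payload dword lands on, per header type. */
static void push(enum hdr_type type, unsigned mthd,
                 const uint32_t *data, unsigned count)
{
   for (unsigned i = 0; i < count; i++) {
      switch (type) {
      case HDR_0INC: emit(mthd, data[i]);               break; /* method never advances */
      case HDR_1INC: emit(mthd + (i ? 4 : 0), data[i]); break; /* advances once, e.g. size then inline data */
      case HDR_NINC: emit(mthd + i * 4, data[i]);       break; /* advances with every value */
      }
   }
}

int main(void)
{
   const uint32_t vals[] = { 0x10, 0xaa, 0xbb, 0xcc };
   push(HDR_NINC, 0x0100, vals, 4);
   return 0;
}
```

In the dump above, both NINC headers carry only a single value, which is why the method never actually advances and `mthd 0000` shows up both times.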
19:18 fdobridge: <g​fxstrand> Hrm... Sometimes it's a timeout and sometimes it's a MMU fault
21:25 fdobridge: <g​fxstrand> @prop_energy_ball I'm finally looking into this explicit sync issue.
21:25 fdobridge: <g​fxstrand> It looks like no one is ever signaling the timeline so it's just sitting there in `WAIT_PENDING` forever.
21:25 fdobridge: <g​fxstrand> I'm digging through Mutter code now
21:31 fdobridge: <M​isyl with Max-Q Design> Probably worth smoke testing with gamescope as a session too? I at least know that code in and out and it's a lot simpler
21:36 fdobridge: <g​fxstrand> I think the problem is that it assumes EGL extensions which nouveau GL doesn't support
21:36 fdobridge: <g​fxstrand> Now I just need to dig through EGL specs to figure out which ones
21:39 fdobridge: <g​fxstrand> Yeah, it depends on EGL_ANDROID_native_fence_sync and never checks for it.
21:39 fdobridge: <g​fxstrand> I'll file a Mutter bug
21:41 fdobridge: <g​fxstrand> Of course the GNOME GitLab instance is going to deadname me... 🤦🏻‍♀️
21:46 fdobridge: <g​fxstrand> https://gitlab.gnome.org/GNOME/mutter/-/issues/3475
21:49 fdobridge: <g​fxstrand> Would someone more adventurous than me like to try out GNOME Shell running on NVK+Zink with the modifiers branch? I think it should be fine on Zink. We should support sync_fd export there.
22:01 fdobridge: <a​irlied> @gfxstrand do we expose that EGL ext on mesa at all?
22:02 fdobridge: <a​irlied> seems surprising mutter would depend on a non-mesa feature
22:02 fdobridge: <g​fxstrand> We do
22:02 fdobridge: <g​fxstrand> Iris exposes it on my laptop
22:03 fdobridge: <g​fxstrand> Zink+NVK has it
22:03 fdobridge: <g​fxstrand> Just no old-school nouveau
22:04 fdobridge: <g​fxstrand> It'd be implementable in old-school nouveau if we really cared but that doesn't fix the issue of users who have an old GL driver installed seeing explicit sync blow up
22:05 fdobridge: <a​irlied> that seems less of a problem since explicit sync is a fairly new mutter feature
22:05 fdobridge: <a​irlied> so it's likely if we could fix nouveau it would just be fine
22:05 fdobridge: <a​irlied> should mutter just not expose explicit sync with that ext?
22:06 fdobridge: <g​fxstrand> Yeah
22:06 fdobridge: <g​fxstrand> I could probably type the patch but I'm not set up to actually test it.
22:06 fdobridge: <g​fxstrand> It's like 3 lines of code
22:07 fdobridge: <a​irlied> I've "escalated" it
22:08 fdobridge: <g​fxstrand> Thanks!
22:08 fdobridge: <g​fxstrand> Should take Jonas or similar like 5 minutes
22:09 fdobridge: <!​DodoNVK (she) 🇱🇹> I don't think Mutter calls that ANDROID function though
22:09 fdobridge: <g​fxstrand> It does
22:10 fdobridge: <g​fxstrand> Just search for `DupNativeFenceFD`
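For context, a minimal sketch of the kind of gate being suggested: check the display's extension string before relying on the ANDROID fence path. The helper names here are hypothetical; only eglQueryString and the extension name come from the discussion:

```c
#include <string.h>
#include <stdbool.h>
#include <EGL/egl.h>

static bool
egl_has_extension(EGLDisplay dpy, const char *name)
{
   const char *exts = eglQueryString(dpy, EGL_EXTENSIONS);
   /* A production implementation should match whole tokens, not substrings. */
   return exts && strstr(exts, name) != NULL;
}

static bool
explicit_sync_supported(EGLDisplay dpy)
{
   /* eglDupNativeFenceFDANDROID only exists with this extension, so a
    * compositor should not advertise explicit sync without it. */
   return egl_has_extension(dpy, "EGL_ANDROID_native_fence_sync");
}
```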
22:16 fdobridge: <g​fxstrand> @airlied One more question to answer before I think things are in decent shape: Should I use the newly added query? Or should we drop the query from the kernel patch? I'm happy to go either way. I generally like having queries for things. I just didn't know it was there when I typed the patch.
22:20 fdobridge: <a​irlied> I'd go with the query, seems like a good plan to use them
22:20 fdobridge: <g​fxstrand> Okay. I'll fix that up quick
22:24 fdobridge: <g​fxstrand> Ugh... We seem to have deleted too much stuff from the uapi header again. 😩
22:26 fdobridge: <k​arolherbst🐧🦀> what do you mean?
22:26 fdobridge: <g​fxstrand> I tried to sync the header and the build failed
22:26 fdobridge: <k​arolherbst🐧🦀> oh no
22:26 fdobridge: <g​fxstrand> channel alloc stuff
22:26 fdobridge: <k​arolherbst🐧🦀> I know that I recently added stuff...
22:26 fdobridge: <g​fxstrand> Maybe I need to rebase?
22:27 fdobridge: <k​arolherbst🐧🦀> yeah.. probably
22:27 fdobridge: <k​arolherbst🐧🦀> maybe I screwed up
22:27 fdobridge: <g​fxstrand> Okay, I won't sweat it then
22:27 fdobridge: <k​arolherbst🐧🦀> but I've added things on the kernel side, because I started using them explicitly after the libdrm import
22:27 fdobridge: <k​arolherbst🐧🦀> or rather.. they weren't in the uapi header
22:28 fdobridge: <k​arolherbst🐧🦀> ahh yeah
22:28 fdobridge: <k​arolherbst🐧🦀> it's not in linus' tree yet
22:28 fdobridge: <k​arolherbst🐧🦀> @gfxstrand https://cgit.freedesktop.org/drm/drm-misc/commit/?id=460be1d527a8e296d85301e8b14923299508d4fc
22:28 fdobridge: <k​arolherbst🐧🦀> unless you meant something else
22:29 fdobridge: <g​fxstrand> Yup! Those are the ones.
22:29 fdobridge: <k​arolherbst🐧🦀> yeah.. still sitting in drm-misc
22:30 fdobridge: <g​fxstrand> That's fine. I'll scrape the header from drm-misc before merging anyway
22:36 fdobridge: <g​fxstrand> And... as usual, nothing that gets sent to dri-devel that I actually care about makes its way to my e-mail inbox, so I can't review the kernel patch. 🤦🏻‍♀️
22:37 fdobridge: <a​irlied> don't think it's been sent to a list yet
22:38 fdobridge: <g​fxstrand> https://lore.kernel.org/dri-devel/20240430155453.21132-1-mohamedahmedegypt2001@gmail.com/T/#u
22:38 fdobridge: <g​fxstrand> @mohamexiety Mind switching the RFC to PATCH and sending again? Also, you might want to line wrap that commit message.
22:39 fdobridge: <g​fxstrand> I apparently wasn't subscribed to dri-devel
22:39 fdobridge: <a​irlied> probably also add the nouveau list
22:39 fdobridge: <g​fxstrand> In theory, that's fixed now.
22:39 fdobridge: <m​ohamexiety> sure! but what do you mean with line wrap?
22:39 fdobridge: <a​irlied> and a pointer to the mesa MR which has been reviewed
22:39 fdobridge: <a​irlied> (or is in the process), and I do wonder if we should add cc: stable
22:39 fdobridge: <a​irlied> oh and an sob of course 🙂
22:41 fdobridge: <m​ohamexiety> I made a special mental note for the sob but I still forgot it for v2 lol
22:41 fdobridge: <k​arolherbst🐧🦀> you should use checkpatch because it points those things out
22:41 fdobridge: <g​fxstrand> The Mesa MR hasn't been reviewed. 😛
22:41 fdobridge: <k​arolherbst🐧🦀> `./scripts/checkpatch.pl`
22:42 fdobridge: <m​ohamexiety> ok so, drop the "RFC", link to the mesa MR, signed off by. what do you mean with stable/nouveau list?
22:42 fdobridge: <m​ohamexiety> TIL this exists, thanks!
22:43 fdobridge: <g​fxstrand> I've dropped all the WIP and HACK from the NVK MR so it's ready for real review.
22:43 fdobridge: <g​fxstrand> As of today, I've convinced myself that all the nonsense I've been seeing is either unfixable old GL garbage or extant kernel bugs.
22:44 fdobridge: <k​arolherbst🐧🦀> fair conclusion tbh
22:44 fdobridge: <!​DodoNVK (she) 🇱🇹> Do I need to wrap it in an unsafe block? :ferris:
22:45 fdobridge: <a​irlied> I think since this fixes the original uapi it should probably add a Fixes: b88baab82871 ("drm/nouveau: implement new VM_BIND uAPI")
22:45 fdobridge: <a​irlied> @mohamexiety just when sending it include nouveau@lists.freedesktop.org on the cc as well as dri-devel
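Putting those suggestions together, the trailer block on the resend might look roughly like this. The subject line, body, and MR link are placeholders; only the Fixes: hash and the author address come from the messages above:

```
drm/nouveau: <patch subject goes here>

<commit message body, wrapped at 72 columns>

Link: <URL of the Mesa MR under review>
Fixes: b88baab82871 ("drm/nouveau: implement new VM_BIND uAPI")
Cc: stable@vger.kernel.org
Signed-off-by: Mohamed Ahmed <mohamedahmedegypt2001@gmail.com>
```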
22:45 fdobridge: <k​arolherbst🐧🦀> so I guess the next step after all that's done on the nvk side, is to move GL to VM_BIND and make it my problem to fix it in a way that works?
22:45 fdobridge: <a​irlied> I think next step might be making sync fd work on nvc0
22:46 fdobridge: <k​arolherbst🐧🦀> mhhh
22:46 fdobridge: <a​irlied> so when we merge this do we set nvc0 to recommend zink on everything nvk supports?
22:46 fdobridge: <k​arolherbst🐧🦀> wouldn't that be easier with VM_BIND?
22:46 fdobridge: <k​arolherbst🐧🦀> and syncobjs?
22:46 fdobridge: <m​ohamexiety> alright, got it
22:47 fdobridge: <a​irlied> @karolherbst it might be, but not sure it's necessary to go all the way for just sync fd (though I've no idea)
22:47 fdobridge: <k​arolherbst🐧🦀> yeah.....
22:47 fdobridge: <k​arolherbst🐧🦀> I have no idea tbh
22:48 fdobridge: <g​fxstrand> As long as you fix nvc0 to implement modifiers properly as part of the process, there shouldn't be any issues.
22:48 fdobridge: <g​fxstrand> What I've done should be forwards-compatible.
22:48 fdobridge: <k​arolherbst🐧🦀> I'm just reluctant to move to zink because of regressions, however I trust zink more than the nvc0 driver
22:48 fdobridge: <g​fxstrand> There is just a short list of known broken things with nvc0 but most stuff works.
22:48 fdobridge: <k​arolherbst🐧🦀> depends on what that entails, but a short list sounds fine for now
22:48 fdobridge: <a​irlied> as long as gnome-shell works I think zink is just as valid as nvc0 for most things
22:49 fdobridge: <a​irlied> I don't think anyone is really doing anything serious with nvc0
22:49 fdobridge: <k​arolherbst🐧🦀> the big question is...
22:49 fdobridge: <k​arolherbst🐧🦀> what if the host runs zink and the flatpak runs old nvc0
22:49 fdobridge: <k​arolherbst🐧🦀> (or the other way around)
22:50 fdobridge: <k​arolherbst🐧🦀> yeah.. I don't care all that much about breaking random games, because... vulkan is kinda the de facto best option anyway
22:50 fdobridge: <g​fxstrand> That should work.
22:50 fdobridge: <k​arolherbst🐧🦀> I'm mostly just concerned about random issues users start to file, especially in flatpak use cases, that we then can't fix
22:51 fdobridge: <l​eopard1907> I wonder if things like cad apps, blender and similar stuff works ok on zink
22:51 fdobridge: <g​fxstrand> The modifiers nvc0 advertises are correct AFAIK. It's only when you import an image with modifiers into nvc0 that things start to break down
22:51 fdobridge: <k​arolherbst🐧🦀> ahh
22:51 fdobridge: <k​arolherbst🐧🦀> that's kinda a relief
22:51 fdobridge: <g​fxstrand> So nvc0 on anything *should* be fine
22:52 fdobridge: <k​arolherbst🐧🦀> right.. but with flatpak it can go either direction depending on things
22:52 fdobridge: <k​arolherbst🐧🦀> new runtime on old host, or old runtime on new host
22:52 fdobridge: <g​fxstrand> Yeah and as long as we keep the "set the tile mode anyway" hack, the other direction should be fine most of the time.
22:52 fdobridge: <k​arolherbst🐧🦀> in which cases might it not be fine?
22:52 fdobridge: <g​fxstrand> Where things break down is if you don't do a dedicated allocation
22:52 fdobridge: <g​fxstrand> Which there's no reason for us to not support because NVK handles that fine
22:53 fdobridge: <g​fxstrand> We recommend but don't require dedicated allocations.
22:53 fdobridge: <k​arolherbst🐧🦀> mhh, I see
22:53 fdobridge: <g​fxstrand> The WSI code always uses them and Zink should as well.
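For reference, a sketch of the dedicated-allocation pattern being described when importing a dma-buf image in Vulkan. The variable names (`device`, `image`, `dmabuf_fd`, `size`, `memory_type`) are assumptions and error handling is omitted; this is an illustration, not NVK's WSI code:

```c
#include <vulkan/vulkan.h>

/* Allocate the imported dma-buf as a dedicated allocation tied to the image. */
static VkDeviceMemory
import_dedicated(VkDevice device, VkImage image, int dmabuf_fd,
                 VkDeviceSize size, uint32_t memory_type)
{
   VkImportMemoryFdInfoKHR import_info = {
      .sType = VK_STRUCTURE_TYPE_IMPORT_MEMORY_FD_INFO_KHR,
      .handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_DMA_BUF_BIT_EXT,
      .fd = dmabuf_fd,
   };
   VkMemoryDedicatedAllocateInfo dedicated_info = {
      .sType = VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO,
      .pNext = &import_info,
      .image = image, /* ties the allocation to this image */
   };
   VkMemoryAllocateInfo alloc_info = {
      .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
      .pNext = &dedicated_info,
      .allocationSize = size,
      .memoryTypeIndex = memory_type,
   };
   VkDeviceMemory mem = VK_NULL_HANDLE;
   vkAllocateMemory(device, &alloc_info, NULL, &mem);
   vkBindImageMemory(device, image, mem, 0);
   return mem;
}
```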
22:53 fdobridge: <a​irlied> @leopard1907 why do you think nvc0 has been tested with those?
22:54 fdobridge: <k​arolherbst🐧🦀> we have bugs on blender
22:54 fdobridge: <m​ohamexiety> is there a standard max character length before I wrap?
22:54 fdobridge: <a​irlied> 72 usually
22:55 fdobridge: <m​ohamexiety> alright, thanks!
22:55 fdobridge: <k​arolherbst🐧🦀> but anyway, I think we can assume that people run nvc0 on everything imaginable
22:55 fdobridge: <k​arolherbst🐧🦀> at least according to the bug reports we do kinda have
22:56 fdobridge: <g​fxstrand> But yeah, I think we're as good to go as we're going to be. We just need to review and land all the things and I need to write the blog post.
22:56 fdobridge: <k​arolherbst🐧🦀> yeah, sounds good
22:56 fdobridge: <k​arolherbst🐧🦀> users are going to complain in a year when it's broken anyway
22:56 fdobridge: <g​fxstrand> Yup
22:57 fdobridge: <g​fxstrand> Oh, and we need testing. I don't typically test the full stack but I know some folks here are running the full stack regularly
22:58 fdobridge: <g​fxstrand> @tiredchiku or @redsheep might be able to help with that. I think they've both run a full Zink+NVK stack.
22:58 fdobridge: <g​fxstrand> And maybe @asdqueerfromeu
22:58 fdobridge: <g​fxstrand> We should also make sure GameScope is working properly with the latest branches
23:05 fdobridge: <r​edsheep> Yeah I can do some testing for sure, meant to get to it sooner. Unfortunately I don't think any of us three have gnome, since you mentioned that specifically
23:05 fdobridge: <!​DodoNVK (she) 🇱🇹> zink isn't as important for me because of my PRIME setup
23:05 fdobridge: <g​fxstrand> What do you usually run?
23:05 fdobridge: <r​edsheep> Oh yeah you and Sid are on prime
23:06 fdobridge: <r​edsheep> Plasma, and Sid is generally sway afaik
23:06 fdobridge: <g​fxstrand> Ah
23:06 fdobridge: <k​arolherbst🐧🦀> I could probably do gnome testing, I just hate messing up my system mesa install with random builds 😄
23:07 fdobridge: <m​ohamexiety> @airlied @gfxstrand done, re-sent
23:07 fdobridge: <g​fxstrand> Same same
23:07 fdobridge: <!​DodoNVK (she) 🇱🇹> I switched to Plasma
23:07 fdobridge: <r​edsheep> Technically you could point the icd at a test build, but for the zink part that's kinda painful if your zink is old. I learned that the hard way as you know
23:07 fdobridge: <k​arolherbst🐧🦀> yeah...
23:08 fdobridge: <k​arolherbst🐧🦀> I wish I could tell a systemd service to start within a meson devenv tbh
23:08 fdobridge: <k​arolherbst🐧🦀> mhhhhhh
23:08 fdobridge: <k​arolherbst🐧🦀> actually...
23:08 fdobridge: <r​edsheep> That might be possible
23:08 fdobridge: <m​ohamexiety> the endgame is probably 2 systems or something like that. not very convenient space-wise but it feels like it'd be a big timesaver in cases like this
23:09 fdobridge: <k​arolherbst🐧🦀> there is this `meson-vscode.env` file
23:09 fdobridge: <m​ohamexiety> but `devenv` is really powerful
23:09 fdobridge: <k​arolherbst🐧🦀> but that relies on the vscode ext
23:09 fdobridge: <k​arolherbst🐧🦀> but...
23:09 fdobridge: <k​arolherbst🐧🦀> there is also `meson devenv $cmd...`
23:09 fdobridge: <m​ohamexiety> also huh
23:10 fdobridge: <m​ohamexiety> ```
In file included from ../src/gallium/targets/dri/target.c:3:
../src/gallium/auxiliary/target-helpers/drm_helper.h:220:1: internal compiler error: Segmentation fault
  220 | pipe_vmwgfx_create_screen(int fd, const struct pipe_screen_config *config)
      | ^~~~~~~~~~~~~~~~~~~~~~~~~
Please submit a full bug report, with preprocessed source.
See <http://bugzilla.redhat.com/bugzilla> for instructions.
The bug is not reproducible, so it is likely a hardware or OS problem.
```
23:10 fdobridge: <k​arolherbst🐧🦀> but I'm not sure how well that works with the logind/systemd integration stuff...
23:11 fdobridge: <m​ohamexiety> @gfxstrand quick test of latest branch version: gamescope still works
23:11 fdobridge: <m​ohamexiety> https://cdn.discordapp.com/attachments/1034184951790305330/1237904785093558272/image.png?ex=663d582b&is=663c06ab&hm=790d707d38888a71d526616811f3cc7c5ba4b86a2847f0b76d3418275edad61b&
23:13 fdobridge: <a​irlied> @mohamexiety random ICE like that is often a sign of bad RAM or CPU
23:13 fdobridge: <a​irlied> memtest time
23:14 fdobridge: <m​ohamexiety> uh oh. I did run memtest on this when I got it a month or so ago and it was fine. will rerun and hopefully it's not a sign of something bad
23:14 fdobridge: <r​edsheep> I will get a new kernel and mesa builds to test nvk+zink on the modifiers branch; are we still using the gfxstrand kernel branch from gitlab?
23:15 fdobridge: <r​edsheep> IMO the real acid test will be whether this destroys my display server's performance the way it did before; it always sent about 5 fps to the display regardless of what was happening
23:22 fdobridge: <r​edsheep> I don't see any newer commits than a week ago here: https://gitlab.freedesktop.org/gfxstrand/linux/-/tree/nvk?ref_type=heads
23:22 fdobridge: <r​edsheep>
23:22 fdobridge: <r​edsheep> Is there another place I should be looking for a kernel to build, or have there not been newer kernel patches?
23:22 fdobridge: <m​ohamexiety> no it's the same patch
23:22 fdobridge: <m​ohamexiety> the newer version just has a fixed commit description
23:22 fdobridge: <m​ohamexiety> functionally it's literally the same
23:22 fdobridge: <r​edsheep> Ah, okay great
23:27 fdobridge: <r​edsheep> Nice it built this time without any errors
23:31 fdobridge: <l​eopard1907> Good point 🐸
23:31 fdobridge: <l​eopard1907> Yes, people won't be using and testing such a stack when there was no reclocking support on relevant hw for years
23:36 fdobridge: <!​DodoNVK (she) 🇱🇹> I got one with a massive C file
23:37 fdobridge: <g​fxstrand> I just pushed again. That version should have been good but the new push has a fix from @airlied
23:40 fdobridge: <r​edsheep> Uhhhhh I was mid clone, if you just pushed I don't know yet what I have got. Is gitlab crazy or does that branch only have old stuff now? https://gitlab.freedesktop.org/gfxstrand/linux/-/commits/nvk
23:46 fdobridge: <r​edsheep> Yeah, it's not just the page
23:47 fdobridge: <r​edsheep> My clone only has commits up to January 10th