00:05fdobridge_: <gfxstrand> https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27205
00:05fdobridge_: <gfxstrand> I'm CTSing that now
00:06fdobridge_: <gfxstrand> It's a little terrifying that it basically worked on the first try.
00:12fdobridge_: <karolherbst🐧🦀> wait.. so the basic idea is, you have a copy-only context whose sole purpose is to copy from one bo into another, and the source bo is filled via memcpy? And "all the copies" simply go through that queue?
00:14fdobridge_: <karolherbst🐧🦀> btw, for small uploads there is a better way of doing those things 😛
00:15fdobridge_: <karolherbst🐧🦀> 3D and compute have their own `LAUNCH_DMA` methods, which copy data embedded in the pushbuffer into a buffer
00:15fdobridge_: <karolherbst🐧🦀> which is super useful for like... small uploads
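A hedged sketch of the inline-upload path being described here: the 3D and compute classes can copy data embedded directly in the pushbuffer into a buffer in memory. The `struct push` type, the `push_mthd()`/`push_inline()` helpers, and the `LAUNCH_DMA` field name below are hypothetical stand-ins; the method names follow the usual LINE_LENGTH_IN / OFFSET_OUT / LAUNCH_DMA / LOAD_INLINE_DATA pattern, but the exact macros and encodings vary per class and per driver.

```
/*
 * Hedged sketch only: copy size_bytes of data embedded in the pushbuffer
 * into a buffer at dst_addr, the way the 3D/compute classes allow.
 * push_mthd()/push_inline() are hypothetical pushbuffer helpers.
 */
static void
push_inline_upload(struct push *p, uint64_t dst_addr,
                   const uint32_t *data, uint32_t size_bytes)
{
   push_mthd(p, LINE_LENGTH_IN, size_bytes);       /* one "line" of bytes */
   push_mthd(p, LINE_COUNT, 1);
   push_mthd(p, OFFSET_OUT_UPPER, dst_addr >> 32); /* destination VA      */
   push_mthd(p, OFFSET_OUT_LOWER, (uint32_t)dst_addr);
   push_mthd(p, LAUNCH_DMA, LAUNCH_DMA_DST_MEMORY_LAYOUT_PITCH);
   /* The payload follows inline, typically as one non-incrementing
    * method header with size_bytes / 4 data words behind it. */
   push_inline(p, LOAD_INLINE_DATA, data, size_bytes / 4);
}
```

Because the data rides in the pushbuffer itself, this avoids a separate staging bo for tiny uploads, which is the appeal for small UBO-style updates.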
00:20fdobridge_: <airlied> @dwlsalmeida I think I've only tried on ampere or ada, you might have to adjust some classes, I'll look when I get home. Also fluster is probably overplaying it; I never even got that one CTS test to pass properly, it decoded the right colour in parts of the image
00:38fdobridge_: <airlied> @dwlsalmeida I assume you are booting with GSP enabled?
00:42fdobridge_: <airlied> I'll assume not GSP booted until stated otherwise :0
00:58fdobridge_: <dwlsalmeida> uhh....
00:58fdobridge_: <dwlsalmeida> 😐
00:58fdobridge_: <dwlsalmeida> yeah, no..
00:59fdobridge_: <dwlsalmeida> I totally forgot about this
00:59fdobridge_: <airlied> one thing that is definitely wrong in the code is I've no idea how to size some of the allocations, which is why I was targeting a single small decode path
00:59fdobridge_: <airlied> probably need to work out from the prop driver how it does some of the mem alloc sizings
01:00fdobridge_: <dwlsalmeida> for the coded data, you mean?
01:01fdobridge_: <airlied> for GetVideoSessionMemoryRequirementsKHR
01:01fdobridge_: <airlied> though not sure those are size based usually
01:01fdobridge_: <airlied> so also for the dpb
01:01fdobridge_: <airlied> oh yeah those can be based off the max coded size
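A rough illustration of the "based off the max coded size" idea for the DPB sizing mentioned above. The heuristic below is purely a guess for the sake of the example (8-bit NV12/4:2:0 layout, 64-byte alignment); it is not derived from the proprietary driver.

```
#include <stdint.h>

/* Hedged sketch: size a DPB allocation from the session's max coded
 * extent and DPB slot count.  The alignment and the 8-bit NV12 (4:2:0)
 * assumptions are guesses for illustration only. */
static uint64_t
guess_dpb_size(uint32_t max_coded_width, uint32_t max_coded_height,
               uint32_t max_dpb_slots)
{
   const uint64_t align = 64;
   uint64_t w = (max_coded_width  + align - 1) & ~(align - 1);
   uint64_t h = (max_coded_height + align - 1) & ~(align - 1);
   uint64_t luma   = w * h;      /* 8-bit luma plane          */
   uint64_t chroma = luma / 2;   /* interleaved 4:2:0 chroma  */
   return (luma + chroma) * max_dpb_slots;
}
```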
01:01fdobridge_: <gfxstrand> Yes, and useless for shader upload because I don't know what queue it's used on.
01:02fdobridge_: <karolherbst🐧🦀> yeah.. not saying you should use it for shader uploads, just generally speaking 😄
01:02fdobridge_: <gfxstrand> Well, yeah. I know that...
01:03fdobridge_: <dwlsalmeida> @airlied thanks for pointing out the GSP thing, that was very helpful
01:03fdobridge_: <dwlsalmeida> will check again tomorrow
01:04fdobridge_: <airlied> gsp should work on 2060s with 6.8-rc1; I think I pushed the last fix there. 6.7 might not have it
01:04fdobridge_: <airlied> if you have an older 2060 that is
01:04fdobridge_: <dwlsalmeida> what constitutes an "older" 2060 btw?
01:05fdobridge_: <dwlsalmeida> but yeah I'll run 6.8-rc1
01:05fdobridge_: <karolherbst🐧🦀> mhhh... might be useful to use it for ubos now that they are indeed used.. though not sure if that has the same issue as with shader uploads...
01:05fdobridge_: <airlied> older is one that fails to work with 6.7 😛
01:06fdobridge_: <dwlsalmeida> lol 😄
01:06fdobridge_: <gfxstrand> I.e., mine. 😂
01:08fdobridge_: <karolherbst🐧🦀> yours failed to work with 6.7?
01:09fdobridge_: <gfxstrand> It did last I tried.
01:09fdobridge_: <gfxstrand> My 12G 2060 is fine. Just not the 8G Founders Edition
01:10fdobridge_: <gfxstrand> Which is ironic because the 8G card is the one I bought because the 12G card didn't work with nouveau until we bumped firmware versions.
01:11fdobridge_: <airlied> fix will end up in some 6.7 stable release at some point
01:12fdobridge_: <karolherbst🐧🦀> yeah I remember... oh well.. as long as it works with mainline who cares 😛
01:13fdobridge_: <gfxstrand> Yeah, it's been an adventure
01:13fdobridge_: <karolherbst🐧🦀> ~~just update straight with rc1, what's the worst that could happen anyway~~
01:14fdobridge_: <airlied> 6.8-rc1 should have all the fun bo exec locking improvements
02:03fdobridge_: <gfxstrand> This is fun...
02:04fdobridge_: <gfxstrand> https://cdn.discordapp.com/attachments/1034184951790305330/1199172696475435008/message.txt?ex=65c1939a&is=65af1e9a&hm=0f740b411f7ebb01a931abe371f870c01081b18b67f2fe46fe52b6c98a8f5b5e&
02:07fdobridge_: <airlied> https://gitlab.freedesktop.org/drm/nouveau/-/issues/280
02:07fdobridge_: <airlied> yeah seen it once, not again, no ideas
02:08fdobridge_: <airlied> feels like some sort of teardown race
02:11fdobridge_: <gfxstrand> NGL, I'm a little surprised that this seems to "just work"
02:14fdobridge_: <airlied> I wonder if the client is getting torn down in parallel somehow
02:14fdobridge_: <gfxstrand> seems unlikely.
02:14fdobridge_: <gfxstrand> Well... if we're doing threaded submit...
02:15fdobridge_: <gfxstrand> It's deep enough inside worker threads that it's a little hard to tell.
02:16fdobridge_: <gfxstrand> @airlied BTW, Some sort of mystery kernel issue is the only thing standing between us and 1.3 conformance right now.
02:16fdobridge_: <gfxstrand> Well, I say kernel issue but I don't actually know that it's a kernel issue.
02:17fdobridge_: <gfxstrand> But it's a nasty heisenbug that almost always triggers during synchronization import/export tests. 😬
02:18fdobridge_: <airlied> a different one to the above crash?
02:22fdobridge_: <gfxstrand> Yeah
02:23fdobridge_: <gfxstrand> It doesn't crash. Just kills my context with no explanation
02:23fdobridge_: <gfxstrand> Doesn't do it when I run the test by itself, of course.
02:23fdobridge_: <gfxstrand> IDK how to reproduce it without running a substantial chunk of the CTS.
02:23fdobridge_: <gfxstrand> It's the very best kind of bug. 😭
02:25fdobridge_: <airlied> would be good to see it reproduce on 6.8-rc1
02:33fdobridge_: <Sid> such luck
02:33fdobridge_: <Sid> https://github.com/terminatorul/NvStrapsReBar
02:33fdobridge_: <Sid> much wow
02:34fdobridge_: <Sid> might try it out one of these days
03:29fdobridge_: <airlied> @gfxstrand yeah, for the oops above, 6.8-rc1 does reorg that code, so I'm not sure the race is there anymore; we'd have to see it
03:30fdobridge_: <airlied> I'm just running a run_deqp.sh run on my ga106 on 6.8-rc1 (edited)
03:33fdobridge_: <airlied> what gpu are you going for conformance on? turing?
03:37fdobridge_: <gfxstrand> I was hoping to do Turing+
03:39fdobridge_: <gfxstrand> But I've had problems on both Turing and Ampere
03:46fdobridge_: <redsheep> Hopefully that means my similarly rare crash is cured there as well, I will give it a shot soon. Just takes forever to reproduce it.
03:46fdobridge_: <redsheep> I was browsing https://nouveau.freedesktop.org/FeatureMatrix.html after an issue mentioned it and I see that SLI is marked TODO on NV190 which should be N/A
03:47fdobridge_: <redsheep> SLI is well and truly dead on Ada
03:48fdobridge_: <redsheep> Unless you're talking about explicit multi gpu or whatever but almost nobody uses that so not sure it's even worth mentioning
04:38fdobridge_: <airlied> Pass: 659675, Fail: 94, Crash: 8, Skip: 2041179, Flake: 2, Duration: 1:17:51, Remaining: 0
05:31fdobridge_: <gfxstrand> What's that?
05:31fdobridge_: <gfxstrand> Did someone add more test fails?
05:36fdobridge_: <gfxstrand> Also the fails I'm seeing don't show up with deqp-runner
05:36fdobridge_: <airlied> that is my ga106 with main and cts main I think
05:37fdobridge_: <airlied> I'm having a look for the timeout
05:39fdobridge_: <airlied> really doesn't look like the kernel is doing anything wrong here, except tdr fires after 10s because the fence never signals
05:51fdobridge_: <gfxstrand> Yeah, I need to maybe do some test timing or something. The tests that are failing pass very quickly when run alone or even as part of a group.
05:57fdobridge_: <airlied> yeah 10s is a long time though, like infinite loop type of time
06:02fdobridge_: <redsheep> I know this is probably a ways down the road, but I have been looking into what would be needed for CUDA to work properly with mesa, and I think I have found something that can serve as a proof of concept for "just" implementing PTX
06:02fdobridge_: <redsheep>
06:02fdobridge_: <redsheep> https://github.com/gtcasl/gpuocelot
06:03fdobridge_: <redsheep> This would seem to indicate that if NAK could be made to intake PTX instructions, (or maybe it could go PTX > NIR > NAK?) then CUDA programs could just work
06:03fdobridge_: <Sid> PTX > NAK > NIR but yeah
06:04fdobridge_: <gfxstrand> No, PTX -> NIR -> NAK
06:04fdobridge_: <Sid> oh
06:04fdobridge_: <Sid> but yeah, *massive* could there
06:05fdobridge_: <gfxstrand> We'd just add a bunch of `_ptx` ops to NIR
06:05fdobridge_: <Sid> if it was as simple as accepting ptx cuda wouldn't have vendor lock in
06:05fdobridge_: <redsheep> Well, that project is a thing that works. From what I have heard PTX executes really slowly if you don't have nvidia hardware though.
06:06fdobridge_: <gfxstrand> Well, implementing PTX on non-NVIDIA GPUs isn't going to be super efficient.
06:06fdobridge_: <Sid> ah
06:06fdobridge_: <gfxstrand> It's got a lot of very NVIDIA-specific behavior baked in.
06:06fdobridge_: <airlied> there is also ZLUDA project
06:06fdobridge_: <gfxstrand> Which would have to be emulated.
06:07fdobridge_: <Sid> makes sense
06:08fdobridge_: <gfxstrand> It's not quite as bad as, like, a Switch emulator where you have to emulate the actual hardware, but it would take a decent amount of optimization to undo all the NVIDIAisms, and it still wouldn't be 100% of the perf compared to being compiled directly for AMD or Intel.
06:08fdobridge_: <Sid> unrelated but I have everything needed to try this out ready
06:08fdobridge_: <Sid> just need to patch the .ffs module into my bios and flash it
06:09fdobridge_: <airlied> also, some newer PTX things you can't really do at all on other GPUs
06:09fdobridge_: <Sid> which should be simpler than the first time I did it because now I can just replace the old module
06:10fdobridge_: <gfxstrand> Yeah... And some of them would be tricky to retrofit into NIR. Probably not impossible but tricky.
06:10fdobridge_: <redsheep> It would be nice to have it pass through NIR so it could be at least theoretically possible for Intel and AMD to use it if the other drivers want to implement the needed emulation, but that would probably blow up the scope of the project massively
06:11fdobridge_: <redsheep> Having working DLSS on AMD hardware would be amazing
06:12fdobridge_: <gfxstrand> I want it to go through NIR so NIR can optimize it.
06:12fdobridge_: <Sid> technically it should be possible to enable it already, provided you spoof the right things in the right places
06:12fdobridge_: <gfxstrand> Even if it's a shit load of `_ptx` instructions, I want the optimizer.
06:13fdobridge_: <Sid> it'll just be slow, because nv-isms
06:13fdobridge_: <Sid> and I'm not sure how nvapi/nvngx will handle that
06:14fdobridge_: <airlied> hmm running sync tests under strace seems to make it less likely to die
06:14fdobridge_: <airlied> oh no just got it
06:15fdobridge_: <Sid> yeah, @redsheep `DXVK_NVAPI_ALLOW_OTHER_DRIVERS=1`
06:16fdobridge_: <redsheep> That gets you NVAPI so you can have stuff like reflex on latencyflex, doesn't actually make DLSS work.
06:16fdobridge_: <gfxstrand> Oh, really? I'm about to roll over and sleep but I eagerly await your findings in the morning. 😁
06:17fdobridge_: <Sid> though it'll only work for games that check only by device/vendor pci ids
06:18fdobridge_: <airlied> my findings may be that I went and made dinner instead 😛
06:18fdobridge_: <Sid> have you tried providing a dxvk.conf with custom ids?
06:19fdobridge_: <redsheep> I don't have AMD hardware to test it on right now, but I looked into DLSS vendor lock-in a good bit and there's good reason people are modding FSR into DLSS-only games for things like the Steam Deck. You can't just spoof it into working; there's some part of DLSS that uses some PTX or CUDA stuff.
06:20fdobridge_: <redsheep> I am not clear on the details but it doesn't work.
06:20fdobridge_: <airlied> actually I'm sorta back to convincing myself it might be kernel, ah well gotta keep digging I suppose
06:22fdobridge_: <Sid> ah, fair, makes sense
06:22fdobridge_: <redsheep> RDNA 3 probably even has enough matrix multiply performance that if DLSS was able to actually run it would perform well, assuming it isn't too horrible to emulate things that aren't 1 to 1
06:41fdobridge_: <airlied> bleh I think I know the problem and I think my recent fix for the prime bug made it worse, but it's a race condition on fencing
06:42fdobridge_: <airlied> you emit a fence to the hw, then when someone calls fence signalling you enable irqs, but the fence might already have passed by that time, so you just never get the signalling event
06:44fdobridge_: <airlied> at least that's my current working theory
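A hedged illustration of the race airlied describes, written against the dma_fence `enable_signaling` contract. The `nvxx_*` names and structs are hypothetical stand-ins rather than the real nouveau code; the point is only the ordering and the re-check after enabling the interrupt.

```
#include <linux/dma-fence.h>

struct nvxx_chan;

struct nvxx_fence {
	struct dma_fence base;
	struct nvxx_chan *chan;
	u32 seqno;
};

/* Hypothetical helpers standing in for the driver's real ones. */
bool nvxx_seqno_passed(struct nvxx_chan *chan, u32 seqno);
void nvxx_enable_seqno_irq(struct nvxx_chan *chan);

static bool nvxx_fence_enable_signaling(struct dma_fence *f)
{
	struct nvxx_fence *fence = container_of(f, struct nvxx_fence, base);

	/* Ask the hardware to interrupt us when the seqno retires. */
	nvxx_enable_seqno_irq(fence->chan);

	/*
	 * The fence may already have passed between being emitted and the
	 * irq being enabled.  Without this re-check no interrupt will ever
	 * fire for it, the fence never signals, and the job timeout kills
	 * the channel ~10s later.  Returning false tells dma_fence core
	 * the fence is already signaled.
	 */
	if (nvxx_seqno_passed(fence->chan, fence->seqno))
		return false;

	return true;
}
```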
06:51fdobridge_: <airlied> still seeing a timeout here and there though
07:23fdobridge_: <airlied> okay about to send out a patch that might fix it
07:27fdobridge_: <airlied> https://lore.kernel.org/dri-devel/20240123072538.1290035-1-airlied@gmail.com/T/#u
07:27fdobridge_: <airlied> @gfxstrand @karolherbst ^^^ probably need to check my logic here
07:27airlied: dakr: ^^ also you
07:28fdobridge_: <airlied> I'm making it through a full round of dEQP-VK.sync* now
07:31fdobridge_: <!DodoNVK (she) 🇱🇹> @ Sid This patch would be interesting to test :nouveau:
07:32fdobridge_: <Sid> I'll test it in a couple hours
07:45fdobridge_: <Sid> currently dicking around in UEFI
07:48fdobridge_: <tom3026> I'm compiling it now ^_^
07:50fdobridge_: <Sid> @asdqueerfromeu
07:57fdobridge_: <redsheep> If some of the deleted code here was added to fix issues with prime, does deleting it bring those issues back? I suppose that just needs testing.
07:57fdobridge_: <airlied> no it doesn't
07:58fdobridge_: <airlied> @gfxstrand did you have the fence work queue change in your tree that you were testing on?
07:59fdobridge_: <airlied> just thinking this might be overkill to fix the problem, but I'll sleep on it
08:02fdobridge_: <tom3026> either that patch or something else on linux-next fixed a bunch of weird stutters and freezes/timeouts on this ampere, but I'm seeing a lot less fps in unigine-heaven for some reason and vkcube is spinning in slow motion heh
08:02fdobridge_: <Sid> ok, building it now
08:02fdobridge_: <Sid> on top of 6.7.1 because I'm dumb like that
08:06fdobridge_: <tom3026> uhm okay, it spins fast on the laptop monitor when running on nouveau; dragging it over to the external monitor attached to the nvidia gpu, it slows down
08:15fdobridge_: <tom3026> https://streamable.com/qb2eya easier to show a "video": the game thinks it's rendering at, what, 80fps? that's me trying to move the mouse as fast as possible, and it's like it's drawing at 10fps 😄
08:50fdobridge_: <tom3026> ok seems to be a wayland thing, or kwin. works much better on x11
08:50fdobridge_: <tom3026> but it's some kind of combination with nouveau tho
09:15fdobridge_: <Sid> ```
09:15fdobridge_: <Sid> [Tue Jan 23 14:41:41 2024] nouveau 0000:01:00.0: SoTGame.exe[11694]: job timeout, channel 24 killed!
09:15fdobridge_: <Sid> [Tue Jan 23 14:41:41 2024] [drm:nouveau_job_submit [nouveau]] *ERROR* Trying to push to a killed entity
09:15fdobridge_: <Sid> [Tue Jan 23 14:42:33 2024] [TTM] Buffer eviction failed
09:15fdobridge_: <Sid> [Tue Jan 23 14:42:33 2024] nouveau 0000:01:00.0: gsp: Xid:13 Graphics SM Warp Exception on (GPC 0, TPC 0, SM 0): Out Of Range Address
09:15fdobridge_: <Sid> [Tue Jan 23 14:42:33 2024] nouveau 0000:01:00.0: gsp: Xid:13 Graphics SM Global Exception on (GPC 0, TPC 0, SM 0): Multiple Warp Errors
09:16fdobridge_: <Sid> [Tue Jan 23 14:42:33 2024] nouveau 0000:01:00.0: gsp: Xid:13 Graphics Exception: ESR 0x504730=0xc03000e 0x504734=0x4 0x504728=0x4c1eb72 0x50472c=0x174
09:16fdobridge_: <Sid> ```
09:17fdobridge_: <Sid> quake champions: `[Tue Jan 23 14:47:08 2024] nouveau 0000:01:00.0: [13028]: job timeout, channel 32 killed!`
09:19fdobridge_: <Sid> ^ Sea of Thieves
09:20fdobridge_: <Sid> richard burns rally - rallysimfans version
09:20fdobridge_: <Sid> ```
09:20fdobridge_: <Sid> [Tue Jan 23 14:49:42 2024] nouveau 0000:01:00.0: RichardBurnsRal[14704]: job timeout, channel 24 killed!
09:20fdobridge_: <Sid> [Tue Jan 23 14:49:48 2024] nouveau 0000:01:00.0: gsp: mmu fault queued
09:20fdobridge_: <Sid> [Tue Jan 23 14:49:48 2024] nouveau 0000:01:00.0: gsp: rc engn:00000001 chid:24 type:31 scope:1 part:233
09:20fdobridge_: <Sid> [Tue Jan 23 14:49:48 2024] nouveau 0000:01:00.0: fifo:001001:0003:0018:[RichardBurnsRal[14704]] errored - disabling channel
09:20fdobridge_: <Sid> ```
09:21fdobridge_: <Sid> disclaimer: none of these games worked on nvk before either
09:24fdobridge_: <Sid> I'll run a cts test in a while and see if I get any there
09:32fdobridge_: <tom3026> where did you get the cts from? was just curious about running some myself heh
09:33fdobridge_: <tom3026> doesn't seem to be in AUR from what i can tell
09:34fdobridge_: <Sid> https://github.com/KhronosGroup/VK-GL-CTS/wiki
09:36fdobridge_: <tom3026> ah ok
10:10fdobridge_: <airlied> Yeah didn't think it would solve the rc or mmu faults, not sure we've seen sync fails outside cts
10:42fdobridge_: <tom3026> meh, nothing is going as planned; the vulkan cts fails to build with a bunch of "error: ‘VkPipelineOfflineCreateInfo’ in namespace ‘vk’ does not name a type"
10:55fdobridge_: <Sid> ```
10:55fdobridge_: <Sid> [Tue Jan 23 16:20:43 2024] __vm_enough_memory: pid: 2144, comm: deqp-vk, not enough memory for the allocation
10:55fdobridge_: <Sid> [Tue Jan 23 16:20:43 2024] __vm_enough_memory: pid: 2144, comm: deqp-vk, not enough memory for the allocation
10:55fdobridge_: <Sid> [Tue Jan 23 16:20:43 2024] __vm_enough_memory: pid: 2144, comm: deqp-vk, not enough memory for the allocation
10:55fdobridge_: <Sid> [Tue Jan 23 16:20:43 2024] __vm_enough_memory: pid: 2144, comm: deqp-vk, not enough memory for the allocation
10:55fdobridge_: <Sid> [Tue Jan 23 16:21:13 2024] nouveau 0000:01:00.0: Enabling HDA controller
10:55fdobridge_: <Sid> [Tue Jan 23 16:21:14 2024] xhci_hcd 0000:01:00.2: xHC error in resume, USBSTS 0x401, Reinit
10:55fdobridge_: <Sid> [Tue Jan 23 16:21:14 2024] usb usb3: root hub lost power or was reset
10:55fdobridge_: <Sid> [Tue Jan 23 16:21:14 2024] usb usb4: root hub lost power or was reset
10:55fdobridge_: <Sid> [Tue Jan 23 16:21:44 2024] nouveau 0000:01:00.0: Enabling HDA controller
10:55fdobridge_: <Sid> [Tue Jan 23 16:21:44 2024] xhci_hcd 0000:01:00.2: xHC error in resume, USBSTS 0x401, Reinit
10:55fdobridge_: <Sid> [Tue Jan 23 16:21:44 2024] usb usb3: root hub lost power or was reset
10:55fdobridge_: <Sid> [Tue Jan 23 16:21:44 2024] usb usb4: root hub lost power or was reset
10:55fdobridge_: <Sid> [Tue Jan 23 16:22:58 2024] nouveau 0000:01:00.0: Enabling HDA controller
10:55fdobridge_: <Sid> [Tue Jan 23 16:22:58 2024] xhci_hcd 0000:01:00.2: xHC error in resume, USBSTS 0x401, Reinit
10:55fdobridge_: <Sid> [Tue Jan 23 16:22:58 2024] usb usb3: root hub lost power or was reset
10:55fdobridge_: <Sid> [Tue Jan 23 16:22:58 2024] usb usb4: root hub lost power or was reset
10:55fdobridge_: <Sid> [Tue Jan 23 16:23:34 2024] nouveau 0000:01:00.0: Enabling HDA controller
10:55fdobridge_: <Sid> [Tue Jan 23 16:23:35 2024] xhci_hcd 0000:01:00.2: xHC error in resume, USBSTS 0x401, Reinit
10:55fdobridge_: <Sid> [Tue Jan 23 16:23:35 2024] usb usb3: root hub lost power or was reset
10:55fdobridge_: <Sid> [Tue Jan 23 16:23:35 2024] usb usb4: root hub lost power or was reset
10:55fdobridge_: <Sid> [Tue Jan 23 16:24:02 2024] nouveau 0000:01:00.0: deqp-vk[2144]: job timeout, channel 24 killed!
10:55fdobridge_: <Sid> ```
10:55fdobridge_: <Sid> only one timeout now though
10:55fdobridge_: <Sid> instead of the tens I had before
10:56fdobridge_: <Sid> I'll update my cts and let it run again
10:56fdobridge_: <Sid> my build of the cts is *months* old
11:05fdobridge_: <Sid> for me it's failing with `make[2]: *** No rule to make target '/stable/xdg-shell/xdg-shell.xml', needed by 'framework/platform/xdg-shell.c'. Stop.`
11:06fdobridge_: <Sid> oh, missing dep, wayland-protocols
11:51fdobridge_: <tom3026> oh lol I'm running OOM on the cts build
12:32fdobridge_: <Sid> @airlied bunch of device losts on this cts run
12:33fdobridge_: <Sid> will share results and dmesg once it's done
12:33fdobridge_: <Sid> currently in QM class
13:11fdobridge_: <Sid> ```Test case 'dEQP-VK.memory.pipeline_barrier.host_read_host_write.1024'.. terminate called after throwing an instance of 'vk::Error' what(): vkd.deviceWaitIdle(device): VK_ERROR_DEVICE_LOST at vktMemoryPipelineBarrierTests.cpp:9345```
13:11fdobridge_: <Sid> aborted core dumped
13:54fdobridge_: <Sid> https://cdn.discordapp.com/attachments/1034184951790305330/1199351510534983860/dmesg.log?ex=65c23a22&is=65afc522&hm=fb5a86f21ed683e2a0f903689cd23fd378473d525984e55e788af58be2cbce4a&
14:02fdobridge_: <tom3026> your bcachefs seems broken :p
14:08fdobridge_: <Sid> am aware of that, yus
14:21fdobridge_: <gfxstrand> I'll pull, build, and test today
14:47fdobridge_: <gfxstrand> Might update to 6.8-rc1 while I'm at it.
14:50fdobridge_: <marysaka> ... I think I'm still on 6.7-rc1 with the original patches :nya_panic:
15:35fdobridge_: <gfxstrand> Building now...
16:00fdobridge_: <karolherbst🐧🦀> @gfxstrand I think clippy found a bug in NAK in regards to GS :ferrisUpsideDown: ...
16:01fdobridge_: <gfxstrand> That's possible
16:01fdobridge_: <karolherbst🐧🦀> but somebody should really fix those 500 clippy warnings
16:01fdobridge_: <karolherbst🐧🦀> 😄
16:01fdobridge_: <karolherbst🐧🦀> anyway, some cause errors, so I'm fixing that at least
16:03fdobridge_: <gfxstrand> How does one run clippy?
16:03fdobridge_: <gfxstrand> I see a cargo thing but we don't use cargo
16:04fdobridge_: <karolherbst🐧🦀> use `clippy-driver` as your rustc
16:04fdobridge_: <gfxstrand> ah
16:04fdobridge_: <karolherbst🐧🦀> either via a cross file or `RUSTC` env var
16:04fdobridge_: <gfxstrand> Well, feel free to submit an MR. I probably won't be looking at that for a little bit
16:04fdobridge_: <karolherbst🐧🦀> clippy generally points out also how to write cleaner code and stuff 😄 so it's kinda nice to learn better rust
16:05fdobridge_: <karolherbst🐧🦀> yeah.. I just fix the errors so I can use it with rusticl 😄
16:05fdobridge_: <karolherbst🐧🦀> https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27216
16:05fdobridge_: <karolherbst🐧🦀> the opt_out thing is the fix
16:05fdobridge_: <karolherbst🐧🦀> I think my change actually fixes it but please double check 😄
16:06fdobridge_: <gfxstrand> Oh, I didn't know `?` worked on `Option<T>`. That's super useful!
16:07fdobridge_: <karolherbst🐧🦀> yeah, it is 🙂
16:15fdobridge_: <gfxstrand> Built. Now CTSing.
16:15fdobridge_: <gfxstrand> I'm gonna be so happy if this works.
16:15fdobridge_: <gfxstrand> It's also going to take hours. 😭
16:16fdobridge_: <Sid> all the best
17:06fdobridge_: <gfxstrand> The good news is that it got all the way through the synchronization tests. The bad news is that I forgot to log into GNOME so it couldn't run the WSI tests.
17:23fdobridge_: <tom3026> Cs2 is running great, a bit low fps, but haven't triggered any bugs after like 3 hours 👍
17:24fdobridge_: <tom3026> airlied's patch applied, and even that eso gpl draft 😛
17:24fdobridge_: <gfxstrand> https://cdn.discordapp.com/attachments/1034184951790305330/1199404271964262481/rocky-and-bullwinkle-this-time-for-sure-scene-oeadei2i3cmtqg5r.png?ex=65c26b46&is=65aff646&hm=a8186863464c1084d3ed2a140fd9f6b4730fd4eb1383298e61f6280794021748&
17:24fdobridge_: <gfxstrand> Nice!
17:25fdobridge_: <tom3026> what deqp incantations do you run the cts with, just all of them?
17:25fdobridge_: <tom3026> feels like that computation isn't uh needed to test
17:25fdobridge_: <tom3026> if i understand it right
17:25fdobridge_: <tom3026> was just curious so i know what to run when i apply drafts xD
17:25fdobridge_: <gfxstrand> https://cdn.discordapp.com/attachments/1034184951790305330/1199404656246399026/run-conformance.sh?ex=65c26ba1&is=65aff6a1&hm=9aeafdbb4966be962491d60331668c22eb092bcc2a64c3fceb2bb0c45b09a43f&
17:26fdobridge_: <gfxstrand> That's the version for an actual CTS run that I want to submit to Khronos
17:26fdobridge_: <gfxstrand> I have a different script that invokes deqp-runner that I use for regression runs
17:31fdobridge_: <tom3026> okay thanks
17:31fdobridge_: <tom3026> that will do heh
17:43fdobridge_: <gfxstrand> Damn...
17:43fdobridge_: <gfxstrand> ```
17:43fdobridge_: <gfxstrand> [ 2438.122585] nouveau 0000:17:00.0: gsp: cli:0xc1d00002 obj:0x00730000 ctrl cmd:0x00731341 failed: 0x0000ffff
17:43fdobridge_: <gfxstrand> [ 2438.123293] nouveau 0000:17:00.0: gsp: cli:0xc1d00002 obj:0x00730000 ctrl cmd:0x00731341 failed: 0x0000ffff
17:43fdobridge_: <gfxstrand> [ 2438.123957] nouveau 0000:17:00.0: gsp: cli:0xc1d00002 obj:0x00730000 ctrl cmd:0x00731341 failed: 0x0000ffff
17:43fdobridge_: <gfxstrand> [ 5422.830047] nouveau 0000:17:00.0: gsp: mmu fault queued
17:43fdobridge_: <gfxstrand> [ 5422.835059] nouveau 0000:17:00.0: gsp: rc engn:00000001 chid:16 type:31 scope:1 part:233
17:43fdobridge_: <gfxstrand> [ 5422.835069] nouveau 0000:17:00.0: fifo:000000:0002:0010:[deqp-vk[8890]] errored - disabling channel
17:43fdobridge_: <gfxstrand> [ 5422.835077] nouveau 0000:17:00.0: deqp-vk[8890]: channel 16 killed!
17:43fdobridge_: <gfxstrand> ```
17:44fdobridge_: <gfxstrand> I may have to do this run in pieces.
18:11fdobridge_: <gfxstrand> Damn. Died again.
18:11fdobridge_: <gfxstrand> @airlied Looks like something with WSI tests + sync
18:12fdobridge_: <gfxstrand> runs again
18:14fdobridge_: <gfxstrand> I wonder if something in the WSI tests is causing our context to get permanently entangled with the compositor and that's messing something up.
18:15fdobridge_: <gfxstrand> Oh, that is an interesting theory....
18:17fdobridge_: <gfxstrand> Like maybe it gets added to a list somewhere when we VM_BIND but never properly gets removed.
18:17fdobridge_: <gfxstrand> If my current run fails, I'm going to disable WSI and attempt a full run
18:18fdobridge_: <gfxstrand> IDK if it's kosher to submit that but I might try
18:40fdobridge_: <tom3026> hm so that's why after wsi I'm getting all these device losts until the run stops heh
18:56fdobridge_: <gfxstrand> It's possible
18:56fdobridge_: <gfxstrand> https://cdn.discordapp.com/attachments/1034184951790305330/1199427485264265266/image.png?ex=65c280e4&is=65b00be4&hm=ff21dce5e89b397e7c3451e85acd11cc5e735c055cad546d175f5d2ddf3ac573&
18:56fdobridge_: <gfxstrand> It's looking like a damn nice theory, too. That's exactly where my last run stopped.
18:56fdobridge_: <gfxstrand> I'm going to disable WSI and run again
19:02fdobridge_: <tom3026> is there a --deqp-skip=*wsi*
19:27fdobridge_: <airlied> Hmm I don't think I've ever done a wsi run, guess that solves today's things to do
19:31fdobridge_: <dwlsalmeida> @airlied nailed the error (-EINVAL) to this line:
19:31fdobridge_: <dwlsalmeida>
19:31fdobridge_: <dwlsalmeida> ```
19:31fdobridge_: <dwlsalmeida> switch (device->info.family) {
19:31fdobridge_: <dwlsalmeida> case NV_DEVICE_INFO_V0_VOLTA:
19:31fdobridge_: <dwlsalmeida> ret = nvif_object_ctor(&chan->chan->user, "abi16CeWar", 0, VOLTA_DMA_COPY_A,
19:31fdobridge_: <dwlsalmeida> NULL, 0, &chan->ce);
19:31fdobridge_: <dwlsalmeida> if (ret)
19:31fdobridge_: <dwlsalmeida> goto done;
19:31fdobridge_: <dwlsalmeida> break;
19:31fdobridge_: <dwlsalmeida> case NV_DEVICE_INFO_V0_TURING:
19:31fdobridge_: <dwlsalmeida> ret = nvif_object_ctor(&chan->chan->user, "abi16CeWar", 0, TURING_DMA_COPY_A, <-------
19:31fdobridge_: <dwlsalmeida> NULL, 0, &chan->ce);
19:31fdobridge_: <dwlsalmeida> ```
19:31fdobridge_: <dwlsalmeida>
19:31fdobridge_: <dwlsalmeida> Also, `nouveau.config=NvGspRm=1` (edited)
19:32fdobridge_: <dwlsalmeida> can you elaborate a bit on what `nvif` stands for?
19:39Lyude: karolherbst: not sure - can try to check today or tomorrow
19:41Lyude: dwlsalmeida: nvif is the nvidia interface. Basically: the way nouveau was designed was around the fact that with a lot of nvidia GPU functions, you are given a push buffer - but you can also be given unprivileged dma push buffers and other kinds of various hw interfaces that can be handed down to a guest vm. so in theory - you could have nvkm, the nvidia kernel module, running on a
19:41Lyude: host and then the guest could have an nvif driver that connects to that
19:41Lyude: we never really got that far though
19:41fdobridge_: <airlied> @dwlsalmeida can you pastebin a complete dmesg? That is a weird place to die and I don't expect nvdec would have any effect
19:42fdobridge_: <dwlsalmeida> @airlied sure! but..I don't think there's any error message there, at least nothing trivially apparent
19:42fdobridge_: <dwlsalmeida> just a moment
19:42fdobridge_: <airlied> I assume you have gsp fw installed?
19:43fdobridge_: <dwlsalmeida> by "have gsp fw installed" you mean passing "nouveau.config=NvGspRm=1 "? or should something else be done here
19:45fdobridge_: <dwlsalmeida> fyi: `r535_gsp_load` returns 0 here, so I assume this means it got loaded?
19:45fdobridge_: <airlied> You need a pretty new linux-firmware, but yeah that seems like it loaded
19:46fdobridge_: <dwlsalmeida> Lyude: thanks for explaining about `nvif`!
20:00fdobridge_: <dwlsalmeida> @airlied https://pastebin.com/y0paDskq
20:00fdobridge_: <gfxstrand> @airlied It is in shipping linux-firmware now, isn't it?
20:02fdobridge_: <airlied> yes, and it looks to have loaded, since you don't have the lines for what happens without it
20:03fdobridge_: <dwlsalmeida> ```
20:03fdobridge_: <dwlsalmeida> int
20:03fdobridge_: <dwlsalmeida> nvif_object_ioctl(struct nvif_object *object, void *data, u32 size, void **hack)
20:03fdobridge_: <dwlsalmeida> {
20:03fdobridge_: <dwlsalmeida> struct nvif_client *client = object->client;
20:03fdobridge_: <dwlsalmeida> union {
20:03fdobridge_: <dwlsalmeida> struct nvif_ioctl_v0 v0;
20:03fdobridge_: <dwlsalmeida> } *args = data;
20:03fdobridge_: <dwlsalmeida>
20:03fdobridge_: <dwlsalmeida> if (size >= sizeof(*args) && args->v0.version == 0) {
20:03fdobridge_: <dwlsalmeida> if (object != &client->object)
20:03fdobridge_: <dwlsalmeida> args->v0.object = nvif_handle(object);
20:03fdobridge_: <dwlsalmeida> else
20:03fdobridge_: <dwlsalmeida> args->v0.object = 0;
20:03fdobridge_: <dwlsalmeida> args->v0.owner = NVIF_IOCTL_V0_OWNER_ANY;
20:03fdobridge_: <dwlsalmeida> } else
20:03fdobridge_: <dwlsalmeida> return -ENOSYS;
20:03fdobridge_: <dwlsalmeida>
20:03fdobridge_: <dwlsalmeida> return client->driver->ioctl(client->object.priv, data, size, hack); <-------------
20:03fdobridge_: <dwlsalmeida> }
20:03fdobridge_: <dwlsalmeida> ```
20:03fdobridge_: <dwlsalmeida>
20:03fdobridge_: <dwlsalmeida> I wonder if you know what that points to when coming from `nouveau_abi16_ioctl_channel_alloc(ABI16_IOCTL_ARGS)`?
20:03fdobridge_: <dwlsalmeida> going to bet that this is actually where the error comes from
20:05fdobridge_: <dwlsalmeida> `addr2line` isn't the best of friends at times :/
20:06fdobridge_: <nishi> i'm not exactly sure what info is useful to give, so sorry if i miss anything TmT
20:06fdobridge_: <nishi> using the nouveau reclocking on my 2060 creates a bunch of issues which i have no idea where they stem from
20:06fdobridge_: <nishi> - hyprland straight up not launching anymore
20:06fdobridge_: <nishi> - desktop positions on sddm swapped for some reason
20:06fdobridge_: <nishi> - plasma wayland encountering intense lags when dragging on the desktop
20:06fdobridge_: <nishi> if you want me to check for more issues, i'll be more than happy to
20:07fdobridge_: <nishi>
20:07fdobridge_: <nishi> useful (?) info:
20:07fdobridge_: <nishi> switched from 1660 super (worked fine with reclocking) to 2060 (from my dad's pc before he upgraded his gpu)
20:07fdobridge_: <nishi> running kernel 6.7.0-arch3-1
20:07fdobridge_: <nishi> using dual monitor
20:07fdobridge_: <nishi> stuff added to startup: ``module_blacklist=nvidia,nvidia_uvm,nvidia_modeset,nvidia_drm nouveau.config=NvGspRm=1``
20:07fdobridge_: <nishi> ``cat /usr/lib/modprobe.d/nvidia-utils.conf``: ``#blacklist nouveau``
20:07fdobridge_: <nishi> https://cdn.discordapp.com/attachments/1034184951790305330/1199445163358040084/20240123_183957.mp4?ex=65c2915b&is=65b01c5b&hm=5fb675abdb76cd3b30cb4e22462d9b49b41b19e2647e580049f0019b53630940&
20:07fdobridge_: <nishi> https://cdn.discordapp.com/attachments/1034184951790305330/1199445164255608932/20240123_184102.mp4?ex=65c2915b&is=65b01c5b&hm=8a84a467f836937e90d10d68f5011d10dd3164c3529cc1dcf4555bf9cbc5e4aa&
20:07fdobridge_: <nishi> https://cdn.discordapp.com/attachments/1034184951790305330/1199445164847026176/20240123_184156.mp4?ex=65c2915b&is=65b01c5b&hm=b82ff2a4ac8b2be4bec4876b0dcfd747d811d65a1b746a74643e66db8c3b4195&
20:10fdobridge_: <gfxstrand> What do you mean by desktop positions being swapped? I could easily believe that nouveau enumerates display connectors differently with GSP.
20:17fdobridge_: <nishi> on proprietary drivers and on nouveau without the kernel param the monitors are configured fine (i.e. they look like they're "connected"?) but once i enable the param the desktops switch positions, my left monitor being put on the right and vice versa (edited)
20:18fdobridge_: <airlied> @dwlsalmeida where in mesa is it bailing out?
20:19fdobridge_: <dwlsalmeida> ```
20:19fdobridge_: <dwlsalmeida> int
20:19fdobridge_: <dwlsalmeida> nouveau_ws_vid_context_create(struct nouveau_ws_device *dev, struct nouveau_ws_vid_context **out)
20:19fdobridge_: <dwlsalmeida> {
20:19fdobridge_: <dwlsalmeida> struct drm_nouveau_channel_alloc req = { .fb_ctxdma_handle = ~0, .tt_ctxdma_handle = 0x300 };
20:19fdobridge_: <dwlsalmeida> uint32_t classes[NOUVEAU_WS_CONTEXT_MAX_CLASSES];
20:19fdobridge_: <dwlsalmeida> uint32_t base;
20:19fdobridge_: <dwlsalmeida>
20:19fdobridge_: <dwlsalmeida> *out = CALLOC_STRUCT(nouveau_ws_vid_context);
20:19fdobridge_: <dwlsalmeida> if (!*out)
20:19fdobridge_: <dwlsalmeida> return -ENOMEM;
20:19fdobridge_: <dwlsalmeida>
20:19fdobridge_: <dwlsalmeida> int ret = drmCommandWriteRead(dev->fd, DRM_NOUVEAU_CHANNEL_ALLOC, &req, sizeof(req)); <---------------
20:19fdobridge_: <dwlsalmeida> ```
20:23fdobridge_: <airlied> I wonder if turing uses a different nvdec configuration
20:24fdobridge_: <gfxstrand> Yeah, that's probably just the two drivers picking different arbitrary orders to list the GPU connectors. Annoying but ultimately harmless unless you're constantly switching back and forth.
20:24fdobridge_: <nishi> yeah that's true
20:25fdobridge_: <gfxstrand> @nishi With the other issues, are those vs. the blob or vs. nouveau without GSP? As in, does hyprland work on non-GSP nouveau?
20:25fdobridge_: <nishi> hyprland works on non-gsp nouveau and blob, but breaks on nouveau + gsp
20:25fdobridge_: <gfxstrand> Oh, well that's not good. Mind filing a bug about it?
20:26fdobridge_: <nishi> is there a general format i should follow for bug reports?
20:26fdobridge_: <nishi> and also where do i put them T^T
20:26fdobridge_: <gfxstrand> https://gitlab.freedesktop.org/mesa/mesa/-/issues/
20:27fdobridge_: <gfxstrand> There's a "Bug Report" template you can use.
20:27fdobridge_: <gfxstrand> Well, maybe that one should go against drm, actually
20:28fdobridge_: <gfxstrand> https://gitlab.freedesktop.org/drm/nouveau/-/issues/
20:28fdobridge_: <gfxstrand> We can move it if you file it in the wrong place but, given that it works on non-gsp nouveau, I'm going to assume it's a kernel bug so it should go in drm/nouveau
20:29fdobridge_: <gfxstrand> First shard survived...
20:29fdobridge_: <nishi> alrighty ty :P
20:29fdobridge_: <gfxstrand> So, yeah, we have something funky going on with exported buffers that's trashing my context
20:29fdobridge_: <gfxstrand> @airlied ^^
20:32fdobridge_: <airlied> @dwlsalmeida I'll see if I can figure out what turing needs today
20:32fdobridge_: <dwlsalmeida> ack, thanks for the help!
20:41fdobridge_: <airlied> @gfxstrand does just running wsi cases serially in deqp-vk die? are you running against a gnome wayland desktop?
20:42fdobridge_: <airlied> I just tried running them against a bare X server and that all passed serially
20:48fdobridge_: <airlied> not saying you should just use a bare X server for expediency 😛
20:48fdobridge_: <!DodoNVK (she) 🇱🇹> I'm not sure who uses a bare X server in 2024 though
20:50fdobridge_: <gfxstrand> @airlied Oh, all the WSI tests pass fine. It's just that the synchronization tests that run after the WSI tests die
20:50fdobridge_: <gfxstrand> 🤡
20:50fdobridge_: <gfxstrand> @airlied And, yeah, I'm running against GNOME
20:50fdobridge_: <gfxstrand> See also this
20:53fdobridge_: <tom3026> I ran on kde X11 and same here: wsi tests pass but sync dies
20:55fdobridge_: <airlied> if I have to go fix the gl driver I won't be happy
20:56fdobridge_: <gfxstrand> I don't think this is the GL driver's fault.
20:56fdobridge_: <gfxstrand> I think it's some sort of pollution that survives even after we've closed all the shared BOs.
20:57fdobridge_: <redsheep> Has anyone tried with the gnome session running through zink? Not sure that is working yet given an issue from the other day.
20:57fdobridge_: <airlied> yeah it could be the legacy submission paths the GL driver uses
20:59fdobridge_: <redsheep> If that GL driver isn't intended to be used going forward, it seems like it shouldn't be involved in the test
21:25fdobridge_: <gfxstrand> @airlied Looks like I can't assign things to dakr on GitLab but these two are for him:
21:25fdobridge_: <gfxstrand> https://gitlab.freedesktop.org/drm/nouveau/-/issues/311
21:25fdobridge_: <gfxstrand> https://gitlab.freedesktop.org/drm/nouveau/-/issues/312
21:26fdobridge_: <gfxstrand> Neither should take more than a few hours
22:39fdobridge_: <karolherbst🐧🦀> @gfxstrand we talked about your ISA request today with nvidia :ferrisUpsideDown:
22:41HdkR: ooo, what instructions do you want added? :P
22:41fdobridge_: <karolherbst🐧🦀> @gfxstrand ohh, and we might have a solution for requesting docs; Ben got told, but nobody else :ferrisUpsideDown:
22:41fdobridge_: <karolherbst🐧🦀> so that info got lost
22:42fdobridge_: <gfxstrand> Woof
22:42fdobridge_: <karolherbst🐧🦀> 😄
22:42fdobridge_: <gfxstrand> If there's a channel by which I can request docs and they'll actually give them to me, that's good enough.
22:42fdobridge_: <karolherbst🐧🦀> but yeah.. in theory I have direct access to file bugs into nvidia's bug tracker, but nobody told me 😄
22:42fdobridge_: <karolherbst🐧🦀> and I asked for you to get access as well
22:42fdobridge_: <karolherbst🐧🦀> and they said "it's just admin work in the way of doing that"
22:42fdobridge_: <karolherbst🐧🦀> so just somebody needing to do that
22:43fdobridge_: <gfxstrand> Did they sign someone up to do it? 😂
22:43fdobridge_: <karolherbst🐧🦀> 😄
22:43fdobridge_: <karolherbst🐧🦀> it sounded like that's what they plan to do
22:43fdobridge_: <gfxstrand> Maybe you can file a bug in their tracker to give me access to their tracker
22:43fdobridge_: <karolherbst🐧🦀> 😄
22:43fdobridge_: <karolherbst🐧🦀> yeah dunno.. John said that it will be taken care of
22:43fdobridge_: <gfxstrand> Okay
22:44fdobridge_: <karolherbst🐧🦀> anyway... I already have access and I will convert my email threads to that once I get the info
22:44fdobridge_: <gfxstrand> Does this John have an e-mail address you can DM me? Or you can send an introductory e-mail?
22:44fdobridge_: <karolherbst🐧🦀> which was that helper invoc bit + SPH headers
22:49Lyude: airlied, dakr: pushed some more stuff to my rvkms branch. No working skeleton yet but I'm getting close. One thing I'm currently trying to figure out: what we need to do about https://github.com/AsahiLinux/linux/blob/asahi/drivers/gpu/drm/asahi/driver.rs#L165 . registrations() exists in our tree but is commented out, and the types appear to be totally different - so I assume there's
22:49Lyude: probably some more work there I need to track down
22:51Lyude: I might be able to figure something else out to pass there but i'm still trying to wrap my head around pinning (I get what guarantees it provides but am not really sure how to incorporate it into something)
22:53karolherbst: pinning is overrated :P
22:58fdobridge_: <gfxstrand> Rust `Pin<T>` or kernel pinning?
22:59karolherbst: rust
23:03Lyude: gfxstrand: rust Pin<T>
23:03Lyude: well in the kernel so would that be kernel pinning?
23:04karolherbst: I think with kernel pinning gfxstrand means like memory pining
23:04Lyude: (I might actually know what the missing type for registrations() is now… looked a bit more closely )
23:04Lyude: ahhh
23:04Lyude: looks like we might actually just be missing some convenience types for revocable mutexes
23:05Lyude: I think I see how to implement them, I'll give it a shot
23:29fdobridge_: <gfxstrand> Damn you, dEQP-VK.subgroups.basic.framebuffer.subgroupmemorybarrierimage_tess_control! My nemesis!
23:31fdobridge_: <gfxstrand> I thought for sure I'd fixed that one
23:33fdobridge_: <gfxstrand> I even kinda know what's wrong with it.
23:42fdobridge_: <redsheep> Has anybody recently tested if zink on nvk can load a full plasma session? I am rebuilding my kernel and mesa to see if I can make it work, and see if it helps at all.
23:42fdobridge_: <airlied> @gfxstrand you running wsi.wayland tests?
23:43fdobridge_: <gfxstrand> I think so
23:43fdobridge_: <airlied> I'm getting a memory corruption with dEQP-VK.wsi.wayland.swapchain.simulate_oom.min_image_count
23:44fdobridge_: <gfxstrand> Oh...
23:44fdobridge_: <airlied> oh ignore me
23:45fdobridge_: <airlied> definitely wasn't just running llvmpipe
23:45fdobridge_: <gfxstrand> lol