00:05 fdobridge_: <g​fxstrand> https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27205
00:05 fdobridge_: <g​fxstrand> I'm CTSing that now
00:06 fdobridge_: <g​fxstrand> It's a little terrifying that it basically worked on the first try.
00:12 fdobridge_: <karolherbst🐧🦀> wait.. so the basic idea is, you have a copy-only context whose sole purpose is to copy from one bo into another bo, and that bo is filled via memcpy. And "all the copies" are simply going through that queue?
00:14 fdobridge_: <k​arolherbst🐧🦀> btw, for small uploads there is a better way of doing those things 😛
00:15 fdobridge_: <k​arolherbst🐧🦀> 3D and compute have their own `LAUNCH_DMA` methods, but they copy embedded data into a buffer
00:15 fdobridge_: <k​arolherbst🐧🦀> which is super useful for like... small uploads
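[editor's note: a heavily hedged sketch of the embedded-data path described above. The MTHD_* names, their numeric values, and the emit_mthd() helper are placeholders invented for this sketch, not the real class methods or NVK's push macros; real code would use the generated class headers (e.g. NV9097 for 3D) and the driver's own push-buffer helpers.]
```c
/* Sketch only: a small upload embedded directly in the push buffer,
 * as opposed to staging the bytes in a BO and going through the copy
 * engine.  Everything named here is a placeholder for illustration. */
#include <stdint.h>

enum mthd {                    /* placeholder values, not real method offsets */
   MTHD_LINE_LENGTH_IN = 1,
   MTHD_LINE_COUNT,
   MTHD_OFFSET_OUT_UPPER,
   MTHD_OFFSET_OUT,
   MTHD_LAUNCH_DMA,
   MTHD_LOAD_INLINE_DATA,
};

static void emit_mthd(uint32_t **p, enum mthd m, uint32_t val)
{
   *(*p)++ = (uint32_t)m;      /* real HW packs subchannel/count into the header */
   *(*p)++ = val;
}

/* Write size_dw dwords to dst_va without touching a staging BO. */
static void upload_inline(uint32_t **p, uint64_t dst_va,
                          const uint32_t *data, uint32_t size_dw)
{
   emit_mthd(p, MTHD_LINE_LENGTH_IN, size_dw * 4);
   emit_mthd(p, MTHD_LINE_COUNT, 1);
   emit_mthd(p, MTHD_OFFSET_OUT_UPPER, (uint32_t)(dst_va >> 32));
   emit_mthd(p, MTHD_OFFSET_OUT, (uint32_t)dst_va);
   emit_mthd(p, MTHD_LAUNCH_DMA, 0 /* pitch layout, completion mode, etc. */);
   for (uint32_t i = 0; i < size_dw; i++)
      emit_mthd(p, MTHD_LOAD_INLINE_DATA, data[i]);
   /* (the real class uses one non-incrementing method header followed by
    *  the whole payload, but the idea is the same: the data lives in the
    *  push buffer, not in a separate source BO) */
}
```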
00:20 fdobridge_: <airlied> @dwlsalmeida I think I've only tried on ampere or ada, you might have to adjust some classes, I'll look when I get home; also fluster is probably overplaying it. I never even got that one CTS test to pass properly, it decoded the right colour in parts of the image
00:38 fdobridge_: <a​irlied> @dwlsalmeida I assume you are booting with GSP enabled?
00:42 fdobridge_: <a​irlied> I'll assume not GSP booted until stated otherwise :0
00:58 fdobridge_: <d​wlsalmeida> uhh....
00:58 fdobridge_: <d​wlsalmeida> 😐
00:58 fdobridge_: <d​wlsalmeida> yeah, no..
00:59 fdobridge_: <d​wlsalmeida> I totally forgot about this
00:59 fdobridge_: <airlied> one thing that is definitely wrong in the code is that I've no idea how to size some of the allocations, hence why I was targeting a single small decode path
00:59 fdobridge_: <a​irlied> probably need to work out from the prop driver how it does some of the mem alloc sizings
01:00 fdobridge_: <d​wlsalmeida> for the coded data, you mean?
01:01 fdobridge_: <a​irlied> for GetVideoSessionMemoryRequirementsKHR
01:01 fdobridge_: <a​irlied> though not sure those are size based usually
01:01 fdobridge_: <a​irlied> so also for the dpb
01:01 fdobridge_: <airlied> oh yeah those can be based off the max coded size
01:01 fdobridge_: <g​fxstrand> Yes, and useless for shader upload because I don't know what queue it's used on.
01:02 fdobridge_: <k​arolherbst🐧🦀> yeah.. not saying you should use it for shader uploads, just generally speaking 😄
01:02 fdobridge_: <g​fxstrand> Well, yeah. I know that...
01:03 fdobridge_: <d​wlsalmeida> @airlied thanks for pointing out the GSP thing, that was very helpful
01:03 fdobridge_: <d​wlsalmeida> will check again tomorrow
01:04 fdobridge_: <a​irlied> gsp should work on 2060s with 6.8-rc1 I think I pushed the last fix there, 6.7 might not have it
01:04 fdobridge_: <a​irlied> if you have an older 2060 that is
01:04 fdobridge_: <d​wlsalmeida> what constitutes an "older" 2060 btw?
01:05 fdobridge_: <d​wlsalmeida> but yeah I'll run 6.8-rc1
01:05 fdobridge_: <k​arolherbst🐧🦀> mhhh... might be useful to use it for ubos now that they are indeed used.. though not sure if that has the same issue as with shader uploads...
01:05 fdobridge_: <a​irlied> older is one that fails to work with 6.7 😛
01:06 fdobridge_: <d​wlsalmeida> lol 😄
01:06 fdobridge_: <g​fxstrand> I.e., mine. 😂
01:08 fdobridge_: <k​arolherbst🐧🦀> yours failed to work with 6.7?
01:09 fdobridge_: <g​fxstrand> It did last I tried.
01:09 fdobridge_: <g​fxstrand> My 12G 2060 is fine. Just not the 8G Founders Edition
01:10 fdobridge_: <g​fxstrand> Which is ironic because the 8G card is the one I bought because the 12G card didn't work with nouveau until we bumped firmware versions.
01:11 fdobridge_: <a​irlied> fix will end up in some 6.7 stable release at some point
01:12 fdobridge_: <k​arolherbst🐧🦀> yeah I remember... oh well.. as long as it works with mainline who cares 😛
01:13 fdobridge_: <g​fxstrand> Yeah, it's been an adventure
01:13 fdobridge_: <k​arolherbst🐧🦀> ~~just update straight with rc1, what's the worst that could happen anyway~~
01:14 fdobridge_: <a​irlied> 6.8-rc1 should have all the fun bo exec locking improvements
02:03 fdobridge_: <g​fxstrand> This is fun...
02:04 fdobridge_: <g​fxstrand> https://cdn.discordapp.com/attachments/1034184951790305330/1199172696475435008/message.txt?ex=65c1939a&is=65af1e9a&hm=0f740b411f7ebb01a931abe371f870c01081b18b67f2fe46fe52b6c98a8f5b5e&
02:07 fdobridge_: <a​irlied> https://gitlab.freedesktop.org/drm/nouveau/-/issues/280
02:07 fdobridge_: <a​irlied> yeah seen it once, not again, no ideas
02:08 fdobridge_: <a​irlied> feels like some sort of teardown race
02:11 fdobridge_: <g​fxstrand> NGL, I'm a little surprised that this seems to "just work"
02:14 fdobridge_: <a​irlied> I wonder if the client is getting torn down in parallel somehow
02:14 fdobridge_: <g​fxstrand> seems unlikely.
02:14 fdobridge_: <g​fxstrand> Well... if we're doing threaded submit...
02:15 fdobridge_: <g​fxstrand> It's deep enough inside worker threads that it's a little hard to tell.
02:16 fdobridge_: <g​fxstrand> @airlied BTW, Some sort of mystery kernel issue is the only thing standing between us and 1.3 conformance right now.
02:16 fdobridge_: <g​fxstrand> Well, I say kernel issue but I don't actually know that it's a kernel issue.
02:17 fdobridge_: <gfxstrand> But it's a nasty heisenbug that almost always triggers during synchronization import/export tests. 😬
02:18 fdobridge_: <a​irlied> a different one to the above crash?
02:22 fdobridge_: <g​fxstrand> Yeah
02:23 fdobridge_: <g​fxstrand> It doesn't crash. Just kills my context with no explanation
02:23 fdobridge_: <g​fxstrand> Doesn't do it when I run the test by itself, of course.
02:23 fdobridge_: <g​fxstrand> IDK how to reproduce it without running a substantial chunk of the CTS.
02:23 fdobridge_: <g​fxstrand> It's the very best kind of bug. 😭
02:25 fdobridge_: <a​irlied> would be good to see it reproduce on 6.8-rc1
02:33 fdobridge_: <S​id> such luck
02:33 fdobridge_: <S​id> https://github.com/terminatorul/NvStrapsReBar
02:33 fdobridge_: <S​id> much wow
02:34 fdobridge_: <S​id> might try it out one of these days
03:29 fdobridge_: <a​irlied> @gfxstrand yeah for the oops above 6.8-rc1 does reorg that code so I'm not sure the race is there anymore, so we'd have to see it
03:30 fdobridge_: <a​irlied> I'm just running a run_deqp.sh run on my ga106 on 6.8-rc1 (edited)
03:33 fdobridge_: <a​irlied> what gpu are you going for conformance on turing?
03:37 fdobridge_: <g​fxstrand> I was hoping to do Turing+
03:39 fdobridge_: <g​fxstrand> But I've had problems on both Turing and Ampere
03:46 fdobridge_: <r​edsheep> Hopefully that means my similarly rare crash is cured there as well, I will give it a shot soon. Just takes forever to reproduce it.
03:46 fdobridge_: <r​edsheep> I was browsing https://nouveau.freedesktop.org/FeatureMatrix.html after an issue mentioned it and I see that SLI is marked TODO on NV190 which should be N/A
03:47 fdobridge_: <r​edsheep> SLI is well and truly dead on Ada
03:48 fdobridge_: <r​edsheep> Unless you're talking about explicit multi gpu or whatever but almost nobody uses that so not sure it's even worth mentioning
04:38 fdobridge_: <a​irlied> Pass: 659675, Fail: 94, Crash: 8, Skip: 2041179, Flake: 2, Duration: 1:17:51, Remaining: 0
05:31 fdobridge_: <g​fxstrand> What's that?
05:31 fdobridge_: <g​fxstrand> Did someone add more test fails?
05:36 fdobridge_: <g​fxstrand> Also the fails I'm seeing don't show up with deqp-runner
05:36 fdobridge_: <a​irlied> that is my ga106 with main and cts main I think
05:37 fdobridge_: <a​irlied> I'm having a look for the timeout
05:39 fdobridge_: <a​irlied> really doesn't look like kernel is doing anything wrong here, except tdr fires after 10s because the fence never signals
05:51 fdobridge_: <g​fxstrand> Yeah, I need to maybe do some test timing or something. The tests that are failing pass very quickly when run alone or even as part of a group.
05:57 fdobridge_: <a​irlied> yeah 10s is a long time though, like infinite loop type of time
06:02 fdobridge_: <r​edsheep> I know this is probably a ways down the road, but I have been looking into what would be needed for CUDA to work properly with mesa, and I think I have found something that can serve as a proof of concept for "just" implementing PTX
06:02 fdobridge_: <r​edsheep>
06:02 fdobridge_: <r​edsheep> https://github.com/gtcasl/gpuocelot
06:03 fdobridge_: <r​edsheep> This would seem to indicate that if NAK could be made to intake PTX instructions, (or maybe it could go PTX > NIR > NAK?) then CUDA programs could just work
06:03 fdobridge_: <S​id> PTX > NAK > NIR but yeah
06:04 fdobridge_: <g​fxstrand> No, PTX -> NIR -> NAK
06:04 fdobridge_: <S​id> oh
06:04 fdobridge_: <S​id> but yeah, *massive* could there
06:05 fdobridge_: <g​fxstrand> We'd just add a bunch of `_ptx` ops to NIR
06:05 fdobridge_: <S​id> if it was as simple as accepting ptx cuda wouldn't have vendor lock in
06:05 fdobridge_: <r​edsheep> Well, that project is a thing that works. From what I have heard PTX executes really slowly if you don't have nvidia hardware though.
06:06 fdobridge_: <g​fxstrand> Well, implementing PTX on non-NVIDIA GPUs isn't going to be super efficient.
06:06 fdobridge_: <S​id> ah
06:06 fdobridge_: <g​fxstrand> It's got a lot of very NVIDIA-specific behavior baked in.
06:06 fdobridge_: <a​irlied> there is also ZLUDA project
06:06 fdobridge_: <g​fxstrand> Which would have to be emulated.
06:07 fdobridge_: <S​id> makes sense
06:08 fdobridge_: <g​fxstrand> It's not quite as bad as like a Switch emulator where you have to emulate the actual hardware but it would take a decent amount of optimization to undo all the NVIDIAisms and it still wouldn't be 100% of perf compared to being compiled directly for AMD or Intel.
06:08 fdobridge_: <S​id> unrelated but I have everything needed to try this out ready
06:08 fdobridge_: <S​id> just need to patch the .ffs module into my bios and flash it
06:09 fdobridge_: <airlied> also, some newer PTX things you can't really do at all on other GPUs
06:09 fdobridge_: <S​id> which should be simpler than the first time I did it because now I can just replace the old module
06:10 fdobridge_: <g​fxstrand> Yeah... And some of them would be tricky to retrofit into NIR. Probably not impossible but tricky.
06:10 fdobridge_: <r​edsheep> It would be nice to have it pass through NIR so it could be at least theoretically possible for Intel and AMD to use it if the other drivers want to implement the needed emulation, but that would probably blow up the scope of the project massively
06:11 fdobridge_: <redsheep> Having working DLSS on AMD hardware would be amazing
06:12 fdobridge_: <g​fxstrand> I want it to go through NIR so NIR can optimize it.
06:12 fdobridge_: <S​id> technically it should be possible to enable it already, provided you spoof the right things in the right places
06:12 fdobridge_: <g​fxstrand> Even if it's a shit load of `_ptx` instructions, I want the optimizer.
06:13 fdobridge_: <S​id> it'll just be slow, because nv-isms
06:13 fdobridge_: <S​id> and I'm not sure how nvapi/nvngx will handle that
06:14 fdobridge_: <a​irlied> hmm running sync tests under strace seems to make it less likely to die
06:14 fdobridge_: <a​irlied> oh no just got it
06:15 fdobridge_: <S​id> yeah, @redsheep `DXVK_NVAPI_ALLOW_OTHER_DRIVERS=1`
06:16 fdobridge_: <r​edsheep> That gets you NVAPI so you can have stuff like reflex on latencyflex, doesn't actually make DLSS work.
06:16 fdobridge_: <g​fxstrand> Oh, really? I'm about to roll over and sleep but I eagerly await your findings in the morning. 😁
06:17 fdobridge_: <S​id> though it'll only work for games that check only by device/vendor pci ids
06:18 fdobridge_: <airlied> my findings may be that I went and made dinner instead 😛
06:18 fdobridge_: <S​id> have you tried providing a dxvk.conf with custom ids?
06:19 fdobridge_: <r​edsheep> I don't have AMD hardware to test it on right now but I looked into DLSS vendor lock in a good bit and there's good reason people are modding FSR into DLSS only games for things like steam deck. You can't just spoof it into working, there's some part of DLSS that uses some PTX or CUDA stuff.
06:20 fdobridge_: <r​edsheep> I am not clear on the details but it doesn't work.
06:20 fdobridge_: <a​irlied> actually I'm sorta back to convincing myself it might be kernel, ah well gotta keep digging I suppose
06:22 fdobridge_: <S​id> ah, fair, makes sense
06:22 fdobridge_: <r​edsheep> RDNA 3 probably even has enough matrix multiply performance that if DLSS was able to actually run it would perform well, assuming it isn't too horrible to emulate things that aren't 1 to 1
06:41 fdobridge_: <a​irlied> bleh I think I know the problem and I think my recent fix for the prime bug made it worse, but it's a race condition on fencing
06:42 fdobridge_: <a​irlied> you emit a fence to the hw, then when someone calls fence signalling you enable irqs, but the fence might already have passed by that time, so you just never get the signalling event
06:44 fdobridge_: <a​irlied> at least that's my current working theory
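[editor's note: a minimal, generic sketch of the lost-signal window being described — not the nouveau code or the actual patch. Fences are only signalled from the IRQ path and the IRQ is enabled lazily, so anything the hardware already passed has to be re-checked when signalling is enabled.]
```c
/* Generic C sketch of the race and the usual fix; all names are
 * invented for illustration. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

struct fence { uint32_t seqno; atomic_bool signalled; };
struct chan  { atomic_uint hw_seqno; atomic_bool irq_enabled; };

static void enable_signalling(struct chan *c, struct fence *f)
{
   atomic_store(&c->irq_enabled, true);

   /* Without this re-check, a fence whose seqno the HW already passed
    * before IRQs were enabled is never signalled, and the job
    * eventually hits the timeout instead. */
   if (atomic_load(&c->hw_seqno) >= f->seqno)
      atomic_store(&f->signalled, true);
}
```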
06:51 fdobridge_: <a​irlied> still seeing a timeout here and there though
07:23 fdobridge_: <a​irlied> okay about to send out a patch that might fix it
07:27 fdobridge_: <a​irlied> https://lore.kernel.org/dri-devel/20240123072538.1290035-1-airlied@gmail.com/T/#u
07:27 fdobridge_: <a​irlied> @gfxstrand @karolherbst ^^^ probably need to check my logic here
07:27 airlied: dakr: ^^ also you
07:28 fdobridge_: <a​irlied> I'm making it through a full round of dEQP-VK.sync* now
07:31 fdobridge_: <!​DodoNVK (she) 🇱🇹> @ Sid This patch would be interesting to test :nouveau:
07:32 fdobridge_: <S​id> I'll test it in a couple hours
07:45 fdobridge_: <Sid> currently dicking around in UEFI
07:48 fdobridge_: <t​om3026> im compiling it now ^_^
07:50 fdobridge_: <S​id> @asdqueerfromeu
07:57 fdobridge_: <redsheep> If some of the deleted code here was added to fix issues with prime, does deleting it bring those issues back? Suppose that just needs testing.
07:57 fdobridge_: <a​irlied> no it doesn't
07:58 fdobridge_: <a​irlied> @gfxstrand did you have the fence work queue change in your tree that you were testing on?
07:59 fdobridge_: <a​irlied> just thinking this might be overkill to fix the problem, but I'll sleep on it
08:02 fdobridge_: <tom3026> either that patch or something else on linux-next fixed a bunch of weird stutters and freezes/timeouts on this ampere, but I'm seeing a lot less fps in unigine-heaven for some reason and vkcube is spinning in slow motion heh
08:02 fdobridge_: <S​id> ok, building it now
08:02 fdobridge_: <S​id> on top of 6.7.1 because I'm dumb like that
08:06 fdobridge_: <tom3026> uhm okay, it spins fast on the laptop monitor when running on nouveau; dragging it over to the external monitor attached to the nvidia gpu, it slows down
08:15 fdobridge_: <tom3026> https://streamable.com/qb2eya easier to show a "video". The game thinks it's rendering at, what, 80fps? That's me trying to move the mouse as fast as possible; it's like it's drawing at 10fps 😄
08:50 fdobridge_: <t​om3026> ok seems to be a wayland thing, or kwin. works much better on x11
08:50 fdobridge_: <tom3026> but it's some kind of combination with nouveau tho
09:15 fdobridge_: <S​id> ```
09:15 fdobridge_: <S​id> [Tue Jan 23 14:41:41 2024] nouveau 0000:01:00.0: SoTGame.exe[11694]: job timeout, channel 24 killed!
09:15 fdobridge_: <S​id> [Tue Jan 23 14:41:41 2024] [drm:nouveau_job_submit [nouveau]] *ERROR* Trying to push to a killed entity
09:15 fdobridge_: <S​id> [Tue Jan 23 14:42:33 2024] [TTM] Buffer eviction failed
09:15 fdobridge_: <S​id> [Tue Jan 23 14:42:33 2024] nouveau 0000:01:00.0: gsp: Xid:13 Graphics SM Warp Exception on (GPC 0, TPC 0, SM 0): Out Of Range Address
09:15 fdobridge_: <S​id> [Tue Jan 23 14:42:33 2024] nouveau 0000:01:00.0: gsp: Xid:13 Graphics SM Global Exception on (GPC 0, TPC 0, SM 0): Multiple Warp Errors
09:16 fdobridge_: <S​id> [Tue Jan 23 14:42:33 2024] nouveau 0000:01:00.0: gsp: Xid:13 Graphics Exception: ESR 0x504730=0xc03000e 0x504734=0x4 0x504728=0x4c1eb72 0x50472c=0x174
09:16 fdobridge_: <S​id> ```
09:17 fdobridge_: <S​id> quake champions: `[Tue Jan 23 14:47:08 2024] nouveau 0000:01:00.0: [13028]: job timeout, channel 32 killed!`
09:19 fdobridge_: <S​id> ^ Sea of Thieves
09:20 fdobridge_: <S​id> richard burns rally - rallysimfans version
09:20 fdobridge_: <S​id> ```
09:20 fdobridge_: <S​id> [Tue Jan 23 14:49:42 2024] nouveau 0000:01:00.0: RichardBurnsRal[14704]: job timeout, channel 24 killed!
09:20 fdobridge_: <S​id> [Tue Jan 23 14:49:48 2024] nouveau 0000:01:00.0: gsp: mmu fault queued
09:20 fdobridge_: <S​id> [Tue Jan 23 14:49:48 2024] nouveau 0000:01:00.0: gsp: rc engn:00000001 chid:24 type:31 scope:1 part:233
09:20 fdobridge_: <S​id> [Tue Jan 23 14:49:48 2024] nouveau 0000:01:00.0: fifo:001001:0003:0018:[RichardBurnsRal[14704]] errored - disabling channel
09:20 fdobridge_: <S​id> ```
09:21 fdobridge_: <Sid> disclaimer: none of these games worked on nvk before either
09:24 fdobridge_: <S​id> I'll run a cts test in a while and see if I get any there
09:32 fdobridge_: <t​om3026> where did you get the cts from? was just curious running some myself heh
09:33 fdobridge_: <t​om3026> doesnt seem to be in AUR from what i can tell
09:34 fdobridge_: <S​id> https://github.com/KhronosGroup/VK-GL-CTS/wiki
09:36 fdobridge_: <t​om3026> ah ok
10:10 fdobridge_: <a​irlied> Yeah didn't think it would solve the rc or mmu faults, not sure we've seen sync fails outside cts
10:42 fdobridge_: <tom3026> meh nothing is going as planned, vulkan cts fails to build with a bunch of "error: ‘VkPipelineOfflineCreateInfo’ in namespace ‘vk’ does not name a type"
10:55 fdobridge_: <S​id> ```
10:55 fdobridge_: <S​id> [Tue Jan 23 16:20:43 2024] __vm_enough_memory: pid: 2144, comm: deqp-vk, not enough memory for the allocation
10:55 fdobridge_: <S​id> [Tue Jan 23 16:20:43 2024] __vm_enough_memory: pid: 2144, comm: deqp-vk, not enough memory for the allocation
10:55 fdobridge_: <S​id> [Tue Jan 23 16:20:43 2024] __vm_enough_memory: pid: 2144, comm: deqp-vk, not enough memory for the allocation
10:55 fdobridge_: <S​id> [Tue Jan 23 16:20:43 2024] __vm_enough_memory: pid: 2144, comm: deqp-vk, not enough memory for the allocation
10:55 fdobridge_: <S​id> [Tue Jan 23 16:21:13 2024] nouveau 0000:01:00.0: Enabling HDA controller
10:55 fdobridge_: <S​id> [Tue Jan 23 16:21:14 2024] xhci_hcd 0000:01:00.2: xHC error in resume, USBSTS 0x401, Reinit
10:55 fdobridge_: <S​id> [Tue Jan 23 16:21:14 2024] usb usb3: root hub lost power or was reset
10:55 fdobridge_: <S​id> [Tue Jan 23 16:21:14 2024] usb usb4: root hub lost power or was reset
10:55 fdobridge_: <S​id> [Tue Jan 23 16:21:44 2024] nouveau 0000:01:00.0: Enabling HDA controller
10:55 fdobridge_: <S​id> [Tue Jan 23 16:21:44 2024] xhci_hcd 0000:01:00.2: xHC error in resume, USBSTS 0x401, Reinit
10:55 fdobridge_: <S​id> [Tue Jan 23 16:21:44 2024] usb usb3: root hub lost power or was reset
10:55 fdobridge_: <S​id> [Tue Jan 23 16:21:44 2024] usb usb4: root hub lost power or was reset
10:55 fdobridge_: <S​id> [Tue Jan 23 16:22:58 2024] nouveau 0000:01:00.0: Enabling HDA controller
10:55 fdobridge_: <S​id> [Tue Jan 23 16:22:58 2024] xhci_hcd 0000:01:00.2: xHC error in resume, USBSTS 0x401, Reinit
10:55 fdobridge_: <S​id> [Tue Jan 23 16:22:58 2024] usb usb3: root hub lost power or was reset
10:55 fdobridge_: <S​id> [Tue Jan 23 16:22:58 2024] usb usb4: root hub lost power or was reset
10:55 fdobridge_: <S​id> [Tue Jan 23 16:23:34 2024] nouveau 0000:01:00.0: Enabling HDA controller
10:55 fdobridge_: <S​id> [Tue Jan 23 16:23:35 2024] xhci_hcd 0000:01:00.2: xHC error in resume, USBSTS 0x401, Reinit
10:55 fdobridge_: <S​id> [Tue Jan 23 16:23:35 2024] usb usb3: root hub lost power or was reset
10:55 fdobridge_: <S​id> [Tue Jan 23 16:23:35 2024] usb usb4: root hub lost power or was reset
10:55 fdobridge_: <S​id> [Tue Jan 23 16:24:02 2024] nouveau 0000:01:00.0: deqp-vk[2144]: job timeout, channel 24 killed!
10:55 fdobridge_: <S​id> ```
10:55 fdobridge_: <S​id> only one timeout now though
10:55 fdobridge_: <S​id> instead of the tens I had before
10:56 fdobridge_: <S​id> I'll update my cts and let it run again
10:56 fdobridge_: <S​id> my build of the cts is *months* old
11:05 fdobridge_: <S​id> for me it's failing with `make[2]: *** No rule to make target '/stable/xdg-shell/xdg-shell.xml', needed by 'framework/platform/xdg-shell.c'. Stop.`
11:06 fdobridge_: <S​id> oh, missing dep, wayland-protocols
11:51 fdobridge_: <t​om3026> oh lol im running oom on the cts build
12:32 fdobridge_: <S​id> @airlied bunch of device losts on this cts run
12:33 fdobridge_: <S​id> will share results and dmesg once it's done
12:33 fdobridge_: <S​id> currently in QM class
13:11 fdobridge_: <S​id> ```Test case 'dEQP-VK.memory.pipeline_barrier.host_read_host_write.1024'.. terminate called after throwing an instance of 'vk::Error' what(): vkd.deviceWaitIdle(device): VK_ERROR_DEVICE_LOST at vktMemoryPipelineBarrierTests.cpp:9345```
13:11 fdobridge_: <S​id> aborted core dumped
13:54 fdobridge_: <S​id> https://cdn.discordapp.com/attachments/1034184951790305330/1199351510534983860/dmesg.log?ex=65c23a22&is=65afc522&hm=fb5a86f21ed683e2a0f903689cd23fd378473d525984e55e788af58be2cbce4a&
14:02 fdobridge_: <t​om3026> your bcachefs seems broken :p
14:08 fdobridge_: <S​id> am aware of that, yus
14:21 fdobridge_: <g​fxstrand> I'll pull, build, and test today
14:47 fdobridge_: <g​fxstrand> Might update to 6.8-rc1 while I'm at it.
14:50 fdobridge_: <m​arysaka> ... I think I'm still on 6.7-rc1 with the original patches :nya_panic:
15:35 fdobridge_: <g​fxstrand> Building now...
16:00 fdobridge_: <k​arolherbst🐧🦀> @gfxstrand I think clippy found a bug in NAK in regards to GS :ferrisUpsideDown: ...
16:01 fdobridge_: <g​fxstrand> That's possible
16:01 fdobridge_: <k​arolherbst🐧🦀> but somebody should really fix those 500 clippy warnings
16:01 fdobridge_: <k​arolherbst🐧🦀> 😄
16:01 fdobridge_: <k​arolherbst🐧🦀> anyway, some cause errors, so I'm fixing that at least
16:03 fdobridge_: <g​fxstrand> How does one run clippy?
16:03 fdobridge_: <g​fxstrand> I see a cargo thing but we don't use cargo
16:04 fdobridge_: <k​arolherbst🐧🦀> use `clippy-driver` as your rustc
16:04 fdobridge_: <g​fxstrand> ah
16:04 fdobridge_: <k​arolherbst🐧🦀> either via a cross file or `RUSTC` env var
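[editor's note: for reference, one minimal way to do the env-var route just mentioned — an assumed, untested invocation using a fresh build directory so meson re-detects the Rust compiler.]
```sh
RUSTC=clippy-driver meson setup build-clippy
ninja -C build-clippy   # clippy warnings/errors appear in the rustc output
```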
16:04 fdobridge_: <g​fxstrand> Well, feel free to submit an MR. I probably won't be looking at that for a little bit
16:04 fdobridge_: <k​arolherbst🐧🦀> clippy generally points out also how to write cleaner code and stuff 😄 so it's kinda nice to learn better rust
16:05 fdobridge_: <k​arolherbst🐧🦀> yeah.. I just fix the errors so I can use it with rusticl 😄
16:05 fdobridge_: <k​arolherbst🐧🦀> https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27216
16:05 fdobridge_: <k​arolherbst🐧🦀> the opt_out thing is the fix
16:05 fdobridge_: <k​arolherbst🐧🦀> I think my change actually fixes it but please double check 😄
16:06 fdobridge_: <g​fxstrand> Oh, I didn't know `?` worked on `Option<T>` That's super useful!
16:07 fdobridge_: <k​arolherbst🐧🦀> yeah, it is 🙂
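[editor's note: a tiny standalone example, not NAK code, of what `?` does on `Option<T>`: it early-returns `None` the same way it propagates `Err` for `Result`.]
```rust
fn first_even_doubled(xs: &[u32]) -> Option<u32> {
    // `?` returns None from the whole function if find() matched nothing
    let x = xs.iter().copied().find(|&x| x % 2 == 0)?;
    Some(x * 2)
}

fn main() {
    assert_eq!(first_even_doubled(&[1, 3, 4]), Some(8));
    assert_eq!(first_even_doubled(&[1, 3, 5]), None);
}
```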
16:15 fdobridge_: <g​fxstrand> Built. Now CTSing.
16:15 fdobridge_: <g​fxstrand> I'm gonna be so happy if this works.
16:15 fdobridge_: <g​fxstrand> It's also going to take hours. 😭
16:16 fdobridge_: <S​id> all the best
17:06 fdobridge_: <g​fxstrand> The good news is that it got all the way through the synchronization tests. The bad news is that I forgot to log into GNOME so it couldn't run the WSI tests.
17:23 fdobridge_: <tom3026> CS2 is running great, a bit low fps but haven't triggered any bugs after like 3 hours 👍
17:24 fdobridge_: <tom3026> airlied's patch applied, and even that eso gpl draft 😛
17:24 fdobridge_: <g​fxstrand> https://cdn.discordapp.com/attachments/1034184951790305330/1199404271964262481/rocky-and-bullwinkle-this-time-for-sure-scene-oeadei2i3cmtqg5r.png?ex=65c26b46&is=65aff646&hm=a8186863464c1084d3ed2a140fd9f6b4730fd4eb1383298e61f6280794021748&
17:24 fdobridge_: <g​fxstrand> Nice!
17:25 fdobridge_: <tom3026> what deqp incantations do you run the cts with, just all of them?
17:25 fdobridge_: <t​om3026> feels like that computation isnt uh needed to test
17:25 fdobridge_: <t​om3026> if i understand it right
17:25 fdobridge_: <t​om3026> was just curious so i know what to run when i apply drafts xD
17:25 fdobridge_: <g​fxstrand> https://cdn.discordapp.com/attachments/1034184951790305330/1199404656246399026/run-conformance.sh?ex=65c26ba1&is=65aff6a1&hm=9aeafdbb4966be962491d60331668c22eb092bcc2a64c3fceb2bb0c45b09a43f&
17:26 fdobridge_: <g​fxstrand> That's the version for an actual CTS run that I want to submit to Khronos
17:26 fdobridge_: <g​fxstrand> I have a different script that invokes deqp-runner that I use for regression runs
17:31 fdobridge_: <t​om3026> okay thanks
17:31 fdobridge_: <t​om3026> that will do heh
17:43 fdobridge_: <g​fxstrand> Damn...
17:43 fdobridge_: <g​fxstrand> ```
17:43 fdobridge_: <g​fxstrand> [ 2438.122585] nouveau 0000:17:00.0: gsp: cli:0xc1d00002 obj:0x00730000 ctrl cmd:0x00731341 failed: 0x0000ffff
17:43 fdobridge_: <g​fxstrand> [ 2438.123293] nouveau 0000:17:00.0: gsp: cli:0xc1d00002 obj:0x00730000 ctrl cmd:0x00731341 failed: 0x0000ffff
17:43 fdobridge_: <g​fxstrand> [ 2438.123957] nouveau 0000:17:00.0: gsp: cli:0xc1d00002 obj:0x00730000 ctrl cmd:0x00731341 failed: 0x0000ffff
17:43 fdobridge_: <g​fxstrand> [ 5422.830047] nouveau 0000:17:00.0: gsp: mmu fault queued
17:43 fdobridge_: <g​fxstrand> [ 5422.835059] nouveau 0000:17:00.0: gsp: rc engn:00000001 chid:16 type:31 scope:1 part:233
17:43 fdobridge_: <g​fxstrand> [ 5422.835069] nouveau 0000:17:00.0: fifo:000000:0002:0010:[deqp-vk[8890]] errored - disabling channel
17:43 fdobridge_: <g​fxstrand> [ 5422.835077] nouveau 0000:17:00.0: deqp-vk[8890]: channel 16 killed!
17:43 fdobridge_: <g​fxstrand> ```
17:44 fdobridge_: <g​fxstrand> I may have to do this run in pieces.
18:11 fdobridge_: <g​fxstrand> Damn. Died again.
18:11 fdobridge_: <g​fxstrand> @airlied Looks like something with WSI tests + sync
18:12 fdobridge_: <gfxstrand> runs again
18:14 fdobridge_: <g​fxstrand> I wonder if something in the WSI tests is causing our context to get permanently entangled with the compositor and that's messing something up.
18:15 fdobridge_: <g​fxstrand> Oh, that is an interesting theory....
18:17 fdobridge_: <g​fxstrand> Like maybe it gets added to a list somewhere when we VM_BIND but never properly gets removed.
18:17 fdobridge_: <g​fxstrand> If my current run fails, I'm going to disable WSI and attempt a full run
18:18 fdobridge_: <g​fxstrand> IDK if it's kosher to submit that but I might try
18:40 fdobridge_: <tom3026> hm so that's why after wsi I'm getting all these device losts until the run stops heh
18:56 fdobridge_: <g​fxstrand> It's possible
18:56 fdobridge_: <g​fxstrand> https://cdn.discordapp.com/attachments/1034184951790305330/1199427485264265266/image.png?ex=65c280e4&is=65b00be4&hm=ff21dce5e89b397e7c3451e85acd11cc5e735c055cad546d175f5d2ddf3ac573&
18:56 fdobridge_: <g​fxstrand> It's looking like a damn nice theory, too. That's exactly where my last run stopped.
18:56 fdobridge_: <g​fxstrand> I'm going to disable WSI and run again
19:02 fdobridge_: <t​om3026> is there a --deqp-skip=*wsi*
19:27 fdobridge_: <a​irlied> Hmm I don't think I've ever done a wsi run, guess that solves today's things to do
19:31 fdobridge_: <d​wlsalmeida> @airlied nailed the error (-EINVAL) to this line:
19:31 fdobridge_: <d​wlsalmeida>
19:31 fdobridge_: <d​wlsalmeida> ```
19:31 fdobridge_: <d​wlsalmeida> switch (device->info.family) {
19:31 fdobridge_: <d​wlsalmeida> case NV_DEVICE_INFO_V0_VOLTA:
19:31 fdobridge_: <d​wlsalmeida> ret = nvif_object_ctor(&chan->chan->user, "abi16CeWar", 0, VOLTA_DMA_COPY_A,
19:31 fdobridge_: <d​wlsalmeida> NULL, 0, &chan->ce);
19:31 fdobridge_: <d​wlsalmeida> if (ret)
19:31 fdobridge_: <d​wlsalmeida> goto done;
19:31 fdobridge_: <d​wlsalmeida> break;
19:31 fdobridge_: <d​wlsalmeida> case NV_DEVICE_INFO_V0_TURING:
19:31 fdobridge_: <d​wlsalmeida> ret = nvif_object_ctor(&chan->chan->user, "abi16CeWar", 0, TURING_DMA_COPY_A, <-------
19:31 fdobridge_: <d​wlsalmeida> NULL, 0, &chan->ce);
19:31 fdobridge_: <d​wlsalmeida> ```
19:31 fdobridge_: <d​wlsalmeida>
19:31 fdobridge_: <d​wlsalmeida> Also, `nouveau.config=NvGspRm=1` (edited)
19:32 fdobridge_: <d​wlsalmeida> can you elaborate a bit on what `nvif` stands for?
19:39 Lyude: karolherbst: not sure - can try to check today or tomorrow
19:41 Lyude: dwlsalmeida: nvif is the nvidia interface. Basically: the way nouveau was designed was around the fact that with a lot of nvidia GPU functions, you are given a push buffer - but you can also be given unprivileged dma push buffers and other kinds of various hw interfaces that can be handed down to a guest vm. so in theory - you could have nvkm, the nvidia kernel module, running on a
19:41 Lyude: host and then the guest could have an nvif driver that connects to that
19:41 Lyude: we never really got that far though
19:41 fdobridge_: <airlied> @dwlsalmeida can you pastebin a complete dmesg? That is a weird place to die and I don't expect nvdec would have any effect
19:42 fdobridge_: <d​wlsalmeida> @airlied sure! but..I don't think there's any error message there, at least nothing trivially apparent
19:42 fdobridge_: <d​wlsalmeida> just a moment
19:42 fdobridge_: <a​irlied> I assume you have gsp fw installed?
19:43 fdobridge_: <d​wlsalmeida> by "have gsp fw installed" you mean passing "nouveau.config=NvGspRm=1 "? or should something else be done here
19:45 fdobridge_: <d​wlsalmeida> fyi: `r535_gsp_load` returns 0 here, so I assume this means it got loaded?
19:45 fdobridge_: <a​irlied> You need a pretty new linux-firmware, but yeah that seems like it loaded
19:46 fdobridge_: <d​wlsalmeida> Lyude: thanks for explaining about `nvif`!
20:00 fdobridge_: <d​wlsalmeida> @airlied https://pastebin.com/y0paDskq
20:00 fdobridge_: <g​fxstrand> @airlied It is in shipping linux-firmware now, isn't it?
20:02 fdobridge_: <airlied> yes, and it looks to have loaded, since you don't have the lines for what happens without it
20:03 fdobridge_: <dwlsalmeida> ```
20:03 fdobridge_: <d​wlsalmeida> int
20:03 fdobridge_: <d​wlsalmeida> nvif_object_ioctl(struct nvif_object *object, void *data, u32 size, void **hack)
20:03 fdobridge_: <d​wlsalmeida> {
20:03 fdobridge_: <d​wlsalmeida> struct nvif_client *client = object->client;
20:03 fdobridge_: <d​wlsalmeida> union {
20:03 fdobridge_: <d​wlsalmeida> struct nvif_ioctl_v0 v0;
20:03 fdobridge_: <d​wlsalmeida> } *args = data;
20:03 fdobridge_: <d​wlsalmeida>
20:03 fdobridge_: <d​wlsalmeida> if (size >= sizeof(*args) && args->v0.version == 0) {
20:03 fdobridge_: <d​wlsalmeida> if (object != &client->object)
20:03 fdobridge_: <d​wlsalmeida> args->v0.object = nvif_handle(object);
20:03 fdobridge_: <d​wlsalmeida> else
20:03 fdobridge_: <d​wlsalmeida> args->v0.object = 0;
20:03 fdobridge_: <d​wlsalmeida> args->v0.owner = NVIF_IOCTL_V0_OWNER_ANY;
20:03 fdobridge_: <d​wlsalmeida> } else
20:03 fdobridge_: <d​wlsalmeida> return -ENOSYS;
20:03 fdobridge_: <d​wlsalmeida>
20:03 fdobridge_: <d​wlsalmeida> return client->driver->ioctl(client->object.priv, data, size, hack); <-------------
20:03 fdobridge_: <d​wlsalmeida> }
20:03 fdobridge_: <d​wlsalmeida> ```
20:03 fdobridge_: <d​wlsalmeida>
20:03 fdobridge_: <d​wlsalmeida> I wonder if you know what that points to when coming from `nouveau_abi16_ioctl_channel_alloc(ABI16_IOCTL_ARGS)`?
20:03 fdobridge_: <d​wlsalmeida> going to bet that this is actually where the error comes from
20:05 fdobridge_: <d​wlsalmeida> `addr2line` isn't the best of friends at times :/
20:06 fdobridge_: <n​ishi> i'm not exactly sure what info is useful to give, so sorry if i miss anything TmT
20:06 fdobridge_: <n​ishi> using the nouveau reclocking on my 2060 creates a bunch of issues which i have no idea where they stem from
20:06 fdobridge_: <n​ishi> - hyprland straight up not launching anymore
20:06 fdobridge_: <n​ishi> - desktop positions on sddm swapped for some reason
20:06 fdobridge_: <n​ishi> - plasma wayland encountering intense lags when dragging on the desktop
20:06 fdobridge_: <n​ishi> if you want me to check for more issues, i'll be more than happy to
20:07 fdobridge_: <n​ishi>
20:07 fdobridge_: <n​ishi> useful (?) info:
20:07 fdobridge_: <n​ishi> switched from 1660 super (worked fine with reclocking) to 2060 (from my dad's pc before he upgraded his gpu)
20:07 fdobridge_: <n​ishi> running kernel 6.7.0-arch3-1
20:07 fdobridge_: <n​ishi> using dual monitor
20:07 fdobridge_: <n​ishi> stuff added to startup: ``module_blacklist=nvidia,nvidia_uvm,nvidia_modeset,nvidia_drm nouveau.config=NvGspRm=1``
20:07 fdobridge_: <n​ishi> ``cat /usr/lib/modprobe.d/nvidia-utils.conf``: ``#blacklist nouveau``
20:07 fdobridge_: <n​ishi> https://cdn.discordapp.com/attachments/1034184951790305330/1199445163358040084/20240123_183957.mp4?ex=65c2915b&is=65b01c5b&hm=5fb675abdb76cd3b30cb4e22462d9b49b41b19e2647e580049f0019b53630940&
20:07 fdobridge_: <n​ishi> https://cdn.discordapp.com/attachments/1034184951790305330/1199445164255608932/20240123_184102.mp4?ex=65c2915b&is=65b01c5b&hm=8a84a467f836937e90d10d68f5011d10dd3164c3529cc1dcf4555bf9cbc5e4aa&
20:07 fdobridge_: <n​ishi> https://cdn.discordapp.com/attachments/1034184951790305330/1199445164847026176/20240123_184156.mp4?ex=65c2915b&is=65b01c5b&hm=b82ff2a4ac8b2be4bec4876b0dcfd747d811d65a1b746a74643e66db8c3b4195&
20:10 fdobridge_: <g​fxstrand> What do you mean by desktop positions being swapped? I could easily believe that nouveau enumerates display connectors differently with GSP.
20:17 fdobridge_: <n​ishi> on proprietary drivers and on nouveau without the kernel param the monitors are configured fine (i.e. they look like they're "connected"?) but once i enable the param the desktops switch positions, my left monitor being put on the right and vice versa (edited)
20:18 fdobridge_: <a​irlied> @dwlsalmeida where in mesa is it bailing out?
20:19 fdobridge_: <dwlsalmeida> ```
20:19 fdobridge_: <d​wlsalmeida> int
20:19 fdobridge_: <d​wlsalmeida> nouveau_ws_vid_context_create(struct nouveau_ws_device *dev, struct nouveau_ws_vid_context **out)
20:19 fdobridge_: <d​wlsalmeida> {
20:19 fdobridge_: <d​wlsalmeida> struct drm_nouveau_channel_alloc req = { .fb_ctxdma_handle = ~0, .tt_ctxdma_handle = 0x300 };
20:19 fdobridge_: <d​wlsalmeida> uint32_t classes[NOUVEAU_WS_CONTEXT_MAX_CLASSES];
20:19 fdobridge_: <d​wlsalmeida> uint32_t base;
20:19 fdobridge_: <d​wlsalmeida>
20:19 fdobridge_: <d​wlsalmeida> *out = CALLOC_STRUCT(nouveau_ws_vid_context);
20:19 fdobridge_: <d​wlsalmeida> if (!*out)
20:19 fdobridge_: <d​wlsalmeida> return -ENOMEM;
20:19 fdobridge_: <d​wlsalmeida>
20:19 fdobridge_: <d​wlsalmeida> int ret = drmCommandWriteRead(dev->fd, DRM_NOUVEAU_CHANNEL_ALLOC, &req, sizeof(req)); <---------------
20:19 fdobridge_: <d​wlsalmeida> ```
20:23 fdobridge_: <a​irlied> I wonder does turing use a different nvdec configuration
20:24 fdobridge_: <g​fxstrand> Yeah, that's probably just the two drivers picking different arbitrary orders to list the GPU connectors. Annoying but ultimately harmless unless you're constantly switching back and forth.
20:24 fdobridge_: <n​ishi> yeah that's true
20:25 fdobridge_: <gfxstrand> @nishi With the other issues, are those vs. the blob or vs. nouveau without GSP? As in does hyprland work on non-GSP nouveau?
20:25 fdobridge_: <n​ishi> hyprland works on non-gsp nouveau and blob, but breaks on nouveau + gsp
20:25 fdobridge_: <g​fxstrand> Oh, well that's not good. Mind filing a bug about it?
20:26 fdobridge_: <n​ishi> is there a general format i should follow for bug reports?
20:26 fdobridge_: <n​ishi> and also where do i put them T^T
20:26 fdobridge_: <g​fxstrand> https://gitlab.freedesktop.org/mesa/mesa/-/issues/
20:27 fdobridge_: <g​fxstrand> There's a "Bug Report" template you can use.
20:27 fdobridge_: <g​fxstrand> Well, maybe that one should go against drm, actually
20:28 fdobridge_: <g​fxstrand> https://gitlab.freedesktop.org/drm/nouveau/-/issues/
20:28 fdobridge_: <g​fxstrand> We can move it if you file it in the wrong place but, given that it works on non-gsp nouveau, I'm going to assume it's a kernel bug so it should go in drm/nouveau
20:29 fdobridge_: <g​fxstrand> First shard survived...
20:29 fdobridge_: <n​ishi> alrighty ty :P
20:29 fdobridge_: <g​fxstrand> So, yeah, we have something funky going on with exported buffers that's trashing my context
20:29 fdobridge_: <g​fxstrand> @airlied ^^
20:32 fdobridge_: <a​irlied> @dwlsalmeida I'll see if I can figure out what turing needs todau
20:32 fdobridge_: <d​wlsalmeida> ack, thanks for the help!
20:41 fdobridge_: <a​irlied> @gfxstrand does just running wsi cases serially in deqp-vk die? are you running against a gnome wayland desktop?
20:42 fdobridge_: <a​irlied> I just tried running them against a bare X server and that all passed serially
20:48 fdobridge_: <airlied> not saying you should just use a bare X server for expediency 😛
20:48 fdobridge_: <!​DodoNVK (she) 🇱🇹> I'm not sure who uses a bare X server in 2024 though
20:50 fdobridge_: <g​fxstrand> @airlied Oh, all the WSI tests pass fine. It's just that the synchronization tests that run after the WSI tests die
20:50 fdobridge_: <g​fxstrand> 🤡
20:50 fdobridge_: <g​fxstrand> @airlied And, yeah, I'm running against GNOME
20:50 fdobridge_: <g​fxstrand> See also this
20:53 fdobridge_: <tom3026> I ran on kde X11 and same here, wsi tests pass but sync dies
20:55 fdobridge_: <a​irlied> if I have to go fix the gl driver I won't be happy
20:56 fdobridge_: <g​fxstrand> I don't think this is the GL driver's fault.
20:56 fdobridge_: <gfxstrand> I think it's some sort of pollution that survives even after we've closed all the shared BOs.
20:57 fdobridge_: <r​edsheep> Has anyone tried with the gnome session running through zink? Not sure that is working yet given an issue from the other day.
20:57 fdobridge_: <a​irlied> yeah it could be the legacy submission paths the GL driver uses
20:59 fdobridge_: <redsheep> If that GL driver isn't intended to be used going forward, it seems like it shouldn't be involved in the test
21:25 fdobridge_: <g​fxstrand> @airlied Looks like I can't assign things to dakr on GitLab but these two are for him:
21:25 fdobridge_: <g​fxstrand> https://gitlab.freedesktop.org/drm/nouveau/-/issues/311
21:25 fdobridge_: <g​fxstrand> https://gitlab.freedesktop.org/drm/nouveau/-/issues/312
21:26 fdobridge_: <g​fxstrand> Neither should take more than a few hours
22:39 fdobridge_: <k​arolherbst🐧🦀> @gfxstrand we talked about your ISA request today with nvidia :ferrisUpsideDown:
22:41 HdkR: ooo, what instructions do you want added? :P
22:41 fdobridge_: <k​arolherbst🐧🦀> @gfxstrand ohh and we might have a solution for requesting docs and Ben got told, but nobody else :ferrisUpsideDown:
22:41 fdobridge_: <k​arolherbst🐧🦀> so that info got lost
22:42 fdobridge_: <g​fxstrand> Woof
22:42 fdobridge_: <k​arolherbst🐧🦀> 😄
22:42 fdobridge_: <g​fxstrand> If there's a channel by which I can request docs and they'll actually give them to me, that's good enough.
22:42 fdobridge_: <k​arolherbst🐧🦀> but yeah.. in theory I have direct access to file bugs into nvidia's bug tracker, but nobody told me 😄
22:42 fdobridge_: <k​arolherbst🐧🦀> and I asked for you to get access as well
22:42 fdobridge_: <k​arolherbst🐧🦀> and they said "it's just admin work in the way of doing that"
22:42 fdobridge_: <k​arolherbst🐧🦀> so just somebody needing to do that
22:43 fdobridge_: <g​fxstrand> Did they sign someone up to do it? 😂
22:43 fdobridge_: <k​arolherbst🐧🦀> 😄
22:43 fdobridge_: <k​arolherbst🐧🦀> it sounded like that's what they plan to do
22:43 fdobridge_: <g​fxstrand> Maybe you can file a bug in their tracker to give me access to their tracker
22:43 fdobridge_: <k​arolherbst🐧🦀> 😄
22:43 fdobridge_: <k​arolherbst🐧🦀> yeah dunno.. John said that it will be taken care of
22:43 fdobridge_: <g​fxstrand> Okay
22:44 fdobridge_: <k​arolherbst🐧🦀> anyway... I already have access and I will convert my email threads to that once I get the info
22:44 fdobridge_: <g​fxstrand> Does this John have an e-mail address you can DM me? Or you can send an introductory e-mail?
22:44 fdobridge_: <k​arolherbst🐧🦀> which was that helper invoc bit + SPH headers
22:49 Lyude: airlied, dakr: pushed some more stuff to my rvkms branch. No working skeleton yet but I'm getting close. One thing I'm currently trying to figure out: what we need to do about https://github.com/AsahiLinux/linux/blob/asahi/drivers/gpu/drm/asahi/driver.rs#L165 . registrations() exists in our tree but is commented out, and the types appear to be totally different - so I assume there's
22:49 Lyude: probably some more work there I need to track down
22:51 Lyude: I might be able to figure something else out to pass there but i'm still trying to wrap my head around pinning (I get what guarantees it provides but am not really sure how to incorporate it into something)
22:53 karolherbst: pinning is overrated :P
22:58 fdobridge_: <g​fxstrand> Rust `Pin<T>` or kernel pinning?
22:59 karolherbst: rust
23:03 Lyude: gfxstrand: rust Pin<T>
23:03 Lyude: well in the kernel so would that be kernel pinning?
23:04 karolherbst: I think with kernel pinning gfxstrand means like memory pinning
23:04 Lyude: (I might actually know what the missing type for registrations() is now… looked a bit more closely )
23:04 Lyude: ahhh
23:04 Lyude: looks like we might actually just be missing some convenience types for revocable mutexes
23:05 Lyude: I think I see how to implement them, I'll give it a shot
23:29 fdobridge_: <g​fxstrand> Damn you, dEQP-VK.subgroups.basic.framebuffer.subgroupmemorybarrierimage_tess_control! My nemesis!
23:31 fdobridge_: <g​fxstrand> I thought for sure I'd fixed that one
23:33 fdobridge_: <g​fxstrand> I even kinda know what's wrong with it.
23:42 fdobridge_: <redsheep> Has anybody recently tested if zink on nvk can load a full plasma session? I am rebuilding my kernel and mesa to see if I can make it work, and see if it helps at all.
23:42 fdobridge_: <a​irlied> @gfxstrand you running wsi.wayland tests?
23:43 fdobridge_: <g​fxstrand> I think so
23:43 fdobridge_: <a​irlied> I'm getting a memory corruption with dEQP-VK.wsi.wayland.swapchain.simulate_oom.min_image_count
23:44 fdobridge_: <g​fxstrand> Oh...
23:44 fdobridge_: <a​irlied> oh ignore me
23:45 fdobridge_: <airlied> definitely wasn't just running llvmpipe
23:45 fdobridge_: <g​fxstrand> lol