00:00 anholt: v6.3.13-for-mesa-ci-bbe75e512c76
00:02 karolherbst: you have that helper invocation patch, right?
00:10 anholt: do you expect lack of helper inv patch to flake all rendering tests? I feel like you've had me mess with it before.
00:13 karolherbst: mhhh... unlikely though. Kinda sounds more like a memory coherency problem
00:14 karolherbst: or something like that
00:15 benjaminl: ime there are tests that succeed without the patch because the registers that it's loading descriptors into start out with the right value by coincidence
00:21 anholt: hmm. we call our heap coherent, but we're not setting NOUVEAU_GEM_DOMAIN_COHERENT?
00:24 karolherbst: in gl we don't use it either
00:24 karolherbst: well.. for fences we do I guess
00:27 karolherbst: anholt: mind checking what happens here? https://gitlab.freedesktop.org/nouveau/mesa/-/blob/nvk/main/src/nouveau/winsys/nouveau_device.c#L246
00:29 anholt: device->vram_size should be zero.
00:29 karolherbst: well.. should be, but who knows
00:44 fdobridge: <karolherbst🐧🦀> yeah okay.. something is busted with Ada
00:44 fdobridge: <k​arolherbst🐧🦀> "gsp: Xid:13 Graphics SM Warp Exception on (GPC 0, TPC 0, SM 0): Misaligned Register"
00:44 fdobridge: <m​henning> prime + gsp worked well enough for glxgears on ampere last time I tried it (but not well enough for a cts run)
00:45 fdobridge: <k​arolherbst🐧🦀> yeah.. looks like Ada actually requires some work even though the 3D support is literally the same as the one from ampere 😄
00:46 fdobridge: <karolherbst🐧🦀> I should also ask for ISA docs for Ada 🙃
00:54 fdobridge: <k​arolherbst🐧🦀> something in those shaders is wrong https://gist.githubusercontent.com/karolherbst/86a016683abf9a17f1b65398b44b4638/raw/dcb380558db8ab638fefe38552182962552262a2/gistfile1.txt
00:55 fdobridge: <k​arolherbst🐧🦀> I wonder if tex requires stricter alignment...
00:55 fdobridge: <k​arolherbst🐧🦀> `2: tex 2D $r8 $s0 rgba f32 { $r0 $r1 $r2 $r3 } $r0 $r1 (16)`
00:56 fdobridge: <k​arolherbst🐧🦀> huh...
00:56 fdobridge: <k​arolherbst🐧🦀> how does that get translated to `TEX R1, R0, R0, R1, 0xf, 0x8, 2D ;`
00:58 fdobridge: <k​arolherbst🐧🦀> ` 2: tex 2D $r8 $s0 rgba f32 { $r0d $r2d } $r0d (16)` on turing.... huh
00:58 fdobridge: <k​arolherbst🐧🦀> did I forget to enable ada somewhere? 😄
01:00 fdobridge: <k​arolherbst🐧🦀> ohh soooo
01:00 fdobridge: <k​arolherbst🐧🦀> *shooo
01:03 fdobridge: <karolherbst🐧🦀> it renders :3
01:04 fdobridge: <k​arolherbst🐧🦀> let's see how bad a CTS run is
01:07 fdobridge: <k​arolherbst🐧🦀> "Pass: 8731, Fail: 37, Crash: 1, Warn: 2, Skip: 728, Flake: 1, Duration: 1:47, Remaining: 19:39"
01:07 fdobridge: <k​arolherbst🐧🦀> not bad so far
01:10 fdobridge: <k​arolherbst🐧🦀> https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/24226 🙃
01:10 fdobridge: <k​arolherbst🐧🦀> low effort enablement
01:18 fdobridge: <a​irlied> okay fixed external fd support in new-uapi
01:21 fdobridge: <g​fxstrand> Building...
01:35 fdobridge: <k​arolherbst🐧🦀> the thing I like the most about GSP is, that we finally get some more useful error messages 😄
01:35 fdobridge: <k​arolherbst🐧🦀> `Pass: 105202, Fail: 472, Crash: 14, Warn: 17, Skip: 8229, Timeout: 37, Flake: 25, Duration: 24:09, Remaining: 0`
01:35 fdobridge: <k​arolherbst🐧🦀> @gfxstrand I think we can just enable Ada ....
01:36 fdobridge: <k​arolherbst🐧🦀> (in nvk)
01:36 fdobridge: <a​irlied> you should find that ampere bug I was seeing with the SKED error
01:36 fdobridge: <k​arolherbst🐧🦀> ahhh.. good idea
01:37 fdobridge: <k​arolherbst🐧🦀> @airlied which test was it?
01:37 fdobridge: <k​arolherbst🐧🦀> though I guess that was vulkan
01:37 fdobridge: <a​irlied> yeah it was vulkan
01:38 fdobridge: <a​irlied> dEQP-VK.pipeline.monolithic.spec_constant.compute.expression.array_size
01:38 fdobridge: <k​arolherbst🐧🦀> mhh, the only errors I'm seeing with GL are out of range ones
01:38 fdobridge: <k​arolherbst🐧🦀> okay.. I'll do the vulkan enablement tomorrow
01:38 fdobridge: <k​arolherbst🐧🦀> kinda funky that a 9 loc patch is all it takes for GL
01:38 fdobridge: <a​irlied> uggh build a kernel with lots of debug turned on to find bugs, run CTS, 18 hours later 😛
01:39 fdobridge: <k​arolherbst🐧🦀> uhhh...
01:39 fdobridge: <k​arolherbst🐧🦀> the thing is.. I use USB-PD to charge this laptop even though it needs like... 180W AC 😄
01:39 fdobridge: <a​irlied> probably should build a second kernel
01:47 fdobridge: <g​fxstrand> Nice!
01:48 fdobridge: <k​arolherbst🐧🦀> does reclocking work with GSP?
01:48 fdobridge: <k​arolherbst🐧🦀> I uhm... might want to do some fancy benchmarks with nvk 😄
01:48 fdobridge: <g​fxstrand> @airlied Does that branch have GSP as well?
01:48 fdobridge: <a​irlied> nope
01:49 fdobridge: <g​fxstrand> That'd be cool. 😁 I expect it to be kinda crappy relative to the blob because we've not optimized a thing and UBOs are horrible but it should run kinda okay.
01:49 fdobridge: <k​arolherbst🐧🦀> yeah, it's more of a "look, with GSP and nvk you can finally do actual gaming with nouveau"
01:49 fdobridge: <k​arolherbst🐧🦀> even if it's like only 20% as fast, it's better than the current 1%
01:50 fdobridge: <k​arolherbst🐧🦀> that laptop I got here is seriously overspeced
01:50 fdobridge: <airlied> I did a talos video a while back already showing that :-)
01:50 fdobridge: <a​irlied> not sure it showed you could game though :-P, since it still seemed overly slow
01:51 fdobridge: <k​arolherbst🐧🦀> mhhh
01:51 fdobridge: <k​arolherbst🐧🦀> but was that with an RTX 5000 Ada
01:51 fdobridge: <g​fxstrand> We do a full stall on every pipeline barrier....
01:51 fdobridge: <g​fxstrand> That's not helping anyone. 😅
01:52 fdobridge: <k​arolherbst🐧🦀> ehh seems like that GPU is like an RTX 4070 Ti
01:52 fdobridge: <k​arolherbst🐧🦀> or 4080
01:52 fdobridge: <k​arolherbst🐧🦀> should show some more impressive fps
01:52 fdobridge: <k​arolherbst🐧🦀> I'm sure I can just.. uhm.. disable that
01:53 fdobridge: <k​arolherbst🐧🦀> ohh wait.. I can do some low effort benchmarking
01:53 fdobridge: <k​arolherbst🐧🦀> but I gotta connect AC for that
01:56 fdobridge: <g​fxstrand> I mean... we need some WFIs...
01:56 fdobridge: <karolherbst🐧🦀> ehhh.. we'll figure it out by XDC
01:56 fdobridge: <gfxstrand> pulls @airlied's branch
01:57 fdobridge: <a​irlied> keep up to date with it, I'm still closing the gap since the last rebase on master
02:00 fdobridge: <k​arolherbst🐧🦀> looks like pixmark piano crashed the system 🥲
02:01 fdobridge: <a​irlied> also -Dnvk-experimental-uapi=true
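For reference, the meson option mentioned here can be toggled on an existing build directory; a minimal sketch (the `build/` directory name is an assumption):

```shell
# Flip on NVK's experimental-uAPI support in an existing Mesa build dir
# ("build/" is an assumed name), then recompile:
meson configure build/ -Dnvk-experimental-uapi=true
ninja -C build/
```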
02:02 fdobridge: <k​arolherbst🐧🦀> yeah huh... somehow running things besides the CTS and glxgears makes the machine crash
02:03 fdobridge: <a​irlied> looks like d32s8 has some fallout from rebasing, will track it down
02:05 fdobridge: <k​arolherbst🐧🦀> funky.. unigine heaven runs
02:05 fdobridge: <k​arolherbst🐧🦀> but kinda slow
02:06 fdobridge: <g​fxstrand> Okay, first new uAPI CTS run going
02:07 fdobridge: <k​arolherbst🐧🦀> 2560x1600 ultra with 8x AA is like 30 fps
02:07 fdobridge: <g​fxstrand> Not bad!
02:07 fdobridge: <k​arolherbst🐧🦀> maybe I should build mesa as release...
02:07 fdobridge: <k​arolherbst🐧🦀> the CPU is kinda busy
02:07 fdobridge: <g​fxstrand> Never mind... I need to re-calibrate. I'm thinking Intel. 😂
02:07 fdobridge: <k​arolherbst🐧🦀> 😄
02:08 fdobridge: <k​arolherbst🐧🦀> ahh yeah, let me run that with intel actually 😄
02:09 fdobridge: <k​arolherbst🐧🦀> 10 fps on intel
02:10 fdobridge: <k​arolherbst🐧🦀> yeah mhhh
02:10 fdobridge: <k​arolherbst🐧🦀> dunno if reclocking works actually here 😄
02:11 fdobridge: <k​arolherbst🐧🦀> mhhh
02:11 fdobridge: <k​arolherbst🐧🦀> CPU is still at 100%
02:12 fdobridge: <k​arolherbst🐧🦀> I wonder if we are doing something dumb...
02:12 fdobridge: <k​arolherbst🐧🦀> but the GPU is also running hot
02:12 fdobridge: <k​arolherbst🐧🦀> I'm sure nvk is 100x faster, but that's for tomorrow to find out
02:14 fdobridge: <k​arolherbst🐧🦀> I should sleep I have a meeting in like... 10 hours
02:15 fdobridge: <g​fxstrand> Oh, I'm sure it is
02:20 fdobridge: <a​irlied> okay pushed a fix for the d32s8 fails
02:21 fdobridge: <g​fxstrand> kk
02:21 fdobridge: <g​fxstrand> I'm running the CTS now
02:21 fdobridge: <g​fxstrand> How do I know that it's using the new uAPI?
02:24 fdobridge: <a​irlied> does vulkaninfo show sparseResidency?
02:24 fdobridge: <a​irlied> sorry sparseBinding
02:24 fdobridge: <airlied> and timelineSemaphore
02:26 fdobridge: <g​fxstrand> Yeah
02:27 fdobridge: <g​fxstrand> Okay, that'll tell me. 😄
02:32 fdobridge: <g​fxstrand> Okay, missing the meson flag.
02:34 fdobridge: <a​irlied> I expect there might be a few regressions sitting around, since planes kinda was a start from scratch moment
02:35 fdobridge: <g​fxstrand> heh
02:35 fdobridge: <g​fxstrand> If the KMD is stable, I'll probably spend some time this week playing with it.
02:35 fdobridge: <g​fxstrand> How close to merging are we on the kernel side? If I review the API are we good?
02:36 fdobridge: <a​irlied> yeah I think the gpuva stuff is acked for landing, so it's just the nouveau side which we want to ack
02:37 fdobridge: <a​irlied> not sure how careful we want to be about exposing the new uapi, it's probably fine since nvk is the only user
02:37 fdobridge: <g​fxstrand> Okay. Cool
02:37 fdobridge: <a​irlied> so we've got a fair few weeks until a Linus kernel will have it
02:37 fdobridge: <g​fxstrand> I'll focus on that this week.
02:37 fdobridge: <g​fxstrand> I would love to merge NVK into mesa/main
02:38 fdobridge: <airlied> we'd have to burn all the old uapi bits out as well (or keep them in a branch)
02:39 fdobridge: <g​fxstrand> Yup
02:39 fdobridge: <airlied> I'll try and get the gsp/newuapi crossover going on my ampere now, since it can actually run cts in less than a day
02:40 fdobridge: <gfxstrand> Yeah, we should make sure that's okay too
02:40 fdobridge: <g​fxstrand> I could test that, too, in theory.
02:45 fdobridge: <g​fxstrand> vulkaninfo: ../src/nouveau/winsys/nouveau_bo.c:39: bo_bind: Assertion `ret == 0' failed.
02:45 fdobridge: <a​irlied> yeah there's usually a bit of impedance rematching between the two threads of development before it works
02:45 fdobridge: <a​irlied> for everything?
02:45 fdobridge: <g​fxstrand> At least I know it's the right branch! 😅
02:47 fdobridge: <a​irlied> was that vulkaninfo? 🙂
02:47 fdobridge: <g​fxstrand> Yeah
02:48 fdobridge: <a​irlied> also NVK_DEBUG=vm is a thing, but vulkaninfo usually works for me
02:49 fdobridge: <g​fxstrand> ```
02:49 fdobridge: <g​fxstrand> alloc vma 2b000 1000 sparse: 1
02:49 fdobridge: <g​fxstrand> vm bind failed 22
02:49 fdobridge: <g​fxstrand> ```
02:50 fdobridge: <a​irlied> on turing?
02:50 fdobridge: <g​fxstrand> yup
02:52 fdobridge: <a​irlied> I'll build the same kernel branch as you did to check
02:55 fdobridge: <a​irlied> what nvk branch did you build?
02:55 fdobridge: <a​irlied> just in case you got some old ass uapi
02:57 fdobridge: <a​irlied> e791b06a is the latest
02:57 fdobridge: <g​fxstrand> `git fetch https://gitlab.freedesktop.org/nouvelles/kernel/ new-uapi-drm-next`
02:58 fdobridge: <g​fxstrand> 46a6a880babcbe56c3a2ce9ed44aca718fc7dc1d
02:58 fdobridge: <g​fxstrand> + karol's patch
03:00 fdobridge: <g​fxstrand> As per this
03:00 fdobridge: <e​sdrastarsis> on gsp ada?
03:03 fdobridge: <a​irlied> I'm just building that kernel branch now
03:11 fdobridge: <airlied> okay seems fine here, can you confirm the mesa branch is as above?
03:14 fdobridge: <a​irlied> it might be some pte_kind related stuff though
03:15 fdobridge: <a​irlied> @gfxstrand in nouveau_bo.c can you bump the (1 << 16) to (1 << 21)
03:15 fdobridge: <a​irlied> line 265
03:15 fdobridge: <a​irlied> or around there
03:34 fdobridge: <g​fxstrand> Not at the moment but after a bit
03:46 fdobridge: <g​fxstrand> Nope
03:48 fdobridge: <g​fxstrand> @airlied Fresh pulled the mesa branch from your MR
03:48 fdobridge: <g​fxstrand> e791b06a758ef4e8e75200c882fa03645fc94628
03:49 fdobridge: <g​fxstrand> Same error
03:49 fdobridge: <g​fxstrand> Hrm... Maybe it's trying on Maxwell
03:49 fdobridge: <g​fxstrand> I've got both cards plugged in after all
03:51 fdobridge: <g​fxstrand> Okay, yeah, it was the maxwell
03:52 fdobridge: <g​fxstrand> CTSing Turing now
03:53 fdobridge: <a​irlied> okay I've only tried turing/ampere
03:53 fdobridge: <g​fxstrand> I'll poke at stuff once I get a Turing baseline
03:54 fdobridge: <g​fxstrand> I also want to poke about in the patches and see how I feel about the new paths.
03:54 fdobridge: <g​fxstrand> i.e. review, but with running stuff and tweaking the code as it strikes my fancy
03:54 fdobridge: <g​fxstrand> So far CTS seems to be taking longer and that's a little concerning.
03:55 fdobridge: <g​fxstrand> Oh, that could be because tests run now that didn't before. 🤔
03:59 fdobridge: <g​fxstrand> It also seems to be failing a bit more but I'm less than 10 min into the run. Hopefully I'll have results in the morning.
04:09 fdobridge: <g​fxstrand> It's bedtime soon so I'm not going to really look any more tonight. I'm just going to hope my kernel survives the run and look in the morning.
04:14 fdobridge: <a​irlied> cool
04:14 fdobridge: <a​irlied> I just got ampere gsp/new-uapi to boot, was a few hoops to jump through
04:23 fdobridge: <g​fxstrand> Woo
05:21 fdobridge: <a​irlied> Pass: 378177, Fail: 2670, Crash: 471, Skip: 1633386, Flake: 539, Duration: 1:06:34, Remaining: 0 is my ampere/gsp/new-uapi run
05:29 fdobridge: <e​sdrastarsis> Is turing working on gsp now?
06:42 fdobridge: <g​fxstrand> Similar only with the added bonus of kernel bugs
06:45 fdobridge: <g​fxstrand> ```
06:45 fdobridge: <g​fxstrand> [14384.793943] watchdog: BUG: soft lockup - CPU#13 stuck for 5696s! [gnome-shell:1626]
06:45 fdobridge: <g​fxstrand> ...
06:45 fdobridge: <g​fxstrand> [14300.792861] Call Trace:
06:45 fdobridge: <g​fxstrand> [14300.792862] <IRQ>
06:45 fdobridge: <g​fxstrand> [14300.792862] ? watchdog_timer_fn+0x1a8/0x210
06:45 fdobridge: <g​fxstrand> [14300.792864] ? __pfx_watchdog_timer_fn+0x10/0x10
06:45 fdobridge: <g​fxstrand> [14300.792865] ? __hrtimer_run_queues+0x10f/0x2b0
06:45 fdobridge: <g​fxstrand> [14300.792867] ? hrtimer_interrupt+0xf8/0x230
06:45 fdobridge: <g​fxstrand> [14300.792869] ? __sysvec_apic_timer_interrupt+0x5e/0x130
06:45 fdobridge: <g​fxstrand> [14300.792871] ? sysvec_apic_timer_interrupt+0x6d/0x90
06:45 fdobridge: <g​fxstrand> [14300.792872] </IRQ>
06:45 fdobridge: <g​fxstrand> [14300.792872] <TASK>
06:45 fdobridge: <g​fxstrand> [14300.792873] ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
06:45 fdobridge: <g​fxstrand> [14300.792876] ? ioread32+0x34/0x60
06:45 fdobridge: <g​fxstrand> [14300.792878] nouveau_dma_wait+0x3a1/0x6d0 [nouveau]
06:45 fdobridge: <g​fxstrand> [14300.792984] nouveau_gem_ioctl_pushbuf+0x1688/0x1b00 [nouveau]
06:45 fdobridge: <g​fxstrand> [14300.793098] ? __pfx_nouveau_gem_ioctl_pushbuf+0x10/0x10 [nouveau]
06:45 fdobridge: <g​fxstrand> [14300.793209] drm_ioctl_kernel+0xca/0x170
06:45 fdobridge: <g​fxstrand> [14300.793210] drm_ioctl+0x26d/0x4b0
06:46 fdobridge: <g​fxstrand> [14300.793212] ? __pfx_nouveau_gem_ioctl_pushbuf+0x10/0x10 [nouveau]
06:46 fdobridge: <g​fxstrand> [14300.793324] nouveau_drm_ioctl+0x5a/0xb0 [nouveau]
06:46 fdobridge: <g​fxstrand> [14300.793435] __x64_sys_ioctl+0x91/0xd0
06:46 fdobridge: <g​fxstrand> [14300.793437] do_syscall_64+0x5d/0x90
06:46 fdobridge: <g​fxstrand> [14300.793438] ? exc_page_fault+0x7f/0x180
06:46 fdobridge: <g​fxstrand> [14300.793440] entry_SYSCALL_64_after_hwframe+0x72/0xdc
06:46 fdobridge: <g​fxstrand> ```
06:46 fdobridge: <g​fxstrand> https://cdn.discordapp.com/attachments/1034184951790305330/1131114596451766293/message.txt
06:48 fdobridge: <a​irlied> I think that's just a scheduling problem, not sure it's a real issue
06:49 fdobridge: <a​irlied> since at least the instance you give there is a legacy ABI call
06:50 fdobridge: <airlied> @esdrastarsis no idea, ampere worked for me, but I haven't got all the turing pieces lined up for ben's latest work
06:57 fdobridge: <g​fxstrand> IDK if that's the issue but my whole machine locked up with one test group left to complete.
06:57 fdobridge: <g​fxstrand> I'm trying another run.
06:58 fdobridge: <a​irlied> there might be some interaction between a desktop running on legacy and a CTS running that we haven't seen
06:58 fdobridge: <a​irlied> though I've left gdm going on my ampere
07:00 fdobridge: <g​fxstrand> I shut gdm off for now
07:00 fdobridge: <g​fxstrand> Just in case
07:19 fdobridge: <a​irlied> Pushed a couple of minor new uapi regression fixes. Are you seeing any big ones?
07:39 fdobridge: <g​fxstrand> I haven't gotten through a run yet
07:41 fdobridge: <a​irlied> I'll start my turing non-gsp mode run now, should be finished tomorrow sometime
08:11 fdobridge: <g​fxstrand> I just kicked off another after my last run died in a fire
08:46 fdobridge: <k​arolherbst🐧🦀> yeah
08:57 fdobridge: <g​fxstrand> nouveau_sched locked up again. 🙄
08:59 fdobridge: <g​fxstrand> RCU is giving me red text. That's bad....
09:01 fdobridge: <g​fxstrand> I'm gonna reboot
09:07 fdobridge: <airlied> Okay, copies of red text would be good, I'll see if mine throws anything weird
09:18 fdobridge: <g​fxstrand> I've not been able to get through a full run yet. 😭
09:18 fdobridge: <g​fxstrand> I really should try sleeping again.
09:18 fdobridge: <g​fxstrand> I've heard that it's good for you
09:20 fdobridge: <m​arysaka> :nya_panic:
11:32 fdobridge: <k​arolherbst🐧🦀> @gfxstrand do you plan to rebase on top of `mesa/main` in the near future? I've landed the Ada enablement which also has trivial codegen patches.
11:37 fdobridge: <k​arolherbst🐧🦀> @gfxstrand also mind sharing the exact command, then git hash you run the VK CTS on and your most recent `nvk/main` failures.csv (and the commit you ran it on) so I can diff it here? If you don't have anything recent I'm just going to run it through Turing/Ampere myself or something
13:19 fdobridge: <g​fxstrand> yay! New uAPI run finally completed successfully! \o/
13:22 fdobridge: <g​fxstrand> Rebasing now
13:23 fdobridge: <!​[NVK Whacker] Echo (she) 🇱🇹> Now pour some GSP sauce
14:04 fdobridge: <k​arolherbst🐧🦀> It looks like nvk just works without any changes on ada 🙃
14:04 fdobridge: <k​arolherbst🐧🦀> at least vkcube runs
14:05 fdobridge: <k​arolherbst🐧🦀> there is a trivial patch to fix the sm value though, but whatever
14:05 fdobridge: <k​arolherbst🐧🦀> apparently hopper (0x180) is SM90 where Ada (0x19x) is SM89
14:10 fdobridge: <k​arolherbst🐧🦀> sooo.. let's run the CTS with my script and see how bad it is
14:24 fdobridge: <g​fxstrand> Rebased. That was mildly painful... I'm running CTS on the rebase now.
14:24 fdobridge: <k​arolherbst🐧🦀> okay...
14:24 fdobridge: <g​fxstrand> I'll push once CTS is done
14:25 fdobridge: <g​fxstrand> Eh, it's not dying in a fire. I'll push now and push again if I have to.
14:25 fdobridge: <g​fxstrand> There you go. Rebased.
14:25 fdobridge: <k​arolherbst🐧🦀> 😄
14:26 fdobridge: <g​fxstrand> Alyssa (I think) changed `PIPE_PRIM_*` to `MESA_PRIM_*` and that was a bit annoying.
14:26 fdobridge: <g​fxstrand> There was also a NIR change
14:26 fdobridge: <k​arolherbst🐧🦀> "gsp: Xid:13 Graphics Exception: SKEDCHECK05_LOCAL_MEMORY_TOTAL_SIZE failed" @airlied
14:27 fdobridge: <g​fxstrand> ```sh
14:27 fdobridge: <g​fxstrand> #! /bin/bash
14:27 fdobridge: <g​fxstrand>
14:27 fdobridge: <g​fxstrand> OUTDIR="${1}"
14:27 fdobridge: <g​fxstrand>
14:27 fdobridge: <g​fxstrand> if ! mkdir "${OUTDIR}"; then
14:27 fdobridge: <g​fxstrand> echo "${OUTDIR} already exists!"
14:27 fdobridge: <g​fxstrand> exit 1
14:27 fdobridge: <g​fxstrand> fi
14:27 fdobridge: <g​fxstrand>
14:27 fdobridge: <g​fxstrand> dmesg --follow > "${OUTDIR}/dmesg" &
14:27 fdobridge: <g​fxstrand> DMESG_PID="$!"
14:27 fdobridge: <g​fxstrand>
14:27 fdobridge: <g​fxstrand> export MESA_VK_ABORT_ON_DEVICE_LOSS=1
14:27 fdobridge: <g​fxstrand>
14:27 fdobridge: <g​fxstrand> # Disable some codegen optimizations for now
14:27 fdobridge: <g​fxstrand> export NV50_PROG_OPTIMIZE=1
14:27 fdobridge: <g​fxstrand>
14:28 fdobridge: <g​fxstrand> SKIPS=$(cat <<-END
14:28 fdobridge: <g​fxstrand> dEQP-VK.api.object_management.max.*
14:28 fdobridge: <g​fxstrand> dEQP-VK.glsl.derivate..*
14:28 fdobridge: <g​fxstrand> dEQP-VK.graphicsfuzz..*
14:28 fdobridge: <g​fxstrand> dEQP-VK.image.swapchain_mutable..*
14:28 fdobridge: <g​fxstrand> dEQP-VK.wsi..*
14:28 fdobridge: <g​fxstrand> END
14:28 fdobridge: <g​fxstrand> )
14:28 fdobridge: <g​fxstrand>
14:28 fdobridge: <g​fxstrand> PRE_TURING_SKIPS=$(cat <<-END
14:28 fdobridge: <g​fxstrand> dEQP-VK.query_pool..*copy_result.*
14:28 fdobridge: <g​fxstrand> .*null_descriptor.*
14:28 fdobridge: <g​fxstrand> .*cmdcopyquerypoolresults.*
14:28 fdobridge: <k​arolherbst🐧🦀> mhhh
14:28 fdobridge: <g​fxstrand> I could probably turn derivative tests back on now that helpers work
14:29 fdobridge: <karolherbst🐧🦀> ohh.. I figured out what I messed up locally... oh well, let's run it now
14:34 fdobridge: <k​arolherbst🐧🦀> heh "deqp-vk[20786]: VMM allocation failed: -22"
14:34 fdobridge: <g​fxstrand> woo?
14:34 fdobridge: <g​fxstrand> That looks "fun"
14:35 fdobridge: <k​arolherbst🐧🦀> yeah.. no idea, might be some GSP stuff
14:35 fdobridge: <k​arolherbst🐧🦀> anyway.. "19139, Fail: 302, Crash: 13, Warn: 1, Skip: 62479, Flake: 566, Duration: 2:54, Remaining: 1:08:12"
14:35 fdobridge: <k​arolherbst🐧🦀> uhh.. let me see if vkcube still runs
14:36 fdobridge: <k​arolherbst🐧🦀> ahh yeah.. it's using lavapipe now 😄
14:36 fdobridge: <g​fxstrand> hehe
14:36 fdobridge: <g​fxstrand> Yeah, those don't look like NVK numbers. 😛
14:36 fdobridge: <k​arolherbst🐧🦀> I think I'm gonna remove those icd files
14:37 fdobridge: <g​fxstrand> Yeah, when testing I use `VK_ICD_FILENAMES=` to ensure I get exactly one driver and it's the one I want.
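A minimal version of that setup (the ICD json path here is an assumed example; point it at whichever driver's manifest you actually built):

```shell
# Force the Vulkan loader to load exactly one ICD so a CTS run can't
# silently fall back to lavapipe. The path below is an assumed example.
export VK_ICD_FILENAMES="$HOME/mesa/build/src/nouveau/vulkan/nouveau_icd.x86_64.json"
# Then e.g. `vulkaninfo --summary` should list only that driver.
```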
14:42 fdobridge: <k​arolherbst🐧🦀> okay.. should be good now
14:48 fdobridge: <k​arolherbst🐧🦀> "Pass: 24817, Fail: 926, Crash: 1, Skip: 145754, Flake: 2, Duration: 3:31, Remaining: 37:50"
14:52 fdobridge: <g​fxstrand> That's a lot of fail for the first 3 min
14:57 fdobridge: <k​arolherbst🐧🦀> "Pass: 61145, Fail: 2262, Crash: 9, Warn: 1, Skip: 358333, Timeout: 3, Missing: 1593380, Flake: 110, Duration: 9:11, Remaining: 0"
14:57 fdobridge: <k​arolherbst🐧🦀> ehh.. seems like my GPU crashed again 😄
14:57 fdobridge: <k​arolherbst🐧🦀> "Pass: 60767, Fail: 2223, Crash: 8, Warn: 1, Skip: 355493, Timeout: 3, Flake: 5, Duration: 9:05, Remaining: 34:41"
14:58 fdobridge: <g​fxstrand> Yeah... looks like
14:58 fdobridge: <k​arolherbst🐧🦀> well.. not tooooo bad, but I guess GSP isn't there yet
14:58 fdobridge: <k​arolherbst🐧🦀> I think it makes more sense to run with Ampere and fix the bugs there
14:58 fdobridge: <k​arolherbst🐧🦀> like that `SKEDCHECK05_LOCAL_MEMORY_TOTAL_SIZE` error
14:58 fdobridge: <k​arolherbst🐧🦀> anyway.. Ada == Ampere
15:03 fdobridge: <g​fxstrand> Okay, rebase run checks out
15:16 fdobridge: <e​sdrastarsis> ben updated the 00.02-gsp-rm branch recently
15:18 fdobridge: <gfxstrand> lives dangerously and tries another new uAPI run with a rebased Mesa branch
15:24 fdobridge: <k​arolherbst🐧🦀> https://gitlab.freedesktop.org/nouveau/mesa/-/merge_requests/231 guess more won't be needed...
15:25 fdobridge: <k​arolherbst🐧🦀> @gfxstrand you don't have an Ampere GPU or do you?
15:25 fdobridge: <g​fxstrand> No, not yet
15:28 fdobridge: <g​fxstrand> Merged
15:28 fdobridge: <g​fxstrand> I'll probably get some newer cards once NAK is in decent shape.
15:28 fdobridge: <k​arolherbst🐧🦀> cool
15:28 fdobridge: <g​fxstrand> And once GSP is in shape such that I can use a lovelace as my daily driver.
15:29 fdobridge: <k​arolherbst🐧🦀> I'll probably figure out that local memory thing, because that's the only error I was seeing on Ada as well..
15:29 fdobridge: <g​fxstrand> Cool
15:29 fdobridge: <g​fxstrand> That's probably just a QMD thing
15:29 fdobridge: <g​fxstrand> Or is it for all stages?
15:29 fdobridge: <k​arolherbst🐧🦀> it's compute only
15:29 fdobridge: <k​arolherbst🐧🦀> there is also another error, but no idea what that is all about:
15:29 fdobridge: <k​arolherbst🐧🦀> [ 1324.344091] nouveau 0000:01:00.0: gsp: rc engn:00000001 chid:16 type:45 scope:1 part:233
15:29 fdobridge: <k​arolherbst🐧🦀> [ 1324.344098] nouveau 0000:01:00.0: fifo:c00000:0002:0002:[Xorg[10237]] errored - disabling channel
15:29 fdobridge: <g​fxstrand> Yeah, probably a bit moved in QMD
15:30 fdobridge: <k​arolherbst🐧🦀> or some weirdo alignment thing or something
15:30 fdobridge: <k​arolherbst🐧🦀> I'll play around with it
15:30 fdobridge: <k​arolherbst🐧🦀> the annoying part with ada is we don't have the compute class header...
15:31 fdobridge: <k​arolherbst🐧🦀> but 3D is 100% identical to Ampere
15:31 fdobridge: <k​arolherbst🐧🦀> one concerning part is dma-copy
15:33 fdobridge: <k​arolherbst🐧🦀> hopper dma-copy: https://github.com/NVIDIA/open-gpu-kernel-modules/blob/main/src/common/sdk/nvidia/inc/class/clc8b5.h
15:33 fdobridge: <k​arolherbst🐧🦀> no header for ada either
15:33 fdobridge: <k​arolherbst🐧🦀> but ada seems to be the same as ampere here as well.. otherwise how would anything work 😄
15:35 fdobridge: <k​arolherbst🐧🦀> Hopper is probably entirely broken, but I'm also not concerned about users running nouveau on hopper
15:38 fdobridge: <g​fxstrand> I might care eventually
15:38 fdobridge: <g​fxstrand> But not today
15:38 fdobridge: <k​arolherbst🐧🦀> I think the problem with hopper is that it can't do 3D
15:38 fdobridge: <g​fxstrand> Sure
15:38 fdobridge: <g​fxstrand> Vulkan compute, baby!
15:38 fdobridge: <g​fxstrand> Or rusticl
15:38 fdobridge: <k​arolherbst🐧🦀> just use mesh shaders 😛
15:38 fdobridge: <g​fxstrand> Or rusticl + zink + NVK
15:38 fdobridge: <g​fxstrand> Or something
15:39 fdobridge: <k​arolherbst🐧🦀> yeah, but hopper is like.. expensive 😄
15:39 fdobridge: <k​arolherbst🐧🦀> it's really a DC only GPU
15:40 fdobridge: <g​fxstrand> Yeah, I know
15:40 fdobridge: <g​fxstrand> Like I said. I might care eventually but not today.
15:41 fdobridge: <g​fxstrand> If we build it right, what we build for client GPUs should scale to the datacenter. We probably can't beat nvidia at their own CUDA game but we should be able to scale.
15:41 fdobridge: <k​arolherbst🐧🦀> right
16:48 fdobridge: <g​fxstrand> Wow. Got a second CTS run with the new uAPI to survive. 🤯
17:10 fdobridge: <m​arysaka> Nice :vibrate:
17:20 fdobridge: <m​ohamexiety> does GSP run with NVK now?
17:21 fdobridge: <m​ohamexiety> last time I tried it just failed with something related to sync or so 😮
17:23 fdobridge: <mohamexiety> you get two (2) TPCs that can do graphics if you're brave enough hahaha
17:30 HdkR: Puts into perspective the latest datacenter keynote that Jensen did. "Can it run Crysis?" Not very efficiently!
17:39 fdobridge: <e​sdrastarsis> yeah, I think the problem with double free on turing using gsp was this function, Ben removed it
18:34 fdobridge: <e​sdrastarsis> Ziggurat (OpenGL native game), 2560x1080 High Quality 60 Fps on Wayland (Sway), my gpu is GTX 1650 (Turing) using Nouveau with GSP reclocking
18:34 fdobridge: <e​sdrastarsis> https://cdn.discordapp.com/attachments/1034184951790305330/1131292988912455890/20230719_15h30m35s_grim.png
18:34 fdobridge: <e​sdrastarsis> finally, nouveau gaming
18:36 fdobridge: <!​[NVK Whacker] Echo (she) 🇱🇹> Try 1080p max settings on SuperTuxKart
18:40 fdobridge: <!​[NVK Whacker] Echo (she) 🇱🇹> https://www.youtube.com/watch?v=paeaveMZms0
18:43 fdobridge: <e​sdrastarsis> 30 fps 🐸
18:43 fdobridge: <!​[NVK Whacker] Echo (she) 🇱🇹> A third of proprietary performance 🤔
18:44 fdobridge: <e​sdrastarsis> Codegen memes?
18:45 fdobridge: <!​[NVK Whacker] Echo (she) 🇱🇹> Faith has said that NAK generated more optimized instructions than codegen some time ago
19:40 fdobridge: <g​fxstrand> The nouveau GL driver is also doing some pretty serious nonsense
19:40 fdobridge: <g​fxstrand> NVK should be doing less nonsense in theory but it's still pretty stall-happy
19:40 fdobridge: <t​tabi1> @esdrastarsis Ben's latest code works on Turing now.
19:43 fdobridge: <!​[NVK Whacker] Echo (she) 🇱🇹> What nonsense?
19:45 fdobridge: <e​sdrastarsis> Yeah, I'm testing now (see my screenshot above), thanks for letting me know
19:45 fdobridge: <t​tabi1> Sweet!
19:48 fdobridge: <e​sdrastarsis> Was the nvkm_firmware_put(blob) in goto done the culprit of the double free?
19:50 fdobridge: <k​arolherbst🐧🦀> @ttabi1 while you are here, any idea what's going on here? https://gist.githubusercontent.com/karolherbst/3a6a06e87236f17a7212de10e3700283/raw/4ee983098c8e7413e52d73a9f9c953c8c2fd2d5d/gistfile1.txt
19:50 fdobridge: <k​arolherbst🐧🦀> this happens after running the VK CTS for a while on Ada
19:51 fdobridge: <t​tabi1> I didn't check the code to see what changed yet.
19:51 fdobridge: <t​tabi1> Hmmm could be anything.
19:51 fdobridge: <t​tabi1> You'd have to ask Ben.
19:52 fdobridge: <t​tabi1> GSP-RM is such a beast that I never just "know" what's going on, I have to debug it.
19:52 fdobridge: <k​arolherbst🐧🦀> fair enough
19:53 fdobridge: <k​arolherbst🐧🦀> kinda feels like GSP crashes or something, but ....
19:53 fdobridge: <ttabi1> GSP-RM crashes usually appear as a timeout sending RPCs
19:54 fdobridge: <k​arolherbst🐧🦀> mhh...
19:54 fdobridge: <t​tabi1> I have plans on adding a whole bunch of error handling/logging stuff once Ben's code is upstream.
20:00 fdobridge: <k​arolherbst🐧🦀> that would be very helpful!
20:12 fdobridge: <karolherbst🐧🦀> @gfxstrand so uhm.. ada doesn't like shaders with 0 gprs 😄
20:13 fdobridge: <k​arolherbst🐧🦀> I have to figure out what's going on there, but we might not want to set gprs to 0 and have it be 4 at the minimum or something...
20:28 fdobridge: <m​ohamexiety> I guess ampere behaves differently? this is interesting cuz it's mostly the same SM 😮
20:29 fdobridge: <a​irlied> @gfxstrand any big regressions stand out on uapi?
20:35 fdobridge: <k​arolherbst🐧🦀> no idea
20:43 fdobridge: <k​arolherbst🐧🦀> @gfxstrand what's the proper way of dumping shaders with nvk?
20:43 fdobridge: <k​arolherbst🐧🦀> but anyway.. the slm buffer is too small on ampere/ada
20:43 fdobridge: <k​arolherbst🐧🦀> now figuring out why that is
20:48 fdobridge: <airlied> Is the dump the same as for gl?
20:49 fdobridge: <k​arolherbst🐧🦀> mhh.. sooo... in one of those tests the per thread local memory is 0x420 and the global slm buffer is 0x4e60000, but that's an invalid combination
20:49 fdobridge: <k​arolherbst🐧🦀> it appears that a global slm buffer of "0x4e60000" can support up to 0x2c0 per thread local memory
20:51 fdobridge: <k​arolherbst🐧🦀> mp count is 38
20:52 fdobridge: <k​arolherbst🐧🦀> a bit too much of a difference to be a simple alignment problem...
20:54 fdobridge: <karolherbst🐧🦀> the calculated size for 0x2c0 per thread would be 0x3440000
20:54 fdobridge: <karolherbst🐧🦀> ignoring alignment, we multiply the per thread one by 32 * 64 * mp_count
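The sizing arithmetic being described can be sketched as follows; the helper name is hypothetical, and the 64 warps/SM value is the assumption under test here. It reproduces the numbers quoted above (0x420 per thread with 38 SMs gives the 0x4e60000 buffer, 0x2c0 gives 0x3440000):

```shell
# Sketch of the local-memory (SLM) buffer sizing formula described above,
# ignoring alignment: per-thread bytes * 32 threads/warp * warps/SM * SM count.
# 64 warps/SM is the assumption being questioned in this discussion.
slm_size() {
    local per_thread=$1 mp_count=$2 warps_per_mp=${3:-64}
    echo $(( per_thread * 32 * warps_per_mp * mp_count ))
}

printf '0x%x\n' "$(slm_size 0x420 38)"   # 0x4e60000, the allocation the HW rejected
printf '0x%x\n' "$(slm_size 0x2c0 38)"   # 0x3440000
```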
20:54 fdobridge: <k​arolherbst🐧🦀> let's see if the cuda hw support feature shows anything obvious we are missing since ampere
20:55 fdobridge: <k​arolherbst🐧🦀> ahh yeah...
20:57 fdobridge: <k​arolherbst🐧🦀> mhhh strange
20:58 fdobridge: <g​fxstrand> Not that I'm seeing. Doing a bit of refactoring of the code right now.
20:58 fdobridge: <g​fxstrand> Refactoring is the best form of review. 😁
21:02 fdobridge: <k​arolherbst🐧🦀> yeah...
21:02 fdobridge: <k​arolherbst🐧🦀> it's a factor of 1.5 indeed
21:02 fdobridge: <k​arolherbst🐧🦀> the heck
21:02 fdobridge: <k​arolherbst🐧🦀> a warp is still 32 threads, because uhm... 48 would be weird
21:02 fdobridge: <k​arolherbst🐧🦀> and we still only have 64 warps per mp
21:03 fdobridge: <k​arolherbst🐧🦀> maybe the mp count is wrong...
21:04 fdobridge: <k​arolherbst🐧🦀> but 38*1.5 would be 57.. mhh.. maybe it's more like 58 or something.. let's see how I can verify this
21:07 fdobridge: <k​arolherbst🐧🦀> does nvidia report how many mps a GPU has?
21:14 fdobridge: <m​ohamexiety> mp?
21:15 fdobridge: <m​ohamexiety> https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#features-and-technical-specifications think this is the most they reveal out of device limits
21:16 fdobridge: <m​ohamexiety> table 15
21:16 fdobridge: <m​henning> "The combined capacity of the L1 data cache and shared memory is 192 KB/SM in A100 versus 128 KB/SM in V100." <- a 1.5x difference (from https://images.nvidia.com/aem-dam/en-zz/Solutions/data-center/nvidia-ampere-architecture-whitepaper.pdf )
21:17 fdobridge: <k​arolherbst🐧🦀> local memory
21:17 fdobridge: <k​arolherbst🐧🦀> not shared
21:17 fdobridge: <k​arolherbst🐧🦀> local memory is a buffer located in VRAM
21:18 fdobridge: <m​ohamexiety> if mp = sm then 48 is the max number of warps per SM for consumer ampere / ada, and 64 is the max number for pro Ampere (A100)
21:21 fdobridge: <m​henning> oops, I guess I misinterpreted what slm stands for then
21:46 fdobridge: <k​arolherbst🐧🦀> OKAY
21:47 fdobridge: <k​arolherbst🐧🦀> it's a factor of 2 missing
21:47 fdobridge: <k​arolherbst🐧🦀> I don't have 64 warps per SM, I only have 48
21:47 fdobridge: <k​arolherbst🐧🦀> I asked Ben to expose that information via the device info IOCTL thing
21:47 fdobridge: <k​arolherbst🐧🦀> because atm we don't
21:48 fdobridge: <k​arolherbst🐧🦀> or is that the `gpc_count`? let me check...
21:49 fdobridge: <m​ohamexiety> yeah only pro ampere does 64 interestingly
21:55 fdobridge: <k​arolherbst🐧🦀> yeah.. so since ampere there are two SMs per TPC
22:00 fdobridge: <k​arolherbst🐧🦀> huh...
22:01 fdobridge: <k​arolherbst🐧🦀> yeah...
22:01 fdobridge: <k​arolherbst🐧🦀> there is a factor of 2 missing
22:10 fdobridge: <k​arolherbst🐧🦀> maybe the mmio thing is different there?
22:11 fdobridge: <k​arolherbst🐧🦀> but.. strange...
22:11 fdobridge: <k​arolherbst🐧🦀> https://cdn.discordapp.com/attachments/1034184951790305330/1131347669097402430/gp100_block_diagram-1.png
22:11 fdobridge: <k​arolherbst🐧🦀> ehh.. wrong chat 😄
22:17 fdobridge: <k​arolherbst🐧🦀> @gfxstrand mhhh.. do you see any local memory related bugs running the CTS?
22:17 fdobridge: <k​arolherbst🐧🦀> because... we sure are calculating that shit incorrectly
22:17 fdobridge: <k​arolherbst🐧🦀> sooo.. the nouveau ioctl gives us the _tpc_count_, not the _mp_count_
22:18 fdobridge: <k​arolherbst🐧🦀> what's the difference?
22:18 fdobridge: <k​arolherbst🐧🦀> on some gens, a TPC has one SM/MP
22:18 fdobridge: <k​arolherbst🐧🦀> on others a TPC has two SM/MPs
22:18 fdobridge: <k​arolherbst🐧🦀> odd part
22:18 fdobridge: <k​arolherbst🐧🦀> GP100 has 2 per TPC, GP102+ has 1, TU102+ has two again
22:18 fdobridge: <k​arolherbst🐧🦀> but Ben is also not sure if the reported values are all sane or not
22:25 fdobridge: <k​arolherbst🐧🦀> @airlied if you want to run the CTS on ampere again, you need to double the slm buffer size inside `nvk_slm_area_ensure` at `uint64_t size = bytes_per_mp * dev->pdev->dev->mp_count;`. Just stick a `* 2` in there and it should just work (tm)
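The fix being described amounts to deriving the SM/MP count from the TPC count with a per-generation multiplier. A sketch under the assumptions stated in this chat (helper names are hypothetical; the generation cutoffs are only what the log claims, and Ben wasn't sure the reported values are all sane):

```python
# TPC -> SM multiplier per the chat: GP100 has 2 SMs per TPC, GP102+ has 1,
# TU102 and later have 2 again. These cutoffs are assumptions from the log,
# not verified documentation.
def sms_per_tpc(chipset):
    """chipset is the NVxxx id, e.g. 0x130 for GP100, 0x162 for TU102."""
    if chipset == 0x130:   # GP100: 2 SMs per TPC
        return 2
    if chipset >= 0x160:   # TU102+ (Turing/Ampere/Ada): 2 SMs per TPC
        return 2
    return 1               # GP102+ per the chat (Volta not discussed here)

def slm_buffer_size(bytes_per_sm, tpc_count, chipset):
    # What nvk_slm_area_ensure effectively needs: the ioctl reports TPCs,
    # not SMs, so scale by the SMs-per-TPC factor instead of hardcoding * 2.
    return bytes_per_sm * tpc_count * sms_per_tpc(chipset)
```

On Ampere/Ada this yields the same `* 2` as the quick workaround above, while leaving Pascal-era chips alone.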
22:25 fdobridge: <k​arolherbst🐧🦀> I'm seeing faults, but that might be the shader doing dumb shit
22:25 fdobridge: <k​arolherbst🐧🦀> anyway.. I'll clean that mess up because it's slightly wrong 😄
22:35 fdobridge: <g​fxstrand> When I hooked stuff up for NAK, it looked a bit sketchy. I didn't dig in, though.
22:35 fdobridge: <g​fxstrand> @airlied One thing I think we're still missing from the new uAPI is a new submit ioctl which just takes an array of unlimited length of virtual addresses.
22:36 fdobridge: <g​fxstrand> Or it can be limited length and we can ioctl multiple times.
22:36 fdobridge: <a​irlied> huh the exec ioctl is the new submit
22:36 fdobridge: <g​fxstrand> It just makes syncobj wrangling easier if it's unlimited
22:36 fdobridge: <g​fxstrand> Hrm... I haven't found that in the MR yet
22:36 fdobridge: <a​irlied> it takes pushes which are vaddr/length
22:37 fdobridge: <k​arolherbst🐧🦀> mhh.. though the per thread value should be fine, it's really just that we have to rename `mp_count` to `tpc_count` and double it on some architectures. But there is also this constant 64 warps per `mp` which is less on a couple of GPUs... anyway, I'm kinda trying to figure out what needs to be fixed on what generation of GPU
22:37 fdobridge: <a​irlied> struct drm_nouveau_exec
22:37 fdobridge: <a​irlied> okay it's limited to 32-bits of exec ptrs
22:37 fdobridge: <g​fxstrand> Hrm... Maybe it's hidden in this queue commit
22:37 fdobridge: <k​arolherbst🐧🦀> maybe it all works out in some shaders which don't use a ton of TLS space due to the alignment thingies...
22:37 fdobridge: <g​fxstrand> Yeah, 32 bits should be enough. 😅
22:38 fdobridge: <a​irlied> but yeah exec takes 3 counts, 1 for sync waits, 1 for sync signals and one for pushes
22:39 fdobridge: <g​fxstrand> cool
22:46 fdobridge: <g​fxstrand> I'm going to push the first 4 patches in the MR If this current run goes okay
22:46 fdobridge: <g​fxstrand> It's almost 6:00 PM here so I think I'm probably done for the evening. I'll get to exec tomorrow.
22:46 fdobridge: <a​irlied> cool, I'll look over some cleanups when I get out my next meeting
22:46 fdobridge: <g​fxstrand> So consider the lock on the MR branch released if you want to do any bugfixing
22:47 fdobridge: <g​fxstrand> I've not been bothering to make fixup commits. I just `git commit --amend`
22:48 fdobridge: <a​irlied> I don't quite get your comment on binding over with sparse, but I'll try and figure out what you mean 🙂
22:50 fdobridge: <g​fxstrand> I mean that on `nvk_DestroyImage()` or `nvk_DestroyBuffer()`, we should just free the VMA range. Right now, we're doing a sparse bind over it.
22:51 fdobridge: <g​fxstrand> It still releases any bound BOs so it's not like we're going to start leaking memory but it leaves the memory range bound to whatever the sparse null page thingy looks like.
22:53 fdobridge: <a​irlied> we shouldn't be, it should be just unbind and free
22:53 fdobridge: <a​irlied> if it was a sparse mapping we unbind it
22:56 fdobridge: <a​irlied> sparse buffers get created, get a sparse mapping, and when one is destroyed, we destroy that mapping
23:13 fdobridge: <e​sdrastarsis> Wolfenstein: The New Order on low settings, nice
23:13 fdobridge: <e​sdrastarsis> https://cdn.discordapp.com/attachments/1034184951790305330/1131363198138859541/20230719_20h06m44s_grim.png
23:17 fdobridge: <g​fxstrand> Hrm... Maybe I misread
23:18 fdobridge: <g​fxstrand> @airlied I guess I don't get what the difference is between unbind and unbind with `VM_BIND_SPARSE`
23:22 fdobridge: <a​irlied> ah so if we convert the sparse bind to a normal bind and then unbind that, it's kinda pointless
23:24 fdobridge: <a​irlied> btw we have ran this on pascal in the past and the basics did seem to work
23:42 fdobridge: <g​fxstrand> I guess I don't get why there's a difference between a sparse and non-sparse unbind.
23:44 fdobridge: <g​fxstrand> I expected it to work like mmap where binds just overwrite what's there. A sparse bind just sets the soft fault bit on the given range. A non-sparse bind fills it with pages from the BO, and an unbind removes whatever's there and leaves it in a full fault state.
23:49 fdobridge: <a​irlied> actually it's likely I can just drop one of those paths in the userspace, I think it's just an after-effect from previous iterations
23:49 fdobridge: <a​irlied> we shouldn't be calling the normal unbind_vma in the sparse path
23:58 fdobridge: <g​fxstrand> Why not? I'm still confused as to what a sparse unbind is at all. 🤷🏻‍♀️
23:58 fdobridge: <g​fxstrand> What does that even mean? How is it different from a regular unbind?