00:56 fdobridge: <a​irlied> [28376/28376] skip: 1867, pass: 25756, warn: 11, fail: 703, crash: 38
01:47 fdobridge: <a​irlied> https://paste.centos.org/view/ffeaf342 GL CTS fails with zink
01:47 fdobridge: <a​irlied> @zmike ^
01:58 airlied: dakr: please merge Faith's patch to drm-misc-next along with the dmem fix
02:13 fdobridge: <k​arolherbst🐧🦀> @gfxstrand do you want to review the OOB fixes for the mme simulator or should I just merge it? (it's trivial stuff) https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/24544
02:13 fdobridge: <k​arolherbst🐧🦀> kinda want to unblock CI before people complain too much
02:15 fdobridge: <g​fxstrand> I just assigned to Marge
02:16 fdobridge: <k​arolherbst🐧🦀> cool, thanks
02:17 fdobridge: <g​fxstrand> Thanks for fixing that!
02:17 fdobridge: <g​fxstrand> Both of those bugs would have never happened with Rust. 😛
02:17 fdobridge: <g​fxstrand> 🦀
02:18 fdobridge: <k​arolherbst🐧🦀> I suspect the `/ 3` one would have, _though_ the runtime would have complained about it 😄
02:30 fdobridge: <d​akr91> @airlied Sure, mind reviewing the rest of the series? Then I can merge all of them. https://lore.kernel.org/dri-devel/20230807162859.68905-1-dakr@redhat.com/T/#u
02:37 fdobridge: <a​irlied> on the list now
02:52 fdobridge: <g​fxstrand> Pulling into my branch...
03:05 fdobridge: <d​akr91> @airlied merged.
03:08 fdobridge: <g​fxstrand> \o/
03:10 fdobridge: <g​fxstrand> Ugh... I kinda hate this sync test...
03:10 fdobridge: <g​fxstrand> It's possible cross-device sync is busted. It's also possible we just need more cache maintenance and/or WFI somewhere.
03:11 fdobridge: <a​irlied> yeah, proving what the race is is definitely tricky
03:12 fdobridge: <g​fxstrand> I still get fails even when `CmdPipelineBarrier()` and `EndCommandBuffer()` do a full WFI
03:13 fdobridge: <a​irlied> I asked @dakr to take a look as well, I've got a bit of brainfog going on
03:13 fdobridge: <g​fxstrand> 👍🏻
03:13 fdobridge: <g​fxstrand> I'm starting to think the kernel is fine
03:14 fdobridge: <g​fxstrand> The fails seem specific to certain kinds of reads and writes. If kernel sync were falling over, I'd think it would be more random.
03:15 fdobridge: <g​fxstrand> Unless those are just the slow kinds of writes? 🤷🏻‍♀️
03:15 fdobridge: <g​fxstrand> But on the other hand, other stuff seems to be pretty reliable in NVK in general so it feels weird that just these tests are failing
03:15 fdobridge: <g​fxstrand> Though I think there are other synchronization tests that have been a bit flaky
03:19 fdobridge: <a​irlied> I mainly wondered if it was around the syncobj in/out, but that code is largely generic
03:20 fdobridge: <g​fxstrand> Yeah, I looked through that today. I didn't see anything obvious besides the one thing I sent a patch for.
03:22 fdobridge: <g​fxstrand> Hrm... I'm actually not seeing those tests in my fails anymore
03:23 fdobridge: <g​fxstrand> Anyway, I'll let @dakr91 look at it while I sleep. 😅
03:23 fdobridge: <a​irlied> I had them running fine yesterday, then not running fine
05:32 fdobridge: <!​[NVK Whacker] Echo (she) 🇱🇹> Is it possible to combine NOUVEAU_WS_BO_LOCAL and NOUVEAU_WS_BO_MAP flags? :triangle_nvk:
05:42 fdobridge: <a​irlied> no they don't mean the same thing
05:43 fdobridge: <a​irlied> eventually they might make sense together
06:39 fdobridge: <a​irlied> [28376/28376] skip: 1867, pass: 25771, warn: 11, fail: 701, crash: 25
07:08 fdobridge: <!​[NVK Whacker] Echo (she) 🇱🇹> `[288538.135761] nouveau 0000:01:00.0: fifo: fault 01 [VIRT_WRITE] at 0000007fffb3f000 engine 0f [ce0] client 20 [HUB/HSCE0] reason 02 [PTE] on channel 3 [00ff9d0000 gamescope[469499]]` :frog_gears:
07:08 fdobridge: <!​[NVK Whacker] Echo (she) 🇱🇹> What if I want host-visible/coherent VRAM?
07:17 fdobridge: <a​irlied> currently doesn't exist afaik
07:18 fdobridge: <a​irlied> but we do want host visible vram at some point
07:18 fdobridge: <a​irlied> though I realised I don't think the kernel exposes the visible vram size
07:33 fdobridge: <!​[NVK Whacker] Echo (she) 🇱🇹> 🐸
07:33 fdobridge: <!​[NVK Whacker] Echo (she) 🇱🇹> https://cdn.discordapp.com/attachments/1034184951790305330/1138374380271976448/message.diff
15:11 fdobridge: <g​fxstrand> Ugh... That's annoyingly possible. IDK that it's right but I could believe we have a bug like that.
15:13 fdobridge: <g​fxstrand> Have you seen `nvk_heap.c`? It definitely works to set those two flags. IDK that I want to start handing that off to clients, though; we've been using it internally for a while.
15:13 fdobridge: <g​fxstrand> There are massive restrictions pre-Turing
15:19 fdobridge: <!​[NVK Whacker] Echo (she) 🇱🇹> What restrictions? Can you only map a limited amount of VRAM?
15:26 fdobridge: <g​fxstrand> You can only write to it
15:26 fdobridge: <g​fxstrand> It will `SIGBUS` if you ever read
15:26 fdobridge: <g​fxstrand> IDK if that's a hardware restriction or a kernel restriction. I just know it's a thing.
15:31 fdobridge: <m​ohamexiety> yeah, for the prop driver [at least on windows] you only get host visible VRAM on Maxwell and up
15:32 fdobridge: <m​ohamexiety> also it came in a driver update; wasn't like that originally
15:44 fdobridge: <g​fxstrand> We currently only allow it on Volta+
15:45 fdobridge: <g​fxstrand> wait, no...
15:45 fdobridge: <g​fxstrand> Yeah, I can't see in the code where the write-only restriction gets removed
16:03 fdobridge: <!​[NVK Whacker] Echo (she) 🇱🇹> Allow what?
16:33 fdobridge: <g​fxstrand> I was wrong about the Volta+ bit.
16:33 fdobridge: <g​fxstrand> That was unrelated
16:34 fdobridge: <g​fxstrand> Point is that we can mmap things with `BO_LOCAL`, we just can't read from them until some HW generation.
16:59 fdobridge: <!​[NVK Whacker] Echo (she) 🇱🇹> I set HOST_VISIBLE and COHERENT bits for the VRAM heap and gamescope exploded when I tried it
17:16 fdobridge: <a​irlied> Ah there is a hw cutoff, I wondered why my Turing didn't explode when I added a 256 vram mappable heap
17:21 fdobridge: <k​arolherbst🐧🦀> I'm still confused on why I was running into that issue in the early days...
17:21 fdobridge: <k​arolherbst🐧🦀> maybe because I started on a laptop and things are different there again?
17:22 fdobridge: <k​arolherbst🐧🦀> (pages are made available to read once the GPU gets runtime suspended)
17:26 fdobridge: <!​[NVK Whacker] Echo (she) 🇱🇹> So gamescope just explodes and zink has missing graphics with SuperTuxKart :triangle_nvk:
17:28 fdobridge: <g​fxstrand> Just run SuperTuxKart in Vulkan mode. 🙃
17:31 fdobridge: <!​[NVK Whacker] Echo (she) 🇱🇹> I love shader-less graphics 🧓
18:22 dakr: For some reason discord tells me "Your account has been disabled." when I try to log in. Is this a known issue?
18:23 dakr: @gfxstrand, @airlied: I just had a look at dEQP-VK.synchronization.signal_order.shared_binary_semaphore.write_ssbo_vertex_read_copy_buffer.buffer_16384_opaque_fd failing
18:23 karolherbst: dakr: ehh yeah... you aren't the first one to get blocked like this...
18:23 karolherbst: airlied and sima also got blocked once in the past
18:24 sima: well a few times, if this is about discord
18:24 dakr: First of all, all the VM_BINDs are synchronous, hence I don't think it's a problem with in/out syncs.
18:24 sima: but it's been fine since the last blocking half a year ago
18:24 karolherbst: anyway.. I'd like to have a proper IRC replacement at some point for everybody, but a hosted solution with no influence in such decisions ain't it :D
18:24 karolherbst: they are just sometimes really trigger happy
18:26 dakr: I added a couple of trace_printk() to trace what's going on and it seems that userspace already starts unbinding mappings before the out-fence of the faulting EXEC job is even signaled.
18:27 dakr: This is the relevant (filtered) trace: https://paste.centos.org/view/2fe63865
18:28 dakr: And the full trace: https://paste.centos.org/view/1308ed6d
18:29 dakr: nouveau_exec_job_cb() is a callback registered via dma_fence_add_callback().
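For context, a minimal sketch of the kind of instrumentation dakr describes: registering a callback on the EXEC job's out-fence so trace_printk() records when it signals. Only dma_fence_add_callback(), trace_printk(), and the callback signature are real kernel APIs; the struct, field names, and hook point here are assumptions for illustration, not nouveau's actual code.

```c
#include <linux/dma-fence.h>
#include <linux/kernel.h>

struct exec_trace_cb {
	struct dma_fence_cb cb;
	u64 job_id;			/* whatever identifies the EXEC job */
};

/* Fires once when the job's out-fence signals. */
static void nouveau_exec_job_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
{
	struct exec_trace_cb *tcb = container_of(cb, struct exec_trace_cb, cb);

	trace_printk("EXEC job %llu: out-fence signaled (ctx=%llu seq=%llu)\n",
		     tcb->job_id, fence->context, fence->seqno);
}

static void trace_exec_out_fence(struct dma_fence *fence, struct exec_trace_cb *tcb)
{
	/* Returns -ENOENT (and never calls back) if the fence already signaled. */
	if (dma_fence_add_callback(fence, &tcb->cb, nouveau_exec_job_cb))
		trace_printk("EXEC job %llu: out-fence already signaled\n",
			     tcb->job_id);
}
```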
18:37 dakr: karolherbst, sima, I see...do you know what's the best thing to do to get unblocked?
19:00 fdobridge: <g​fxstrand> Yes, that's true of the fault. You need a CTS patch
19:00 fdobridge: <a​irlied> I just did the file-a-support-ticket dance one time. Another time I unfortunately let the account time out and reapplied
19:00 fdobridge: <g​fxstrand> But that's only possible because something isn't synchronizing properly
19:02 sima: dakr, you can file a ticket, worked the first time, but 2nd time I just had to wait until they expired the blocked account from their db entirely before I could restart
19:02 sima: iirc 1 month or so
19:04 gfxstrand: dakr: You're seeing those VM_BINDs because there's a CTS bug which doesn't idle the GPU quite early enough to ensure everything is done before the clean up. There are some early returns that get in the way.
19:04 gfxstrand: dakr: However, there is enough of a dependency chain in the work that they submit that, if all the dependencies are properly getting satisfied, both devices should be idle anyway.
19:05 gfxstrand: Like, they submit on A and then submit something on B that depends on A and then wait on B. If that worked properly, A would be idle when they go to clean up.
19:05 gfxstrand: The fact that it's not implies a sync problem.
19:30 gfxstrand: Given that vkWaitForFences seems to work, I suspect it's on the wait side, not the signal side in kernel space.
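To make that chain concrete, here is an abbreviated sketch of the submission pattern described above, not the literal CTS code: device A signals an exported binary semaphore, device B waits on the imported copy and signals a fence, and the host waits on that fence. All handles are assumed to already be created and imported; error checking is omitted.

```c
#include <stdint.h>
#include <vulkan/vulkan.h>

/* Sketch only: cmdA does the SSBO write on device A, cmdB does the buffer copy
 * on device B, and semB is semA imported via an opaque fd.  If every piece of
 * sync here works, device A must be idle by the time the test cleans up. */
static void submit_cross_device_chain(VkQueue queueA, VkCommandBuffer cmdA,
                                      VkSemaphore semA, /* exported */
                                      VkDevice deviceB, VkQueue queueB,
                                      VkCommandBuffer cmdB,
                                      VkSemaphore semB, /* semA, imported */
                                      VkFence fenceB)
{
    const VkSubmitInfo submitA = {
        .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .commandBufferCount = 1,
        .pCommandBuffers = &cmdA,
        .signalSemaphoreCount = 1,
        .pSignalSemaphores = &semA,
    };
    vkQueueSubmit(queueA, 1, &submitA, VK_NULL_HANDLE);

    const VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_ALL_COMMANDS_BIT;
    const VkSubmitInfo submitB = {
        .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .waitSemaphoreCount = 1,
        .pWaitSemaphores = &semB,
        .pWaitDstStageMask = &waitStage,
        .commandBufferCount = 1,
        .pCommandBuffers = &cmdB,
    };
    vkQueueSubmit(queueB, 1, &submitB, fenceB);

    /* Host-side wait: should not return until B's work (and therefore A's) is done. */
    vkWaitForFences(deviceB, 1, &fenceB, VK_TRUE, UINT64_MAX);
}
```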
19:43 fdobridge: <a​irlied> @gfxstrand the cond render MR probably is ready for a run btw, 24520
19:43 fdobridge: <g​fxstrand> Okay. I've got a NAK MR running right now. It'll be going another 1.5h if deqp-runner is to be believed. I'll run conditional render after that.
19:43 fdobridge: <g​fxstrand> My kernel is suddenly rock-solid now that I've disabled iwlwifi. 😅
20:38 dakr: @gfxstrand: All VM_BINDs are synchronous, hence they're not relevant dependency-wise (except that their corresponding ioctl() must return before a subsequent dependent EXEC job is submitted). There is exactly one EXEC job with an out-fence that is used as an in-fence on another EXEC job. However, neither of them is signaled yet when the UNBIND that breaks things comes in.
20:40 dakr: Even if the dependency handling between the jobs were broken, it wouldn't be valid to UNBIND things as long as the corresponding EXEC job isn't finished, right?
20:50 gfxstrand: Hrm...
20:51 gfxstrand: dakr: It should definitely be waiting on the 2nd on the CPU
20:51 gfxstrand: I see the vkWaitForFences() call
20:52 gfxstrand: Well, I see it here in the CTS source code, anyway.
20:52 gfxstrand: I've not verified it in GDB.
21:02 dakr: the question is, if vkWaitForFences() is called, does it return before the EXEC job's fence is actually signaled...
21:02 dakr: which seems to be likely.
21:05 fdobridge: <a​irlied> It shouldn't though if it's waiting for the right fence
21:06 fdobridge: <a​irlied> Unless there is some crazy race where the fence gets signalled on the FIFO before the shader completes
21:06 fdobridge: <a​irlied> Which wouldn't shock me but it would be pretty horrible
21:07 dakr: That's not the case. I have a callback registered on the fence and in the failing case, the fence wasn't signaled before the UNBIND and the subsequent fault happens.
21:12 fdobridge: <a​irlied> okay, should probably work out if the fence wait is doing something weird then
21:13 dakr: anyway, can someone point me to the CTS patch please? I'd like to run another trace with the fix.
21:17 fdobridge: <a​irlied> https://paste.centos.org/view/8a5a51c5 but I think Faith said it shouldn't be necessary for the test to work
21:40 dakr: ok, the dependent job indeed does not wait for its in-fence..
22:03 fdobridge: <g​fxstrand> Uh oh...
22:15 gfxstrand: dakr: Looks like you don't implement prepare_job()
22:16 gfxstrand: Hrm... no.
22:21 gfxstrand: dakr: I'm crawling through all the in-fence code and I've yet to find any bugs besides the one from yesterday. :-/
22:22 dakr: gfxstrand: I just realized that drm_sched_entity_add_dependency_cb() returns false when the job in question is popped from the queue in drm_sched_entity_pop_job() and checked for dependencies.
22:23 dakr: This makes the scheduler schedule the job right away without waiting for anything.
22:24 dakr: This happens either if the job is from the same entity, or if the scheduler fails to register a fence callback.
22:28 gfxstrand: Oh, are we using the same fence context for multiple contexts?
22:28 dakr: each sched_entity and hence each channel should have a separate one
22:29 gfxstrand: Okay, yeah, that makes sense
22:30 gfxstrand: That context check looks suspect
22:31 dakr: That one?
22:31 dakr: if (fence->context == entity->fence_context ||
22:31 dakr: fence->context == entity->fence_context + 1)
22:31 gfxstrand: Well, it's the dma_fence_put() I don't understand
22:31 gfxstrand: But I guess the function is doing dma_fence_put() in all the false paths
22:31 gfxstrand: Oh, Ok. I think I see what that function is doing. Odd, but okay.
22:32 gfxstrand: Though checking context == context + 1 seems weird
22:32 dakr: I agree.
22:32 dakr: I don't get that either right away.
22:34 dakr: anyway we're running into the "fence is from the same scheduler" path of drm_sched_entity_add_dependency_cb().
22:34 gfxstrand: kk
22:34 gfxstrand: I suspect that +1
22:35 gfxstrand: Hrm... No, that's documented.
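The documented reason for the +1: each scheduler entity reserves two dma-fence contexts, one for its scheduled fences and one for its finished fences. Paraphrased from memory of drm_sched_entity_init()/drm_sched_fence_init(), so treat the exact lines as approximate rather than a verbatim quote:

```c
/* In drm_sched_entity_init() (approximate): reserve two fence contexts. */
entity->fence_context = dma_fence_context_alloc(2);

/* In drm_sched_fence_init() (approximate): context is used for the scheduled
 * fence and context + 1 for the finished fence, which is why the check above
 * matches either value when asking "is this one of my own fences?". */
dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
	       &fence->lock, entity->fence_context, seq);
dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
	       &fence->lock, entity->fence_context + 1, seq);
```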
22:35 dakr: Which is odd as well: on one hand because the function returns false, which means we failed to register the callback, and on the other hand because it says we only need to wait for the job to be scheduled rather than completed...
22:36 gfxstrand: So the theory behind that one is that it's just ignoring any fences that come from that same entity because the entity is a FIFO.
22:37 gfxstrand: So any fence from the same entity is either a fence in the future, in which case it'll deadlock (and that should be impossible), or a fence triggered by something it's already scheduled. In either case, throwing it away is the safe thing to do.
22:37 dakr: scheduled means that job A depending on job B will be executed even if B is still running.
22:38 dakr: yeah, that one I get, I'm more confused by the latter one.
22:38 gfxstrand: It assumes that jobs run in-order.
22:38 gfxstrand: I'm confused by why the only in-fence we have is from the same scheduler
22:38 dakr: there is only one scheduler.
22:39 dakr: for the whole driver instance.
22:39 gfxstrand: Or same sched_entity, rather
22:40 dakr: it's not from the same sched_entity, but from the same scheduler.
22:40 gfxstrand: But the fence_context is per-entity
22:40 dakr: right.
22:42 dakr: ok, so it just returns false in this path because by that time the job is already scheduled, which is fine.
22:43 dakr: So what we have is:
22:43 dakr: We have entity A and entity B, a job X is submitted through entity A that depends on job Y submitted through entity B.
22:43 dakr: The scheduler recognizes that B must be served first and executes job Y, which means calling the run_job() callback and registering a callback on the fence returned by run_job().
22:44 dakr: Directly afterwards it does the same for job X.
22:44 dakr: But Y can still be running currently.
22:44 gfxstrand: Sure, and that's fine. All it's done is register the callback. It shouldn't call run_job() until that callback signals
22:45 gfxstrand: Unless we're calling the callback early for some reason
22:46 dakr: But it does call run_job() in this case, *before* the callback signals; that's the whole point of "Fence is from the same scheduler, only need to wait for it to be scheduled", I think.
22:47 dakr: scheduled means "run_job() has been called", not "fence of this job signaled completion".
22:48 dakr: drm_sched_main() calls drm_sched_fence_scheduled() (which signals the scheduled fence) right after calling the run_job() callback.
22:49 gfxstrand: That should be okay. The fence_context check is only for fences inside the same entity. X and Y are on different entities.
22:49 dakr: I'm not talking about the fence_context check.
22:50 dakr: We're running into this path: https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/scheduler/sched_entity.c#L366
22:51 gfxstrand: Oh....
22:51 dakr: There the callback is registered on "s_fence->scheduled", not "s_fence->finished". The scheduled fence is signaled right away after the run_job() callback...
22:51 dakr: so, yeah, it doesn't keep ordering. the jobs can just race
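For reference, the path behind the elixir link above, lightly paraphrased (so treat it as approximate rather than a verbatim quote): for a drm_sched_fence from the same scheduler, the entity only waits on the *scheduled* fence, which drm_sched_main() signals right after calling run_job(), so the dependency is never waited on to completion.

```c
s_fence = to_drm_sched_fence(fence);
if (s_fence && s_fence->sched == sched &&
    !test_bit(DRM_SCHED_FENCE_DONT_PIPELINE, &fence->flags)) {
	/*
	 * Fence is from the same scheduler, only need to wait for
	 * it to be scheduled.
	 */
	fence = dma_fence_get(&s_fence->scheduled);
	dma_fence_put(entity->dependency);
	entity->dependency = fence;
	if (!dma_fence_add_callback(fence, &entity->cb,
				    drm_sched_entity_clear_dep))
		return true;

	/* Ignore it when it is already scheduled. */
	dma_fence_put(fence);
}
```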
22:51 gfxstrand: Maybe we need to be setting DRM_SCHED_FENCE_DONT_PIPELINE?
22:52 gfxstrand: It looks like AMD does in at least some cases
22:52 dakr: This would surely make the job pass, I'm convinced.
22:53 dakr: I just don't understand in which case we wouldn't want the behavior with the flag set.
22:53 gfxstrand: Oh, I think I see what's going on....
22:54 gfxstrand: We're running into the out-of-order assumptions of the scheduler.
22:54 gfxstrand: in-order assumptions, rather.
22:54 dakr: why would the scheduler assume jobs from different fifos complete in-order?
22:55 gfxstrand: So, drm/scheduler works on an oldschool ringbuffer model where each scheduler is also a FIFO. The idea is that you have N hardware rings and one scheduler per ring. The job of the scheduler is simply to make sure run_job() gets called in the right order, assuming that run_job() feeds a FIFO.
22:55 gfxstrand: When the scheduler is feeding a firmware scheduler with N rings, this model breaks.
22:56 gfxstrand: My solution with the Xe driver was to have one scheduler per context/channel/engine (pick a name)
22:56 dakr: hm...i see.
22:56 gfxstrand: There has also been talk of making the scheduler understand out-of-order completion
22:56 dakr: which also means a kthread per channel
22:56 gfxstrand: Yes, that's why no one likes that plan. :-)
22:57 dakr: we could abuse DRM_SCHED_FENCE_DONT_PIPELINE meanwhile though... :D
22:57 gfxstrand: Yeah, I think we probably can.
22:58 gfxstrand: Because the fence_context check is first, it will still ignore fences from the same entity. The only other time we should have a drm_sched_fence from the same scheduler is cases where it's useful.
22:59 dakr: right.
22:59 gfxstrand: Seems like a bit of an abuse of that flag but I'm happy to carry that patch until we have a better plan. :-D
23:01 dakr: Actually, the more I think about it, the less it feels like abusing it. It seems like exactly what we want to deal with this kind of hardware.
23:02 gfxstrand: I'll leave that up to the scheduler maintainers. I gave my plan for how to deal with it and they didn't like it. :-P
23:02 dakr: but maybe there should be a scheduler flag for that rather than a fence flag.
23:02 gfxstrand: Anyway, I've typed a hack which I'm about to try.
23:04 gfxstrand: Yeah, this really feels like the one change that should sort out out-of-order return.
23:06 gfxstrand: I just put a set_bit in nouveau_job_fence_attach
23:07 dakr: I put it in nouveau_job_add_deps() for every fence that comes in as an in-fence.
23:07 dakr: I mean, as a hack as well.
23:07 dakr: I think there should be a scheduler flag as mentioned.
23:07 gfxstrand: That was the other option. 😂
23:08 gfxstrand: Let's see if it works here first.
23:08 gfxstrand: I'm about ready to reboot
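A minimal sketch of the workaround being tested, assuming it gets called on the job's attached out-fence (gfxstrand's nouveau_job_fence_attach() variant) or on each dependency (dakr's nouveau_job_add_deps() variant); the helper and its placement are illustrative, only the flag and the drm_sched helpers are the real kernel API.

```c
#include <drm/gpu_scheduler.h>
#include <linux/bitops.h>

/* Illustrative helper, not the actual driver code: mark a fence so that
 * drm_sched_entity_add_dependency_cb() skips its "same scheduler, only wait
 * for it to be scheduled" shortcut and waits for real completion instead. */
static void nouveau_job_mark_dont_pipeline(struct dma_fence *fence)
{
	/* Only drm_sched fences are affected by the flag; anything else is
	 * already waited on via a normal fence callback. */
	if (to_drm_sched_fence(fence))
		set_bit(DRM_SCHED_FENCE_DONT_PIPELINE, &fence->flags);
}
```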
23:09 dakr: It passes.
23:09 gfxstrand: \o/
23:09 gfxstrand: You're faster at installing kernels than me, clearly
23:10 dakr: Something that never can be fast enough. :D
23:11 gfxstrand: I'm the sort of idiot who builds with a full Fedora config so it takes a while.
23:11 dakr: ah, yeah, I'm using localmodconfig
23:12 dakr: but that can be annoying as well.
23:12 gfxstrand: I just don't like plugging in a USB device and not having it work because it wasn't plugged in at the time I did localmodconfig
23:12 gfxstrand: Test run totals: Passed: 189/189 (100.0%) Failed: 0/189 (0.0%) Not supported: 0/189 (0.0%) Warnings: 0/189 (0.0%) Waived: 0/189 (0.0%)
23:12 gfxstrand: Great success!
23:12 gfxstrand: I'm going to do a full CTS run now to see how things are holding up but I think we found the bug and a solution or two
23:13 dakr: Ok, the question is, do you want me to submit the workaround first to make sure we're not running into other crazy bugs?
23:13 gfxstrand: I'm happy to ack the patch but you should probably CC Matt Brost and Christian König in case they have better solutions.
23:13 dakr: Because, as mentioned, I'd like to have a scheduler flag for that; it shouldn't depend on the fence telling the scheduler how it should schedule things for a given piece of HW.
23:14 gfxstrand: I have a feeling long-term this is going to get lost in the giant bikeshed that is kthreads but we have a solution for now.
23:14 dakr: Not sure how long the mailing list discussion will take though.
23:15 gfxstrand: I think the long-term solution is actually to split the kthread from the scheduler and have one kthread serve multiple schedulers and then have one scheduler per channel.
23:15 gfxstrand: At least, there was some discussion trending that way at one point.
23:15 gfxstrand: There was also discussion going in the other direction of just having an "out-of-order" flag on the scheduler or something.
23:16 gfxstrand: For now, I think we can probably land a DONT_PIPELINE flag but long-term IDK where it will land.
23:16 dakr: What's better about having multiple schedulers being backed by the same kthread rather than just waiting for jobs to actually complete for the same scheduler?
23:16 gfxstrand: I walked away from the fight in the paint store a long time ago 😂
23:17 gfxstrand: IDK that one is much better than the other. It's mostly a matter of what mental model we want the scheduler to have. Right now the mental model is that 1 scheduler == 1 hw ring.
23:17 gfxstrand: If that's still true, then you need a scheduler per channel.
23:18 dakr: I see. That makes sense; technically I don't see a difference, other than that having multiple schedulers with 1 kthread sounds way more complicated than having a simple flag.
23:18 gfxstrand: Or we can change it and say a scheduler is just a thing that calls run_job() a bunch at which point we maybe have to drop a few assumptions but otherwise it's mostly a matter of setting a bit.
23:19 dakr: yeah, I really feel the latter is way less error prone.
23:19 dakr: I will try my luck, or let them teach me why this is a bad idea.
23:20 dakr: But I guess we need a quick fix for that, right?
23:20 gfxstrand: The annoying thing about that is that you had better not have multiple entities feeding the same HW ring. Otherwise, all your ordering guarantees go up in smoke.
23:21 gfxstrand: But, yeah, having a mode on the scheduler seems like the easiest thing.
23:21 gfxstrand: As long as we guarantee that there are only really two modes of operation:
23:21 gfxstrand: 1. One scheduler per HW ring and multiple entities feed into the scheduler and it sorts out the ordering.
23:22 gfxstrand: 2. One entity per HW ring and the scheduler just exists to sort out dma_fence deps and feed all the rings.
23:22 dakr: that sounds sane to me.
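As a thought experiment, a hypothetical sketch of the "scheduler flag" idea raised above; every name here is invented for illustration and none of this is an existing kernel API.

```c
#include <stdbool.h>

struct sketch_gpu_scheduler {
	/* Mode 1 (false): one scheduler per HW ring, entities feed into it and
	 * the ring completes jobs in order, so waiting for a same-scheduler
	 * dependency to be *scheduled* is enough.
	 * Mode 2 (true): one entity per HW ring behind a FW scheduler, jobs can
	 * complete out of order, so dependencies must wait to be *finished*. */
	bool out_of_order;
};

/* Hypothetical dependency decision the entity code could make. */
static bool sketch_dep_must_wait_for_finished(const struct sketch_gpu_scheduler *s)
{
	return s->out_of_order;
}
```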
23:24 gfxstrand: Anyway, I'm going to kick off a CTS run and go make some supper.
23:24 gfxstrand: Sorry, @airlied, conditional rendering will have to wait. :-/
23:24 gfxstrand: I'll try to kick that off yet tonight
23:28 fdobridge: <a​irlied> Okay, I just caught up on the last hour of scheduler discussion. @sima had some plan for splitting sched into a generic frontend dealing with deps and a backend for FW schedulers, but I don't want to misquote them!