14:09fdobridge: <karolherbst🐧🦀> sooo.. I think the instability is caused by having multiple channels per application and nouveau not locking that stuff properly, so it ends up with data races on the GPU object tree stuff
14:10fdobridge: <karolherbst🐧🦀> I can see that happening when allocating subchannels e.g.
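A minimal sketch of the kind of race being described here, assuming a per-client object tree shared by all of an application's channels; the names below (struct client, client_object_new(), etc.) are hypothetical illustration, not nouveau's actual code. The point is just that concurrent subchannel allocations touching the same shared structure need a lock:
```c
/*
 * Hypothetical sketch: two channels of the same client allocating
 * objects concurrently. Without obj_lock, the handle increment and
 * the list insert would race, roughly the failure mode described above.
 */
#include <pthread.h>
#include <stdlib.h>

struct client_object {
        unsigned int handle;
        struct client_object *next;   /* stand-in for the real tree links */
};

struct client {
        pthread_mutex_t obj_lock;     /* protects the object list below */
        struct client_object *objects;
        unsigned int next_handle;
};

/* Called from each channel when it allocates a subchannel object. */
static unsigned int client_object_new(struct client *cli)
{
        struct client_object *obj = calloc(1, sizeof(*obj));
        unsigned int handle;

        pthread_mutex_lock(&cli->obj_lock);
        handle = cli->next_handle++;     /* racy if done unlocked */
        obj->handle = handle;
        obj->next = cli->objects;        /* list insert must be atomic */
        cli->objects = obj;
        pthread_mutex_unlock(&cli->obj_lock);

        return handle;
}

static void *alloc_thread(void *arg)
{
        for (int i = 0; i < 1000; i++)
                client_object_new(arg);
        return NULL;
}

int main(void)
{
        struct client cli = { .next_handle = 1 };
        pthread_t a, b;

        pthread_mutex_init(&cli.obj_lock, NULL);
        pthread_create(&a, NULL, alloc_thread, &cli);
        pthread_create(&b, NULL, alloc_thread, &cli);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        pthread_mutex_destroy(&cli.obj_lock);
        return 0;
}
```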
17:46fdobridge: <binhani> I was trying to compile the repository to test something when I got this error
17:46fdobridge: <binhani> https://cdn.discordapp.com/attachments/1034184951790305330/1146501140905742386/image.png
17:47fdobridge: <binhani> from what I understand, `nvk_entrypoints.h` is auto-generated from `vk_entrypoints_gen.py`, which means it should already be there
17:47fdobridge: <binhani> what am I missing?
18:02fdobridge: <![NVK Whacker] Echo (she) 🇱🇹> I think so
18:13anholt: binhani: generally, that's caused by that library's meson.build forgetting to use the generated header's idep, so you tried to build that code before the auto-generation completed.
18:21dakr: airlied: btw. while moving over to DRM_SCHED_POLICY_SINGLE_ENTITY I noticed a race between nouveau_channel_idle() and dma-fence callbacks from jobs on this channel.
18:22dakr: This seems to be due to nouveau_fence_wait() being called with lazy == false, which polls, while the dma-fence path waits for the interrupt.
18:24dakr: So, it can happen that a job's dma-fence with a smaller seqno appears to be signaled after nouveau_channel_idle() has already returned.
18:26dakr: Not a huge problem, but probably worth keeping in mind.
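A rough sketch of the ordering dakr describes, under the assumption that the non-lazy wait polls the hardware sequence number directly while the per-job dma-fences are only marked signalled from the interrupt path; everything here (channel_idle_poll(), channel_irq_handler(), hw_seqno) is hypothetical illustration, not the actual nouveau code:
```c
/*
 * Sketch of the race: the idle path spins on the HW seqno and can return
 * as soon as the GPU bumps it, while the per-job fences only become
 * signalled once the interrupt handler has run. A fence with a smaller
 * seqno can therefore still read as unsignalled after the channel already
 * looks idle.
 */
#include <stdbool.h>
#include <stdatomic.h>

struct chan_fence {
        unsigned int seqno;
        atomic_bool signalled;    /* stand-in for dma_fence_is_signaled() */
};

static atomic_uint hw_seqno;      /* advanced by the GPU as jobs complete */

/* Polling wait, roughly what the lazy == false path amounts to. */
static void channel_idle_poll(unsigned int last_emitted)
{
        while (atomic_load(&hw_seqno) < last_emitted)
                ;   /* busy-wait on the seqno; no interrupt involved */
}

/* Interrupt path: only here do per-job fences become signalled. */
static void channel_irq_handler(struct chan_fence *fences, int count)
{
        unsigned int done = atomic_load(&hw_seqno);

        for (int i = 0; i < count; i++)
                if (fences[i].seqno <= done)
                        atomic_store(&fences[i].signalled, true);
}
```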
23:46fdobridge: <gfxstrand> We're getting there... `Pass: 401948, Fail: 928, Crash: 445, Skip: 1728657, Timeout: 2, Flake: 390, Duration: 1:39:59`
23:46fdobridge: <gfxstrand> NAK CS-only run with my new spiller ^^