00:39 airlied: okay lavapipe win32 has something on the screen, just need to fix the stride
00:41 zmike: !
00:42 zmike: screenshot on reddit or it didn't happen
00:42 imirkin: i thought screenshots on reddit were only of things that didn't happen...
00:49 anholt: airlied: nice
00:51 jenatali: airlied: Sweet!
00:52 anholt: airlied: reassigned your lvp asan fixes to marge, once those go through I'll see what the rebased CI run looks like.
00:52 anholt: really hoping we'll be able to run on zink soon
00:52 anholt: *turn on
00:55 airlied: yay the vkcube is spinning
00:55 airlied: now to clean up the mess
01:00 zmike: airlied: are you going to put this on ci?
01:01 airlied: zmike: might leave that to daniels :-P
01:02 zmike: well I didn't mean "you" literally, we're a community
01:02 zmike: if that's in the cards though, it'd be cool to see zink put in there too since we seem to have trouble keeping the windows build operational
01:05 jenatali: Seems worth adding to the Windows CI, at least from a build perspective
01:06 jenatali: Not sure how much effort it's worth writing tests for it though
01:07 zmike: should just be able to run the same tests?
01:07 zmike:should spin up a windows vm sometime for testing
01:08 jenatali: Oh, piglit on zink on lavapipe? Sure
01:13 airlied: jenatali: we also run cts on lvp
01:13 jenatali: airlied: But not on Windows
01:13 jenatali: Meaning the CTS isn't currently built for Windows
01:14 airlied: yeah that would be the pain alright :-P
01:14 jenatali: Yyyyyyep, can confirm :P
01:45 airlied: zmike: not quite reddit, but I tweeted it :-P
01:45 zmike:rushes to the twittersphere
01:46 jenatali:counts down seconds til Phoronix article
01:48 jekstrand: Uh, oh... What'd you do?
01:51 jenatali: jekstrand: Lavapipe on Windows
01:57 airlied: jekstrand79: just finished the lvp win32 port
01:57 jekstrand79: Oh, neat
03:09 airlied: weird, all cmd buffer allocs for sascha demos are hitting the loader icd magic assert, wonder what I've done wrong there
04:39 airlied: jekstrand: you ever heard of cmd buffer loader magic corruption from the loader?
04:40 airlied: had to do https://gitlab.freedesktop.org/airlied/mesa/-/commit/a42a942289e7c4a893faf96e8e8570049bba3f82 to make demos work
04:40 airlied: and moltenvk has this https://github.com/KhronosGroup/MoltenVK/issues/689
04:50 airlied: ah well, the sascha demos are working on win32 as well now with that hack
05:05 jekstrand: airlied: Yeah, command buffers are dispatchable objects.
05:06 jekstrand: airlied: But you should be deriving from vk_object_base and calling vk_object_base_init which should take care of that for you.
05:06 jekstrand: airlied: Are you memsetting your entire lvp_command_buffer struct to reset it?
05:11 airlied: jekstrand: it only happens with ones that are handed back to the pool
05:11 airlied: and they are corrupted when I get them back
05:13 airlied:thought it was memory corruption and I've no idea how to track that on windows, so it might still be
05:14 airlied: jenatali: I've added it to building in CI in my branch
05:14 jekstrand: airlied: FYI: You want to get rid of LVP_DEFINE_*HANDLE_CASTS and replace them with VK_DEFINE_*HANDLE_CASTS.  It'll force you to make everything derive from vk_object_base.  It'll also help you catch bugs like that, I expect.
05:15 jekstrand: airlied: Among other things, the VK_ versions do a few object sanity checks as part of the cast so you'll see the corruption quicker.
05:15 jenatali: airlied: Application Verifier would help track corruption on Windows
05:41 airlied: jekstrand: so loader_set_dispatch in the loader writes data to obj over the loader magic
05:42 airlied: at least it's not random memory corruption, I guess I just don't use a debug build of the loader on linux at all
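For context on the "loader magic": dispatchable Vulkan handles must begin with the loader's bookkeeping struct, which the ICD stamps at creation time (per jekstrand above, vk_object_base_init() takes care of this for mesa drivers) and which the loader later overwrites with its dispatch pointer. A paraphrase of the relevant bits of vk_icd.h (double-check the copy in your tree):

```c
#include <stdbool.h>
#include <stdint.h>

/* Every *dispatchable* handle (VkDevice, VkQueue, VkCommandBuffer, ...)
 * must begin with this struct. */
typedef struct VK_LOADER_DATA_ {
    uintptr_t loaderMagic;   /* ICD_LOADER_MAGIC until the loader claims it */
    void *loaderData;
} VK_LOADER_DATA;

#define ICD_LOADER_MAGIC 0x01CDC0DE

/* The ICD stamps the magic when it creates the object... */
static inline void set_loader_magic_value(void *pNewObject)
{
    VK_LOADER_DATA *loader_info = (VK_LOADER_DATA *)pNewObject;
    loader_info->loaderMagic = ICD_LOADER_MAGIC;
}

/* ...and a debug loader asserts on this check. After loader_set_dispatch()
 * the first word holds the loader's dispatch pointer instead of the magic,
 * so an ICD that re-validates or memsets a recycled command buffer sees
 * what looks like corruption - which matches what airlied hit above. */
static inline bool valid_loader_magic_value(void *pNewObject)
{
    const VK_LOADER_DATA *loader_info = (VK_LOADER_DATA *)pNewObject;
    return (loader_info->loaderMagic & 0xffffffff) == ICD_LOADER_MAGIC;
}
```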
08:20 tzimmermann: mripard, hi! may i ask you for a review of https://lore.kernel.org/dri-devel/20210211081636.28311-1-tzimmermann@suse.de/
08:20 tzimmermann: let me know if there's something i can review for you
08:35 pinchartl: possibly not strictly on topic for this channel, but I know there's lots of experience here on this domain: is it possible to configure a CI/CD pipeline on gitlab.fd.o without a .gitlab-ci.yml file in the repository? I'm considering a use case of upstream kernel development, which won't allow gitlab-specific files to be committed to the master branch, but where I'd still like CI pipelines
08:37 hifi: not specific to fd.o gitlab but you can always create a repo that has the CI config which then in turn clones (with --depth 1 or something) the actual repository you want the code from
08:38 hifi: that won't give you automatic CI runs on push directly, though
08:38 pinchartl: I've just realized that the path to .gitlab-ci.yml can be an http URL
08:39 pinchartl: sorry for the noise :-)
08:39 hifi: well that makes it a lot easier
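(For reference, and worth double-checking against current GitLab docs: the setting pinchartl found lives under Settings → CI/CD → General pipelines → "CI/CD configuration file", and accepts a plain path, a `path@group/project` reference to a file in another repository, or a full URL to a raw YAML file - which covers all three variants discussed here and in anholt's later comment.)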
08:40 mripard: tzimmermann: I'll give it a look today :)
08:40 udovdh: Hello!
08:41 udovdh: Is there a howto on the net describing what I need to change in my mesa build when I switch to wayland? (if at all?)
08:42 emersion: just make sure you have "wayland" in -Dplatforms
08:44 udovdh: that is all? no other requirements that I might need? It works but I ask just to be sure
08:57 udovdh: Currently the conf is like: meson configure . --prefix /opt/xorg/ -Dbuildtype=release -Ddri-drivers=[] -Dgallium-drivers=radeonsi -Dgallium-xvmc=auto -Dgallium-vdpau=true -Dgles1=false -Dgles2=true -Dgallium-xa=auto -Dgallium-opencl=disabled -Ddri3=true -Dplatforms=auto,x11,wayland -Dshared-glapi=true -Dglx=dri -Dglx-direct=true -Dgbm=true -Dosmesa=false -Dglvnd=true -Dlmsensors=true -Dvulkan-drivers=amd -Dgallium-va=true
08:58 tzimmermann: thanks mripard
10:14 MrCooper: kiryl: FWIW, at least in theory your issue shouldn't happen with a recent AMD GPU
12:36 mslusarz: hey guys, I'm trying to figure out why X freezes when an application triggers the kernel's gpu hang detection, and I discovered that X isn't affected directly by the hang - it freezes on the first glXSwapBuffers after mesa recreates its GL context
12:36 mslusarz: it seems it's not the implicit glFlush in glXSwapBuffers that is causing the problem, but the swap part - from mesa perspective freeze happens on xcb_flush after xcb_present_pixmap
12:36 mslusarz: this is on Intel gen9 GPU, mesa 20.3.4, xserver 1.20.10 with modesetting driver and my own fork of mesa on the application side (glretrace in this case)
12:38 mslusarz: it seems X doesn't notice there's anything wrong and acceleration of new applications keeps working, just X can't display new content
12:40 mslusarz: I would appreciate it if you could give me some advice on how to proceed
12:43 mslusarz: by "X freezes" I mean it switches between (last?) 2 frames back and forth
13:48 MrCooper: mslusarz: it's likely because glamor doesn't support https://www.khronos.org/registry/OpenGL/extensions/ARB/ARB_robustness.txt yet (will be pretty tricky to add though), so all its GL drawing commands are dropped on the floor after the GPU recovery
13:50 mslusarz: MrCooper: if I exit() on the Mesa side anywhere between the failed ioctl and glXSwapBuffers, the X server doesn't freeze
13:53 mslusarz: so I think X is not affected by what happens with application context
13:54 mslusarz: something about the drawable passed to glXSwapBuffers is not right, but I have no idea how to debug this...
13:56 mslusarz: uhm, actually, let me read the extension text, I thought it's about something else
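For reference, the client-side shape of the robustness recovery MrCooper is referring to, assuming a context created with ARB_create_context_robustness and the lose-context-on-reset notification strategy (the helper at the end is a hypothetical app-side function):

```c
/* glGetGraphicsResetStatusARB comes from GL_ARB_robustness; resolve it via
 * glXGetProcAddress as usual. */
GLenum status = glGetGraphicsResetStatusARB();
if (status != GL_NO_ERROR) {
    /* GL_GUILTY_CONTEXT_RESET_ARB / GL_INNOCENT_CONTEXT_RESET_ARB /
     * GL_UNKNOWN_CONTEXT_RESET_ARB: the context is lost and all further GL
     * commands are dropped on the floor - exactly what MrCooper describes
     * happening to glamor. The app must destroy the context, create a new
     * one, and re-upload its resources. */
    recreate_context_and_resources();   /* hypothetical helper */
}
```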
14:11 danvet: vsyrjala, did you respin your vblank_restore fix and I missed it?
14:20 zmike: MrCooper: re: that piglit expects thing, anything I can do to help move that along?
14:21 MrCooper: come up with a solution for the issue I described :)
14:21 zmike: I'm not sure I fully understand the issue tbh
14:21 zmike: haven't gotten very deep into understanding our ci pipeline yet
14:25 MrCooper: the issue is that ci-expects/ is in the rules of test jobs only, not of the container/build jobs they depend on
14:25 vsyrjala: danvet: not yet. i wanted to test it a bit first, but psr1+hsw totally borked so actually can't test
14:25 vsyrjala: i guess i'll just send it out anyway then
14:25 danvet: :-/
14:25 MrCooper: so if nothing else triggers those container/build jobs, there's no pipeline due to invalid YAML
14:26 zmike: oh
14:26 danvet: MrCooper, hm I thought iris would recover internally
14:27 danvet: and arb_robustness is only for when you want the driver to not do that because you're dealing with untrusted shaders and stuff like that
14:27 danvet: i.e. webgl
14:27 danvet: Kayden, ^^ or am I wrong here
14:27 danvet: vk ofc just keels over and drops vk_device_lost on the application/compositor
14:27 MrCooper: maybe it does recover internally, but somehow "loses the connection" for buffers shared via DRI3/dma-buf?
14:28 danvet: mslusarz, ^^
14:28 danvet: MrCooper, yeah that's maybe more plausible
14:28 danvet: maybe the imports all fall to pieces in the new context
14:28 danvet: but shouldn't happen either
14:29 danvet: mslusarz, #intel-3d might help with this, if it's intel specific
14:29 danvet: mslusarz, hm, what's your renderer string? I think iris isn't yet the default for gen9
14:33 yshui`: is Xserver expected to keep functioning after GPU resets?
14:34 yshui`: it's not the case on AMD gpus
14:34 zmike: depends on the driver
14:36 yshui`: the kernel driver? the gl implementation? or ddx?
14:37 mslusarz: danvet: iris is the default driver on gen9
14:38 mslusarz: danvet: I don't know if this issue is Intel specific
14:39 karolherbst: ufff.. the cryptocurrency bullshit is in such a state that GPU vendors produce "mining cards" so that gamers can have their GPUs for normal prices :D
14:41 yshui`: danvet: doesn't seem to be intel specific
14:44 MrCooper: with amdgpu it depends on what kind of reset is required; in theory it's also possible for radeonsi to recover internally as seems to be the case for iris, but in practice most of the time it's not, in which case both the app and the display server (and glamor in Xwayland) would need to support the robustness functionality to be able to recover without restarting the display server
14:45 MrCooper: on the bright side, with GNOME 40 one will be able to kill Xwayland, and mutter will just restart it on demand :)
14:46 yshui`: I see.
14:46 mslusarz: the thing is that if I modify iris to not attempt to recover and just abort, X survives just fine, with new GL apps unaffected
14:46 yshui`: Do wayland compositors support this kind of recovery in general?
14:47 emersion: wlroots has been supporting this for a long time
14:47 yshui`: Nice
14:47 emersion: note, killing xwayland also kills all x11 clients
14:48 MrCooper: emersion: what exactly is "this"? :) I suspect yshui` might have asked about recovering from GPU hangs via robustness
14:48 emersion: ah, no, xwayland restarts
14:49 emersion: robustness is still a TODO for everyone AFAIK
14:49 yshui`: :'(
14:50 yshui`: maybe in mslusarz 's case, iris recovery actually broke X's GL context?
14:53 mslusarz: yshui`: recovery is on the client side, so I'm not sure how it can affect the server side...
14:54 yshui`: the server side has a GL context that needs to be recovered as well.
14:58 mslusarz: yshui`: so why does X survive if the application just aborts on hang?
14:59 yshui`: i would like to know as well
14:59 mslusarz: (I'll verify if X even gets the hang notification in a moment)
15:06 mslusarz: nope, it doesn't
15:08 yshui`: so recovering a client side context breaks the server side context somehow?
15:09 pq: Does the X server actually keep on flipping but not rendering, or does it stop flipping?
15:09 pq: if X server stops flipping, maybe the crashed app context managed to submit a fence to X server, and that fence never signals as the app context got destroyed/reset?
15:10 mslusarz: yshui`: I'm not sure, but I think the buffer shared between the X server and application somehow gets "broken"
15:10 pq: ...reset but not destroyed, maybe? and aborting the app causes the context to be fully destroyed
15:10 mslusarz: pq: how can I verify if it stops flipping?
15:11 pq: mslusarz, uhh... gdb and breakpoints?
15:11 mslusarz: but on what, I'm not familiar with X server code
15:11 pq: drmModePageflip and drmModeSetCrtc
15:11 pq: me neither
15:12 yshui`: from my experience with AMDGPU, usually after gpu resets, X would flash between 2 broken frames
15:12 yshui`: is that the same with intel? mslusarz
15:13 mslusarz: yshui`: yup, 2 frames keep swapping
15:13 yshui`: that would mean it's still flipping?
15:14 pq: yeah
15:32 mslusarz: pq: yeah, X seems to keep flipping
15:32 pq: that shot in the dark missed then :-)
15:33 danvet: tzimmermann, we might also need to fix up dma-buf importing to make sure it uses the right struct device
15:33 danvet: otherwise the dma-buf is on the wrong device
15:33 yshui`: mslusarz what did you do to determine if X has gotten the reset notification?
15:33 mslusarz: (I had to break on callers of drmModePageFlip inside of the X server, instead of drmModePageFlip itself, because gdb would just hang otherwise, weird)
15:35 mslusarz: yshui`: I put a breakpoint on code that handles EIO from command submission ioctl
15:36 ajax: hm. the -Dglx=xlib build target stopped compiling recently. can i use that as an excuse to just delete it (pretty please)?
15:45 yshui`: mslusarz looks like iris only tries to recover if it got an EIO. but there's also the "innocent" reset, and I don't think iris automatically recovers in that case?
15:46 yshui`: maybe Xserver still needs to recover itself in that case, but doesn't. and when the client side doesn't try to recover, the server side keeps working by accident?
15:48 mslusarz: that would be sad...
15:49 mslusarz: I want my software to crash and burn on any failure, not work by accident ;)
15:49 ajax: i'm not entirely sure xserver can meaningfully recover on reset
15:49 yshui`: that's just a hypothesis
15:50 ajax: well, it could, if X had a way to send clients events about losing pixmap contents, and your client happened to know how to handle it.
15:51 yshui`: ajax: isn't that what Expose events are for?
15:51 ajax: but for every existing client, we'd have lost all of their pixmaps' contents
15:51 ajax: those are events about windows
15:51 ajax: you could name a pixmap instead of a window in the event, i suppose, but literally no client anywhere is expecting you to do that yet
15:52 yshui`: hmm, but i suppose having a few glitched windows is better than the whole X server stopping?
15:53 mslusarz: yshui`: +1
15:55 mslusarz: how are buffers shared between the X server and client? who is the owner?
15:57 imirkin: X server owns everything GPU-related
16:00 yshui`: mslusarz: I think it's DRI3PixmapFromBuffer, which creates an X pixmap from some sort of file descriptor
16:00 yshui`: and then you can Present with that pixmap
16:03 yshui`: imirkin i thought the buffers are indeed shared, so no one side really owns them?
16:04 imirkin: with present, under certain conditions, the X server may indeed decide to flip to the given image
16:04 imirkin: however all (regular) pixmaps/etc are hosted by the X server
16:28 MrCooper: the point here is rather the pixmap storage is shared between server & client via DRI3; Present works with "normal" pixmaps (whose storage only exists in the server) as well
16:29 imirkin: right, yes.
16:31 MrCooper: mslusarz: you wrote "acceleration of new applications keeps working, just X can't display new content"; it's not clear if that means new applications are displayed correctly or not
16:36 mslusarz: MrCooper: they are not displayed; I haven't verified it thoroughly, but they seem to work correctly, e.g. piglit shader test that probes some pixels succeeds
16:37 imirkin: mslusarz: a piglit shader test that does front-buffer rendering/verification?
16:37 MrCooper: that does sound like the client's recovery affects the X server's context, i.e. a kernel i915 driver issue
16:37 imirkin: if it's back-buffer (/fbo), then it's all inside the client
16:38 mslusarz: imirkin: oh... I used -fbo -auto
16:38 imirkin: yeah, so if you have DRI3, then there's (nearly) zero X server involvement
16:38 MrCooper: -fbo doesn't display anything on the display server
16:39 MrCooper: try something like glxgears?
16:39 mslusarz: how can I check that? without -fbo -auto, shader_runner doesn't say whether the probe succeeded
16:40 MrCooper: you can use -auto without -fbo
16:40 MrCooper: though normally the result should be reported even without -auto
16:40 mslusarz: glxgears works (it prints that it renders 60FPS every few seconds), but I don't know what the contents are
16:41 MrCooper: anyway, no permutation of piglit test options can verify that the contents are actually displayed correctly
16:41 imirkin: if you use -auto, then it might still not care
16:41 imirkin: you have to pick the right test
16:41 imirkin: which uses front-buffer rendering / verification
16:42 imirkin: look for *front* (but not front-facing :) )
16:42 MrCooper: the front buffer is just another DRI3 buffer, not what's actually displayed
16:49 mslusarz: imirkin: https://gitlab.freedesktop.org/mesa/piglit/-/blob/master/tests/general/read-front.c ?
16:51 imirkin: mslusarz: yeah. but with DRI3, MrCooper is probably right, and there's just no way to test it with a GL application
16:51 mslusarz: ah
16:52 MrCooper: you'd have to get the window contents via another X mechanism
16:52 imirkin: right. i meant "via existing piglit tests" :)
16:53 imirkin: i guess even with DRI2 you can't. nvidia did something different, and that's why we show windows even with -auto now, but it didn't matter for mesa / open stack
16:54 mslusarz: uhm, interesting... read-front -auto succeeds, but without -auto it says observed != expected
16:55 imirkin: mslusarz: what if you do PIGLIT_NO_WINDOW=0 ? (iirc 1 is still the default)
16:55 imirkin: (for -auto)
16:56 mslusarz: fails with both 0 and 1
16:57 imirkin: you said -auto succeeds
16:58 imirkin: you're trying to fix the regular case. i'm trying to break -auto ;)
16:58 mslusarz: oh, sorry, I misunderstood
16:59 mslusarz: with -auto both 0 and 1 succeed
17:00 imirkin: huh ok
17:00 imirkin: that's odd. don't know what -auto would be doing differently then
17:00 imirkin: i thought it might be skipping the window creation which somehow affects things
17:00 imirkin: but apparently that doesn't end up mattering
17:11 yshui`: MrCooper use Composite to grab the window pixmap?
17:12 MrCooper: or even just XGetImage :)
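i.e. a minimal Xlib readback, assuming dpy/win/width/height are already set up:

```c
#include <X11/Xlib.h>

/* Ask the server for the window contents it actually has, rather than
 * trusting the client's GL view of them. */
XImage *img = XGetImage(dpy, win, 0, 0, width, height, AllPlanes, ZPixmap);
if (img) {
    unsigned long pixel = XGetPixel(img, 10, 10);  /* probe a pixel */
    /* ...compare against the expected value... */
    (void)pixel;
    XDestroyImage(img);
}
```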
17:14 mslusarz: I have to go for now, I'll be back tomorrow
17:29 yshui`: A client recreating its context isn't normally going to break the server, so I think the GPU being reset has to have something to do with this.
17:38 alyssa:tries to understand how coordinate shaders are legal
17:38 alyssa: (in the presence of side effects)
17:43 alyssa: "You may not assume that a Vertex Shader will be executed only once for every vertex you pass it. It may be executed multiple times for the same vertex" oh
17:54 HdkR: alyssa: Alternative story: tons of applications don't expect the vertex shader to execute twice, resulting in fun broken behaviour for them :)
17:54 HdkR: If you're lucky they are just storing to memory based off the vertex id and the resulting double execution doesn't hurt much
17:54 imirkin: alyssa: HdkR: alt-alternative story: many drivers don't support images/ssbo's in vertex shaders, so applications don't rely on them
17:55 alyssa: imirkin: fair :p
17:55 alyssa: actually thinking about geom shaders which the spec is even less clear about
17:56 imirkin: geom runs once per primitive
17:56 imirkin: but even there, it could be processed multiple times on tilers/etc
17:56 alyssa: ^^ yeah
17:56 imirkin: such is life.
17:56 imirkin: which is why specs only require this stuff for frag / compute
17:57 alyssa: (trying to figure out if it's legal to run the geom twice to figure out the sizes of everything ahead-of-time)
17:57 imirkin: (maybe that's not *why*, but it's a nice side-effect)
18:05 MrCooper: yshui`: a normal user-space process shouldn't be able to explicitly trigger a GPU reset which affects other processes
18:06 MrCooper: at least it should have to work harder :)
18:07 yshui`: MrCooper aren't we a pretty long way from being able to prevent that?
18:08 alyssa: macOS can't even do that...
18:08 yshui`: if the application submits a broken program, that could easily hang the gpu
18:08 imirkin: just solve ATM - what's the big deal
18:08 MrCooper: hence my clarification :) obviously it's always possible indirectly by making the GPU hang, but it still shouldn't be possible by just calling a "break other contexts please, kthxbye" ioctl
18:14 yshui`: so it's about keeping buffers and stuff alive across reset?
18:18 anholt: pinchartl: iirc you can also say "always use this branch of mine for the gitlab-ci.yml" in your project's ci config
18:22 pinchartl: anholt: I've seen that the CI config can also point to a separate project
18:22 yshui`: imirkin: what's ATM, the Turing Machine?
18:23 pinchartl: so it should support all I need
18:24 imirkin: yshui`: the halting problem
18:24 imirkin: i.e. whether a turing machine will hit a halting state or not
18:25 yshui`: imirkin: i wonder if it makes sense to create a total shader language
18:25 imirkin: (which is not solvable using current methods in finite time)
18:25 imirkin: (i forget if it's proven to be unsolvable on a turing machine or not)
18:26 yshui`: yeah the halting problem is undecidable
18:32 alyssa: yshui`: IIRC unextended gles2 is close
18:42 yshui`: interesting
18:45 yshui`: looks like gles2 shaders still have unrestricted loops?
18:47 HdkR: in gles2 it is valid for a shader to fail compiling if the loop can't be fully unrolled
18:47 ajax: the answer to the halting problem is yes. eventually entropy will bring the computer to a halt.
18:48 alyssa: the answer to "doctor, is it terminal?" is necessarily always yes, including for the common cold
18:49 yshui`: @ajax to be fair practical computers don't normally have infinite storage anyway
18:50 imirkin: alyssa: in the end, that di-hydrogen monoxide will get you
18:50 HdkR:hands imirkin a dad-joke sticker
18:51 alyssa: DHMO is toxic at levels found in the Atlantic Ocean, putting millions of Maritimers at risk!
18:51 ajax: thousands die of accidental inhalation of dhmo every year and it's found in 100% of malignant tumors
18:52 ajax: ban this etc
18:52 FLHerne: HdkR: That may be true, but when I screwed up and put an infinite loop in an ES2 shader it hung my GPU under radeonsi
18:52 imirkin: in case others are concerned: https://www.dhmo.org/facts.html
18:52 alyssa: FLHerne: MAY not MUST
18:52 HdkR: ^
18:52 FLHerne: Or maybe it was intel then, I can't remember which laptop that was :p
18:53 HdkR: I'd expect any GPU that supports real branching will just infinite loop and force a job kill or GPU reset
18:53 karolherbst: HdkR: sure? :D
18:54 HdkR: karolherbst: Plez support job killing :<
18:54 karolherbst: HdkR: I would if the hw would let me
18:54 imirkin: heh, that site also has a link to the klein bottle guy
18:54 FLHerne: Ok, but it means restricting to ES2 is only a hypothetical solution, not something that'll help with existing drivers
18:55 FLHerne: And now I wonder how WebGL doesn't blow up more often than it does
18:55 karolherbst: FLHerne: because browsers wrap shit
18:55 karolherbst: they don't pass in the shader without messing around with it :p
18:55 FLHerne: I mean, there was that glfuzz thing that would hang my laptop with webgl, but that seems to be the exception
18:55 imirkin: FLHerne: don't need infinite loops. drawing a single primitive can hang an nvidia gpu (with blob drivers too)
18:56 karolherbst: heh :D
18:56 FLHerne: yay
18:56 karolherbst: how'd you do that?
18:56 imirkin: deqp did that...
18:56 karolherbst: just stupid shader with an infinite loop?
18:56 imirkin: trivial shaders
18:56 karolherbst: ehh
18:56 imirkin: max out tessellation, max out geometry outputs, watch the fireworks from drawing a patch
18:56 karolherbst: ahh
18:57 imirkin: i suspect it has some kind of internal buffer which doesn't handle the million+ triangles that end up getting generated
18:57 karolherbst: probably :D
19:04 alyssa: geom/tess is awful
19:05 imirkin: esp when you do 1024 vertices output, 32 instances, and tess factors at 64 :)
19:06 alyssa: Oof
19:06 imirkin: one patch can do a lot of damage :)
19:14 HdkR: karolherbst: There should be a way to kick a "stuck" job other than a GPU reset right?
19:14 karolherbst: HdkR: should as in "they should add one" or as in "there probably is one?"
19:15 HdkR: There probably is one
19:15 karolherbst: uhm...
19:15 karolherbst: without turning on debugging?
19:15 karolherbst: and for graphics?
19:15 karolherbst: I am sure all of that is "trivial" for the compute engine, but graphics?
19:16 HdkR: I believe so. Might be a generational thing
19:17 karolherbst: mhh
19:17 karolherbst: ohh wait
19:17 karolherbst: with "GPU reset" you mean a full GPU reset, right?
19:17 HdkR: aye
19:18 karolherbst: ahh yeah.. nouveau doesn't support that :p
19:18 karolherbst: you can kill the channel and restart the falcons
19:18 karolherbst: and that's how we recover from stuck jobs
19:19 HdkR: Sounds reasonable
19:19 karolherbst: yeah
19:37 ajax: anholt: does building with asan set a #define that we could conditionalize the dlclose on?
19:37 anholt: ajax: yeah, there's one that we do in egl
19:38 anholt: but for vk, the loader does the dlstuff [citation needed]
19:38 ajax: alternatively, would you accept an LD_PRELOAD that nerfs dlclose for this particular test set?
19:39 anholt: would be pretty into that
20:08 ajax: airlied: i know you were working on scene overlap for llvmpipe at one point, did you ever get to letting the PutImage overlap too?
20:16 airlied: ajax: no, never got to it, was messy to navigate the glx/dri code
20:16 airlied: https://gitlab.freedesktop.org/airlied/mesa/-/tree/llvmpipe-wip-scenes is an updated overlap branch that works for vulkan as well
20:16 airlied: I was considering trying to make it work for the vulkan wsi where at least I don't have to deal with the glx/dri interface
20:17 airlied: fencing the xshmputimage was where I think I was getting stuck
20:17 airlied: we have a pretty large Xsync in there now
20:54 alyssa: dschuermann: What's the difference between opt_sink and opt_move?
20:57 dschuermann: global vs local
20:57 alyssa: got it, thanks
20:57 alyssa: so I want both?
20:57 dschuermann: yes, first sink, then move
20:57 alyssa: (but mostly want sink, since local we can do in the backend sched)
20:57 alyssa: [well. will be able to anyway]
20:58 dschuermann: makes sense
21:00 alyssa: sink fixes spilling on a pathological case hit in deqp, thanks :+1:
21:01 alyssa: (A shader that writes out a constant vector a bunch of times, which without sink meant huge numbers of moves to constants at the start)
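For anyone wiring this up, roughly how the two passes get called from a backend (nir_opt_sink and nir_opt_move are the real NIR passes; which nir_move_options flags a backend wants will differ):

```c
/* Inside the backend's NIR optimization loop, after the usual opt passes: */
bool progress = false;

/* nir_opt_sink: the global one - sinks instructions down into successor
 * blocks, toward their uses, so e.g. constant loads stop piling up at the
 * top of the shader (the spilling case above). */
NIR_PASS(progress, nir, nir_opt_sink,
         nir_move_const_undef | nir_move_load_ubo);

/* nir_opt_move: the local counterpart - moves instructions within a block
 * to just before their first use. */
NIR_PASS(progress, nir, nir_opt_move,
         nir_move_const_undef | nir_move_load_ubo);
```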
21:28 alyssa: I will say dEQP-GLES31.functional.ssbo.* is awfully slow for me :(
21:28 alyssa: I guess it does encompass 2000 tests :)
21:29 imirkin: is each tests slow, or just lots of tests?
21:29 anholt: I would bet that nir_load_store_vectorize will help you.
21:29 alyssa: a mix, I think most are fast and I just have some dumb cases to deal with for RA :)
21:29 imirkin: there are *some* tests in there which iirc are slow
21:29 imirkin: esp if you run * rather than grepping in the master.txt file
23:29 anholt: apinheiro: things like cmdcopybuffer are really unfortunate for v3d, where switching between compute and render actually requires going back to the kernel.
23:30 anholt: but unless you've done something clever, you've already got separate bin and render jobs for each
23:31 anholt: apinheiro: I wouldn't expect each cmdcopybuffer to be a heavyweight operation (multiple trips to the kernel)
23:32 apinheiro: anholt, no, each cmdcopybuffer is not really heavyweight
23:32 apinheiro: the main issue here is an oom, as the 15k copies get accumulated
23:32 apinheiro: > but unless you've done something clever, you've already got separate bin and render jobs for each
23:32 apinheiro: well, right now is when we are evaluating whether to do clever things or not
23:32 anholt: doesn't each of the cmdcopybuffers end up with a separate V3DV_JOB_TYPE_GPU_CL?
23:33 apinheiro: until the end of last year we were more focused on getting things done
23:33 apinheiro: > doesn't each of the cmdcopybuffers end up with a separate V3DV_JOB_TYPE_GPU_CL?
23:33 apinheiro: yes
23:33 apinheiro: as it is working now
23:34 apinheiro: for each copybuffer, we create all the render/binning/etc jobs that are part of the v3dv_job_type_gpu_cl
23:34 anholt: so each of those is a job ioctl to the kernel, plus bin and render interrupts. and they've probably all got separate CL bos?
23:34 apinheiro: and it gets included in the current cmd buffer
23:34 anholt: I wonder how much you'd get out of sharing CL bos between jobs
23:35 apinheiro: yes, that would be an interesting thing
23:35 apinheiro: but as mentioned, for this specific app
23:35 apinheiro: it uses 15k copies on the same cmd buffer
23:35 apinheiro: and we get out of memory at ~2.6k
23:36 apinheiro: so I'm personally not sure if sharing CL bos between jobs would cover that gap
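For readers skimming the thread, the pattern under discussion is an application recording on the order of 15k small copies into a single command buffer, something like the following (standard Vulkan API; the buffers, offsets, and sizes are hypothetical) - where v3dv currently turns every call into its own V3DV_JOB_TYPE_GPU_CL job with its own BOs, hence the OOM at ~2.6k copies:

```c
/* ~15k tiny copies recorded into one command buffer. */
for (uint32_t i = 0; i < 15000; i++) {
    VkBufferCopy region = {
        .srcOffset = (VkDeviceSize)i * 64,
        .dstOffset = (VkDeviceSize)i * 64,
        .size      = 64,
    };
    vkCmdCopyBuffer(cmd_buf, src_buf, dst_buf, 1, &region);
}
```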
23:36 airlied: so v3d has to record ioctls into the command buffers for replays, ouch
23:36 anholt: 5x reduction is not totally implausible to me from starting the new CL's commands at the end of the last CL job's buffers.
23:36 anholt: airlied: yep. actually pretty required by the different compute vs graphics queues.
23:37 anholt: I think I had some idea at some point of being able to chain bin->bin->bin and render->render->render to avoid an ioctl per subsequent graphics job.
23:38 airlied: anholt: is that an artifact of bad hw design or bad kernel api design?
23:38 anholt: hw
23:39 apinheiro: airlied, what do you mean by "for replays"? in this context is it the same as for execute?
23:39 anholt: we don't have a top level queue in front of the binner, renderer, and csd. there's just "stuff a new bin job in the registers"
23:39 apinheiro: > 5x reduction is not totally implausible to me from starting the new CL's commands at the end of the last CL job's buffers.
23:40 apinheiro: anholt, 5x reduction would still be short here ;)
23:40 apinheiro: although clearly could help in other cases
23:40 anholt: 5.8x, whatever :)
23:40 apinheiro: and after all, I guess that using less memory would be good in general
23:40 apinheiro: fwiw, I'm in analysis mode right now
23:41 apinheiro: we were not expecting an app using 15k copies on the same cmd buffer, so we are not really sure what apps/games really do
23:42 apinheiro: out of curiosity I was checking other drivers, and then checking how apps use the copies (copytobuffer, copy to image)
23:42 apinheiro: and seeing how much priority starting to code stuff like this would have
23:43 apinheiro: anholt, but thanks a lot for the advice
23:44 apinheiro: btw, "checking apps" here is mostly the quake vulkan ports, ue4 demo and now the doom3 vulkan port
23:44 apinheiro: that are basically the most complex vulkan apps that we found and are able to run
23:44 anholt: apinheiro: at some point, for vulkan-in-general, probably going to need to tackle a multi-submit interface in the kernel where bin is submitted to hardware before render is queued, but without waiting for bin to finish, and use the CL semaphores to have the HW schedule between them
23:45 anholt: and also see about loading more than one job in CTnQ at a time
23:49 anholt: apinheiro: reason I've been thinking about those last couple things is that apparently it's been important https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/2371
23:49 anholt: (I think we did this for tu as well)
23:49 apinheiro:looking
23:50 apinheiro: hmm interesting
23:54 apinheiro: anholt, thanks for all the tips. I will keep looking at how those apps use the copies, and tomorrow will talk with Iago
23:56 airlied: anholt: would you be faster just executing memcpys instead of using the gpu :-P
23:57 anholt: airlied: I had seriously considered whether there should be a job type that is "use the generic dma engine"