02:23imirkin: how does one load a smooth-interpolated input in nir?
02:28imirkin: also, what's the meaning of INTERP_MODE_NONE in a frag shader?
08:24nanonyme: mattst88, I meant https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/195 ; it doesn't seem like a backport candidate since that's a new feature
08:26nanonyme: mattst88, but maybe as airlied said it's not as bad an issue anymore if llvmpipe has grown new features
08:35karolherbst: imirkin: with https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/5747 I fixed it for nouveau so that the nir and TGSI path are equal in regards to interpolation... it's still a bit hacky, but maybe that helps you get an idea?
08:37karolherbst: you essentially have an input variable with var->data.interpolation == INTERP_MODE_SMOOTH
08:37karolherbst: and load it with "load_interpolated_input"
08:38karolherbst: or also "load_input"? maybe both
08:38karolherbst: load_interpolated_input is more for the explicit barycentric operations afaik
08:39karolherbst: but essentially pre lowered you just have deref_var and load_deref on the deref for those inputs
08:46Kayden: load_input is for flat inputs
08:46Kayden: load_interpolated_input is for non-flat inputs
08:46Kayden: it has a parameter which is guaranteed to be SSA and is the barycentric mode
08:47karolherbst: Kayden: well.. there is also nir_intrinsic_load_barycentric_pixel
08:47karolherbst: which is kind of exactly like the load_input path
08:47karolherbst: more or less
08:48Kayden: load_interpolated_input's src.ssa->parent_instr is going to be one of nir_intrinsic_load_barycentric_pixel / centroid / sample / model / at_sample / at_offset
08:48karolherbst: but I meant the end result is pretty much the same
08:48karolherbst: at least for us
08:48Kayden: but for non-interpolated, it's load_input
08:48karolherbst: I meant, load_interpolated_input(nir_intrinsic_load_barycentric_pixel) == load_input
08:49Kayden: huh, ok
08:49karolherbst: yeah.. maybe our hw is weird, but that's how I handle nir_intrinsic_load_barycentric_pixel
08:49karolherbst: the other ones are special though
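(To summarize the thread so far: a smooth input starts out as a variable plus a load_deref, and lowering turns that into a barycentric load feeding load_interpolated_input. A hand-written sketch of NIR's printed IR; slot numbers and SSA names are made up, not real compiler output:)

```
decl_var shader_in INTERP_MODE_SMOOTH vec4 color (VARYING_SLOT_VAR0, 0, 0)

/* pre-lowering: load through a deref */
vec4 32 ssa_1 = intrinsic load_deref (ssa_0) ()        /* ssa_0 = deref_var &color */

/* after nir_lower_io: explicit barycentrics */
vec2 32 ssa_2 = intrinsic load_barycentric_pixel () (interp_mode=INTERP_MODE_SMOOTH)
vec1 32 ssa_3 = load_const (0x00000000)                /* offset */
vec4 32 ssa_4 = intrinsic load_interpolated_input (ssa_2, ssa_3) (base=0, component=0)
```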
20:12imirkin: Kayden: specifically i'm looking at nir_lower_clip.c
20:12imirkin: the input doesn't have an interpolation set, so it gets INTERP_MODE_NONE, and the instruction used to load it is " load = nir_intrinsic_instr_create(b->shader, nir_intrinsic_load_input);" (in FS)
20:13imirkin: this seemed off to me, but what do i know
20:13imirkin: could you (or someone else well-versed in nir) have a glance at it and see if load_clipdist_input and create_clipdist_var seem right to you?
20:13imirkin: it does seem to work anyways on freedreno, but i'm suspecting it's more of a coincidence
20:14imirkin: (this is basically a lowering which implements clip distance by adding discards in the frag shader based on the distance varyings)
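(The fragment-shader side of that lowering comes out as roughly the following NIR; again a hand-written sketch with made-up SSA names, showing the clip-distance varying loaded via plain load_input with INTERP_MODE_NONE and a discard when the distance is negative:)

```
vec1 32 ssa_0 = load_const (0x00000000)                      /* offset */
vec1 32 ssa_1 = intrinsic load_input (ssa_0) (base=0, component=0)  /* clip distance, INTERP_MODE_NONE */
vec1 32 ssa_2 = load_const (0x00000000)                      /* 0.0 */
vec1  1 ssa_3 = flt ssa_1, ssa_2                             /* distance < 0 ? */
intrinsic discard_if (ssa_3) ()
```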
20:51Domi: Hello, I use Mesa to run an OpenGL program on a GPU-less server. The problem is I'm not that interested in the rendering on the screen. I just need some output from the software via the console. Is there a way to reduce the CPU load from the CPU rendering? For example by decreasing the quality?
20:52airlied: Domi: reduce the resolution
20:53Domi: ok I will try that. Is there anything which prevents me from just using 1x1 resolution? Is there anything else I can do?
20:54HdkR: You could just stop the X server from running
20:55airlied: Domi: if you aren't rendering anything on the screen it won't use any cpu
20:55bnieuwenhuizen: HdkR: assuming the app can survive without X. At which point I'd be curious if the app doesn't have a switch to be console only anyways ;)
20:56HdkR: I made an assumption from the 1x1 output question
20:57Domi: the app is minecraft and I implemented some AI algorithms in the player as a mod. I want to just test the algorithms and do not need to see the player. I just want to test the interactions with the world and multiple players. Therefore I would need to run a lot of instances
20:59HdkR: Guess even at lowest resolution llvmpipe still handles the vertex jobs then
21:00bnieuwenhuizen: yep, time for a no-op GL driver :P
21:00bnieuwenhuizen: I believe there even is a noop gallium driver IIRC
21:04Domi: are there any no-op GL drivers ready to drop in? can you tell me one?
21:05bnieuwenhuizen: how about setting the env var GALLIUM_NOOP=true ?
21:05bnieuwenhuizen:has no clue really how well that'd work
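(For reference, the suggestion amounts to wrapping every created gallium screen in the no-op driver via an environment variable; `./your_app` below is a placeholder for however the program is launched:)

```
# Wrap the gallium screen in the no-op driver, assuming a Mesa build
# where the noop wrapper is compiled in:
GALLIUM_NOOP=true ./your_app
```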
21:23Domi: GALLIUM_NOOP does not seem to affect anything
21:25imirkin: i think that only works on debug builds
21:27Domi: reducing the resolution does help a lot. Thank you for that. How do I get a debug build? or is there any other possibility to reduce the load?
21:29imirkin: question is ... why is anything rendering to the screen when you don't need it
21:32HdkR: Since they're running minecraft they still need a gl context
21:33Domi: I tried it without and got the error org.lwjgl.LWJGLException: Pixel format not accelerated. I assumed that I need at least some gl interaction
21:33imirkin: ah i see.
21:37Domi: the best solution would be a driver which can use noop and normal rendering with the ability to change on the fly for debugging our mod
21:38imirkin: the noop driver is sorta that
21:38HdkR: on the fly meaning changing at runtime? Could probably modify llvmpipe to still do that :P
21:40emersion: is GALLIUM_NOOP=true documented somewhere?
21:41Domi: yes at runtime would be very nice. But not necessary
21:48Domi: emersion I could not find any documentation about the var
21:53imirkin: this is where all the screen wrapping is done
21:54imirkin: this is where it checks for GALLIUM_NOOP before trying to wrap the screen
22:11Domi: I'm not that fluent in C but where does it check for a debug build?
22:12imirkin: my theory is that the macro
22:12imirkin: checks for some sort of debug build mode
22:14imirkin: however it appears that my assumption was incorrect
22:14imirkin: looks like it should work either way
22:22Domi: the env variable is set and I get rendering
22:23Domi: ingame it shows me llvmpipe (LLVM 6.0, 256 bit) 3.0 Mesa 18.0.5 as gpu
23:09haasn: is OpenGL supposed to be allowed to call the glDebugMessageCallback from a thread other than the one the OpenGL context is bound to?
23:13airlied: haasn: GL_DEBUG_OUTPUT_SYNCHRONOUS
23:14airlied: I think that's the thing you need
23:16HdkR: Alternative answer, "Yes"
23:19haasn: thanks, I was just wondering if I needed to add a mutex to the log callback or not