00:11jenatali: airlied: Finally posted: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/7565 :)
00:11gitbot: Mesa issue (Merge request) 7565 in mesa "Add Microsoft CLC CLOn12 compiler stack" [Nir, Opencl, Spir-V, D3D12, Panfrost, Opened]
01:10karolherbst: PIPE_COMPUTE_CAP_GRID_DIMENSION being uint64_t is also quite an overkill...
01:17karolherbst: airlied: llvmpipe implements it wrongly :p
01:17karolherbst: will send an MR... at some point
01:52karolherbst: also.. can people at Intel tell their marketing team they shouldn't make statements about Mesa's future without having talked with the community about it?
01:52karolherbst: I know, "not my driver", but.. you get the idea
01:54karolherbst: also. maybe we should put a statement in the docs saying "dependencies are not allowed to have less open development models than mesa itself"
01:54karolherbst: sounds like a good idea to vote on it and just make it part of the docs
01:54karolherbst: airlied: what do you say?
01:57Sachiel: maybe having some blowback from the community about it would encourage them to listen to the people working on the project
01:58karolherbst: hence the idea of writing down that all mesa dependencies should be developed under an open development model
01:58karolherbst: and a community driven project
01:58karolherbst: so if somebody comes around and opens an MR adding something developed privately, but open source, we point it out and say: here.. look, that's agreed upon by the community
01:58karolherbst: can't accept your MR
01:59karolherbst: what is a total no-go is to write stuff like "If all goes well by the end of next year we could potentially see IGC used by default within Mesa."
02:00Sachiel: sounds good to me
02:00karolherbst: how do people even decide something like that without having brought it up at all
02:03karolherbst: I am all for writing up that stuff and/or acking it, but I am not sure if I should be the public author of it.. so if somebody else steps up in a more neutral position that would be grand... would have to discuss it internally first before tripping up the wrong people at Intel...
02:05karolherbst: maybe even adding something that distributions have strict requirements and it has to be highly likely that distributions accept packaging if not already done and that distributions are fine with adding it to live images
02:07kisak: karolherbst: I was under the impression that amd/addrlib was developed outside of mesa and periodically synced into mesa. Sure, you could contribute to it in mesa, but if mareko doesn't know about the change, it could get stomped on later. <- Point to that is mesa isn't 100% developed in the open, there's already a little flexibility in areas that cover one hardware vendor
02:08karolherbst: but we have it in tree at least
02:08karolherbst: and it's small
02:08karolherbst: but yeah
02:08karolherbst: this part is also annoying
02:09airlied: karolherbst: yeah we could make deps have requirements, though for things like d3d12 it might be tricky, maybe linux deps :P
02:09karolherbst: kisak: but it still goes through the MR process
02:10karolherbst: airlied: mhhhh.....
02:10karolherbst: airlied: but what does d3d12 depend on?
02:11karolherbst: but I don't see why to make exceptions honestly :p
02:11karolherbst: and it seems like jenatali's MR doesn't pull in new deps unless I missed something?
02:12airlied: it depends on d3d12 itself
02:12airlied: granted it is a standalone package, but getting the wording right would be tricky
02:13kisak: I wouldn't expect that building against d3d12 and interfacing with a d3d12 api is the same thing
02:14karolherbst: airlied: but I still don't see where the d3d12 stuff depends on something external actually
02:15airlied: well I suppose it just relies on Windows
02:15karolherbst: yeah, probably
02:15karolherbst: which is OS stuff and out of our control anyway
02:16karolherbst: I guess exception for OS provided deps can be made, because that often runs into propriatary code anyway
02:16karolherbst: which makes it hard for IGC because they have to convince people to ship it first
02:16karolherbst: "OS core libs" maybe
02:16karolherbst: "OS" is a too vague term in regards to linux distributions
02:17karolherbst: there is just now way we would depend on something by default which isn't shipped
02:17karolherbst: I think at this point I am more annoyed by this "if everything works out, it can be the default next year"
02:17karolherbst: what the hell actually...
03:33mareko: kisak: regarding addrlib, we can merge important cosmetic-looking changes like build fixes into the internal addrlib repo, I can't do that with big changes though
04:01Kayden: karolherbst: I'd be happy to have such a statement and policy from the project
04:02Kayden: I would be happy to ack such patch
04:02Kayden: *such a patch
04:02Kayden: I doubt it would really change anything at this point
04:03Kayden: but, my position on shader compilers has always been that we should develop them in the Mesa community and in our tree
04:03Kayden: if people do want to use something like LLVM and it's in that upstream project, I can live with that as well
04:12Kayden: since it's also an open source project with defined procedures
04:12Kayden: and actual participation
04:21airlied: we also initially had the amdgpu llvm backend in mesa until it got upstreamed
04:24airlied: karolherbst: from some amd code about opencl-c.h
04:24airlied: " # FIXME: CLANG_CMAKE_DIR seems like the most stable way to find this, but
04:24airlied: # really there is no way to reliably discover this header.
04:24airlied: I thought I could get rid of resource dir but it appears not
04:28Plagman: if i'm reading the code right i think it's more of a general drm question than an amdgpu one, so bringing it up here
04:29Plagman: in vkd3d there's a pattern where every submit waits on an incremental timeline point
04:30Plagman: that currently forces the kernel to submit the next chunk just-in-time after it realizes the previous one completed
04:31Plagman: which doesn't let the hw queue kick in properly, causes bubbles, latency, etc
04:31Plagman: it's not a great pattern to begin with, sure - but couldn't drm elide those waits here? it knows there's already a signal flushed into that same ring
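(Aside: a toy model of the submission pattern Plagman describes — this is not vkd3d or drm code, and all names are made up. Each submit waits on the timeline point the previous submit signals, so the kernel can only release chunk N just-in-time after observing point N-1; a wait could in principle be elided when the waited-on point is signaled by earlier work on the same ring.)

```python
# Toy model of the vkd3d-style timeline pattern (illustrative only):
# submit i waits on timeline point i-1 and signals point i.
def build_submissions(n):
    return [{"wait": i - 1, "signal": i} for i in range(1, n + 1)]

def elidable_waits(subs):
    # A wait is redundant if the point it waits on is signaled by an
    # earlier submission on the same ring: ring order already serializes.
    signaled = {0}  # point 0 starts signaled
    count = 0
    for s in subs:
        if s["wait"] in signaled:
            count += 1
        signaled.add(s["signal"])
    return count

print(elidable_waits(build_submissions(4)))  # 4: every wait hits a same-ring signal
```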
04:33dcbaker[m]: @airlied, I'd really, really like to get something in clang to make it more discoverable, since they don't seem to provide cmake, a clang-config, or pkg-config files
04:35airlied: dcbaker[m]: making clang include it itself seems doable and it's half there
04:36dcbaker[m]: still, for our own discovery it would be nice
04:40mareko: will IGC be part of upstream LLVM?
04:40Kayden: upstream LLVM has rejected it
04:40Kayden: it doesn't use their backend infrastructure
04:41mareko: sorry to hear that
04:41Kayden: which, in fairness...has a lot about it that isn't a great fit for the architecture. it'd need some changes to both parts
04:41Plagman: any data on compile times? i'm worried people are only tracking runtime in this possible compiler transition
04:41mareko: LLVM IR passes are slow
04:42Plagman: compile time is extremely important for gaming - like, backend-compiler-rewriting important
04:42Plagman: that's probably well known
04:42Kayden: would be excellent feedback for them
04:42Plagman: in these circles anyway
04:43curro: not really so important when you're hitting the shader cache most of the time
04:43Kayden: I tried to tell them that when bringing them up to speed on ACO
04:44Plagman: you can't hit the shader cache most of the time unfortunately
04:44Plagman: at least not when it matters
04:44mareko: I agree that the shader cache is vastly overrated
04:45Plagman: even when you have the luxury of an app that compiles everything upfront, warming the cache takes several minutes for modern games
04:45Plagman: the difference between a quick and slow compiler can bloat that time by 5x
04:46Plagman: looks like my timeline syncobj question above might be a dj-death question, i see his name on all the reviews
04:46HdkR: There was outrage about HZD up front shader compile times :P
04:46Plagman: there's many other examples at this point unfortunately
04:48mareko: most journalists run benchmarks with a cold shader cache and some of them like to show min FPS...
04:49Plagman: as long as they're being consistent about it, it's not unfair - seems representative of the experience a real user would have
04:49curro: how many of those are poorly designed games that bring up new shaders in the middle of the actual rendering? not saying that compile time is fully unimportant but as long as there is a trade-off between it and the shader's run-time i'd rather have improved run-time particularly if it's just going to slow down the first load of the game...
04:49Kayden: actually the last data I had was that people were complaining about compile times.
04:49Plagman: a quick compiler backend is really the difference between unplayable stutter and an OK experience
04:50mareko: curro: all DX->VK translation is on the fly compilation, this is unfixable... if you have a slow compiler, my condolences
04:50Plagman: it doesn't matter if they're poorly-designed, it's all games
04:50Plagman: even native dx11 and dx12 suffers from this problem
04:50curro: so let's go for the quickest compiler that does no optimization whatsoever? ;)
04:50DrNick: the actual answer is two compilers
04:51Kayden: curro: have you not been paying attention to any of the ACO work in the last couple years?
04:51Plagman: well aco is 5x quicker than llvm and also tends to generate better code, so i don't know what you're on about
04:51anholt: yeah. turns out you can just write a fast backend and it's fine.
04:52Plagman: adding an even quicker mode there that avoids some optimization and doing the two-pass background thread approach DrNick just hinted at is probably a good thing to do on top of what exists now
04:52curro: Kayden: sure i have, but some compiler being inefficient doesn't mean that there isn't a trade-off between compile-time and run-time in general
04:52Plagman: but not really needed right now
04:52mareko: curro: GL and DX have total compile time for separate shaders in O(VS+PS), while Vulkan has it in O(VS*PS*states), again, my condolences to whoever is using LLVM with Vulkan
04:53DrNick: I assumed the blobs did it O(VS*PS*states) even if they were theoretically separate
04:53Plagman: my point was that compile time is a real metric that matters to a great extent, so it's a good idea to keep track of it if evaluating a backend compiler switch
04:53Plagman: or you might be hurting users in unexpected ways
04:54mareko: DrNick: radeonsi does it in O(VS+PS) in 99% cases, and it was a lot of work and it's what makes adopting ACO hard
04:54Plagman: there are mitigation strategies that exist out there, but despite best efforts your users _will_ have their game lockup waiting for a compile to end
04:55Plagman: 5ms or 50ms matters a great deal there
04:55Plagman: and that broadly applies to the majority of modern games
04:55Plagman: (not just due to translation, even if it doesn't help)
04:57DrNick: mareko: but there are theoretically optimization opportunities where games use a modular shader approach and e.g. one VS always produces constant values that can be inlined into the PS or one of the PS ignores ones of its inputs from the VS, making its calculation dead code
04:58curro: Plagman: sounds like those games could use a better graphics interface to have some guarantees of when (and how thoroughly) things get compiled, trying to guess whether the app wants a slow compile with full optimization or a quick and dirty one seems like a recipe for failure otherwise...
04:58DrNick: the app wants the fast compile with the full optimization
04:58mareko: DrNick: yes those are the advantages and they are good, radeonsi handles that in parallel threads and switches to the optimized shaders when they are ready
04:59DrNick: oh, nice, I didn't realize that actually got implemented in Mesa
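(To put rough numbers on mareko's O(VS+PS) vs O(VS*PS*states) point — the counts below are invented purely for illustration:)

```python
# Invented counts for illustration: how many compiles a driver does
# for separate shaders (GL/DX style) vs fully linked pipeline variants.
def separate_compiles(num_vs, num_ps):
    # O(VS + PS): each shader stage compiled once, combined cheaply later
    return num_vs + num_ps

def pipeline_variant_compiles(num_vs, num_ps, num_states):
    # O(VS * PS * states): one full compile per pipeline combination
    return num_vs * num_ps * num_states

print(separate_compiles(100, 200))              # 300
print(pipeline_variant_compiles(100, 200, 10))  # 200000
```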
04:59Plagman: even if the first part of what you said is a true statement, it's useless to think about for all the games that exist today, as it's safe to assume they won't be fixed
05:00Plagman: DrNick: it'd be nice to plumb to radv's no-opt bit at some point
05:00Plagman: aco has some support for it but it's not fully fleshed out
05:01DrNick: has anybody written an offline Vulkan pipeline compiler that populates the cache?
05:01Plagman: one ships in steam right now
05:01DrNick: in the client? or on a server somewhere
05:01Plagman: all the pipelines of all games are serialized to disk as people play and collected
05:02Plagman: into per-game pipeline databases, that gets compiled into your binary cache before you play the game
05:02Plagman: it can take a long while though
05:02Plagman: especially with llvm
05:03DrNick: oh, I thought you were transferring pre-compiled shaders around, not the pipelines
05:03Plagman: it also does that
05:03Plagman: it's just rarer to get a hit on that, so the other thing is a superset of that system, in a way
05:03DrNick: yeah, I was just about to ask if the cache hit rate was any good
05:03Plagman: but if you do also get a payload of prebuilt cache entries, it just speeds up or completely avoids the replaying process
05:03Plagman: it's good for popular combinations, but of course that flies out the window if you build your own mesa
05:04Plagman: and not super nimble right now so if you just updated your distro, odds are you won't get a new payload in a day or two
05:04DrNick: how do you identify Mesa versions that are compatible?
05:04Plagman: not at the moment
05:04Plagman: that'd be a good optimization for hit rate, but it's hard to track
05:04Plagman: we'd need compiler and driver devs to participate and it's prone to error
05:05Plagman: i missed the first word of your question
05:05Plagman: so yeah, we don't identify compatible versions - it's by exact timestamp/hash
05:05DrNick: of the shared object?
05:05Plagman: so everyone on latest stable mesa from ubuntu gets the same thing
05:05Plagman: of several, iirc - tarceri knows
05:06Plagman: if you use llvm i think that participates?
05:06Plagman: the serializer and offline replayer is called fossilize, hans-kristian wrote it
05:06Plagman: i believe it's used by mesa as well
05:07Plagman: well by mesa devs for a shader-db equivalent for vulkan
05:10Plagman: this thing
05:10Plagman: turns out fossilize is not the easiest to google
05:13tarceri: Most (all?) mesa drivers no longer use the timestamp of the build, rather we use a hash of the build
05:15tarceri: So in theory shaders can be used across distros assuming gcc outputs the same object.
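(A rough sketch of what tarceri describes — keying the shader cache on a hash identifying the build rather than a build timestamp. The function name and inputs here are made up; this is not Mesa's actual disk-cache code:)

```python
import hashlib

# Hypothetical cache-key scheme: mix a hash identifying the driver
# build with a hash of the shader itself. Two bit-identical builds
# (e.g. the same distro package on two machines) then share entries,
# which a wall-clock build timestamp would never allow.
def cache_key(build_id: bytes, shader_blob: bytes) -> str:
    h = hashlib.sha1()
    h.update(build_id)
    h.update(shader_blob)
    return h.hexdigest()
```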
05:17Plagman: in a perfect world someone would increment a version whenever any change is made in the driver or compiler that might affect the generated code for the same key
05:17Plagman: and we'd have CI to catch it when we somehow forget
05:17Plagman: but that sounds like a lot of work
05:18Plagman: when i say someone i mean anyone making said change to the driver/compiler, not that one person would try to track all of it
05:20tarceri: yeah unfortunately that was never going to work well in an open source world. Even if we managed to track it successfully project wide, there is also the issue of distros applying custom patches
05:20tarceri: customs user builds etc
05:24Plagman: i can already see the distro patch: "remove dumb version code causing less cache hits"
05:24Plagman: maybe i shouldn't be giving them ideas
05:25Plagman: i think we might do an attempt at it on the backend
05:25Plagman: we can statistically determine that two versions produce the exact same output and link them together semantically as long as they do
05:25Plagman:adds to list
05:28curro: Plagman: hm, but what if your statistics are missing some factor that might influence the compatibility of the binaries across the two versions, just not under the conditions you were taking the samples?
05:28Plagman: if variance for one given change can be so small that this happens, then we'd just compare everything probably
05:28Plagman: not the end of the world
05:29Plagman: the first thing that diverges, that version is known bad, with massive hysteresis to reduce uncertainty in the beginning
05:29Plagman: still potentially unsafe though
05:30Plagman: we have a pretty good idea on when a game is done sending us new shaders usually, at least until it updates
05:30Plagman: so we could do the compare at that point
05:30Plagman: as long as it makes sure to re-compare before changing the distributed set it should be safe, actually
05:30Plagman: just have to compare the whole thing
05:30Plagman: it's not that much data all things considered
05:31Plagman: so ought to be doable
05:34curro: IMHO deciding whether an existing binary is suitable for some other hardware+software combination is the kind of task that can be done more reliably by the driver itself than by any external observer treating it as a black box, it's the one in a position to exhaustively track all the variables that can possibly influence the compilation
05:37Plagman: right now it's being safe by only considering hash and i don't know that there's any reason to change
05:37Plagman: the source pipelines are more important and the hit rate is way higher on that, since they're not driver-specific to the same extent
05:38DrNick: how intractable is serializing GL state?
05:38Plagman: there's not that many distros out there so it works well there with enough users
05:38Kayden: hmm, serializing the gallium CSO cache? hadn't thought about it
05:39Plagman: i think you could do it if you had inside knowledge of underlying vendor driver behaviour from the app side
05:39Plagman: apps were kinda doing that already, tribal knowledge of knowing what state not to change
05:39Plagman: or what state to change when arming the shader cache in your loading screen
05:39Plagman: it'd be super dicey with all possible extensions to track though
05:39Plagman: it's doable but i wouldn't want to get close to it, sounds like a nightmare
05:43Plagman: i guess i was thinking of just tracking the state you know affects pipeline compilation on the particular driver you're dealing with but maybe that's premature optimization
05:43Plagman: you can just do all of GL state
05:44Plagman: if apitrace and the steam overlay can do it, why not
05:44Plagman: gl is probably better left alone at this point though
05:45Plagman: i'd way rather spend some energy on improving zink some more and leverage the existing vulkan pipeline serialization when that's good enough
07:26cengiz_io: hello there. is this the right place to ask about drm drivers for a rgb24 tft panel?
07:40cengiz_io: I have a tianma tft with Parallel RGB (1 ch 8-bit) FPC 40 pins. I'm trying to get this recognized as "simple panel" by drm subsystem but there are no connectors detected by dri-drm
07:41cengiz_io: and when I look at include/drm/drm_mode.h I can't see a connector type for RGB. #define DRM_MODE_CONNECTOR_XXX
07:42cengiz_io: question is: which DRM_MODE_CONNECTOR_ is suitable for parallel RGB?
07:54Venemo: curro, DrNick I'm late to the discussion, but AFAIK the optimizer isn't the slowest part of ACO so we wouldn't get much by skipping optimizations
07:55Venemo: maybe dschuermann has a flame graph somewhere
07:57Venemo: also, we have plenty of other work to do before this becomes reasonable
08:22MrCooper: Plagman: I'd say shader pre-caching is another argument for running steam in flatpak :)
08:23curro: Venemo: i was just being sarcastic in that comment ;), didn't seriously suggest to remove the optimizer, it was just an example of the trade-offs a compiler has to make between compile time and optimality of the generated binary (not all of them in what you'd call an optimization pass), there's hardly any way around that...
08:35Venemo: curro: yes, I see your point there, just saying that in this case the situation is a bit more complicated than that. fortunately aco's optimizer doesn't have to do much because NIR is responsible for a lot of the heavy stuff
10:45MrCooper: tanty: did you see https://gitlab.freedesktop.org/mesa/piglit/-/issues/43 ?
10:45gitbot: Mesa issue 43 in piglit "!353 broke generating HTML summary for xts-render profile results" [Infrastructure, Regression, Opened]
11:46rellla: hi, i'm trying to fix a stencil writemask issue in lima and have a question to glStencilMask:
11:47rellla: if i have sth like glStencilMask(0x00); glStencilFunc(GL_EQUAL, 50, 0xff); glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
11:48rellla: the stencil-replace-op should be masked with 0x00 and result in a new stencil value of 0, shouldn't it?
11:49glennk: the mask applies to writes back to the stencil buffer, analogous to how depth and color masks work
11:50glennk: so the above is a no-op
12:04rellla: glennk, thanks. i think i got it now.
12:34rellla: glennk, to verify: glStencilMask(0x03); glStencilFunc(GL_EQUAL, 51, 0xff); glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE); should end up in a new stencil value of 48, right?
12:35jaganteki: Hi, is it possible to add i2c_client in dsi panel driver and probe based on video pipeline?
12:36rellla: glennk: sry, 3x GL_INCR i meant
12:37rellla: so for GL_INCR it's "stencilNew = (stencilOld & ~writemask) | ((stencilOld + 1) & writemask)"?
12:51glennk: i think new = min(old + 1, (1<<nstencilbits) -1) & writemask
12:51glennk: i think you may be mixing up the writemask with the mask specified in glStencilFunc ?
12:57karolherbst: Kayden, airlied: not against any of that, just the mix of "code developed behind closed doors" and statements like "if everything works out, it will be the default" is a combination I don't want to see
12:57karolherbst: especially if upstream LLVM rejected it
12:57karolherbst: the focus should be on _open_ development model
12:57karolherbst: unless it's a core OS API
13:05rellla: glennk, i don't think i'm mixing it up :) but i'm not sure, what writemask should do, so another example:
13:08rellla: (in bits) old stencil value: 01010101 (85) should get a GL_INCR with writemask set to 00000011 , so what is the new value?
13:09rellla: oh, better lets take writemask 00000001
13:12glennk: it takes 85, adds 1, then before writing back to the stencil buffer the writemask is applied, and if the mask is 0b1 you get 0b1 written to the stencil buffer
13:14rellla: if so, i understood the spec wrong. for me it should be 84, because i only enable writing to 0b1 - in our case with a zero, the other bits 0b11111110 stay unchanged. but this might be wrong and is exactly what i wanted to know :)
13:16glennk: err, sorry, i meant to write 0b1
13:17glennk: 0b1 | (old & ~writemask)
13:19rellla: ((old + 1) & writemask) | (old & ~writemask) for GL_INCR
13:20glennk: for GL_INCR_WRAP yes
13:21rellla: yeah, for the others we have to clamp
13:22rellla: thanks, i think i got it and can confirm that we have a bug in the mali blob or hardware regarding the writemask :)
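(The masking rule rellla and glennk converged on — apply the op to the whole value, then merge the result through the writemask — can be sanity-checked with a tiny sketch; an 8-bit stencil buffer is assumed:)

```python
# Sketch of GL_INCR / GL_INCR_WRAP with a stencil writemask,
# following: new = (op(old) & writemask) | (old & ~writemask)
def stencil_incr(old, writemask, wrap=False, bits=8):
    max_val = (1 << bits) - 1
    if wrap:
        result = (old + 1) & max_val     # GL_INCR_WRAP wraps to 0
    else:
        result = min(old + 1, max_val)   # GL_INCR clamps at max
    # masked-off bits keep their old contents
    return (result & writemask) | (old & ~writemask)

# rellla's example: old = 0b01010101 (85), writemask = 0b00000001
print(stencil_incr(85, 0x01))  # 84: bit 0 of 86 is 0, upper bits kept
```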
13:30tanty: MrCooper, just noticed it (43)
13:30tanty: Do you think you could hand me the failing results file?
13:30tanty: I'm a bit puzzled, because the commit you bisected only changes the testing of piglit itself
13:30tanty: it doesn't really change anything in the framework
16:28MrCooper: tanty: looks like the bisect result may be wrong, it doesn't seem 100% reproducible; I'll add a comment if I find out more
17:49Plagman: MrCooper: for flatpak steam, you're saying it would let us have more people on the same mesa runtime and improve pre-caching hits, right?
17:49MrCooper: right; though hopefully at some point flatpak will be able to use the host drivers, in which case this would be moot
17:50Plagman: so flatpak steam is kinda broken in a bunch of ways, and it doesn't let us do per-game containers at the moment
17:50Plagman: but these same per-game containers we just rolled out will let us do the same thing
17:50MrCooper: then again I guess the steam flatpak could always override the drivers if it wants
17:51Plagman: having an optional built-in mesa as a service to users is on my list
17:51MrCooper: out of curiosity, broken how? Seems to work fine for me
17:51Plagman: it's usually features on the edge of it that not everybody uses
17:51Plagman: like some aspects of controller support get cut off from the hardware access they need, remote play has trouble doing the screen readback and pulseaudio magic it needs to do, etc
17:52Plagman: i think steam is more of a platform-ish thing than the flatpak app model allows these days, for better or worse
17:52MrCooper: screen readback should work using portals
17:55Plagman: yeah , i just don't think steam has any support for that
17:59Plagman: the main broken thing right now is that we can't use bubblewrap to do per-game containers
19:46mslusarz: heh, what's up with all those new tags for commits from 5 years ago that keep popping up on mesa/mesa?
19:48karolherbst: curro: the discussion is all moot anyway as IGC won't make it into mesa
19:48karolherbst: and that's a fact
20:06mareko: can somebody from Intel please bisect this failure? https://mesa-ci.01.org/mareko/builds/20/group/63a9f0ea7bb98050796b649e85481845
20:06curro: karolherbst: wasn't trying to convince anyone of the opposite, i'm not particularly fond of IGC myself, just saying that the compile-time argument can be a double-edged sword
20:10dcbaker[m]: mslusarz: That might be my fault, I used git push tags --all because I didn't have tab completion working and was lazy
20:10dcbaker[m]: and didn't realize that it would push tags from other remotes
20:14curro: mareko: do you have a link to the branch that triggered it?
20:18austriancoder: daniels: do you remember when we talked about x11 deps, gbm and piglit?
20:21austriancoder: daniels: without the x11 deps I get no libGL.so and piglit with gbm fails: https://hastebin.com/upadequdef.json and here is the log containing the piglit building
20:21mareko: curro: firstname.lastname@example.org:Mesa_CI/repos/mesa.git intel-ci/dev/mareko
20:22mareko: curro: email@example.com:Mesa_CI/repos/mesa.git dev/mareko
20:22austriancoder: daniels: soo.. I think I need those deps and want to build libGL in CI
21:00curro: mareko: 59a10301bfc0bbdcc seems like the culprit
21:07FLHerne: karolherbst: I don't really see why? It only affects the Intel driver, so any problems from it being a bad idea don't really hurt anyone else
21:07FLHerne: Except maybe distros
21:08karolherbst: FLHerne: why would mesa want to depend on proprietary code?
21:09karolherbst: the only exception here is really OS provided stuff like for macos or windows where system libs are just you know, developed behind closed doors
21:09karolherbst: but I don't see why we should accept that for any other dependencies
21:10FLHerne: Who said anything about proprietary code? It's open-source now, never mind in a year or whenever they said
21:10karolherbst: and I am not even worried about that being an optional thing. I am more worried about statements like "If all goes well by the end of next year we could potentially see IGC used by default within Mesa."
21:10karolherbst: FLHerne: it's developed behind closed doors, so that counts as proprietary
21:10karolherbst: even LLVM doesn't want it
21:10karolherbst: so why would we want to?
21:11FLHerne: I don't see how it's different from LLVM from a Mesa standpoint
21:11karolherbst: I don't want to be this project where Intel can say "uhh, mesa accepted it, you just don't like us!"
21:11FLHerne: Both of them are pretty much impractical to patch to solve immediate Mesa problems
21:11karolherbst: FLHerne: that's not the point
21:11FLHerne: (LLVM because the release cycle is so slow)
21:11karolherbst: it's about how the deps are developed
21:11ajax: llvm has an active upstream with multiple invested parties
21:12ajax: igc, not so much
21:12karolherbst: why are we even discussing this? this should be a no brainer
21:12karolherbst: no deps developed behind closed doors
21:12FLHerne: ajax: Yeah, but IGC will only ever be used by the Intel driver, so that's their own problem
21:12karolherbst: and I don't see why anybody would accept anything else
21:12ajax: no, sorry, if i have to ship an intel driver it's my problem too
21:12karolherbst: FLHerne: and ours once we accept it as a dep
21:12karolherbst: especially if people push it as being default
21:13karolherbst: and I don't have this "llvm is at 18, but IGC still targets 12" problem
21:13ajax: open process really does matter
21:13karolherbst: *don't want to
21:13karolherbst: if the code is good, get it in upstream llvm first
21:13karolherbst: _then_ we can talk about changing mesa
21:13karolherbst: not the other way around
21:13karolherbst: or ship it inside mesa and develop it inside mesa
21:13karolherbst: that's fine too
21:14karolherbst: but then no "update to internal code base" crap
21:14karolherbst: anyway, I am most angry at statements like " If all goes well by the end of next year we could potentially see IGC used by default within Mesa."
21:14karolherbst: forget anything else
21:15karolherbst: this is what pisses me off the most
21:15karolherbst: there were _0_ public discussions about this
21:17curro: karolherbst: take it easy maybe that just means someone is going to be disappointed by the end of next year
21:18karolherbst: curro: yeah, I know. it's just.. the audacity to say something like that
21:18karolherbst: I am also not saying that by the end of next year, intel will produce all their chips at TSMC because that's the best path forward :p
21:18karolherbst: and make it sound like I can just decide this
21:20karolherbst: and at this point I am just not sure if that crap comes from Intel or somebody else
21:23curro: karolherbst: personally i don't see anything wrong with driver from random company $x having an open-source external compiler dependency with a stable interface, whatever the development model of that dependency is, as long as there is some compelling material reason to do it (like their windows compiler being massively better than the Mesa one) -- but that's precisely what i think is missing in this case
21:24karolherbst: curro: well, I do personally and because of my job
21:24karolherbst: doesn't matter if it's a factor of 100x faster, I still wouldn't accept it
21:25karolherbst: and I don't see the problem of why that stuff couldn't be developed in the open with a normal/standard open development model
21:27curro: karolherbst: whew, if it was a factor 100x faster maybe it would be time to fork it and take over the project ;)
21:27karolherbst: that would be an alternative indeed :p
21:31airlied: curro: every linux distro now gets another component they have to manage release cycle interactions with
21:31karolherbst: also, some distributions are very picky about live images
21:32airlied: llvm based out of tree is the worst, since unless the vendor ties into llvm release cycle and dates, distros start being limited in what they can ship
21:32airlied: it just doesn't scale
21:32airlied: if we have 3 vendors all with LLVM out of tree backends we'd never ship a new LLVM for 5-6 months
21:34karolherbst: I'd even go that far and say no llvm targeting deps unless they have a branch targeting the _next_ major release already
21:34karolherbst: even if it's openly developed
21:35karolherbst: I am quite happy we got the llvm-spirv-translator to not be an llvm fork before, so that's something working out nicely at least
21:35karolherbst: and intel helps out there as well, which is good
21:36airlied: it's also small enough to keep up with llvm project
21:36airlied: I can't imagine every vendor having out of tree backends with different release cycles
21:36airlied: it would make releasing mesa pretty impossible
21:36karolherbst: well, their choice
21:36karolherbst: as long as they keep up I wouldn't mind
21:36karolherbst: and if its developed in the open
21:37airlied: even that is very messy if you scale it out
21:37airlied: you end up shipping git snapshots of 10 projects in 5 years
21:38karolherbst: ohh, if they want to waste money on that I don't want to stop them :p
21:38karolherbst: would simply say that upstreaming is probably the best path forward
21:38karolherbst: but yeah
21:38karolherbst: it would be a terrible situation
21:39airlied: we'd have to block it at a distro level
21:39karolherbst: ohh sure
21:39airlied: as we don't have the manpower to cover that sort of packaging
21:39airlied: we barely have the power to cover llvm and mesa :-P
21:39karolherbst: and I think that acceptance in distributions is also a big factor
21:40karolherbst: but that is already an unspoken rule :D
21:40airlied: yeah the problem is some distros don't think of long term consequences when making short term decisions :-P
21:41curro: airlied: sure having additional dependencies can always be a burden to the adoption of a driver, but why should we have a rule banning such dependencies a priori? in some cases the vendor may judge the benefit to be enough to justify the risks of random distros not building the driver if the dependency is too annoying to package...
21:41airlied: curro: the problem is some random distro, it saves me months of saying no to fedora/rhel
21:41airlied: isn't some random distro
21:42airlied: if we can create an upstream consensus it stops distro shopping
21:42airlied: but waaah Ubuntu ships it
21:42airlied: why won't you ship it, you are just mean :-P
21:42karolherbst: also, people should know what to expect before doing stupid things
21:43karolherbst: it's just fair writing down the rules most of us agree on already
21:43airlied: curro: also upstream should provide guidance to distros
21:43airlied: most distros don't have enough domain knowledge to make educated good decisions
21:44airlied: like Ubuntu saying yes to one vendor might be sustainable, but once you get to 5-6 vendors the combinatorial explosion happens
21:44airlied: and they might not understand the gravity of that first yes
21:45airlied: it sets a precedent and it's hard to step back from when you realise it was a bad idea
22:14tjaalton: the only thing using it (IGC stack) on debian/ubuntu atm is the compute runtime
22:15tjaalton: which, as it turns out, didn't really help using darktable at all
22:22Lyude: anyone else seeing this extremely vague error when trying to compile drm-tip? FAILED unresolved symbol vfs_getattr
22:26ccr: https://lore.kernel.org/bpf/20201016213835.GJ1461394@krava/T/#mf23ff4a28dcb7a51060657a1144dda0f54e9bada seems to indicate that it's some kind of toolchain issue
22:27ccr: though apparently related to kernel .config as well
22:42Lyude: ccr: ugh, seems like it's probably my toolchain. this is very painful
22:44Lyude: ccr: thanks for the help though
22:45Lyude: i wonder if I can maybe just, turn this off
22:52ccr: perhaps you can downgrade gcc?
22:53Lyude: yeah I might, I just tried updating (gcc wasn't on the list of updates, and I should still be able to downgrade it later if this doesn't work) to see if maybe that'll fix it, but otherwise I'll just see if I can find an older version of gcc
22:53ccr: I recently had to install gcc 8 on my Debian alongside 10 for some kernel bisecting due to certain issues
22:54ccr: (iirc it was that gcc 10.x miscompiled some older versions of kernel, which then failed to boot)
22:55Lyude: yeah, bisecting older kernels has started to get really painful
22:56Lyude: huh, config restarted after the update, something has changed
22:56Lyude: ah now everything is broken?
22:56karolherbst: Lyude: you mean when the build system throws away your config and starts from scratch?
22:56Lyude: probably missing distclean or something
22:57Lyude: karolherbst: yes
22:57karolherbst: yeah... no idea, it happened for me in some cases as well...
22:57karolherbst: no clue why or what was up
22:57karolherbst: I think I ended up skipping
22:57karolherbst: or some other workaround
22:58karolherbst: hit this issue too often as well
22:58Lyude: karolherbst: did you see the original issue I was having though?
22:58karolherbst: the last time I hit this config issue was with 4.x something
22:58Lyude: 17:22 <Lyude> anyone else seeing this extremely vague error when trying to compile drm-tip? FAILED unresolved symbol vfs_getattr [ccr pointed to https://lore.kernel.org/bpf/20201016213835.GJ1461394@krava/T/#mf23ff4a28dcb7a51060657a1144dda0f54e9bada ]
22:59Lyude: the config restarting isn't a problem btw, I just noticed it since I assume that means _maybe_ the next build will work :P, or maybe it'll still be broken in the same way
22:59karolherbst: dunno, but the build system is probably just buggy
22:59Lyude: oh no
22:59Lyude: ok yeah now the build is even more broken
23:00ccr: there's a gcc issue about it https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97060
23:01ccr: apparently some gcc 11 code was backported to fedora's gcc 10.x and affects it
23:02Lyude: https://paste.centos.org/view/664a873c this is the issue i'm hitting now btw
23:02Lyude: apparently I really should not have updated
23:05Lyude: karolherbst: are you fully updated, and can you see if you're running into this issue as well?
23:10Lyude: boy i'm, extremely confused. I didn't even update gcc, so why is there suddenly a new gcc attribute breaking compilation
23:15Lyude: oh wow
23:16Lyude: nevermind i'm a fool
23:16ccr:blinks his eyes
23:16Lyude: i somehow switched to the workspace I had open with the rhel kernel and tried compiling that
23:16Lyude: no wonder i suddenly got new compiler errors
23:17karolherbst: Lyude: what should I compile?
23:17Lyude: karolherbst: just a drm-tip kernel
23:17karolherbst: what config?
23:18Lyude: one sec
23:19Lyude: oh. fun. i can't downgrade gcc.
23:19Lyude: because f33 shipped with this gcc.
23:19Lyude: karolherbst: https://paste.centos.org/view/742b208b
23:21Lyude: oh-found an older gcc
23:31Lyude: ugh. still seems to be broken though.
23:35karolherbst: Lyude: "FAILED unresolved symbol vfs_getattr"
23:35Lyude: ugh. yep. that's what i'm seeing too.
23:35Lyude: really will not miss gcc if we ditch it someday
23:36karolherbst: you mean if we use rustc for everything? :p
23:36Lyude: would it kill them to like, actually compile test against the linux kernel before releasing updates
23:36Lyude: karolherbst: maybe. i'm just sick of gcc constantly breaking things.
23:37karolherbst: and your hope is that llvm breaks less often? :p
23:37Lyude: good point
23:39ccr: this particular case of breakage was due to RH/fedora people backporting code from development version of GCC tho :P
23:39Lyude: ccr: oh, right.
23:39Lyude: guess that justifies me in filing a bug then.
23:49Lyude: YES-found an old enough gcc that works
23:49ccr: hooray \:D/