01:20mareko: function inlining indicates a compute frontend
01:23HdkR: Obviously it's Tenstorrent hardware bringup.
01:30airlied: we have functions in cooperative matrix now :-)
02:02airlied: glehmann: did you complain before about cmat stuff not being deref based
05:51MoeIcenowy: ~~Now I think we should have a DRM_IOCTL_MODE_CREATE_DUMB2 for FB with modifiers~~
05:53MoeIcenowy: oops it's in the todo document
06:50airlied: glehmann: https://gitlab.freedesktop.org/airlied/mesa/-/commits/coop-mat-use-derefs-wip I've started playing around, nearly have basic KHR_cooperative_matrix CTS passing on lavapipe
08:33MrCooper: MoeIcenowy: that would defeat the point of "dumb"
08:55glehmann: airlied: yeah, I did complain about the derefs. Why do you think it won't work for you?
08:57glehmann: and your current branch looks pretty similar to what cmarcelo originally wrote before someone insisted that cmats should not use ssa
09:39dj-death: what do the sources mean for the nir atomic_swap intrinsics?
09:39dj-death: src0 = surface/image
09:39dj-death: src1 = coord
09:39dj-death: src2 = data or compare value?
09:41pendingchaos: atomic swap doesn't have a compare value
09:42pendingchaos: it writes the data and returns the old value
09:43dj-death: arg
09:43dj-death: okay
09:44dj-death: so that would be more like nir_atomic_op_cmpxchg
09:46pendingchaos: actually, maybe atomic_swap is a misleading name and it's always nir_atomic_op_cmpxchg
09:46dj-death: pendingchaos: and in the atomic intrinsic, is src1 or src2 the compare value?
09:47pendingchaos: I think the second is the compare value
09:47dj-death: thanks a lot
09:47dj-death: I'll add a comment to nir_intrinsics.py
09:48pendingchaos: actually, might be the opposite, and the first is the compare value
09:48dj-death: heh
09:48glehmann: I wish nir intrinsics sources were named like tex sources
09:49dj-death: yeah I'm reading our backend and I can't tell :)
09:50dj-death: I probably updated nir_get_io_data_src_number() wrong
09:50dj-death: returning 3 for all image intrinsics
09:51dj-death: oh wait no
09:51dj-death: because the python file adds the image handle first...
09:53dj-death: I guess I'll believe nak used the same convention
09:53dj-death: let cmpr = self.get_src(&srcs[1]);
09:53dj-death: let data = self.get_src(&srcs[2]);
14:12MoeIcenowy: MrCooper: it's still dumb because it's only for display instead of rendering
14:12MrCooper: you're twisting it to mean what you're looking for
14:40dolphin: airlied, sima: drm-intel-fixes sent early as no new patches were picked today
14:42javierm: MoeIcenowy: dumb buffers are supposed to be easily accessed by the CPU, so making the format non-linear / tiled makes it not dumb anymore as MrCooper mentioned
15:09zmike: mareko: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/40095
15:11mripard: javierm: eeeh, I guess the issue is also that we have no other generic ioctl to allocate a buffer for the scanout
15:14mripard: javierm: so we could definitely have an argument that dumb buffers aren't meant to be tiled (which isn't documented anywhere though) but then what?
15:15alyssa: pendingchaos: tbh I kinda regret not merging the swap and non-swap intrinsics
15:15alyssa: I don't really remember why I didn't
15:30javierm: mripard: yeah, I can understand that there might be a need for a generic ioctl to allocate buffers with a format modifiers
15:30javierm: I'm just not convinced that should be called dumb buffers in that case
15:30mripard: that's the thing too
15:31mripard: it was never defined what a dumb buffer is, except that it can be mapped by the CPU and accessed by the scanout
15:32javierm: yes and that's why I think that limiting to linear makes sense since then the buffers can always be mapped and linearly accessed
15:52mripard: mapped by the CPU doesn't really mean easily accessed by the CPU :)
15:52mripard: which I think is the main point of that TODO item
15:52mripard: the definition of what a dumb buffer is is very vague and ambiguous, and we need to clear it up.
15:54MrCooper: dumb BOs were created for simple generic CPU drawing code, which tiling would defeat
15:55MrCooper: there's no generic ioctl for creating BOs suitable for any specific HW functionality because it's not really feasible
15:57MrCooper: apps are supposed to use GBM/Vulkan/... APIs for that, which internally use driver-specific ioctls
16:04javierm: MrCooper: agreed
16:04javierm: mripard: and also agree with you that the definition of what a dumb buffer really is should be clearly documented
17:04mlankhorst: Anyone willing to review https://patchwork.freedesktop.org/series/162135/ ? Fixes https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14613/shard-bmg-4/igt@xe_module_load@display-opened-unload.html
19:56airlied: cmarcelo: who decided cmats shouldn't be ssa?
19:57airlied: glehmann: I think the hardest problem is that you need to propagate the cmat description to a bunch of intrinsics
19:57airlied: and an intrinsic like muladd might need 4 cmat descriptions; you can source them from elsewhere, but I'm not sure that's actually going to work 100% of the time
20:03glehmann: probably not with phis?
20:03glehmann: but tbf, I think it's only 3 descriptions because nobody has hardware where C and D are different
20:04airlied: indeed
20:04airlied: yeah I think with phis I'd be screwed to figure it out from previous instructions
20:05airlied: though I think I'd prefer that ugliness to all the variables
20:08glehmann: airlied: I think this is the old discussion: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23825#note_2054330
20:13airlied: oh it was gfxstrand idea
20:14airlied: it's really making workgroup scope harder, or at least it's making me blame it for making workgroup scope harder :-)
20:15airlied: mapping all the tmp vars to the shared temporary allocation is very messy
20:18glehmann: at some point I would like to have an opt pass that reduces the number of transposes before lowering, and I think that would also be easier with ssa
20:27airlied: glehmann: yeah I suppose the other option is another pass that tries to clean a bunch of that mess up before driver lowering
20:27airlied: or before I get to workgroup lowering
20:31cmarcelo: airlied: that old discussion is probably the best reference on this yeah.
20:32cmarcelo: haven't thought much about workgroup scope but I probably should!
21:23airlied: cmarcelo: workgroup scope is painful because you have to do the shared memory loads implicitly in the compiler
21:23airlied: and also in some places shared memory stores
21:24airlied: but you also don't want to do those unnecessarily so it would be nice to fuse ops, but doing that after lowering is messy
21:28mareko: I've just asked Claude whether NIR has any pass that removes unused functions, and it correctly answered nir_remove_non_entrypoints and nir_cleanup_functions and correctly explained the difference between them
21:32zmike: sweatytowelguy
21:33zmike: does this mean we no longer have to write docs?
21:33airlied: mareko: it just hasn't caught up with the cmat one :-)
21:33airlied: nir_remove_non_cmat_call_entrypoints
21:37mareko: my follow-up question "are there any others?" caused it to find more of them, including nir_remove_non_cmat_call_entrypoints
21:37airlied: nice
21:45mareko: a common pattern with LLMs is that they don't give the best or most complete answer immediately; instead they tend to give an answer that's also correct but suboptimal, and more prompts are needed to get a better one
21:47mareko: zmike: we don't have to write docs, but writing good code comments would help since those get fed into it
21:50mareko: or rather, it matters less if the documentation is in .rst or .c or .py
21:51karolherbst: oh right, that reminds me.. we probably need to decide at some point what to do with actual code submissions that were partly created by some genAI agent, because 1. they are in a weird spot in terms of IP (some countries claim that the output isn't copyrightable), so we might actually have to request that people disclose which parts of an MR were generated
21:51karolherbst: to be safe, and 2. we can't know if the generated code is already copyright protected or not, and whether the agent is able to properly disclose it, though there are also cases where false attribution was given...
21:51karolherbst: and given that mesa is basically shipped everywhere, we might not want to risk any weird legal battles
21:53karolherbst: and if we want to decide that it's fine actually, I'd prefer if we pay a lawyer to give that to us in writing :)
21:54mareko: it's only a tool like a calculator, the author must still prove correctness, state the purpose, and justify the benefit
21:57karolherbst: this isn't about correctness, it's about copyright laws
21:57airlied: if you are writing code for mesa using an LLM, it's unlikely it will produce copyrightable code from another project that you would ship in mesa; it's more likely it will produce code infringing on mesa contributors, which mesa contributors already likely do
21:57karolherbst: not good enough
21:57airlied: it is, you do understand no lawyer exists to answer this question
21:58karolherbst: as I said: I'd rather have that in writing from an actual lawyer
21:58airlied: it's jurisdictional
21:58karolherbst: not gonna make statements about legal liabilities, because I'm not an expert and neither is anybody here (probably)
21:58airlied: some countries in your initial statement, means you need to find lawyers in those countries
21:59karolherbst: sure
21:59airlied: I'm not even talking about legal liability
21:59airlied: just your statement of trusting a paid lawyer to make any decision different to one we make ourselves
21:59karolherbst: well it would matter in court whether we just decided ourselves or did our best to answer that question with proper legal advice
22:00mareko: it gets real complicated with "SW company X has an NDA and contract with AI company Y that nobody knows about, and SW company X develops code with it, and then publishes it as its own and refuses to acknowledge that AI was used because that's confidential"
22:00karolherbst: I'm sure we aren't the only one in this position, and SFC probably would be able to help out
22:00airlied: I think so far the only advice I've seen is maybe mark things as being AI generated so in future you have some idea, but I don't think the genie is going back in the box
22:01airlied: we should probably start using the DCO in mesa
22:01karolherbst: yeah well.. if companies can't disclose that they used AI tooling to generate code they submitted to Mesa, then frankly they shouldn't get their submissions merged
22:01airlied: at least then we can be assured the person says they've sorted it out themselves
22:01airlied: karolherbst: but how do we know they did that?
22:01karolherbst: just put up a statement somewhere, so if lawyers ask we can say "well.. they were supposed to" 🙃
22:01karolherbst: shit like this sadly matters
22:02airlied: doing DCO would cover that as well
22:02karolherbst: yeah probably
22:02airlied: since it's stated you have the rights to submit the code
22:02airlied: which means either you or your company should know the legality of AI in your jurisdiction
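For reference, the DCO workflow being proposed is just the `Signed-off-by` trailer, which `git commit -s` adds automatically; the trailer is the contributor's assertion, per the Developer Certificate of Origin, that they have the right to submit the change under the project's license. A quick demo (the path and identity are placeholders):

```shell
# `git commit -s` appends a Signed-off-by trailer built from the
# committer identity; projects using the DCO require it on every commit.
rm -rf /tmp/dco-demo
git init -q /tmp/dco-demo
cd /tmp/dco-demo
git config user.name "Example Dev"
git config user.email "dev@example.com"
echo demo > file.txt
git add file.txt
git commit -q -s -m "example: demo commit"
git log -1 --format=%B
# last line printed: Signed-off-by: Example Dev <dev@example.com>
```

The trailer doesn't prove anything by itself, but it puts an explicit, recorded assertion on each commit, which is the "air support" being discussed.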
22:03karolherbst: yeah.. maybe, maybe not, thing is that mesa also gets shipped everywhere, so there is that
22:03karolherbst: like the risk is higher on our end than on the companies'
22:04airlied: we don't ship mesa everywhere, we provide mesa under MIT license
22:04karolherbst: dunno.. feels wrong to yolo it at least
22:04karolherbst: okay sure, I mean distros do, sure
22:04karolherbst: but they also trust us to not fuck it up
22:04airlied: they also ship a mountain of other code that is also going to be AI generated
22:05karolherbst: again, we can't assess the legal risks tho
22:05airlied: like when this becomes an actual problem there will be plenty of legal advice
22:05karolherbst: well it already is kinda, just hasn't reached us yet
22:05airlied: because no distro will be able to ship anything
22:06karolherbst: well this is exactly the part I'm not comfortable with. We can't assess those legal risks and we shouldn't
22:08airlied: I just don't understand what advice you think a lawyer can give the project that will stop the problem from happening
22:08karolherbst: the lawyer might say "it's better to wait for regulation and hold off on any AI generated contributions as a policy for the time being"
22:08airlied: we had discussions about this from the kernel pov at maintainers summit, and even with DCO you still can't avoid the problem of someone just not telling you they used AI
22:09airlied: and you'd rather they did tell you than ban contributions
22:09karolherbst: sure, but if they lie, it's a different situation
22:09airlied: because if you ban contributions, they are more incentivised to lie
22:09karolherbst: the practical outcome isn't as relevant as us being able to tell that it wasn't supposed to happen and why not
22:09airlied: but I do think without DCO we are probably a bit swinging in the wind
22:09karolherbst: okay, but that's a risk we have to collectively decide on anyway
22:10airlied: so maybe we should work on adopting DCO then worry about AI contributions
22:10karolherbst: sounds like a good first step
22:10airlied: because unless we have some sort of signoff on contributions being something you were legally allowed to contribute in your jurisdiction we are yolo anyways
22:11airlied: even if we put rules in place, because nobody has to agree to them
22:11daniels: karolherbst: I can assure you that 'are we allowed to ship this code in/to country Y' is absolutely not a question you should ask, or Mesa as a project should discuss
22:13karolherbst: right, fair, that's too specific and not within our scope, but the point is more that just yoloing the thing is also kinda sketchy, especially if there's a problem in the future
22:14airlied: the thing is every project is yoloing it because there is no good legal guidance, because there is no precedent
22:14karolherbst: that doesn't mean it's fine to continue to yolo it
22:14airlied: and for mesa it's very unlikely we'll suddenly generate the NVIDIA GL driver source from an LLM that was trained on it
22:16karolherbst: not something I'd comment on tbh. Like it could happen or it might never happen. If you ask for a very nvidia specific thing with very nvidia specific terms it might even do that for real
22:16mareko:should have brought popcorn to this
22:17airlied: I'd be more worried AMD would contribute PAL code under license than LLM generated illegal PAL code :-P
22:17daniels: this has already happened though
22:18mareko: yeah how do you know about PAL? did you get it from AI? :D
22:18daniels: someone contributed a proprietary driver ... it's just that they zipped it up and attached it to Bugzilla
22:18karolherbst: "fun"
22:18daniels: so we quietly deleted it and emailed them to let them know that they probably shouldn't contribute the SGI GL reference implementation
22:20daniels: which informs my personal take on it; if we saw piles of Windows-looking code from a new contributor then we'd have a long private discussion about it, but when we get DC or whatever from AMD, it comes with AMD's assurance about how it's developed and how it isn't just laundered Windows code, and we trust that assurance because we trust them
22:21daniels: the same would be true of LLMs: if someone turned up with a fascinating state-of-the-art NVIDIA raytracing implementation, then we'd go quietly ask them some pretty strong questions, and maybe let NV people know that they should remind the rest of NV not to copy and paste their driver stack into training
22:21daniels: just the same as if someone was copying & pasting from Stack Overflow and it clearly wasn't code they could actually speak to or reason about
22:22karolherbst: right
22:27karolherbst: anyway.. maybe the point is just that we need that discussion and we should probably try to reach some sort of consensus on what to do here. And I think starting with some basic DCO stuff is _probably_ a good start there
22:28daniels: I think DCO is an excellent idea anyway, and tbh I've never entirely understood why we don't do it
22:28karolherbst: I just wouldn't really want to take the risk and deal with the fallout if it goes loud for whatever reason. And a big part of the community also has firm "no to AI" stance, so I'd even consider AI generated submission before that conversation as a bad faith attempt to steamroll it
22:28daniels: karolherbst: I don't think it would be bad faith at all, given that the closest we've come to agreement so far is 'just be able to take responsibility for whatever it is that you do submit'
22:29karolherbst: and it would be kinda sad to see people leave, because somebody else thought it's okay to just do it
22:29daniels: a lot of people don't like it and that's fair enough, but I'm sure some people have moral objections to DX12 or Metal as well, yet we have three drivers doing exactly that
22:29karolherbst: well... yeah here, but I know that some have a stronger stance on it
22:30karolherbst: and some maintainers already voiced "no AI at all" opinions.
22:30daniels: right, and some would say that enabling DX12/Metal is immoral
22:30daniels: in fact I'm pretty sure I remember you being quite vocal about that early :P
22:30karolherbst: so it's not just hypothetical, some already said it
22:30karolherbst: heh
22:31karolherbst: I think my concerns were about something else on that matter, but fair enough
22:31mareko: svga is also partially a windows driver, and that's been in Mesa for 16 years
22:31daniels: I'm being serious though - if our policy is that it's unacceptable for any of the myriad excellent reasons, then that's fine, let's codify that. but if our current policy is an ambivalent shrug, then it's hugely unfair to say that someone complying with that is acting in bad faith
22:32karolherbst: okay, maybe "bad faith" is the wrong term here
22:32karolherbst: maybe more like "willingly ignoring concerns from part of the community", because it is known that some aren't happy about it and it feels irresponsible to just ignore that.
22:32daniels: I mean, GNU's policy is that AMD is a disgusting enemy of freedom because they have some stuff running on a microcontroller rather than burnt into silicon, but we do accept their firmware-reliant code, and accept that it means we alienate the hardcore no-firmware-ever wing
22:32karolherbst: or to shrug it off
22:33daniels: karolherbst: again, if our position is that a contribution made partly or wholly from genAI is unacceptable, then our policy needs to become 'you must not contribute anything partly or wholly from genAI'
22:33karolherbst: oh sure, and it led to conflict having that stance
22:33karolherbst: which we'll probably not be able to avoid in this case either
22:34karolherbst: which is also why I'd rather have it discussed before the fact
22:34daniels: but if we can't form consensus that it's unacceptable, then it's not right to put it back on the contributor and say that it's their fault for reading our policy and complying with it
22:34daniels: that's just an abrogation of responsibility tbh
22:35karolherbst: right, if we have that discussion and we don't reach a conclusion, then yeah, we'll also have to live with the consequences of not having a conlcusion
22:38daniels: karolherbst: we did have that conversation though; Venemo articulated something that got a bunch of agreement, and that's what we have now
22:41karolherbst: I'd rather have that written down properly, because the topic itself is so dividing and at least documented on gitlab if that was consensus or not
22:44mareko: let's be happy that we have contributors
22:53Venemo: daniels, karolherbst personally, I think that what we added in the contributors' guide should cover this. the submitter should understand the code they are submitting and they must have the rights to submit it under Mesa's license. sounds like those would exclude anything that you want to exclude, already
23:00karolherbst: oh sure, but we also discussed that the topic around genAI specifically is to be postponed
23:01karolherbst: in that MR
23:06karolherbst: like.. I really just want to avoid us running into the issue of somebody bringing up "why do we allow AI code submissions? Can we please talk about it first?" and having some really upset people here, because it's that kind of topic
23:10airlied: like what part of the contributor agreement should we improve on?
23:10airlied: "The submitter is responsible for the code change, regardless of where that code change came from, whether they wrote it themselves, used an “AI” or other tool, or got it from someone else. That responsibility includes making sure that the code change can be submitted under the MIT license that Mesa uses."
23:10airlied: reads to me like we've covered our bases
23:10airlied: the DCO might add a little more air support
23:10karolherbst: this isn't about improving the contributor agreement, it's about having the conversation about AI code submission
23:11airlied: how do you think that will work out then?
23:11karolherbst: we've added that text, while also saying "AI should be discussed later in a separate MR"
23:12karolherbst: well not literal quote, but that's what at least some agreed on there
23:12airlied: AI submissions should, like daniels said, be rated on the legality and quality of the submission, just like non-AI ones; anything more than that is going to piss off at least some subsection of people
23:12karolherbst: I obviously can't know how that will work out, maybe we don't find common grounds or something. I don't know, it just feels wrong to be against having that conversation "because we already cover it sufficiently"
23:12airlied: I don't see a discussion resolving into consensus
23:13airlied: if as you said some people are 100% against AI code submissions, and a bunch of people have employers who fund mesa who really want their employees to use AI
23:13karolherbst: yeah but it's not the call of the two of us to say "yeah well that discussion is pointless anyway, because it's pissing off people"
23:13karolherbst: if we agree, despite the incentives and our employers, to ban it anyway, I would accept that outcome, and if we agree that we shouldn't ban it I also would
23:14airlied: it's a no-win discussion and yes we can have it so people feel heard, but there isn't really a comfortable middle ground
23:14karolherbst: but it's not my call to make alone, and not to answer that question for everybody else
23:14karolherbst: I still think it's better to have that proper conversation even if we already think it won't go anywhere
23:15karolherbst: I wouldn't feel good about if some people here would just decide such things on their own, "because a conversation wouldn't go anywhere" and I would be rightly upset about that
23:15airlied: I'm not even sure how you start having it, mailing list or open an issue, but it's more likely it'll just rile people up and piss them off than create something useful
23:15karolherbst: yeah......... I don't really know what would be a good way to discuss it, because it would also attract internet people..
23:16karolherbst: but also not having it also feels wrong
23:16airlied: look, you can't build consensus between the positions "I don't want a project where genAI is acceptable" and "I'm paid to work on a project and my employer insists I use genAI"
23:16karolherbst: well good for the two of us is, that we can still vote for banning AI without violating any company policy
23:16karolherbst: but I do see that others are in a worse spot than that
23:17karolherbst: though we'd probably also get questions asked about how we dared
23:17airlied: the company would stop funding mesa developers eventually though, so yes you can do it without violating company policy, but the company can decide that this project isn't somewhere they want to focus
23:17karolherbst: but even the decision of "let's not discuss this" is I think something we should collectively agree on
23:17karolherbst: sure
23:18karolherbst: but we as a community might actually want to do it regardless, even if it's a bad decision financially
23:18karolherbst: like
23:18karolherbst: I don't know
23:18karolherbst: maybe we would actually have the majority wanting to disallow it, so what now?
23:18airlied: I'd warrant our community is largely made up of people employed by companies who won't allow them to make a stand
23:18Company: karolherbst: what would your end game look like?
23:18Company: 5-10 years down the road
23:19karolherbst: I don't have a crystal ball
23:19Company: I can only see 2 options:
23:19karolherbst: it could be that nobody talks about AI in 5-10 years
23:19Company: either AI explodes
23:19Company: or you're the small holdout that gets forked and dies
23:19karolherbst: sure
23:19karolherbst: I mean that's obvious
23:20Company: in both cases it's not worth fighting AI
23:20Company: in the first case you win anyway, in the second you die no matter what
23:20airlied: I'd warrant the contributors agreement already allows AI submissions, and if someone wants to change that then they should propose it
23:20karolherbst: okay sure, but even if that makes sense, it's not grounds enough to not have the discussion collectively on an issue or something and come to the conclusion collectively
23:22karolherbst: like I trust everybody here and I'm sure we find the right conclusion, just need to do it once and then the topic is done for (hopefully)
23:23daniels: we did, and now the outcome is ‘some people don’t like it so we need to revise it in an unclear way’
23:23karolherbst: we didn't
23:23daniels: it was there
23:23karolherbst: we decided that we will decide on AI later
23:23airlied: that clearly isn't what ended up in the contributor agreement
23:23airlied: the contributor agreement says you can use AI if you totally own the output and understand it
23:24karolherbst: it wasn't, because we decided that we don't want to discuss it on that MR to get the current wording in for now until we actually do discuss AI
23:24airlied: that sounds like we don't want to have this discussion, so we avoided it successfully
23:24karolherbst: so now using "we actually merged it" feels dishonest tbh
23:25zmike: we had been avoiding it successfully*
23:25karolherbst: yeah, and I think if we keep ignoring it, it just will lead to a worse and bad outcome
23:25airlied: karolherbst: what worse and bad outcome?
23:25airlied: I'm not saying you're doomering here, but it feels a bit doomery
23:26karolherbst: I can see some people I'd really rather not lose leaving the project. But maybe that also wouldn't happen, who knows. I just know that some have really strong opinions on that matter and they might or might not act on it
23:26airlied: I think rust had similar reactions, christ I've had people freak out when I asked to review cmat patches, because they were AI related
23:26karolherbst: and I would feel bad if something of the likes would happen and we wouldn't have had that discussion.
23:27airlied: do we think making a decision will force them out of the project quicker?
23:27karolherbst: not sure that rust compares to AI, because AI has many moral and ethical implications
23:27karolherbst: maybe? maybe not?
23:27karolherbst: does it matter?
23:28airlied: it does matter, but unfortunately the world isn't going to say "we should put this back in the box and drop it in the sea"
23:28karolherbst: yeah, but my point is just that I feel like we should discuss it, and it's not fair to say "we rather shouldn't unless people are okay to not discuss it", but personally tbh I would rather we do discuss it and at least try to reach an agreement
23:29karolherbst: and if not, then we don't
23:29karolherbst: other projects managed to find an agreement
23:29karolherbst: so why shouldn't we
23:29airlied: most projects ended up where we did
23:29airlied: or where we at least are documented to be
23:29karolherbst: that's not the point
23:30karolherbst: I don't think we should make a decision for everybody else here
23:31karolherbst: and maybe we indeed come to the conclusion, but even then I'd say it's better to have that than not and that it was worth making it collectively.
23:31airlied: if you can figure out how to have that discussion in a productive and safe manner, go ahead and kick it off
23:31airlied: otherwise I assume once we get a major AI contribution, things will come to a point of contention and we will face it then
23:32karolherbst: yeah, I'll think about it...
23:33airlied: if someone uses AI to rewrite the GLSL parser in rust, I'll feel even more conflicted
23:33Company: so the trick is to be the first to do a major AI contribution - if you hate AI make it so bad that people hate it but just good enough that it'd probably be merged if written by a human
23:33karolherbst: imagine somebody using AI to write a C99 frontend for CLC...
23:33zmike: we already have some real AI MRs in existence from people putting in genuine effort to make contributions
23:33zmike: it's not impossible that we've already merged some
23:33karolherbst: yeah.. we do
23:33karolherbst: possibly
23:34daniels: karolherbst: on the flipside, we might never see great contributors if we ban it … same as moving from the other ML to GitLab
23:36karolherbst: yeah.. like it's not like I disagree with most of your points here, but as I said, it's also besides the point I try to bring up, that I'd rather have it a collective agreement, even if we conclude with "yeah... maybe this was a bad idea to discuss it, so let's just move on"
23:36daniels: I think it’s fine to say that people have reservations. I just think it’s factually wrong to say ‘but we never discussed this’ (we did), and really bad leadership to say anyone submitting things in line with our established policy is acting in bad faith.
23:37daniels: the conclusion being a bit grey-area might be unsatisfying, but that’s where we are …
23:37karolherbst: yeah I agree (as stated above) that "bad faith" was the wrong description of it
23:40karolherbst: I'm probably biased due to the people I usually interact with, and I know that a lot of them really detest genAI to the maximum, so from that perspective it feels like "I'll just ignore those reservations and do it anyway". But of course many might also just not be aware of it, or don't actually give it much thought, or are just overwhelmed by the possibilities and don't want to think about the ethical and moral implications, or just don't think they matter that much.
23:42daniels: I think you might be reading that I have a different position to the one I actually have fwiw
23:42daniels: but it’s important to draw a line between ‘we haven’t discussed this’ and ‘I don’t like the conclusion’
23:45airlied: I also don't think the mesa project is the point where taking a position due to ethics/morals is going to have any effect other than making us feel smug
23:48karolherbst: while I do have to agree that we did discuss it, as we are right now, I don't think we actually reached the "we have a conclusion" point, though maybe we did so implicitly, and nobody actually wants to discuss it any further, which is a fair position tbh. But I also feel uncomfortable just not doing it properly, whatever that means. Like I'm really not talking about it for the sake of it, but because I'm genuinely concerned we haven't done it properly. But maybe everybody else is fine with the status quo, and it's okay actually.
23:49karolherbst: anyway.. tldr: I kinda feel bad about it, but maybe I can't really specify why
23:51daniels: well, what does ‘properly’ mean?
23:52zmike: I thought we had reached a definitive conclusion
23:53karolherbst: I don't know tbh :)
23:53karolherbst: zmike: well on the MR at least we concluded with "let's discuss genAI in a separate MR" or something along those lines
23:54zmike: I thought we concluded by having Venemo write AI guidelines which were the ones we all agreed to
23:54zmike: and that got merged
23:54zmike: and now we have our guidelines
23:54karolherbst: but I think we should maybe announce it with sufficient heads-up, like "we'll discuss this within 7 days, and whatever we end up with at that point, even if it's nothing, is what we'll go with"
23:55karolherbst: zmike: and on that MR it was decided in one thread that we won't discuss genAI submissions there, but move it to its separate discussion
23:55daniels: raise a -0+0 MR with exactly that then?
23:55karolherbst: yeah... my only concern with a public MR is just people lurking and then doing "mesa discusses AI" shitposting on socials...
23:56karolherbst: which might be fine
23:56zmike: who cares?
23:56karolherbst: I'm fine with just doing it publicly, just saying
23:56karolherbst: I'll just give it a few days of thoughts and will then just create something I guess
23:56Sachiel: make it confidential so only developers can see it
23:56karolherbst: yeah.. that is one option
23:56zmike: I don't see why it matters if anyone sees it
23:57karolherbst: we have enough people who can see it anyway
23:57karolherbst: well some people might feel more comfortable to discuss it if they knew it's not public
23:57zmike: those people already work on a public project
23:57zmike: get used to it
23:57daniels: this channel is also publicly logged
23:58daniels: you could also email everyone with cachet in mesa and ask them to weigh in
23:59karolherbst: yeah.. but also... we had enough shit happen as a result of too much publicity of certain things, so I'm just extra careful I guess