07:13 tjaalton: karolherbst, airlied: btw sounds like you think IGC etc need a forked llvm, but that hasn't been true for some time now? and since the translator is in llvm now, there's only opencl-clang & IGC left to update. that's not too hard to keep up with
07:15 tjaalton: plus ubuntu isn't involved here, these are all synced from debian, and the only motivation was getting the NEO opencl driver
07:20 airlied: tjaalton: does it have up to date llvm support though, so we can release llvm11 on the same day as upstream?
07:20 airlied: my feeling from looking before was it lagged llvm master often
07:20 tjaalton: generally no, unless some poking is involved
07:20 airlied: as a mesa dep that would be bad
07:21 tjaalton: as in, igc fails to build with 11 even though they claim that it got fixed
07:21 airlied: since we'd have to deal with amdgpu targetting llvm releases and igc not targetting it
07:21 tjaalton: right, it would be
07:21 tjaalton: I don't mind mesa saying no :)
07:21 tjaalton: just that for getting a somewhat decent opencl driver there's no choice
07:22 airlied: like I'd imagine it's horrible enough just for neo if you had to load the GL driver as well
07:22 airlied: since you could end up with two concurrent llvm versions in process, hope symbol versioning is working
07:23 airlied: tjaalton: can you imagine if we had to scale that out to more vendors though, that it would be less messy? :-)
07:24 airlied: my problem is having one vendor is possibly tractable, but it goes off the rails pretty quickly
07:24 tjaalton: yes
07:24 tjaalton: no I mean
07:24 airlied: maybe we could vendor IGC into mesa :-P
07:25 airlied: like we had the amdgpu backend before it was in llvm proper
08:33 curro: airlied: a little easier with dynamically loaded drivers, no? ;) it would be rather unfortunate if loading GL pulled three different versions of LLVM into the address space at once, because three different drivers depend on them indirectly
10:06 emersion: could someone merge https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/5734 ?
10:06 gitbot: Mesa issue (Merge request) 5734 in mesa "radv: add img debug flag" [Radv, Radeonsi, Opened]
10:06 emersion: i got r-bs and acks
10:08 pepp: emersion: done :)
10:08 emersion: ah, thanks!
12:48 ric96: I'd love to understand why the linux graphics stack doesn't use 1 framebuffer per monitor on the same gpu? Why use the same fb-n for all the monitors.
13:35 MrCooper: ric96: that's the Xorg architecture, not the graphics stack in general; Wayland compositors generally use separate FBs per CRTC
13:47 ric96: @MrCooper: but I see just one /dev/fb0 for four monitors. No xorg init yet. So that's the drm driver then.
13:57 MrCooper: right, that's the old framebuffer device, which is mostly used for fbcon these days; the current API is DRM KMS, which has no such limitation
14:10 tzimmermann: @ric96, /dev/fb0 roughly corresponds to graphics cards, not monitors
14:12 tzimmermann: and fb0 does not really support multiple outputs
14:13 tzimmermann: see /sys/class/drm/ for information on cards + connectors
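For illustration (actual entries vary by system), the per-card and per-connector split under /sys/class/drm looks roughly like this:

    /sys/class/drm/card0            <- the GPU itself
    /sys/class/drm/card0-DP-1       <- one connector on that GPU
    /sys/class/drm/card0-HDMI-A-1   <- another connector
    /sys/class/drm/renderD128       <- render node for the same GPU

Through the KMS API each connector's CRTC can scan out its own framebuffer; only the legacy /dev/fb0 view collapses everything into a single framebuffer.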
14:40 Akien: Hello, I want to test lavapipe in mesa 20.3.0 RC 1 but I can't find any mention of it or vallium in the mesa docs. I assume there's an env variable I can use to enable it?
14:43 LiquidAcid: Akien, well, but it's there: https://gitlab.freedesktop.org/mesa/mesa/-/tree/20.3/src/gallium/frontends/lavapipe
14:45 Akien: LiquidAcid: Yes I have the source code, but aside from diving into an unfamiliar codebase, that doesn't tell me how I can enable it and how to make sure my Mesa build actually includes this code.
14:45 Akien: Though my build logs don't include lavapipe so I guess that's a first hint :)
14:46 LiquidAcid: Akien, https://gitlab.freedesktop.org/mesa/mesa/-/blob/20.3/meson.build#L274
14:46 LiquidAcid: your vulkan-drivers argument has to contain swrast
14:47 Akien: Ah thanks, I had it on gallium-drivers but not vulkan-drivers indeed.
14:47 Akien: And then I use it with `MESA_LOADER_DRIVER_OVERRIDE=swrast` ?
14:48 LiquidAcid: not sure, doesn't vulkan expose an api to select a gpu/driver?
14:48 Akien: Ah right, I guess with vulkan-drivers=swrast I'll get a Vulkan ICD for it
14:48 Akien: Thanks :)
14:48 LiquidAcid: good luck
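For reference, a minimal sketch of the build and run steps implied above, assuming the 20.3 meson options linked earlier (the ICD json name and path may differ per installation and architecture):

    meson setup build/ -Dgallium-drivers=swrast -Dvulkan-drivers=swrast
    ninja -C build/
    # the Vulkan loader selects lavapipe via its ICD file rather than
    # MESA_LOADER_DRIVER_OVERRIDE, e.g.:
    VK_ICD_FILENAMES=<prefix>/share/vulkan/icd.d/lvp_icd.x86_64.json vulkaninfo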
14:52 austriancoder: daniels: ping about the messages I sent you yesterday
14:57 daniels: austriancoder: oh, x11 deps in CI? sure, go for it
15:02 austriancoder: daniels: okay... that was easy.. last time you were not that convinced
15:03 daniels: well, if it's required, it's required ... *shrug*
15:40 emersion: danvet: question for you: bnieuwenhuizen realized there is a mistake in the AMD modifiers definition
15:41 emersion: since it's not yet released, possible to fix it up?
15:41 bnieuwenhuizen: context: we made one field 1-bit and then try to stuff a (0,1,2) enum in it
15:42 bnieuwenhuizen: 2 is seldom used for now, but we can either resize the field and move later fields, or restrict it and add a _HI for the second bit
15:42 bnieuwenhuizen: the former is obviously cleaner but is a really incompatible change
15:42 emersion: (bug is in drm_fourcc.h)
15:43 danvet: if you're quick and fix everything before it's released, should be good
15:43 emersion: ok, thx
15:43 danvet: maybe double-check with agd5f_ that nothing got backported anywhere yet (like to their dkms)
15:44 danvet: since I think it landed in drm-next already
15:44 bnieuwenhuizen: If I want to add a Fixes tag, is drm-next stable wrt hashes?
15:45 danvet: I guess absolute worst case you get to put that 2nd bit into some unused high bit or something like that
15:45 danvet: yup
15:51 agd5f_: yeah, should be fine. modifiers are only in drm-next at this point
15:52 bnieuwenhuizen: thx!
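As a purely hypothetical C illustration of the mistake described above (these macro names are invented, not the real drm_fourcc.h ones): packing a three-valued enum behind a 1-bit mask silently truncates, and the two fixes discussed are widening the field (which moves later fields, hence the incompatibility) or adding a separate _HI bit.

    #include <stdint.h>

    /* hypothetical 1-bit modifier field, not the actual AMD macros */
    #define EXAMPLE_FIELD_SHIFT 13
    #define EXAMPLE_FIELD_MASK  0x1ull

    int main(void)
    {
        /* the enum value 2 (binary 10) does not fit: the pack silently yields 0 */
        uint64_t modifier = (2ull & EXAMPLE_FIELD_MASK) << EXAMPLE_FIELD_SHIFT;
        (void)modifier;
        return 0;
    }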
17:44 jekstrand: jenatali: I suppose I should probably review your CLOn12 mr....
17:44 jenatali: jekstrand: Was going to ping you to ask what the path to merging that looked like :)
17:45 jekstrand: Well, NIR patches need to get reviewed.
17:45 jenatali: Yep
17:49 jekstrand:is reviewing now
17:50 jenatali: Thanks :)
19:31 jekstrand: jenatali: Who seriously flushes denorms these days?
19:31 jenatali: jekstrand: D3D
19:32 jenatali: The D3D spec requires all float ops to flush denorms...
19:32 jekstrand: *sigh*
19:32 jenatali: Fortunately CL has an option that allows that, but then it needs to extend to things like nextafter :)
19:33 jekstrand: jenatali: I'm very sure all the hardware D3D runs on these days can handle denorms. :P
19:33 jekstrand: Or fairly sure.
19:33 jenatali: I think shader model 6.5 finally added a flag that lets you opt out of denorm flushing, but we didn't want to tie CLOn12 to require drivers that new
19:34 jekstrand: On our HW, denorm flushing is sometimes conflated with some of the funny mul/div rules so that might be an issue, I guess.
19:41 jekstrand: jenatali: Are denorms required to compare equal to zero?
19:41 jekstrand: I think that's where I may be confused
19:41 jenatali: jekstrand: Yep
19:43 jenatali: jekstrand: https://microsoft.github.io/DirectX-Specs/d3d/archive/D3D11_3_FunctionalSpec.htm#22.6.1%20eq%20(equality%20comparison) (carried forward into D3D12/DXIL)
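A small host-side C illustration of the flush-to-zero behaviour being discussed (this is not driver code; on a CPU with default settings the subnormal is preserved, whereas under the D3D rule it would behave as 0.0 and compare equal):

    #include <stdio.h>

    int main(void)
    {
        float denorm = 0x1p-140f;  /* subnormal: well below FLT_MIN (~1.18e-38) */

        /* With D3D-style denorm flushing this comparison is true;
         * with denormals preserved (the typical CPU default) it is false. */
        printf("denorm == 0.0f ? %s\n", denorm == 0.0f ? "yes" : "no");
        return 0;
    }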
20:01 jekstrand: jenatali: What are these _masked things?
20:03 jenatali: jekstrand: DXIL's lowest alignment for UAV (SSBO) writes is 4 bytes, and an RMW pattern wouldn't necessarily work, so we use an intrinsic which expands out to atomic and/or to write only part of a 4-byte value
20:04 jekstrand: jenatali: Isn't that busted if you have a data race on that one byte?
20:04 jekstrand:feels like he's had this conversation before....
20:04 jenatali: jekstrand: Yeah, unfortunately...
20:06 jenatali: jekstrand: I don't recall if CL has a well-defined result for data races; I don't think it does. I don't see anything in the CL spec, but it might inherit it from C
20:08 jekstrand: I don't know either
20:08 jekstrand: I don't know that C does, TBH.
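A rough C sketch of the masked-store pattern jenatali describes (the function name is invented; the real lowering emits a DXIL intrinsic, but the shape is the same): the byte is written with an atomic AND to clear it followed by an atomic OR, so a concurrent access to that same byte can land between the two operations, which is exactly the data-race concern raised above.

    #include <stdatomic.h>
    #include <stdint.h>

    /* store one byte into a buffer that only supports 4-byte, dword-aligned writes */
    void store_byte_masked(_Atomic uint32_t *buf, uint32_t byte_offset, uint8_t value)
    {
        _Atomic uint32_t *word = &buf[byte_offset / 4];
        uint32_t shift = (byte_offset % 4) * 8;
        uint32_t mask  = 0xffu << shift;

        atomic_fetch_and(word, ~mask);                    /* clear the target byte */
        atomic_fetch_or(word, (uint32_t)value << shift);  /* OR in the new byte */
    }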
20:09 pendingchaos: GCN/RDNA can only do *0.5/*2/*4 output modifiers if denormals are flushed and the MAD instructions (but not FMA) always flush them
20:10 dcbaker[m]: robclark: if you wanted to be the most awesome person in the world and help me figure out when the half thing got introduced for a6xx on the 20.2 branch I'd be eternally grateful :)
20:11 robclark: yeah, ok.. let me dig thru git history..
20:12 dcbaker[m]: you might be able to narrow it down quickly by looking at the 20.2 branch CI history, since I only push to that during releases
20:13 dcbaker[m]: I'd do it, but I don't have any hardware to test with, and figuring out how to push git bisect to gitlab has me stumped
20:17 robclark: no worries.. I meant to dig this up the other day but was distracted finishing up some other stuff. I have a bit of time now
20:31 robclark: dcbaker[m]: ok, looks like ce335dcb19297d04f3fb6ce0d290ff99130d09f7 is the thing missing on 20.2 .. I guess you ended up with part of that MR but not that particular commit
20:32 robclark: it confused me a bit since that commit didn't update the reference output.. but I guess that happened because CI only runs on the entire MR, which doesn't help if there are bisectability issues
20:53 danvet: sravn, for kmb patches just poke anita to review them and then fix her up with commit rights
20:53 danvet: at least for small fixups like the ones showing up now
20:54 danvet: *anitha
20:55 jenatali: jekstrand: https://en.cppreference.com/w/c/language/memory_model says "If a data race occurs, the behavior of the program is undefined."
21:00 jekstrand: jenatali: Ok, cool.
21:01 Lyude: anyone know off the top of their head if it's safe to grab modesetting locks in drm_audio_component_ops.get_eld()?
21:02 danvet: uh
21:03 danvet: Lyude, should be
21:04 danvet: hm
21:04 danvet: the snd side could have some locking encumbrance going on
21:04 Lyude: i915 is why I'm in question
21:04 Lyude: (it doesn't grab modesetting locks)
21:04 danvet: add a might_lock in the callback, throw it at intel-gfx-trybot?
21:04 Lyude: ah good idea
21:05 danvet: empirical locking design ftl :-/
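A minimal sketch of the might_lock idea (illustrative only: the device-to-drm_device helper is hypothetical, and the get_eld signature is assumed to match drm_audio_component_ops): the lockdep annotation claims "this context may take this lock" without actually taking it, so a trybot run with lockdep enabled reports whether doing so would be legal.

    #include <linux/device.h>
    #include <linux/lockdep.h>
    #include <drm/drm_device.h>

    /* hypothetical helper mapping the audio component's struct device to the drm_device */
    struct drm_device *example_kdev_to_drm(struct device *kdev);

    static int example_get_eld(struct device *kdev, int port, int pipe,
                               bool *enabled, unsigned char *buf, int max_bytes)
    {
        struct drm_device *drm = example_kdev_to_drm(kdev);

        /* tell lockdep we might take the modeset lock here, without taking it */
        might_lock(&drm->mode_config.mutex);

        /* ... copy the connector's cached ELD into buf ... */
        return 0;
    }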
21:06 danvet: Lyude, can't we just access the thing without locks?
21:06 Lyude: I guess another question I've got - there's a lot of old style encoder/crtc lookups in nouveau's nv50+ display code (as in checking drm_crtc->encoder, then saving the crtc we're assigned to during our encoder's enable phases in nouveau_encoder->crtc). I know the first variant, e.g. using drm_encoder->crtc, is definitely wrong, but is there anything wrong with the second one where we're only storing the current CRTC in our own private struct?
21:06 Lyude: danvet: yeah i think we can actually, was mostly curious because of ^ that question
21:06 danvet: iirc eld is embedded in drm_connector, so at most a data race
21:07 danvet: which kinda meh
21:07 Lyude: eh
21:07 danvet: or we could add some spinlock for these display_info and related things
21:07 danvet: and avoid the big lock headaches
21:07 danvet: but maybe not
21:07 Lyude: i mean the locking isn't causing any issues, I was just wondering for updating nouveau display code
21:07 danvet: drm_modeset_lock is kinda really big (the entire core mm nests within), so might be nice to not leak that to another subsystem
21:08 danvet: Lyude, yeah all these old state pointers are kinda not a great idea in atomic modeset code
21:08 Lyude: even in our own private structs I'm assuming?
21:09 danvet: but they also work even better than they ever did with legacy crtc helpers
21:09 danvet: see drm_atomic_helper_update_legacy_modeset_state() for full discussion
21:09 danvet: in the drm core code these are absolutely no-go
21:09 danvet: for the locking reason explained in there
21:10 danvet: but in drivers it's just a mild eye sore and perhaps a bit inconsistent
21:10 danvet: especially if you have both an atomic and legacy modeset stack in the same driver
21:10 danvet: consistently using atomic state structs should help a bit with readability, but that's it
21:11 danvet: Lyude, I'd only bother if you're really bored :-)
21:12 Lyude: danvet: gotcha
21:57 dcbaker[m]: robclark: thanks! I've pulled that into the staging branch and pushed it
21:58 robclark: thx, sorry about that.. it is kind of a weird edge case that can slip thru the cracks, with CI only run on entire MRs but individual commits cherry-picked to release branches.. not really sure of a good solution
21:58 dcbaker[m]: The only solution I can come up with is that the release manager doesn't manually pull things with a script like we do now, and everyone sends an MR against the staging branch
21:59 dcbaker[m]: but that really shifts the burden for stable releases onto the developers
21:59 dcbaker[m]: not sure how people feel about that
21:59 dcbaker[m]: maybe that's a better question for mesa-dev though
22:14 robclark: dcbaker[m]: could script instead of pushing directly, wrap things up in a gitlab MR so it goes thru the CI process?
22:15 dcbaker[m]: I could do that right now, the problem is really that the script is designed for serially pulling one commit at a time, and it does look ahead to see things like "A" is nominated and "B" fixes "A", so nominate that too.
22:15 dcbaker[m]: could be done, but would be a lot of work
22:15 dcbaker[m]: basically rewriting the scripting I think
22:15 dcbaker[m]: because to make the CI gating useful you'd want it to not put unrelated things in the same MR
22:16 dcbaker[m]: otherwise I'd be testing 30 unrelated commits, and then trying to bisect which one broke things
22:17 robclark: I think pulling 30 unrelated commits into a 'stable backport' MR would be ok
22:17 robclark: I guess this is more of a rare case
22:17 dcbaker[m]: happens a lot early in the cycle
22:17 dcbaker[m]: look at what just landed on the staging/20.3 branch today, it wasn't 30, but it was probably 20
22:17 dcbaker[m]: mostly unrelated
22:18 robclark: (maybe script could also tag authors of patches that are pulled in, so they notice if pipeline fails, or can speak up if they notice something missing)
22:18 dcbaker[m]: Although, actually, with marge adding the "part-of" tags maybe it wouldn't be as terrible
22:19 dcbaker[m]: trying to figure out what patches should be grouped was the part I worried about the most
22:19 robclark: I wouldn't even do that.. just grab 20 or 30 or however many and stuff them in a single MR
22:20 robclark: that sounds like it shouldn't be a *major* change from how it's done today
22:20 robclark: and it adds a CI sanity check
22:20 dcbaker[m]: I mean, all that's doing is moving the CI from "check the branch after CI finishes" to "create an MR, check the CI, press merge"
22:21 dcbaker[m]: to be honest, 20.2 was the first release I've done where there was more CI than just "check that meson and scons build"
22:23 robclark: well, I'm just spitting out ideas.. might not be good ones.. but does seem like (with the addition of tagging original authors on the backport MR) it would make others more aware of CI fails on backports.. like other dependent patches
22:23 dcbaker[m]: that's true
22:27 sravn: danvet: Anitha already filed a "bug-report" to get commit rights. It was blocked by someone who thought she should go via the i915 team - which does not make sense
22:28 sravn: So until she has this resolved I promised to be the gateway - like I try to do for a few other (un-maintained) drivers
22:29 danvet: sravn, huh I thought we solved this ages ago
22:29 danvet: still stuck?
22:29 sravn: And for anything non-trivial I will poke for review - but trivial spelling stuff is better to process and forget
22:30 sravn: Anitha confirmed the other day, in private chat, that she does not yet have commit rights
22:31 danvet: mripard, https://gitlab.freedesktop.org/freedesktop/freedesktop/-/issues/291 ping
22:31 gitbot: freedesktop.org issue 291 in freedesktop "drm-misc access for drm/kmb" [Opened]
22:31 danvet: sravn, maybe ping mlankhorst and tzimmermann when they're around next week or so
22:31 danvet: oh tzimmermann acked
22:32 danvet: mlankhorst probably just missed the ping
22:33 tango_: is there something I can do to debug a seemingly periodic (negative) spike in performance of my iGP? the device is 00:02.0 VGA compatible controller [0300]: Intel Corporation UHD Graphics 630 (Mobile) [8086:3e9b] and recently it has started to have periodic performance drops of as much as a factor of 10 (e.g. playing minetest I go from 60ms/frame to as much as 600ms/frame worst case, more typically 240ms/frame). the performance drop oddly seems to happen on compute too, although that's much harder to pinpoint, but sometimes, for a few seconds, my kernels again take 4 to 10 times longer
22:33 tango_: I'd like to prepare a bug report if possible, but I'm not sure how to collect the appropriate information
22:36 sravn: danvet: I would be happy to process the next batch from Lee - so far I have left out the ones touching the typically well-maintained drivers
22:37 sravn: And agd5f_ has applied most/all? amd patches and some radeon ones - so I think maybe half of the patches have already landed