00:14memleak: I'm back, if PS/2 dies out I'll give that a shot next
00:18memleak: brb
00:30memleak: Heh, that was my bad.. I went to go enable the PS/2 keyboard driver and noticed CONFIG_INPUT_EVDEV was disabled..
00:31psykose: i've done that before too! :-)
00:31psykose: helps to view the full .config diff when you touch it..
00:31memleak: I did a make tinyconfig and tuned for my hardware and missed one lol
00:31memleak: I started over my config because I was jumping from 6.1 to 6.4
00:32psykose: tbh there's no need to do that
00:32psykose: just use the old one and make defconfig and you get prompted for anything new
00:33memleak: It wanted to restart my config every time though, so instead of doing make olddefconfig I just made a new one. Anyway, all is well now!
00:35memleak: Sorry for the noise!
00:37anholt: sergi: what happened that the piglit uprev bot didn't bump the other image tags in d75973a1422d86799312d7aa60d0dce846fb3dba ?
06:03zzag: emersion: do you know what gbm_bo_import() does when importing a dmabuf buffer allocated by another gpu? would gbm_bo_import() start a transfer to another gpu or something? or would it just create a gbm_bo wrapper for the specified dmabuf attributes and that's all?
06:03zzag: a transfer from another gpu*
06:19RAOF: zzag: *Mostly* you don't care? It's *mostly* transparent. (ie: if you try and import it as an EGL image and then sample from it the driver will do... a thing, and sampling will work)
06:22RAOF: The mostly there is that it's almost certainly not going to let you scanout of that gbm_bo. I think all the bits necessary to make that possible are there, but nothing's hooked up (see my gbm_bo_import + ALLOW_MIGRATION proposal a couple of days ago)
06:23zzag: RAOF: well, that still leave me wondering about what gbm_bo_import() does. we have some multi-gpu code which I would like to change, but it makes some assumptions about what gbm_bo_import() does, e.g. starting data transfer and allocating local storage on the gpu
06:23zzag: still leaves*
06:23zzag: and I wonder whether it's the right thing to do
06:24RAOF: What are you using gbm_bo_import for?
06:25zzag: Multi-gpu in compositor: render on one gpu, then gbm_bo_import() that buffer on another gpu and present it
06:25RAOF: Because if you've got a dmabuf then you can probably just import it directly into your rendering API of choice and use it directly?
06:25RAOF: Yeah, that's the thing that doesn't work.
06:26RAOF: Unless by "present" you mean "sample from in your rendering on the GPU that's going to be displaying it"
06:26zzag: I mean scanout the imported buffer
06:27kode54: zehortigoza: your branch fixed the frame flipping glitches
06:27RAOF: Unless I misread the code, that's not expected to work.
06:28RAOF: Or, rather, gbm_bo_import(, USE_SCANOUT) will check that it's scanoutable, but not do anything to make that happen, and GPUs pretty much only scanout of device-local memory.
06:29RAOF: (At least, that was the state the last time I tried to scanout of foreign dmabufs and we patched the drivers to return EINVAL at add_fb time rather than silently display black when you tried)
06:32RAOF: If I am misreading the code I'd love to know, because I'd love that to work properly :)
06:39zzag: RAOF: "but not do anything to make that happen" yeah, that's what I'm trying to understand. I see that i915 calls drmPrimeFDToHandle() and fills other internal data structures in i915_drm_buffer_from_handle. amdgpu seems to do the same but it also does some memory stuff with amdgpu_va_range_alloc+amdgpu_bo_va_op_raw (not sure what they really do) in amdgpu_bo_from_handle
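(For reference, a minimal sketch of the call being discussed, assuming display_gbm is a gbm_device opened on the scanout GPU and fd/stride/modifier describe the render GPU's dmabuf; variable names are illustrative. As RAOF notes above, this only wraps the fd in a gbm_bo; nothing is migrated, so drmModeAddFB2() on the resulting handle may still fail on a dGPU.)

    #include <gbm.h>
    #include <drm_fourcc.h>

    static struct gbm_bo *
    import_foreign_dmabuf(struct gbm_device *display_gbm, int dmabuf_fd,
                          uint32_t width, uint32_t height, int stride,
                          uint64_t modifier)
    {
        struct gbm_import_fd_modifier_data data = {
            .width = width,
            .height = height,
            .format = DRM_FORMAT_XRGB8888,
            .num_fds = 1,
            .fds = { dmabuf_fd },
            .strides = { stride },
            .offsets = { 0 },
            .modifier = modifier,
        };

        /* Wraps the dmabuf in a gbm_bo on the display device; no local storage
         * is allocated and nothing is copied.  A later drmModeAddFB2() can
         * still reject the handle if the memory isn't scanout-capable. */
        return gbm_bo_import(display_gbm, GBM_BO_IMPORT_FD_MODIFIER, &data,
                             GBM_BO_USE_SCANOUT);
    }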
06:47emersion: i don't really know zzag
06:48emersion: RAOF: where is your proposal?
06:48RAOF: "proposal" might be a bit strong; it was in this channel, a couple of days ago.
06:51RAOF: Basically, add an ALLOW_MIGRATION flag to gbm_bo_import, use the dri blit infrastructure to actually do the migration if necessary, and then some wondering about maybe plumbing fences through.
06:51emersion: i'd really rather not have this
06:51emersion: GBM is an allocation library, not a rendering library
06:52emersion: blitring involves sending command buffers, handling synchronization, etc
06:53emersion: blitting*
06:53emersion: plus it won't fly with e.g. minigbm
06:55emersion: is using GL or Vulkan a big problem?
06:57RAOF: It's not a big problem, it's just annoying and there seems like there must be a better way.
06:57emersion: why is it annoying?
06:58RAOF: I mean, mesa's gbm literally has access to a "make this work" function pointer :)
07:00emersion: another reason:
07:00RAOF: It's annoying because there's a whole bunch of setup, and a whole bunch of useless state that we hope gets mostly ignored, and it's harder to plumb explicit fences through.
07:00emersion: compositors should try to import BOs only once
07:01emersion: if you do a blit at import time, you need to import each frame
07:01emersion: which is not very nice
07:01emersion: well, it's setup you need anyways, since you're doing rendering when compositing
07:02emersion: and explicit fences are in fact something which will be hard to get right with a GBM API, whereas it's already all there in GL/Vulkan
07:02RAOF: Not if you're compositing on the other GPU, though.
07:02emersion: i mean, you already have some kind of GL/Vulkan abstraction
07:02emersion: for me it's just a few function calls to blit
07:02emersion: in wlroots i mean
07:05emersion: zzag: i think i asked MrCooper a while ago, and the answer was iirc something like this:
07:06emersion: in theory nothing should happen on cross-device import, but some drivers might migrate the buffer to a different location
07:10zzag: hmm, okay.. it seems like the best option is just to do blits and avoid gbm_bo_import because there are no concrete guarantees how it works in multi-gpu case
07:15emersion: i'd really like a "please fail if you're going to migrate" kind of flag
07:16emersion: client DMA-BUFs might come from anywhere, i want to try to do direct scan-out for these, but i don't want to start any kind of heavy work when doing that
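(A rough sketch of the GL-side path emersion describes: import the dmabuf on the target GPU and blit it into a locally allocated scanout buffer. It assumes the target device's EGL exposes EGL_EXT_image_dma_buf_import and GL_OES_EGL_image; function and variable names and the single-plane layout are illustrative, and real compositor code would also plumb fences, e.g. via EGL_ANDROID_native_fence_sync.)

    #define EGL_EGLEXT_PROTOTYPES
    #define GL_GLEXT_PROTOTYPES
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    /* Import a client dmabuf as a GLES texture on the target GPU; the
     * compositor then samples/blits it into a buffer it allocated locally
     * for scanout. */
    static GLuint
    import_dmabuf_as_texture(EGLDisplay dpy, int fd, EGLint width,
                             EGLint height, EGLint stride, EGLint fourcc)
    {
        const EGLint attribs[] = {
            EGL_WIDTH, width,
            EGL_HEIGHT, height,
            EGL_LINUX_DRM_FOURCC_EXT, fourcc,
            EGL_DMA_BUF_PLANE0_FD_EXT, fd,
            EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
            EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
            EGL_NONE,
        };
        EGLImageKHR image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                              EGL_LINUX_DMA_BUF_EXT, NULL,
                                              attribs);
        if (image == EGL_NO_IMAGE_KHR)
            return 0;

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);
        glBindTexture(GL_TEXTURE_2D, 0);
        return tex;
    }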
08:41karolherbst: I'm kinda not in the mood of dealing with random CI fails (softpipe, zink on lavapipe and virpipe-on-gl): https://gitlab.freedesktop.org/mesa/mesa/-/pipelines/901704
08:42karolherbst: looks like there are failing tests and none of them have anything to do with my MR
09:31MrCooper: zzag: curious how you plan to do blits between GPUs without gbm_bo_import or something equivalent :)
09:33MrCooper: emersion zzag: a BO which is shared between devices has to be accessible by all of those devices; traditionally, this means the BO has to be in system RAM (which generally means scanout can't work with dGPUs)
09:34MrCooper: on setups where PCIe P2P DMA works, the BO can stay in the exporter device's local memory in theory
09:35MrCooper: (not sure scanout from another dGPU can work / is a good idea in that case)
09:58zmike: karolherbst: at least some of these were noted as having been missed in the recent piglit uprev (did someone forget to stress?), but I don't know why the CI expectations haven't been updated
09:58karolherbst: zmike: okay.. I added a commit to my MR to just add those fails: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23110/diffs?commit_id=0930eb21518b95e08dc5ac7cff87d0fa8c8959f8
10:01zmike: lgtm
10:01daniels: RAOF, zzag: I'd personally be furious if gbm_bo_import was silently allocating and blitting under the hood ...
10:01daniels: I mean, almost all problems on mobile hardware come down to memory bandwidth, so I absolutely don't want to be adding _more_ memory bandwidth as a silent 'helpful' action
10:02daniels: but then it also pretty much breaks the whole idea of dmabuf, if the client holds one BO which it renders into, and the server holds another BO which is an older shadow copy of some content that used to be in the client's BO, and they don't realise that they no longer refer to the same storage
10:03daniels: I can see the call for something like a gbm_bo_copy() which would use a DMA engine or something if necessary, but that would _have_ to be an explicit alloc+copy step, not changing import to be a hidden alloc+copy step rather than the lightweight ref it is today (MMU/TLB overhead notwithstanding)
10:16MrCooper: agreed
10:17zzag: Okay, thank you all for the comments! :)
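(Purely to illustrate the shape of daniels' suggestion above: gbm_bo_copy() is hypothetical and does not exist in GBM today; the point is an explicit allocate-then-copy step rather than a blit hidden inside gbm_bo_import().)

    /* gbm_bo_copy() is hypothetical -- sketching an explicit alloc+copy,
     * in contrast to the lightweight reference that gbm_bo_import() is today. */
    struct gbm_bo *local = gbm_bo_create(display_gbm, width, height,
                                         DRM_FORMAT_XRGB8888,
                                         GBM_BO_USE_SCANOUT);
    if (local)
        gbm_bo_copy(local, imported_bo); /* DMA engine / blit, with fences */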
10:19MrCooper: zmike: no image tag bump → piglit snapshot stays the same in images used by CI → no change in results
10:23daniels: yeah, the piglit-uprev script did break after the great job renaming; there's a fix in there which actually produces the right image tag bump now, as well as making it error out loudly when it fails to substitute
10:24karolherbst: soo.. should I just push the updated CI fail lists or should I wait on something else?
10:29daniels: pls push
11:01tintou: Hi there, if anyone with Gallium/pipe-loader want to give it a look https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23054 it avoids a crash on context creation failure on my end :)
11:20daniels: enunes: eric_engestrom reports that the lima runner is out of disk
11:22enunes: daniels: I'll look at it now... I wonder how, given how many disk cleanup scripts already run
11:22daniels: enunes: are you using docker or podman? if the latter, I pushed some changes fixing failures that I saw on our shared runners
11:23daniels: if you're still back on docker, you'd have to look at what was failing and why ... they seem to change their API pretty frequently :\
11:23enunes: I still have the docker setup, but I will gladly move to podman if that is supported now
11:28enunes: daniels: where did you push fixes that run on podman setups? something I need to pull and run locally too?
11:29enunes: runners should be good for now
11:29pq: zzag, in the end, you'll probably end up implementing all possible variations of how to get images from a GPU to a KMS device, and then you have to wonder how to pick the combination that not only works but is also performant.
11:29daniels: enunes: it's all in https://gitlab.freedesktop.org/freedesktop/helm-gitlab-infra/-/commits/main/gitlab-runner-provision
11:29daniels: eric_engestrom: ^ no need to pause the runner if you haven't already done so
11:29daniels: enunes: thanks!
11:29pq: zzag, reminds me of https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/810 - I wonder what Mutter does nowadays.
11:33alyssa: austriancoder: congrats on the Igalia gig :~)
11:50zamundaaa[m]: pq: we already have most variations. Atm we do gbm_bo_import -> egl with blit -> CPU copy. The first one that works gets chosen
12:06pq: zamundaaa[m], which device do you have executing the blit?
12:08zamundaaa[m]: the target device
12:09pq: zamundaaa[m], do you make sure the target device is not software-rendering?
12:09zamundaaa[m]: yes
12:09pq: cool
12:10zamundaaa[m]: If you're asking that way, I gotta ask back: does blitting with the source device have advantages over doing this?
12:11emersion: i don't think blitting with the source device works
12:11emersion: it would essentially render to a foreign buffer
12:11emersion: (maybe it works in some cases? not sure)
12:11zamundaaa[m]:goes and tests it
12:11pq: only in a fairly special case: a source-device blit could be faster than a CPU copy into a dumb buffer, if that can be accessed at all.
12:13pq: I'm thinking iGPU as source device, and virtual device as target.
12:15pq: but in the DisplayLink case with iGPU, the zero-copy with source-allocated buffer worked.
12:16pq: so maybe a source device blit doesn't have a use in practice
12:19pq: There is glBlitFramebuffer or something that may not have to be the same as "rendering" hardware-wise.
12:45mareko: karolherbst: I don't know if the vectorization patch works
12:50karolherbst: ahhh
12:50karolherbst: mareko: well, it does seem to vectorize the loads we've discussed on the issue
12:52karolherbst: but it would also be nice to know if any GL workloads are impacted here
13:49alyssa: jenatali: So, I have a lowering pass to convert imageLoad to txf
13:49alyssa: The problem is... that's not legal in NIR
13:49alyssa: imageLoad has access qualifiers and txf doesn't
13:50alyssa: this is causing a fail in KHR-GLES31 which has a test doing
13:50alyssa: imageLoad()
13:50alyssa: imageStore()
13:50alyssa: barrier()
13:50alyssa: imageLoad()
13:50alyssa: as images, NIR knows better than to CSE the loads
13:50alyssa: as txf, NIR will CSE and then get stale data
13:50alyssa: I could do the lowering Late(TM) but that seems like a hack
13:50jenatali: Yep, sounds about right
13:51alyssa: you get around this by doing it only for readonly I guess
13:51jenatali: Right
13:51alyssa: Hmm
13:51alyssa: I definitely need this for read/write
13:51alyssa: So my options are either do this As Late As Possible and hope it's late enough
13:51jenatali: A read-only image can be equivalent to a texture
13:52alyssa: or just keep the image_load and emit the txf internally in the backend at NIR->backend IR time
13:52jenatali: Yeah the latter is what I'd do here
13:52alyssa: cool and good
13:52alyssa: shouldn't be terrible I think
13:52alyssa: will do
13:52alyssa: thanks for the input
13:53alyssa: also why are you online isn't it stupid early for you
13:53jenatali: Both txf and image load map to the same DXIL intrinsic, so it's effectively what I do too
13:53jenatali: I have a baby who wakes up at 5am
13:53alyssa: womp
13:53jenatali: I only do the txf translation because DXIL also cares about variable types, and an image load pointing to a texture is weird
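(A small sketch of the read-only condition jenatali mentions; this isn't any existing pass, just the gate a lowering could use so that writable images keep their image intrinsic and never become a CSE-able txf.)

    #include <assert.h>
    #include "nir.h"

    /* Only an image that can never be written in this shader behaves like a
     * texture: lowering such a load to txf can't return stale data across a
     * barrier, so letting CSE merge the resulting txf ops is harmless. */
    static bool
    image_load_is_txf_safe(const nir_intrinsic_instr *intr)
    {
       assert(intr->intrinsic == nir_intrinsic_image_deref_load ||
              intr->intrinsic == nir_intrinsic_image_load);
       return (nir_intrinsic_access(intr) & ACCESS_NON_WRITEABLE) != 0;
    }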
13:55jenatali: Oh and FYI soon enough I'll have another baby so I probably won't be around much for a few months after that
13:56alyssa: congrats :)
13:56jenatali: I'll still be on IRC thanks to the matrix bridge though so if you need me I'll at least hear about it
13:58alyssa: =D
14:00austriancoder: alyssa: thx
14:15alyssa: jenatali: ugh, so the reason this is sticky is that I have a lot of txf lowerings
14:15alyssa: so it would end up getting duplicated
14:15alyssa: might be a lesser evil, but
14:17jenatali: Like what? Just curious
14:17alyssa: slice and dice of the sources into the order the hardware wants as backend1/backend2
14:18jenatali: Ah fun
14:19jenatali: Then maybe the very late option, after all CSE, is the right approach
14:19alyssa: this sounds dodgy
14:22alyssa: I guess duplicating the slice and dice isn't so bad
14:24DemiMarie: alyssa: is this in C or in Rust? In Rust pattern matching should be able to make short work of stuff like this.
14:25DemiMarie: Also Mesa would be a good use for all of Linux’s work on fallible Rust allocations.
14:32DemiMarie: And unlike Mesa’s C code, the Rust code actually has a chance of recovering from them, at least assuming the OS doesn’t kill it.
14:37zamundaaa[m]: > * <@zamundaaa:kde.org> goes and tests it
14:37zamundaaa[m]: so I did, and it *kinda* works with Intel + NVidia
14:37zamundaaa[m]: That is, importing the buffer and rendering to it seems to work fine, until I do glFlush, where it just hangs indefinitely...
14:37zamundaaa[m]: probably not worth investigating more
14:40MrCooper: that was a buffer from nvidia imported into an intel context?
14:40zamundaaa[m]: yes
14:41MrCooper: not sure how the buffer provenance would make a difference for glFlush; do you have a backtrace where it hangs?
14:43MrCooper: hmm, one possibility might be the nvidia buffer's implicit sync object having a fence which never signals, there's currently a known nvidia bug like that
14:44eric_engestrom: daniels, enunes: I didn't pause the runner, I was in a meeting and didn't see your "ok" on doing it until I also saw the "no need anymore" ^^
14:44eric_engestrom: enunes: thanks for cleaning it!
14:46enunes: eric_engestrom: no problem, hopefully that should not be regularly needed... I'll plan some maintenance to switch to a podman runner with the new better cleanup script
14:58zamundaaa[m]: MrCooper: I don't have a debug build of Mesa installed on that laptop right now, so all I can say is that under a few layers of iris_dri.so it's stuck in an ioctl
15:05alyssa: jenatali: k, typed the thing out
15:05alyssa: realized I definitely need to distinguish them, because even my backend does CSE (and soon scheduling) so needs to know if a given txf can be reordered or not
15:06DemiMarie: zamundaaa: Is this nouveau or proprietary?
15:07zamundaaa[m]: proprietary
15:08DemiMarie: I did not realize the proprietary driver supported dmabufs. I thought that was all EXPORT_SYMBOL_GPL.
15:12zzag: There's an open source (kinda) nvidia driver
15:12zzag: https://github.com/NVIDIA/open-gpu-kernel-modules
15:14DemiMarie: Would it be best to write a new driver from scratch?
15:39karolherbst: besides that being a massive amount of work, yes
16:35emersion: vulkan YCbCr question: is it a big deal to use a nearest filter for implicit chroma reconstruction?
16:35emersion: or is linear really better?
16:35emersion: i'm talking about VkSamplerYcbcrConversionCreateInfo.chromaFilter
16:36emersion: maybe you know gfxstrand?
16:51dj-death: emersion: this is normally controlled by VkSamplerYcbcrConversionCreateInfo::chromaFilter
16:51emersion: sure, but what is the effect?
16:52emersion: how bad is nearest compared to linear?
16:52emersion: is it reasonable to use nearest?
16:53dj-death: never tried it
16:53dj-death: visually I mean
16:54dj-death: if I remember the CTS verifies that you're using the right filter
16:58emersion: yeah, and intel disables it because of a bug
16:58emersion: disables the linear one
17:08dj-death: for one format
17:10emersion: for single-plane YUV formats yeah
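(For context, the knob being discussed, as a minimal sketch; the format and colorimetry values are arbitrary examples, and whether NEAREST is visually acceptable is exactly the open question above.)

    #include <vulkan/vulkan.h>

    /* chromaFilter selects the implicit chroma reconstruction filter; LINEAR
     * additionally requires the format to advertise
     * VK_FORMAT_FEATURE_SAMPLED_IMAGE_YCBCR_CONVERSION_LINEAR_FILTER_BIT. */
    static VkResult
    create_ycbcr_conversion(VkDevice dev, VkSamplerYcbcrConversion *out)
    {
        const VkSamplerYcbcrConversionCreateInfo info = {
            .sType = VK_STRUCTURE_TYPE_SAMPLER_YCBCR_CONVERSION_CREATE_INFO,
            .format = VK_FORMAT_G8_B8R8_2PLANE_420_UNORM,      /* NV12 */
            .ycbcrModel = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_709,
            .ycbcrRange = VK_SAMPLER_YCBCR_RANGE_ITU_NARROW,
            .components = {
                VK_COMPONENT_SWIZZLE_IDENTITY, VK_COMPONENT_SWIZZLE_IDENTITY,
                VK_COMPONENT_SWIZZLE_IDENTITY, VK_COMPONENT_SWIZZLE_IDENTITY,
            },
            .xChromaOffset = VK_CHROMA_LOCATION_COSITED_EVEN,
            .yChromaOffset = VK_CHROMA_LOCATION_COSITED_EVEN,
            .chromaFilter = VK_FILTER_NEAREST,  /* vs. VK_FILTER_LINEAR */
            .forceExplicitReconstruction = VK_FALSE,
        };
        return vkCreateSamplerYcbcrConversion(dev, &info, NULL, out);
    }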
17:11alyssa: Mesa: error: GL_INVALID_VALUE in glDeleteProgram
17:11alyssa: CTS, chill
17:27emersion: hwentlan_: do you have a plan already to merge the HDR patches? do you want to go through the AMD tree, or drm-misc?
17:28hwentlan_: was thinking of taking the DRM patches through drm-misc, then merging the whole thing, including amdgpu patches through the AMD tree
17:29emersion: hm, not sure it's a good idea to merge the DRM part via drm-misc
17:29emersion: will ask danvet
17:30sima: if it's just for a single tree usually just an ack for the drm parts is enough
17:30hwentlan_: any reason? I don't mind taking everything through the AMD tree either but wouldn't want the DRM patches to go stale and cause merge conflicts if they're sitting in the AMD tree for a while
17:30sima: it'll come back pretty quickly through the drm.git pulls anyway
17:31sima: hwentlan_, well if you still sneak them in now before the merge window freeze it's quick
17:31sima: otherwise you kinda have a bit of trouble anyway since drm-next isn't open until about -rc2 again
17:31hwentlan_: sima, sneak them through drm-misc?
17:31sima: and doesn't matter whether the drm patches are stuck in drm-misc or amdgpu during that time I think
17:31sima: hwentlan_, all through amdgpu and make sure agd5f still does a pull?
17:32hwentlan_: in that case I guess might as well just take the whole set through the amdgpu tree
17:32sima: I mean assuming it's all ready and all, I've been drowning terribly the last few weeks so no idea :-/
17:32hwentlan_: I see
17:33hwentlan_: I think they're finally ready for merge. Was going to give people an extra day before merging but if that means we'll miss the merge window I'll merge them today
17:34emersion: yeah the DRM core part is ready for merging now
17:34sima: hwentlan_, check with agd5f I guess
17:34emersion: well, let me know if you want me to merge via drm-misc
17:46agd5f: hwentlan_, sima was planning to do one more -next PR this week
17:46agd5f: probably friday
17:46hwentlan_: I'll merge it to amd-staging-drm-next today
21:11karolherbst: jenatali, gfxstrand: I'm trying to figure out what's wrong with the spec id parsing inside clc. My understanding is that multiple values can be assigned the same spec constant id, right? clc atm asserts that there is a strict 1:1 relation between values and spec constants.
21:11karolherbst: clc_helpers.cpp:340
21:11karolherbst: and I think we can just remove that check
21:11jenatali: Sounds reasonable without looking at it
21:11karolherbst: yeah, but you also added that assert so I kinda want to understand why :)
21:12jenatali: That was a long time ago, I really don't remember
21:12karolherbst: okay
21:12karolherbst: guess I'll just submit an MR ditching it
21:13karolherbst: I ran into this assert with SPIR-Vs I get from HIP and SYCL
21:21karolherbst: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23512
22:14mattst88: anyone up for a pretty easy review? https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23482
22:16airlied: always happy to keep sparc64 going
22:17mattst88: I would have used an alpha, but the sparc was faster
22:17airlied: karolherbst: your rusticl MR needs to drop the softpipe CI change, since it came in via another MR and rebase duplicated it
22:22daniels: airlied: thought you were all about the VAX
22:26airlied: daniels: I had a brief period of putting Linux on university sparcs when nobody was looking :-P
22:29daniels: I had a brief SH4 period of putting Linux on my Dreamcast, then realising I had nothing to do with it and didn't have a year to compile anything so just went back to Tony Hawk 2
22:31airlied: the VAX at least had interesting but pointless problems to solve :-)
22:35idr: SPARC was faster? mattst88, shut your mouth.
22:35idr: Lololololol
22:36mattst88: idr: gentoo's sparc devbox is 2x 16-core/32-thread CPUs @ 3.6 GHz with 512 GB of RAM :)
22:37mattst88: it's a biiiiiit faster than my dual 1.15 GHz Alpha, and I don't have to pay for its electricity :)
22:37idr: I sometimes forget that they keep making more advanced SPARC CPUs.
22:38mattst88: I think most people have, including maybe oracle themselves
22:38idr: Ha!
22:40karolherbst: airlied: CI failed anyway :)
22:42daniels: karolherbst: ergh yeah, the duplicate-case thing is a pain; really wish we had a linter for that
22:43karolherbst: well.. a job failed saying a line is there twice, but yeah.. would be nice to know that earlier
22:44daniels: right, I mean something that could either be run locally or in a fast-fail check (like the existing 'sanity' job) to avoid bouncing through marge
22:45karolherbst: yeah.... guess that could help with taking some load off CI, because it kinda is overloaded a lot of the time these days :'(
22:45airlied: at some point if we had a job fail, didn't we blow the pipeline up so marge would notice, or was I dreaming?
22:46airlied: oh maybe I was dreaming; you always had a chance to retry a job before the deadline
22:49daniels: mmm
22:50daniels: airlied: if all jobs in the pipeline complete and one or more fails, then marge does give up and walk away - it's not just sleep(3600); check();
22:50daniels: if one job fails and others are still running, you've got a window to retry - but we did also add in automatic retries anyway
22:51daniels: that has the side effect of hiding some flakes, but realistically everyone just smashed it straight back at marge regardless, so it's mostly an efficiency gain