00:02Lynne: zzoon: here's a really simple av1 sample - https://0x0.st/8ee7.mkv
00:03Lynne: I can still replicate, it looks like a bitstream desync
00:03zzoon: yeah totally I can see it
00:04zzoon: could you explain the "bitstream desync"?
00:05Lynne: the decoder misreads a flag/is sent a bad flag, reads one more/one less bit, and from that point on, loses sync
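The failure mode Lynne describes can be shown with a toy sketch (not real AV1 syntax): once a decoder consumes one bit too many for a field, every later field is read from the wrong offset and decodes to garbage.

```python
# Toy illustration of a bitstream desync: misreading a 1-bit flag as
# 2 bits shifts every subsequent read, so the rest of the stream is
# decoded incorrectly even though the bits themselves are fine.

def read_bits(bits, pos, n):
    """Read n bits MSB-first from a list of 0/1 values.
    Returns (value, new_position)."""
    value = 0
    for b in bits[pos:pos + n]:
        value = (value << 1) | b
    return value, pos + n

# Stream layout: 1-bit flag, then two 4-bit values (10 and 12).
bits = [1,  1, 0, 1, 0,  1, 1, 0, 0]

# Correct decode:
flag, pos = read_bits(bits, 0, 1)    # flag = 1
a, pos = read_bits(bits, pos, 4)     # a = 10
b, pos = read_bits(bits, pos, 4)     # b = 12

# Buggy decode: the flag is read as 2 bits instead of 1, so every
# following field starts one bit late and comes out wrong.
_, bad_pos = read_bits(bits, 0, 2)
bad_a, bad_pos = read_bits(bits, bad_pos, 4)   # 5, not 10
bad_b, bad_pos = read_bits(bits, bad_pos, 4)   # truncated, not 12
```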
00:06zzoon: aha. okay thanks.
00:06Lynne: it was a recent change, let me try to bisect
00:11zzoon: you mean this issue happens recently?
00:17Lynne: yup, definitely did not happen last month
00:18Lynne: I think it might be a kernel issue?
00:18Lynne: I tried to find a working version, but nothing worked
00:19zzoon: Mine is 6.13.2-arch1-1
00:19Lynne: hmm, but then vaapi would be broken too
00:20zzoon: yeah..
00:20Lynne: and the dirty desktop machine I'm using I haven't upgraded the kernel in half a year
00:47ndufresne: Company: I haven't encountered YCbCr444 in 24-bit packed from hardware yet. Though, I'm working on the rk3588 HDMI receiver, and it produces 24-bit packed RGB; since 444 is not enabled yet, I would not be surprised if it reused the same storage format
00:48ndufresne: The main challenge for me is not owning a signal generator to ease testing all the combinations :-)
00:49Company: so I'd better not add support for them to GTK yet, until somebody actually has tests for them - so I don't use the wrong mapping
00:50ndufresne: What kind of mapping do you have in mind?
00:50Company: Vulkan maps Y, U, V into G, R, B respectively
00:51Company: so YUV888 would map to GRB - not RGB or BGR
00:51Company: if it used the same mapping
00:53Company: so that would need some swizzling, and https://registry.khronos.org/vulkan/specs/latest/man/html/VkSamplerYcbcrConversionCreateInfo.html has rules about what swizzles are allowed and I haven't crosschecked that that would even work
00:54Company: I don't even know if using YCbCrConversion on any RGB format is allowed
00:54Company: and I don't want to dive into that before I actually have a use case
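The reordering problem Company describes can be sketched abstractly (this is a hedged illustration, not the Vulkan API; the exact channel assignment is what he says needs cross-checking against VkSamplerYcbcrConversionCreateInfo): packed Y,U,V bytes viewed through an RGB format land in the wrong sampler channels unless a component swizzle rearranges them.

```python
# Sketch of a component swizzle: a packed Y,U,V byte triple read
# through an RGB888 view yields r=Y, g=U, b=V, so a swizzle must
# shuffle the channels into the slots the sampler expects.

def apply_swizzle(texel, mapping):
    """Reorder an (r, g, b) texel; mapping[i] names which source
    channel feeds output slot i (slots in R, G, B order)."""
    names = {"r": 0, "g": 1, "b": 2}
    return tuple(texel[names[m]] for m in mapping)

# Memory holds packed bytes Y, U, V; an RGB view reads them as-is.
texel_as_rgb = (0x50, 0x80, 0x90)   # (Y, U, V)

# For the Y->G, U->R, V->B mapping Company mentions, the swizzle
# must route the view's g channel to R and its r channel to G:
swizzled = apply_swizzle(texel_as_rgb, ("g", "r", "b"))
# swizzled slots: R holds U, G holds Y, B holds V
```

Whether Vulkan permits such a swizzle together with a YCbCr conversion on an RGB-typed format is exactly the open question in the chat.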
00:54pinchartl: ndufresne: Company: not sure if it can help you, but the Xilinx zynq dpsub support packed 24-bit YUV
00:55pinchartl: (and 30-bit as well)
00:55pinchartl: ah, but it's on the input side, so it won't give you a dmabuf
00:56pinchartl: (not one filled with data)
00:56pinchartl: cameras that produce packed YUV 444 are definitely not common
01:01Company: Mesa advertises XYUV here, so I could just create udmabufs and set the bytes to random values to see what comes out
01:01Company: but that's a lot of poking in the dark, and having a real-world working example would be nicer
06:10a-user: I'm loading a minimal custom EDID (via drm.edid_firmware=) containing only these specific modes: 2160p60 (necessarily 4:2:0), 2160p30, 1080p60, 720p60, and 640x480. However, my xrandr/display settings are spammed with a dozen other modes that aren't in this EDID (eg 1920x1200, 720x480, etc). Where exactly are these extra modes being introduced? Is there any way to stop this?
06:10a-user: AMD 5700 XT btw
06:24a-user: (if this is the wrong place to ask please direct me to a better place)
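For reference, a sketch of the override mechanism a-user is using (connector name and file path are examples and must match the actual setup):

```shell
# Example only: adjust the connector name and blob path to your system.
# 1. Install the EDID blob where the kernel firmware loader can find it:
#      /lib/firmware/edid/custom.bin
# 2. Add to the kernel command line, scoped to one connector:
#      drm.edid_firmware=HDMI-A-1:edid/custom.bin
# 3. After boot, verify the override actually took effect:
edid-decode /sys/class/drm/card1-HDMI-A-1/edid
cat /sys/class/drm/card1-HDMI-A-1/modes
```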
08:16MrCooper: a-user: they're probably added by the Xorg modesetting driver
08:27a-user: Is there something I can do to make it not do that
08:31a-user: Also: is this something that just always happens, or is this being triggered by something in my setup?
08:59MrCooper: a-user: AFAIR the xf86-video-amdgpu driver doesn't do that
11:24FireBurn: mareko: The steam issue appears to be a general 32bit issue, I get segfaults running a 32bit glxinfo, I've put the back trace into the issue if it helps. If there's anything you want me to run in gdb, just let me know
11:44FireBurn: I'm just testing turning a lot of the static things back to not static
11:50FireBurn: Is there supposed to be 2 entry_patch_public ?
11:54FireBurn: *multiple
12:26sima: jfalempe, on your new panic kmap_local series, I'd just do a debug printk_once for the unsupported format case
12:26sima: dumping an entire stack frame when we're already in panic is not a great idea imo
12:27sima: otherwise lgtm, a-b: me
12:27sima: jfalempe, I guess you'll look at kunit testing this all later on in a future series?
12:29jfalempe: sima: ok thanks, you mean the DRM_WARN_ONCE() if the pixel width is unsupported ?
12:30sima: yup
12:30jfalempe: sima: Yes, I have to look at kunits, I didn't use them yet.
12:30sima: also you might want to include the changelog in the patch, in drm we generally do that (but outside it's sometimes frowned upon for reasons I don't really understand)
12:31sima: jfalempe, yeah makes sense to do that later, just think that we get to a complexity where unit testing all the cases is really good
12:31sima: so that we can validate the panic printing code without requiring hw drivers that cover all the special cases
12:31sima: since we now have vmap, pages and set_pixel cases, plus quite a pile of failure paths that all must be absolutely rock solid correct
12:31jfalempe: sima: this patch series is mostly for virtio-gpu, so it's easier to test.
12:33sima: jfalempe, kunit is more for keeping it all correct going forward
12:33sima: eventually there's going to be enough panic enabled drivers that you wont be able to test them all, so we need to automate this
12:33sima: and make sure we have pretty good coverage even if you don't have any of the special case drivers
12:33sima: and kunit fills that need
12:34jfalempe: sima: agreed, also I wasn't able to test 16- and 24-bit because virtio-gpu doesn't support those, but it's symmetrical to the mapped framebuffer, which I have tested on Matrox.
12:34sima: yeah that's another one, ideally we cover all combos so that people who enable more don't hit nasty surprises
12:35jfalempe: sima: amdgpu and nouveau are merged, but I didn't get reviews for i915/xe
12:36sima: I guess ping them again here
12:36sima: jfalempe, we do have an igt to test-drive the entire thing with igt with the debugfs interface, right?
12:36jfalempe: sima: sure, I will rebase it first, my latest patch is 2 months old
12:37sima: so if you haven't yet, might be good to double-check that intel-gfx-ci does run your test and it passes
12:37jfalempe: there is a debug interface, but it's not connected to igt. Also it's a bit unsafe to use it.
12:47sima: jfalempe, well igt is very controlled, nothing else should use the gpu
12:47sima: so would be good to have an igt for this, so that we could CI it
12:47sima: maybe even on gitlab with the stuff mripard is working on (with vkms or virtio-gpu), or perhaps on msm when that has support
12:47sima: max out on the test coverage and all that
12:49jfalempe: sima: ok, I will take a look. currently I made it work on all my intel-powered laptops (from Haswell to Lunar Lake).
13:56sima: jfalempe, sounds good
14:59MrCooper: while I'm in favour of radeonsi dropping clover in favour of rusticl, it's kind of unfortunate that it still requires RUSTICL_ENABLE=radeonsi
15:38karolherbst: MrCooper: well.. if radeonsi maintainers are fine with it, it could be enabled by default
15:39karolherbst: I don't really want to make the decision myself
15:39zmike: you can enable it by default for zink
15:39zmike: yolooooo
16:09alyssa: asahi is the only driver it's default on for
16:10alyssa: which is ironic because asahi isn't default on in Mesa at all
16:10alyssa: (:
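For context on the discussion above, rusticl is opted into per driver via Mesa's `RUSTICL_ENABLE` environment variable (a usage sketch; output depends on the installed hardware and drivers):

```shell
# Opt into rusticl for a specific gallium driver:
RUSTICL_ENABLE=radeonsi clinfo

# Several drivers can be listed, comma-separated:
RUSTICL_ENABLE=radeonsi,zink clinfo
```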
16:12a-user: MrCooper: using modesetting (I think; not explicitly setting amdgpu; using nixos, and new to needing to care about this subsystem, so this is somewhat opaque to me) adds like 2x more unasked-for resolution spam
16:13a-user: (sorry this is attempting to continue a conversation from 10 hours ago, not sure the etiquette)
16:16MrCooper: that's fine, if you don't like the modesetting driver's behaviour though, maybe try the amdgpu driver?
16:17a-user: Yes, sorry, I haven't been clear: I believe I've tried both. modesetting (or whatever it is nixos does when I don't explicitly specify amdgpu) adds like 2 dozen spam resolutions. amdgpu adds like 1 dozen spam resolutions. I'd like 0.
16:18a-user: This is using an edid override, because not using an edid override causes "no signal" on my tv after grub (works fine out of the box in windows on same machine, and with my old machine which has an old nvidia gpu)
16:19a-user: Though I'm not sure if this is due to my edid override, or universal behaviour, because this is all I have on hand right now
16:23a-user: Ideally there is an option to just say "stop adding all those resolutions not in my edid", but as far as I've been able to discover, there isn't? There might be on nvidia proprietary drivers.
16:24a-user: But I wanted to check with actual people who might know, rather than spend another million LLM tokens chasing hallucinations
16:25MrCooper: does the /sys/devices/pci*/*/*/drm/card*/card*-<output name>-*/modes file not contain those modes?
16:36a-user: cat /sys/class/drm/card1-HDMI-A-1/modes does seem to list all the modes
16:37a-user: I think I may have just discovered the edid-override is not working
16:37a-user: Because edid-decode /sys/class/drm/card1-HDMI-A-1/edid shows the original edid
16:37a-user: (ie that the tv sends)
16:37zamundaaa[m]: a-user: FYI amdgpu adds non-native modes on the kernel side
16:38a-user: But now the mystery is, if I do not include an edid-override kernel param, I get "no signal"
16:38zamundaaa[m]: And afaik there is indeed no way to turn that off
16:39a-user: perhaps I'm misunderstanding what I'm seeing here: am I actually just reading the i2c or whatever by doing edid-decode /sys/class/drm/card1-HDMI-A-1/edid ?
16:39a-user: Because that returns the original tv edid, not the override (but booting without the override -> no signal)
16:40a-user: zamundaaa[m] thank you for the clarification. To be absolutely clear: that is the case even in the best case when everything is nice and happy?
16:42zamundaaa[m]: Yes
16:42a-user: Thanks. So I am assuming the only option to get rid of resolution spam is to xrandr delmode etc
16:45zamundaaa[m]: Or patch your kernel. Or add an option upstream to disable this behavior
16:45MrCooper: not sure xrandr delmode works for modes not added via RandR in the first place
16:47sima: a-user, could be a kernel bug in not parsing the edid you have correctly
16:47sima: unless your hw is busted override really shouldn't be needed
16:48sima: also the spam resolutions might actually be in your edid, hdmi cea encodes an enormous pile of modes
16:48sima: and the kernel has become pretty good at parsing them all
16:50sima: a-user, now if those additional modes are indeed incorrect, that would be a kernel bug, since we really should only be adding modes that the edid encodes in one fashion or another
16:53a-user: Okay so stepping back, my situation is: new (used) computer w/ 5700 XT + c. 2015 TV that apparently only does 4k60 at 4:2:0 (everything else 4:4:4). TV works fine including at 4k60 with old computer w/ GTX 650, new computer on Windows, new computer using intel iGPU. TV works fine with new computer w/ 5700 XT... until after GRUB, then "No Signal".
16:55sima: yeah that sounds like a kernel bug to me
16:55a-user: I get an edid from linuxhw edid repo, override using kernel param, TV works fine with new computer using AMD now -- but edid does not list 4k60, and I want 4k60 which I know the TV can do
16:55sima: but gtg now, heading out for a play in zürich
16:55zamundaaa[m]: > if those additional modes are indeed incorrect, that would be a kernel bug, since we really should only be adding modes that the edid encodes in one fashion or another
16:55zamundaaa[m]: That's not how that works with amdgpu, unfortunately
16:56a-user: I edit the linuxhw edid to add a 4:2:0 block listing 4k60, now works at 4k60. While I'm in there, I also delete all the resolutions but the ones I actually foresee using. Still works fine, but xrandr/Gnome Setting > Display/various game display options still filled with garbage resolutions
16:59a-user: Current side-confusion (last 20 minutes): when booting with my stripped-down edid override, why is `edid-decode /sys/class/drm/card1-HDMI-A-1/edid` still showing the TV's original edid, despite my override? Is that the edid it is actively using, or is it reading from the tv to answer that despite actually using my override
17:24a-user: Okay I have resolved my confusion: I forgot I had booted with the earliest working version of my edited EDID for a sanity check. Rebooting with my fully stripped down edid, I can confirm: 1. /sys/class/drm/card1-HDMI-A-1/edid does indeed show my edid override (and not the tv's original edid). 2. /sys/class/drm/card1-HDMI-A-1/modes shows the same resolutions as xrandr 3. xrandr shows 8 resolutions that are not in my edid override
17:25a-user: And this is using amdgpu (or whatever services.xserver.videoDrivers = [ "amdgpu" ] does in nixos, presumably that)
17:30a-user: And my understanding of the situation is that if I want to not have (or even just not see) those extra modes, options that will NOT work are: 1. any existing configuration options. 2. using a different display that naturally sends a rock solid edid and is perfectly fine in every way
17:31a-user: Options which might work (or might not; just not yet explored): 1. get an nvidia card. 2. imperatively delmode/rmmode the extra resolutions from xrandr
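Option 2 above would look roughly like this (output and mode names are examples; per MrCooper's earlier caveat, this may not work for modes the kernel added outside of RandR):

```shell
# Detach an unwanted mode from the output, then delete it entirely:
xrandr --delmode HDMI-A-1 1920x1200
xrandr --rmmode 1920x1200
```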
17:35mareko: karolherbst: feel free to enable rusticl by default for radeonsi
17:37agd5f: a-user, file a bug: https://gitlab.freedesktop.org/drm/amd/-/issues
17:41a-user: I presume this needs to be 2 separate bugs, 1 about the tv and 1 about the added modes?
17:43agd5f: sure you can file two
18:45lumag: karolherbst, I've stumbled upon another issue when trying to cross-compile RustiCL. We have to specify the target and include path to LLVM bindgen, otherwise it can't find some of the system headers. Do you think something like this makes sense, or would you have other suggestions? https://lore.kernel.org/openembedded-core/20250327221807.2551544-8-dmitry.baryshkov@oss.qualcomm.com/
18:46karolherbst: mhhhhh
18:46karolherbst: I think this should probably be somehow handled in meson, but might be better to wait until the cross compilation problems are figured out?
18:48karolherbst: lumag: I think the issue will be that you'll have to do that for a couple of other projects and targets as well
18:48karolherbst: like I'm sure that NAK might run into similar issues
18:48karolherbst: but maybe it doesn't matter there?
18:51K900: lumag: It works in nixpkgs with no extra setup
18:51lumag: karolherbst, well, the question is whether it should be kept OE-specific or whether we should submit it upstream (in some version).
18:52lumag: K900, hmm.
18:52lumag: K900 cross-compilation?
18:52K900: Yes
18:52karolherbst: lumag: I'd rather that somebody fixes it on the meson side. There are outstanding cross-compilation problems there
18:52K900: Bindgen should respect normal CFLAGS things
18:53K900: So it should work
18:53lumag: K900, hmm.
18:53lumag: K900, do you have full clang toolchain or are you using gcc toolchain?
18:54lumag: karolherbst, one of the issues is that meson doesn't provide an easy way to get the target triplet...
18:54lumag: also do you mean just bindgen or some other issues?
18:55lumag: (and is there a meson issue for those troubles?)
18:55karolherbst: generally. Like there is the outstanding issue of what happens when different targets are used for host and target code
18:55K900: lumag: GCC
18:55karolherbst: and bindgen kinda has the same issue, that you might need it for both
18:56karolherbst: but yeah.. normally the bindgen invocation should just use the target environment
18:56lumag: K900: hmm. It totally fails without those flags by being unable to find headers.
18:56lumag: hmm, I'll keep it as OE-specific, claiming generic rust.bindgen() problems to be solved first
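One mechanism possibly behind K900's "bindgen should respect normal CFLAGS things" (an assumption about that setup, not confirmed in the chat): bindgen reads extra clang flags from the `BINDGEN_EXTRA_CLANG_ARGS` environment variable, so a cross build can pass the triple and sysroot without patching the build system:

```shell
# Sketch: triple and sysroot path are examples for a 32-bit ARM cross
# build; substitute the values from your toolchain.
export BINDGEN_EXTRA_CLANG_ARGS="--target=armv7-unknown-linux-gnueabihf --sysroot=/path/to/sysroot"
```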
19:00lumag: karolherbst, btw: I'm getting a weird issue when trying to build RustiCL for 32-bit ARM. Is it one of the supported targets?