02:54 dwlsalmeida[d]: managed to get interlaced working too
02:54 dwlsalmeida[d]: now at 89/135
02:55 dwlsalmeida[d]: some stuff is still faulting, I will debug that later this week
09:26 avhe[d]: dwlsalmeida[d]: so did the EOS thing fix the problem with corrupted macroblocks?
09:59 asdqueerfromeu[d]: dwlsalmeida[d]: I will be GOBsmacked if it actually ends up working 🥁
11:54 dwlsalmeida[d]: avhe[d]: The +16 did
11:55 dwlsalmeida[d]: I also copied the EOS marker and set the explicit eos flag to 1
11:57 dwlsalmeida[d]: Btw I just noticed that the nvidia beta driver sits at 97/135 so we’re not too far behind
11:59 dwlsalmeida[d]: Hey any ideas how we can force all planes of a vkimage to share the same value of gob height?
11:59 dwlsalmeida[d]: I hacked this, but a proper solution will be needed of course
12:20 dwlsalmeida[d]: Ah, also, how do I run CTS for this? I have no idea which tests are applicable
14:13 gfxstrand[d]: dwlsalmeida[d]: We talked about that a while back. We need to add something to NIL. Either a function to create two images at the same time or something that lets you specify tiling and always use the tiling from the smaller image.
14:15 dwlsalmeida[d]: gfxstrand[d]: yeah that's my question, i.e.: if anybody had an approach they preferred.
14:15 dwlsalmeida[d]: IMHO, we can check for `VK_IMAGE_USAGE_VIDEO_DECODE_DST_BIT_KHR` and friends, and special-case that to do what you said above
14:20 gfxstrand[d]: I'm not sure. If we make NIL make both of them at the same time, it'll make NVK more complicated. If we go the other way, we're betting on NIL's heuristics to always use a smaller tiling for a smaller image. I'm not sure I really like that bet.
14:21 gfxstrand[d]: So I think I'm leaning towards the former
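[editor's note: a minimal sketch of the "share one tiling across planes" idea discussed above. All names and the block-height heuristic here are illustrative assumptions, not NIL's actual API; the point is just that each plane's preferred GOB block height is computed, then every plane is clamped to the minimum so a multi-planar video image (e.g. NV12, with chroma at half height) ends up with one shared value:]

```rust
// Hypothetical sketch: NVIDIA GOBs are 8 rows tall, and an image picks a
// block height of some power-of-two number of GOBs. For multi-planar video
// images, compute each plane's preferred choice, then force all planes of
// the VkImage to the smallest one so they share the same GOB block height.
const GOB_HEIGHT_PX: u32 = 8;

/// Largest power-of-two number of GOBs (<= 32) that doesn't overshoot the
/// plane height. Illustrative heuristic, not NIL's real one.
fn preferred_block_height_gobs(height_px: u32) -> u32 {
    let mut gobs = 32;
    while gobs > 1 && (gobs / 2) * GOB_HEIGHT_PX >= height_px {
        gobs /= 2;
    }
    gobs
}

/// Take the minimum over all plane heights (e.g. [luma_h, chroma_h]) so
/// the chroma plane's smaller tiling wins for the whole image.
fn shared_block_height_gobs(plane_heights_px: &[u32]) -> u32 {
    plane_heights_px
        .iter()
        .map(|&h| preferred_block_height_gobs(h))
        .min()
        .unwrap_or(1)
}
```

The "create both images at once" option gfxstrand[d] leans towards would let NIL compute this minimum internally instead of betting that its per-image heuristic happens to agree across planes.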
18:39 dwlsalmeida[d]: airlied[d]: did you get the provisional VP9 stuff in already for radv or anv?
18:41 dwlsalmeida[d]: I volunteered with the vulkan video tsg to help with that, so after h.264 I will probably try to get us working with their WIP API, just to see whether that really works..
18:41 dwlsalmeida[d]: nobody has VP9 support yet, not even intel, amd and nvidia
18:43 dwlsalmeida[d]: lol wouldn't it be funny if we got support before their official driver
19:02 avhe[d]: fun fact i'm pretty certain nvidia licensed their vp9 block from hantro
19:02 avhe[d]: so you've got relevant hwaccel code in linux already, for v4l2
19:02 avhe[d]: for backward updates
19:03 avhe[d]: i think i remember reading that you rewrote that stuff in rust, daniel?
19:03 dwlsalmeida[d]: oh, I worked on that actually
19:04 dwlsalmeida[d]: I think a colleague did the v4l2 driver
19:04 dwlsalmeida[d]: and I rewrote a few parts in Rust, yes
19:04 avhe[d]: if you look at the nvdec_drv.h header in open-gpu-doc, it's pretty much taken verbatim from hantro's definitions
19:04 avhe[d]: (the vp9 section, i mean)
19:06 linkmauve: dwlsalmeida[d], hantro in Rust? Now I’m interested!
19:08 dwlsalmeida[d]: linkmauve: we have a small kernel library doing, among other things, the probability updates
19:08 dwlsalmeida[d]: that drivers share
19:08 dwlsalmeida[d]: I rewrote that in Rust, and ported a few drivers to use that, instead of the C version
19:09 dwlsalmeida[d]: which includes the hantro driver
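[editor's note: the "probability updates" shared by these drivers are VP8/VP9-style backward adaptation, where each frame's symbol counts are merged into the previous frame's probabilities (cf. libvpx's `merge_probs`). The sketch below is a from-scratch illustration of that formula, not the actual kernel library code:]

```rust
/// VP9-style backward probability adaptation (illustrative; modeled on
/// libvpx's merge_probs). `pre` is the previous 8-bit probability;
/// `ct0`/`ct1` count how often the 0 and 1 branches were coded last frame.
fn merge_prob(pre: u8, ct0: u32, ct1: u32, count_sat: u32, max_update: u32) -> u8 {
    let den = ct0 + ct1;
    // Probability observed this frame, rounded, clamped to the valid 1..=255
    // range; with no observations, fall back to the neutral 128.
    let observed = if den == 0 {
        128
    } else {
        (((ct0 as u64 * 256 + (den as u64 >> 1)) / den as u64).clamp(1, 255)) as u32
    };
    // The more counts we saw (saturating at `count_sat`), the more weight
    // the observed probability gets relative to the previous one.
    let count = den.min(count_sat);
    let factor = max_update * count / count_sat;
    // Rounded weighted average of old and observed probabilities.
    ((pre as u32 * (256 - factor) + observed * factor + 128) >> 8) as u8
}
```

This per-symbol merge is pure bit-twiddling over counts the hardware hands back, which is why it lends itself to a small shared library rather than being duplicated in every stateless decoder driver.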
19:09 linkmauve: Probability updates for decoders you mean? Is that in mainline already?
19:12 dwlsalmeida[d]: I've been trying to negotiate with the v4l2 maintainers for a couple of years now
19:12 linkmauve: :(
19:12 dwlsalmeida[d]: it's more a political thing tbh
19:12 linkmauve: I know. :(
19:12 linkmauve: Do you have a tree somewhere?
19:13 linkmauve: Being able to do V4L2 in Rust would be lovely.
19:13 dwlsalmeida[d]: last time we met in person, they seemed more welcoming to the idea, but still they want more proof that this is actually an improvement over our current C code
19:13 linkmauve: I’ve worked on the JPEG driver for cedrus, but it always feels so brittle.
19:13 linkmauve: In C.
19:13 dwlsalmeida[d]: https://lwn.net/Articles/970565/
19:14 linkmauve: In Rust I would feel much more at ease with parsing user images in the kernel.
19:14 dwlsalmeida[d]: well, show up to the Media Summit
19:15 dwlsalmeida[d]: one thing they told me is that they want a second person working on this
19:15 dwlsalmeida[d]: before committing
19:15 avhe[d]: linkmauve: v4l2 has you parsing jfif headers in kernel land? :blobcatnotlikethis:
19:15 linkmauve: avhe[d], yes.
19:15 avhe[d]: dwlsalmeida[d]: right that was the paper i read, i hope it gets mainlined eventually
19:16 linkmauve: dwlsalmeida[d], I’m definitely interested, are you on #linux-media or similar?
19:16 linkmauve: For cedrus and for rkdjpeg, at least.
19:16 avhe[d]: linkmauve: weird design... i guess some accelerators take the thing and parse it in microcode, so v4l2 needs access to it?
19:17 linkmauve: avhe[d], no, it’s just that it’s a simple enough format that no one felt like adding a complex uAPI for it.
19:18 linkmauve: But then you have issues like capture format selection and allocation must happen before queuing the JFIF data, but said data contains properties like the capture format and bit depth and such.
19:18 linkmauve: Last time we discussed that issue the stateless API was suggested, but I’m still not completely certain this is the right way forward.
19:21 avhe[d]: personally i'd say stateless makes more sense, especially in kernel land ¯\_(ツ)_/¯
19:22 linkmauve: avhe[d], it just isn’t how all JPEG drivers got written so far.
19:24 linkmauve: avhe[d], in the case of JPEG, a single file (or frame in MJPEG) is freestanding, it doesn’t require any other piece of data to decode.
19:24 linkmauve: So here it’s more of a whether to use the request API or not.
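[editor's note: the chicken-and-egg problem described above, i.e. the capture format and allocation must be decided before queuing the JFIF data, yet the data itself carries the dimensions and bit depth, is why userspace ends up pre-parsing the frame header. A minimal sketch of that pre-parse, with the marker walk simplified (no handling of entropy-coded segments, restart markers, or progressive SOF2):]

```rust
/// Dimensions and component count pulled from a baseline JPEG SOF0 marker.
#[derive(Debug, PartialEq)]
struct JpegFrameInfo {
    precision: u8,
    height: u16,
    width: u16,
    components: u8,
}

/// Walk the length-prefixed marker segments after SOI until SOF0 (0xFFC0)
/// and decode its payload. Assumes a well-formed header section, which
/// holds for typical camera/encoder output.
fn parse_sof0(data: &[u8]) -> Option<JpegFrameInfo> {
    if data.len() < 2 || data[0] != 0xFF || data[1] != 0xD8 {
        return None; // not a JPEG: missing SOI marker
    }
    let mut i = 2;
    while i + 4 <= data.len() {
        if data[i] != 0xFF {
            return None;
        }
        let marker = data[i + 1];
        let len = u16::from_be_bytes([data[i + 2], data[i + 3]]) as usize;
        if marker == 0xC0 {
            // SOF0 payload: precision, height (BE), width (BE), Nf
            let p = &data[i + 4..];
            if p.len() < 6 {
                return None;
            }
            return Some(JpegFrameInfo {
                precision: p[0],
                height: u16::from_be_bytes([p[1], p[2]]),
                width: u16::from_be_bytes([p[3], p[4]]),
                components: p[5],
            });
        }
        i += 2 + len; // skip marker bytes plus the length-prefixed payload
    }
    None
}
```

With the width, height, and component count in hand, userspace can set the capture format and allocate buffers before queuing the bitstream, which is exactly the ordering constraint the stateless/request API discussion is about.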
19:25 dwlsalmeida[d]: linkmauve: I should be on #linux-media yeah, eventually I get logged out automatically by the IRC lounge service being used here at Collabora
19:25 linkmauve: dwlsalmeida[d], are you aware of my onix crate btw? https://linkmauve.fr/dev/onix/
19:25 dwlsalmeida[d]: btw, one of the things I wanted to do was to rewrite the JPEG parser in Rust too, but no use getting more and more code in before we get Mauro and others onboard..
19:26 linkmauve: Ah, I remember trying to use the IRC stuff offered at Collabora, before switching back to my own XMPP gateway which worked much better. :D
19:26 linkmauve: dwlsalmeida[d], indeed…
19:26 dwlsalmeida[d]: tbh, no, I hadn't heard of your crate
19:26 linkmauve: But that’d probably be one of the first things I’d do too, because I’m more interested in images than in videos atm.
19:27 dwlsalmeida[d]: huh, AVIF, cool
19:27 dwlsalmeida[d]: > Onix is an image format library, which relies on hardware decoders and encoders using the V4L2 API. It currently supports both JPEG and WebP (lossy, opaque, static), and will support AVIF in the future.
19:27 linkmauve: dwlsalmeida[d], API still in flux, ideally it’d be equivalent to the image crate, but for hardware decoding.
19:27 linkmauve: Someone offered me a rk3588 board exactly for the purpose of adding AVIF support to onix!
19:28 dwlsalmeida[d]: how does this work? is it a standalone binary, or is it integrated somewhere else?
19:29 linkmauve: For now it’s just a Rust crate, which you can use from your Cargo.toml like any other; I have the beginnings of a gdk-pixbuf loader written, as well as some C API for easy integration from FFI languages.
19:30 linkmauve: The example binaries included are for testing mostly, an encoder, a decoder, one which displays to DRM (if the display controller supports the capture format on a plane) and another to Wayland’s zwp_linux_dmabuf_v1.
19:31 linkmauve: On my PinePhone it’s somewhat miraculous, instead of waiting 1s, sometimes 2s to display an image using eog or loupe, it takes less than 50ms because it delegates everything to the hardware. :D
19:31 linkmauve: From decoding to rgb conversion and to scaling to the screen’s size.
19:34 dwlsalmeida[d]: cool stuff!
19:34 linkmauve: But obviously, only for JPEG and WebP, and even then at a very limited resolution.
19:35 linkmauve: (As you can see in the joke image I put on the website, there is a corruption happening if the image is larger than 2047px. :p)
19:36 HdkR: That's a pretty sad limitation for 4k screenshots :)
19:37 linkmauve: Indeed, and I haven’t managed to get documentation from AllWinner about improving that. For H.264 they support having an auxiliary buffer for the 4K usecase, but I haven’t managed to do the same for VP8.
19:54 airlied[d]: dwlsalmeida[d]: I wrote the original Vulkan vp9 impl on radv as a mesa extension; I think I rebased it onto the provisional API a few months ago, but ffmpeg has some limitations in its vp9 support that were beyond me
19:55 dwlsalmeida[d]: join the gstreamer side of the force..
19:55 dwlsalmeida[d]: heheh
19:59 dwlsalmeida[d]: speaking of provisional things, I am using your year-and-a-half-old nouveau.ko patch that makes it possible to create a video queue
19:59 dwlsalmeida[d]: I should probably send that thing upstream one of these days