02:12kode54: cheako: spilling is when referenced data exceeds the size of the register file, and it spills over into writing temporaries to scratch space and rereading them from memory
02:13kode54: My cup runneth over
02:13kode54: Literally, running out of registers and having to use ram for temporary data
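(An illustrative C sketch of the above, not from the channel: with more simultaneously live values than the register file holds, the compiler has to spill some of them to scratch memory and reload them later.)

```c
/* Illustrative only: if the target has, say, 32 registers, the 64
 * values that are live across both loops cannot all stay in registers,
 * so the compiler "spills" some to scratch memory (the stack here, or
 * dedicated scratch space on a GPU) and re-reads them later. */
float too_many_live_values(const float *in)
{
    float v[64];

    for (int i = 0; i < 64; i++)
        v[i] = in[i] * 2.0f;        /* all 64 results become live */

    float sum = 0.0f;
    for (int i = 0; i < 64; i++)
        sum += v[i] * v[63 - i];    /* ...and are all read back here */
    return sum;
}
```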
07:53pq: wv, what you are asking for is explicitly forbidden in DRM KMS. The rationale is to have a single process, known as "the DRM master", be in control of all the display on a device, so that random programs cannot mess things up. However, DRM leasing is a feature that allows co-operating processes to lease parts of KMS to another process, but that is mostly meant for whole-monitor hand-off on multi-head cards.
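(A minimal sketch of that lease hand-off, using libdrm's drmModeCreateLease(); how the object IDs are chosen and how the returned fd is transferred to the lessee process, e.g. over a Unix socket, is left to the caller.)

```c
#include <fcntl.h>
#include <stdint.h>
#include <xf86drmMode.h>

/* Lease one connector + CRTC + plane to another process. On success
 * the return value is a new fd that behaves like a restricted DRM fd
 * limited to those objects; on failure it is a negative errno value. */
int lease_objects(int master_fd, uint32_t connector_id,
                  uint32_t crtc_id, uint32_t plane_id)
{
    uint32_t objects[] = { connector_id, crtc_id, plane_id };
    uint32_t lessee_id;

    return drmModeCreateLease(master_fd, objects, 3, O_CLOEXEC,
                              &lessee_id);
}
```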
07:55wv: pq, what I want is to explicitly tell my application (cog/wpewebkit) to sit on the back layer, and play video (kmssink) on the top layer. This is my sole use case. I've been playing with Wayland, but I can't get it working like that..
07:56wv: and in my head, it looks so easy :-p
07:58pq: wv, if you don't use any service (e.g. a Wayland compositor) beneath both applications, you get to either hack out security checks in the kernel or modify both apps so that one creates a DRM lease and hands it over to the other to use. I'm not sure how well either works, mind, for both hardware and driver reasons.
07:58wv: by default, video is played through the GPU compositor, but I'm lacking performance there. So I just want to keep the video on the hardware plane. Don't want to do fancy stuff anyway
07:59wv: I've been playing around with drm-lease-manager, whose purpose is what you describe
07:59wv: handing out leases
08:00wv: but cog fails to start with a lease for some authentication reason. It gets a lease, detects properties and all, but fails to get through (wl_display fails to authenticate)
08:00pq: wv, the hardware reason why this is a bad idea might even be relevant to you: a single output is driven by a single CRTC, which determines the timings (update times) for the output. If you have two processes independently submitting updates to the same CRTC, they will randomly block or even fail each other, causing no-one to get a steady framerate.
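(A sketch of that contention with the atomic API, names illustrative: flips are queued against the CRTC's vblank, so when two unsynchronized processes submit to the same CRTC, one of them gets EBUSY, or stalls on a blocking commit, and loses its frame pacing.)

```c
#include <xf86drmMode.h>

/* Hypothetical flip helper: a nonblocking atomic commit that collides
 * with another update already queued on the same CRTC fails with
 * -EBUSY instead of hitting the next vblank on time. */
int try_flip(int fd, drmModeAtomicReq *req)
{
    int ret = drmModeAtomicCommit(fd, req,
                                  DRM_MODE_ATOMIC_NONBLOCK |
                                  DRM_MODE_PAGE_FLIP_EVENT,
                                  NULL);
    if (ret == -EBUSY) {
        /* someone else's update is already queued for this vblank */
    }
    return ret;
}
```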
08:01wv: pq, I have 2 hardware planes available
08:01pq: using two hardware planes really is fancy stuff :-)
08:01wv: in the old days, there were just two fbs available (/dev/fb0 and /dev/fb1)
08:01pq: yes, but they both feed to the same CRTC if you want both to show on the same monitor.
08:02pq: the legacy fb API was full of full frame copies and no defined timings
08:02pq: full frame copies on the CPU, even
08:02pinchartl: wv: wayland really seems the way to go for this, picking a compositor that supports KMS planes
08:03pinchartl: I got asked the very same question yesterday
08:03pinchartl: it's a common one
08:04wv: pinchartl, what compositor supports kms planes? you mean one can specify which application goes to what plane then?
08:04wv: and defining the hardware plane?
08:04pq: (disclaimer: I'm a Weston developer); Weston is probably one of the best compositors to get app contents straight to KMS planes without compositing on GPU when at all possible.
08:06pq: OTOH, if you have two completely independent apps and you want to make them look like one coherent app, with Weston that will need some extra development work to make happen.
08:06pq: that work would be window placement in the compositor
08:08pq: wv, you don't explicitly define window-plane associations, because that can conflict with your window placement and z-order. Instead, the compositor automatically uses KMS planes for the windows.
08:08wv: well, the use case is really rather simple. I have cog/wpewebkit, which by default renders video in a proper glimagesink, so using the GPU. But mine is too slow to handle all that. So wpewebkit also has a hole-punch approach, where you delegate the video to a custom sink. This sink would be waylandsink (or kmssink) on a proper overlay plane
08:08wv: and that's all I need
08:08wv: no other apps, or other windows, or minimize/maximize stuff
08:08pq: Two processes working together on the same screen is definitely not simple.
08:09daniels: pq: s/probably one of // :P
08:10pq: at least, if you care about smooth performance and coherent appearance.
08:10daniels: wv: and yeah, hardware doesn't allow you to control the planes independently anymore, so neither does the KMS API. you need one coherent process which can handle the whole thing for you. generally people use lightweight Wayland compositors (such as Weston) which try very hard to use hardware planes where possible. this lets the apps focus on being apps and not a presentation layer
08:12pq: wv, that wpewebkit punch-through - sounds like you *could* do all KMS programming in the same process, can you?
08:13pq: even if you use Wayland, it would be best to have both wpewebkit and the custom sink use the same Wayland connection, which means they would need to be in the same process, so that they could use sub-surfaces, and you don't have to hack a Wayland compositor for window placement.
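(A sketch of that single-connection setup, names illustrative: with the UI and the video sink in one process sharing one wl_display, the video gets its own wl_surface attached as a sub-surface of the main surface, which a plane-aware compositor can scan out on its own KMS plane.)

```c
#include <wayland-client.h>

/* Attach the video surface as a sub-surface of the main browser
 * surface. Both surfaces belong to the same wl_display connection, so
 * the compositor sees one coherent client. */
struct wl_subsurface *
attach_video_subsurface(struct wl_subcompositor *subcompositor,
                        struct wl_surface *video,
                        struct wl_surface *main_surface)
{
    struct wl_subsurface *sub =
        wl_subcompositor_get_subsurface(subcompositor, video,
                                        main_surface);

    wl_subsurface_set_position(sub, 0, 0);
    wl_subsurface_place_above(sub, main_surface); /* simple overlay */
    return sub;
}
```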
08:14wv: pq, I think UI and Webprocess are different processes
08:14pq: wv, are you sure that if you were to run webkit on Wayland, it still would not use Wayland sub-surfaces for the video on its own?
08:15wv: yes, as it renders to a glappsink => it uses the GPU for compositing
08:15wv: but thing is, I'm trying this out. I put cog fullscreen in wayland
08:15pq: if webkit used sub-surfaces, you wouldn't need a custom sink at all, and the Wayland compositor would automatically put the video on its own KMS plane and the rest of webkit on another KMS plane.
08:16wv: and then I just do a gst-launch videotestsrc ! waylandsink, but this is not visible at all
08:16wv: if I don't put cog fullscreen, it is working though
08:16pq: you are starting a new process. Don't do that.
08:18pq: The absolute best design would be to have webkit use a Wayland sub-surface for the video. Then you'd have a single Wayland client which makes the whole app coherent, and the compositor would automatically use all the KMS capabilities for the best performance.
08:19daniels: yeah, you need a compositor here, you can't have separate processes both directly driving different parts of the display
08:22pq: We've talked about two different designs: a) wpewebkit and the custom sink as separate processes, and b) wpewebkit using a Wayland sub-surface for the video. Both designs require a Wayland compositor in practice.
08:23pq: option a) is strictly inferior, because you would need to patch the compositor to make this arrangement look like a single app, instead of two separate apps.
08:24pq: Both designs allow using two KMS planes the way you want.
08:25pq: Well, I'm not quite sure option a) works if the video must be an underlay instead of an overlay, but b) does.
08:25wv: well, video should be an overlay. I'm on RGB565 planes (reducing bandwidth), so no alpha channel
08:25pq: ok
08:26pq: but that also means it cannot be a punch-through hole
08:27wv: well, punch-through... What's in a name. As long as an area is set aside for the video, whether it's beneath or above, I don't care. Agreed, I won't be able to put content on top of the video
08:27pq: you need alpha to have a hole (or colorkey)
08:28pq: right, that's important: not needing to put anything on top of the video
08:28pq: so you don't need a hole, you will be fine with a simple overlay
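(For contrast, a sketch of what the hole path would need, assuming a CPU-side ARGB8888 main buffer: a hole is just fully transparent pixels, which RGB565 cannot express, hence the overlay.)

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: make a rectangle of an ARGB8888 buffer fully
 * transparent so a video plane *below* would show through. With
 * premultiplied alpha, transparent means all-zero pixels. RGB565 has
 * no alpha channel, so this is impossible there - which is why the
 * video goes on a plane above the UI instead. */
static void punch_hole(uint32_t *pixels, int stride_px,
                       int x, int y, int w, int h)
{
    for (int row = y; row < y + h; row++)
        memset(&pixels[(size_t)row * stride_px + x], 0,
               (size_t)w * sizeof(uint32_t));
}
```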
08:28wv: option b looks like the better option, and a nice addition to webkit anyway. I'm only afraid that implementation-wise, this goes over my head
08:29wv: so you don't need a hole, you will be fine with a simple overlay => that's correct
08:29pq: I would be surprised to learn that webkit doesn't support sub-surfaces yet.
08:30pq: if it doesn't, I'm sure many would want it to, just like you
08:32pq: you said you can already get video to a custom sink and out of webkit, so to me it sounds like the hardest part is already done
08:33pq: then again, I'm familiar with Wayland and have no idea what cog/wpewebkit looks like inside. :-)
08:34wv: Well, I'm reading through the code and options a bit right now. And it appears that maybe I'll need some other options
08:34wv: USE_WPE_VIDEO_PLANE_DISPLAY_DMABUF and USE_GSTREAMER_GL look promising
08:35pq: video plane dmabuf sounds very interesting; GL, I'm not sure
08:35pq: does it refer to decoding with GL or display with GL?
08:37pq: while "display with GL" might end up on a KMS overlay plane too, it may also imply a copy done on the GPU, just in the app side.
08:45wv: display with GL I think, but with the copy done on the GPU. Did some dot-graph examination yesterday, and there's a glupload moving textures from the decoder to the GPU
08:50pq: wv, I was thinking of a rendering pass because EGL or GL does not have a way to send a texture to the window system as-is, so glupload might even be a second copy in the worst case, or zero-copy if dmabuf import succeeds.
08:51pq: if you have a GL texture, then I see no other way to push that to the window system than doing a GPU rendering pass (a copy).
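(A sketch of the zero-copy import branch pq mentions, single-plane case such as YUYV, assuming EGL_EXT_image_dma_buf_import and GL_OES_EGL_image_external; error handling omitted. The dmabuf becomes a texture with no copy, but drawing that texture is the unavoidable GPU rendering pass.)

```c
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <stdint.h>

/* Wrap a dmabuf in an EGLImage and bind it to an external texture.
 * GL has no way to hand a texture to the window system as-is, so
 * displaying it still costs one rendering pass. */
GLuint import_dmabuf_texture(EGLDisplay dpy, int dmabuf_fd,
                             int width, int height, uint32_t fourcc,
                             int offset, int stride)
{
    const EGLAttrib attribs[] = {
        EGL_WIDTH, width,
        EGL_HEIGHT, height,
        EGL_LINUX_DRM_FOURCC_EXT, (EGLAttrib)fourcc,
        EGL_DMA_BUF_PLANE0_FD_EXT, dmabuf_fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, offset,
        EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
        EGL_NONE,
    };
    EGLImage image = eglCreateImage(dpy, EGL_NO_CONTEXT,
                                    EGL_LINUX_DMA_BUF_EXT, NULL,
                                    attribs);

    /* the OES entry point must be fetched at runtime */
    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
            eglGetProcAddress("glEGLImageTargetTexture2DOES");

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
    image_target_texture(GL_TEXTURE_EXTERNAL_OES, (GLeglImageOES)image);
    return tex;
}
```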
08:54wv: I'm not completely following. What comes out of the decoder is not a GL texture. It only becomes a GL texture when passing through the GPU. So if I now disable this GSTREAMER_GL, and enable this WPE_VIDEO_PLANE_DISPLAY_DMABUF, I hope it'll be able to just pass from decoder to plane, without passing through the GPU
08:55pq: yes
08:55pq: I was simply explaining the "display with GL" path, not that
08:56pq: it sounded to me like you were thinking about enabling USE_GSTREAMER_GL rather than disabling it, hence my confusion.
09:01wv: pq, no, USE_GSTREAMER_GL was enabled, and the other one disabled, so...
09:01wv: doing a compile right now ;-)
09:09wv: hm, apparently it got removed from master https://github.com/WebKit/WebKit/commit/ae659460148afd04a3b40f9df0d742c801ed8c96
09:09wv: Well, I'll see
09:53pq: wv, oh, for content protection. Wonder if they have yet another path for direct dmabuf submission to Wayland sub-surfaces.
09:53pq: or maybe that goes through Gst somehow
10:20Company: I'm pretty sure they don't
13:13Company: "FINISHME: support YUV colorspace with DRM format modifiers"
13:13Company: sounds like Vulkan is indeed the future
13:20tnt: :)
13:21Company: my code works neither on AMD nor on Intel, and now I don't know if my code is broken or it's too futuristic because I get those warnings...
13:22Company: but I guess there aren't that many people importing YUV dmabufs on Vulkan yet
13:37anholt: robclark: prepare-artifacts.sh is the script to change
13:40robclark: anholt: looking at manpage, --strip-debug looks plausible
13:57Lynne: Company: an issue is that you need two different dmabuf formats on intel and amd
13:57Lynne: since Intel demands that both planes be close together in memory
13:58Company: and AMD demands that things be far apart, so the stride can be a multiple of 256, I know
13:58Company: so far I just get a red image on AMD (despite no validation errors), so no clue who's at fault there
13:58Company: and that fixme on Intel
14:00Company: and I think AMD doesn't do disjoint dmabufs either, though I stopped caring about that when I figured out my webcam doesn't give me disjoint buffers
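(A sketch of the import being attempted here, assuming VK_EXT_image_drm_format_modifier: an NV12 VkImage created with the exporter's explicit modifier and plane layouts. Since Intel and AMD impose different placement rules, the offsets and strides come from the exporter and are passed through verbatim rather than computed.)

```c
#include <vulkan/vulkan.h>

/* Create a 2-plane NV12 image for dmabuf import. plane_layouts[] holds
 * the exporter's offset and rowPitch per plane (size must be 0). */
VkImage create_nv12_image(VkDevice dev, uint32_t w, uint32_t h,
                          uint64_t modifier,
                          const VkSubresourceLayout plane_layouts[2])
{
    VkImageDrmFormatModifierExplicitCreateInfoEXT mod_info = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_DRM_FORMAT_MODIFIER_EXPLICIT_CREATE_INFO_EXT,
        .drmFormatModifier = modifier,
        .drmFormatModifierPlaneCount = 2,
        .pPlaneLayouts = plane_layouts,
    };
    VkExternalMemoryImageCreateInfo ext_info = {
        .sType = VK_STRUCTURE_TYPE_EXTERNAL_MEMORY_IMAGE_CREATE_INFO,
        .pNext = &mod_info,
        .handleTypes = VK_EXTERNAL_MEMORY_HANDLE_TYPE_DMA_BUF_BIT_EXT,
    };
    VkImageCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
        .pNext = &ext_info,
        .flags = VK_IMAGE_CREATE_DISJOINT_BIT,
        .imageType = VK_IMAGE_TYPE_2D,
        .format = VK_FORMAT_G8_B8R8_2PLANE_420_UNORM,   /* NV12 */
        .extent = { w, h, 1 },
        .mipLevels = 1,
        .arrayLayers = 1,
        .samples = VK_SAMPLE_COUNT_1_BIT,
        .tiling = VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT,
        .usage = VK_IMAGE_USAGE_SAMPLED_BIT,
        .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
        .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
    };
    VkImage image;
    vkCreateImage(dev, &info, NULL, &image);
    return image;
}
```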
14:03dj-death: is that importing a dmabuf for 3d or video?
14:03Company: dj-death: for video - I'm trying to get my webcam's NV12 dmabuf into Vulkan
14:04emersion: you can look at wlroots maybe
14:04Company: well, I get it into Vulkan - I'm trying to get it rendered
14:04dj-death: yeah but you're not using vulkan video?
14:04Company: no
14:04Company: not Vulkan video
14:04emersion: we're importing client DMA-BUFs and then rendering them via the YUV vk ext
14:04dj-death: so I don't think there is any restriction for disjoint stuff in terms of placement
14:05Company: actually, on Intel I'm using YUYV
14:05Company: because that laptop's camera doesn't speak NV12
14:05dj-death: I guess you can bind the same VkDeviceMemory with a different offset
14:05Company: I should try the NV12 one there
14:06Company: yeah, the NV12 dmabuf is just one fd
14:06Company: with offsets
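(A sketch of dj-death's suggestion for that single-fd case, continuing the snippet above: with a DISJOINT image, the dmabuf is imported once and the one VkDeviceMemory is bound to both memory planes at the exporter's offsets. Depending on how the image was created, the offsets may instead live in the pPlaneLayouts passed at creation, with zero memoryOffset here.)

```c
#include <vulkan/vulkan.h>

/* Bind one imported VkDeviceMemory (the single dmabuf fd) to both
 * memory planes of a disjoint NV12 image at different offsets. */
void bind_nv12_planes(VkDevice dev, VkImage image, VkDeviceMemory mem,
                      VkDeviceSize offset_y, VkDeviceSize offset_uv)
{
    VkBindImagePlaneMemoryInfo plane0 = {
        .sType = VK_STRUCTURE_TYPE_BIND_IMAGE_PLANE_MEMORY_INFO,
        .planeAspect = VK_IMAGE_ASPECT_MEMORY_PLANE_0_BIT_EXT,
    };
    VkBindImagePlaneMemoryInfo plane1 = {
        .sType = VK_STRUCTURE_TYPE_BIND_IMAGE_PLANE_MEMORY_INFO,
        .planeAspect = VK_IMAGE_ASPECT_MEMORY_PLANE_1_BIT_EXT,
    };
    VkBindImageMemoryInfo binds[2] = {
        {
            .sType = VK_STRUCTURE_TYPE_BIND_IMAGE_MEMORY_INFO,
            .pNext = &plane0,
            .image = image,
            .memory = mem,
            .memoryOffset = offset_y,
        },
        {
            .sType = VK_STRUCTURE_TYPE_BIND_IMAGE_MEMORY_INFO,
            .pNext = &plane1,
            .image = image,
            .memory = mem,
            .memoryOffset = offset_uv,
        },
    };
    vkBindImageMemory2(dev, 2, binds);
}
```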
14:47Company: ha, progress - my NV12 cam works on Intel
14:48Company: well, the anv works, and the cam does - my code seems not to
15:15anholt: anyone want to sanity check some deqp-runner quality of life improvements? https://gitlab.freedesktop.org/anholt/deqp-runner/-/merge_requests/61
15:24anholt: in particular, UnexpectedPass has become UnexpectedImprovement(Pass) so that we can also express Fail->Skip as UnexpectedImprovement(Skip)
15:32kisak: Thanks to everybody participating in and attending XDC. The live stream is appreciated. I hope that everybody has safe travels on their way home.
15:33eric_engestrom: anholt: is fail->skip really an improvement?
15:34anholt: eric_engestrom: yeah. otherwise you end up with junk left around in your xfails that makes it look like your driver sucks when you fixed it long ago. or, people try to fix things by dropping features and are surprised, asking me why it wasn't caught.
15:40eric_engestrom: ok
15:40eric_engestrom: (not really convinced, but also I have to go)
17:29zmike: gfxstrand: I blame you for this https://gitlab.freedesktop.org/mesa/mesa/-/issues/10016