00:00airlied: no, Red Hat used to ship that crap as our hypervisor :-P
00:01DemiMarie: crap?
00:02airlied: yeah at least from our paid customers pov, I'm sure it has use cases, but kvm just made life a lot simpler
00:02DemiMarie: not surprised
00:02DemiMarie: My understanding is that most uses of Xen nowadays are for things KVM just can’t do, at least not without a ton of additional work.
00:03airlied: or because someone still uses citrix?
00:03DemiMarie: not my use-case
00:04DemiMarie: In Qubes OS we use PCI passthrough so that the NICs and USB controllers are handled by less-privileged VMs, thus protecting the host.
00:05karolherbst: but couldn't that be done with KVM as well?
00:11Ermine: doesn't xen utilize kvm?
00:13DemiMarie: karolherbst: not easily at least, because KVM doesn’t support one VM providing networking to another directly
00:14airlied: nope
00:14karolherbst: DemiMarie: that kinda sounds like a configuration problem?
00:14DemiMarie: karolherbst: nope, it’s way more fundamental.
00:15DemiMarie: In Xen, two VMs can communicate directly
00:15karolherbst: mhh
00:15DemiMarie: In KVM, you have to write a bunch of userspace stuff and try to make it work
00:16DemiMarie: The closest you can get is virtio-vhost-user but that is still experimental and also assumes that the backend is trusted. In Qubes OS, the backend is not trusted.
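For context on the claim above: Xen's grant tables are what let two VMs share memory directly, with no trusted host process in the path. A minimal sketch using libxengnttab/libxengntshr from the Xen tools (the peer domid and single-page size here are hypothetical placeholders):

```c
#include <stdio.h>
#include <xengnttab.h>

int main(void)
{
    uint32_t ref;

    /* Granting side: share one writable page with the peer domain.
     * Domid 1 is a placeholder; a real frontend learns the peer's
     * domid via XenStore. */
    xengntshr_handle *xgs = xengntshr_open(NULL, 0);
    void *page = xengntshr_share_pages(xgs, 1 /* peer domid */, 1, &ref, 1);
    if (!page)
        return 1;

    snprintf((char *)page, 4096, "hello from the frontend");
    printf("grant ref to hand to the peer: %u\n", ref);

    /* Mapping side, running in the peer VM with our domid and ref:
     *   xengnttab_handle *xgt = xengnttab_open(NULL, 0);
     *   void *peer = xengnttab_map_grant_ref(xgt, granter_domid, ref,
     *                                        PROT_READ | PROT_WRITE);
     */
    return 0;
}
```

This is the primitive that Qubes-style disaggregated backends (net, USB) are built on.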
00:16mareko: zmike: any further comments on https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27736 ?
00:17DemiMarie: Xen is also working on being able to deprivilege dom0 and on safety certification, so that one can have e.g. a safety-critical QNX guest alongside Linux guests doing infotainment stuff.
00:17DemiMarie: Trying to do that under KVM would be a nightmare, if it can even be done at all.
00:18DemiMarie: Also Xen’s security process is vastly better than Linux’s, and therefore KVM’s.
00:22zzoon[m]: airlied: when you have time, could you review https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/28063 ?
00:28airlied: zzoon[m]: left a comment
00:29zzoon[m]: thanks!
00:44zmike: mareko: I've still had the tab open, but I've been too busy to get back to reviewing
00:44zmike: I didn't intend for that to be a blocking comment if it's been stalling pepp's review
00:44zmike: hoping to get to it in the next couple days if someone doesn't beat me to it
00:47HdkR: `amdgpu 0005:03:00.0: [drm] *ERROR* Error waiting for INBOX0 HW Lock Ack` Anyone ever see this error spamming in dmesg, or should I try updating my kernel?
00:56DemiMarie: airlied: do `FOLL_LONGTERM` pins of VRAM work?
01:03airlied: DemiMarie: don't think it makes any sense
01:04DemiMarie: airlied: Ouch. Why?
01:05airlied: VRAM isn't like RAM
01:05airlied: the PCIE bar can act like a remapping table on some gpus
01:05airlied: probably easier to not expose mappable VRAM to guests
01:06DemiMarie: airlied: what will that break?
01:06DemiMarie: Does it mean no Vulkan and no OpenGL4.6+?
01:07DemiMarie: If so, it’s probably better to make whatever Xen-side changes are needed to make it work.
01:07airlied: probably hurts performance on those
01:07DemiMarie: how much?
01:07airlied: but I think you can get away with just doing everything in RAM instead of VRAM where you want mappings
01:07DemiMarie: what about vkMapMemory?
01:07airlied: though not sure if some apps always assume you can map vram
01:08airlied: they shouldn't but who knows what ppl do
01:08DemiMarie: yeah
01:09DemiMarie: I suspect it will really hurt compute, though.
01:09DemiMarie: Compute seems to care much more about shared mappings
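A sketch of the fallback airlied suggests ("doing everything in RAM instead of VRAM where you want mappings"): at allocation time, pick a HOST_VISIBLE memory type that is not DEVICE_LOCAL, so vkMapMemory never touches the VRAM BAR. The helper name is made up:

```c
#include <vulkan/vulkan.h>

/* Find a host-visible, host-coherent memory type that is NOT
 * device-local, so CPU mappings land in system RAM rather than in
 * the PCIe BAR window. Returns -1 if no such type exists. */
static int32_t find_host_ram_memory_type(VkPhysicalDevice phys)
{
    VkPhysicalDeviceMemoryProperties props;
    vkGetPhysicalDeviceMemoryProperties(phys, &props);

    const VkMemoryPropertyFlags want =
        VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT |
        VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;

    for (uint32_t i = 0; i < props.memoryTypeCount; i++) {
        VkMemoryPropertyFlags flags = props.memoryTypes[i].propertyFlags;
        if ((flags & want) == want &&
            !(flags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT))
            return (int32_t)i;
    }
    return -1; /* e.g. a UMA iGPU where every type is device-local */
}
```

The catch, as discussed above, is apps that assume a mappable DEVICE_LOCAL type exists (ReBAR-style heaps) and never take the staging-copy path.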
01:14jenatali: D3D has gotten by without mapping VRAM until *very* recently
01:15DemiMarie: How recently?
01:15jenatali: Today
01:15DemiMarie: Literally March 11 2024?
01:16jenatali: Yes
01:16DemiMarie: What was announced?
01:16DemiMarie: If I can avoid mapping VRAM that makes things vastly simpler.
01:17jenatali: Demi: https://devblogs.microsoft.com/directx/agility-sdk-1-613-0/
01:18DemiMarie: jenatali: how long before applications will require upload heaps?
01:18* DemiMarie wishes that upload heaps had never been created
01:19jenatali: Dunno. Probably a while
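For reference, the Agility SDK release linked above exposes this as "GPU upload heaps": CPU-visible VRAM as a first-class D3D12 heap type. A hedged C sketch of the support check, assuming the 1.613-era headers where the flag lives in OPTIONS16:

```c
#define COBJMACROS
#include <windows.h>
#include <d3d12.h>

/* Query whether CPU-mappable VRAM (D3D12_HEAP_TYPE_GPU_UPLOAD) is
 * available; apps are expected to fall back to plain UPLOAD heaps in
 * system RAM when it is not (e.g. no ReBAR). */
static BOOL gpu_upload_heaps_supported(ID3D12Device *dev)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS16 opts16 = {0};
    if (FAILED(ID3D12Device_CheckFeatureSupport(dev,
            D3D12_FEATURE_D3D12_OPTIONS16, &opts16, sizeof(opts16))))
        return FALSE;
    return opts16.GPUUploadHeapSupported;
}
```

Because it is an optional capability rather than a requirement, apps that follow the SDK guidance keep working when the BAR is not mappable, which is what makes "a while" a plausible answer.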
01:20karolherbst: not that it changes much because drivers need it anyway
01:20kode54: Have fun teaching everyone how to turn on rebar, if they can
01:21DemiMarie: karolherbst: why is that?
01:21karolherbst: because they wanna upload stuff to the GPU in a non painful way
01:22karolherbst: like NVK relied on being able to map VRAM since forever
01:22DemiMarie: what about Intel and AMD?
01:23kode54: Arc already requires it outright or else it runs like crap
01:23karolherbst: dunno, but probably the same
01:23DemiMarie: kode54: crap?
01:23kode54: Like worse performance than really old hardware
01:24DemiMarie: because of faulting on each access?
01:24DemiMarie: Anyway, so this will need to be dealt with in Xen or we will need to switch to KVM or not support GPU acceleration.
01:24DemiMarie: The old version that runs the userspace driver on the host is not even being considered.
01:25jenatali: Out of curiosity, is anybody working on an implementation of the AMD work graph extension (for RADV or otherwise)?
01:26kode54: I don’t know if faulting is why the windows drivers are slow since I don’t have that information
01:26kode54: I just know they outright tell people to have rebar support, and reviewers have found the card performance to be a stuttery mess without it
01:34DemiMarie: How common is rebar support nowadays?
01:34DemiMarie: airlied: on which GPUs can the BAR act as a translation table?
01:35airlied: amd and nvidia do it
01:35airlied: though I don't think amdgpu takes too much advantage of it
01:35airlied: but I haven't looked in a while
01:38agd5f: we've supported it in amdgpu for maybe 8-10 years? Christian added the bar resizing to the kernel PCI code.
01:38agd5f: As long as you have enough MMIO space
01:41agd5f: Simplifies the kernel side since you never have to worry about faulting BOs in and out of the BAR window
01:46airlied: agd5f: I don't think you do translations in the BAR though
01:46airlied: like you resize it
01:46airlied: but you map BAR address 0x100 to VRAM address 0x100 always
01:47airlied: nvidia has a page table between VRAM and the BAR
01:47airlied: so even with a 256MB BAR you can map any part of the 8GB VRAM at page granularity, it's just a pain because you have to do evictions
01:51DemiMarie: agd5f: why does amdgpu use non-refcounted pages?
04:14marex: airlied: I wouldn't mind that, but is there something convenient like drivers/gpu/drm/drm_gem_dma_helper.c with iommu support ? Or how do I even ... what do I even grep for ?
04:17marex: iommu_iova_to_phys maybe ?
04:18marex: nope
04:20airlied: you enable the iommu and the dma layer should do the magic
04:20marex: airlied: hmmm, so ... I would use the SHMEM allocator, then use -something- from probably include/linux/iommu.h to turn buffer I get from that shmem allocator into ... uh ... IOVA ? ... and pass that to the device ?
04:21airlied: dma_map_* should do it
04:22airlied: https://www.kernel.org/doc/Documentation/DMA-API-HOWTO.txt
04:25marex: airlied: lemme read that, thanks
04:25airlied: now I'm not 100% sure how to make it linear mapping from the device side, but there should be info somewhere
04:25marex: airlied: I was always under the impression that the iommu was mostly used to protect the system from devices accessing memory they shouldn't
04:27airlied: btw cma should work on x86 as well, just not all distros enable it
04:27airlied: cma=<size> on command line might be needed
04:28marex: airlied: right (I'm mostly familiar with the arm side)
04:31marex: airlied: drivers/gpu/drm/rockchip/rockchip_drm_gem.c: ret = iommu_map_sgtable(private->domain, rk_obj->dma_addr, rk_obj->sgt,
04:31marex: airlied: I think I have a winner, even with an example
04:31marex: one I can even digest easily, this is awesome
04:54marex: airlied: thanks !
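A minimal sketch of the dma_map_* flow from the HOWTO linked above: the dma-mapping layer hides whether an IOMMU sits behind the device and hands back device-visible addresses either way. The function and its arguments are illustrative:

```c
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Map a scatter-gather table for DMA; if the device is behind an
 * IOMMU, the dma layer builds the IOVA mappings transparently and
 * usually merges the entries into one contiguous device address. */
static int example_map_for_device(struct device *dev, struct sg_table *sgt)
{
    int ret = dma_map_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
    if (ret)
        return ret;

    /* Program the hardware using sg_dma_address()/sg_dma_len() of
     * each mapped entry here. */

    dma_unmap_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
    return 0;
}
```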
05:08Calandracas: Are there any plans for rusticl on panfrost at some point in the future?
05:20CounterPillow: The future is now
05:20CounterPillow: panfrost is one of the supported drivers for rusticl: https://docs.mesa3d.org/envvars.html#envvar-RUSTICL_ENABLE
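Worth noting for anyone else caught out by this: rusticl drivers are opt-in at runtime, so (assuming a Mesa build with rusticl compiled in) you need something like `RUSTICL_ENABLE=panfrost clinfo` before the device shows up.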
06:16airlied: bcheng, Lynne : I've pushed https://github.com/airlied/FFmpeg/tree/av1-decode-wip as the code I think should work on the ffmpeg side (it doesn't work yet though on amd at least)
06:56airlied: bcheng, Lynne : okay I'm pretty unsure how this is meant to work :-P
06:58airlied: tchar__: I don't trust that code in radv for filling out the ref frame map
06:58airlied: but I'm unsure how the API is meant to be used here
09:07robmur01: marex: you only need to bother with the IOMMU API if you care about managing the address space and exactly *where* buffers are mapped in the device view
09:08robmur01: otherwise just use drm_gem_dma_helpers and it all simply happens by magic
09:09robmur01: (the iommu-dma layer can't strictly *guarantee* to linearise any given scatterlist, but it does try its best)
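For contrast with the dma_map_* route above, the explicit IOMMU API that robmur01 is referring to (and that the rockchip example found below uses) looks roughly like this; the driver owns the domain and chooses the IOVA itself, which is the only way to *guarantee* a linear device view. Domain setup is omitted, and the function name is illustrative:

```c
#include <linux/iommu.h>
#include <linux/scatterlist.h>

/* Map an sg_table at a caller-chosen IOVA inside a driver-managed
 * IOMMU domain, linearising the buffer in the device's view. */
static ssize_t example_iommu_map(struct iommu_domain *domain,
                                 unsigned long iova,
                                 struct sg_table *sgt)
{
    return iommu_map_sgtable(domain, iova, sgt,
                             IOMMU_READ | IOMMU_WRITE);
}
```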
09:17pq: DemiMarie, gfx card hot-unplug, perhaps?
09:22pq: DemiMarie, see https://dri.freedesktop.org/docs/drm/gpu/drm-uapi.html#device-hot-unplug
09:24pq: SIGBUS would be bad, if you expect userspace to not crash on the spot.
09:27HdkR: Oh, it looks like Anv and Iris compile on AArch64 now?
09:27HdkR: Might need to get a Battlemage GPU for testing
09:29airlied: Arc you mean
09:30HdkR: Well, I'm in no rush so I can wait for Battlemage :P
09:34psykose: still surprised they actually named it that because it's a cool name
09:41HdkR: Using D&D class names sorted alphabetically is cute
09:48pepp: DemiMarie: my coworkers are aware of how Xen works and figured out a way to expose GPU memory to the guest correctly
09:49pepp: DemiMarie: but I don't know much about the details of the implementation
09:51tchar: airlied: can you elaborate on the issue you are seeing? That code is working around some firmware bugs, so it's a bit cursed.
09:52tchar: Oh, I see there's a new FFmpeg branch, I'll give it a spin
09:56airlied: tchar: I'm not sure the new ffmpeg is right either
09:56airlied: the main problem is around how many reference frames/slots we need to send
09:57airlied: I'm not sure how to fill out ref_frame_map properly
09:58airlied: like if we have 7 frame refs pointing at 2 references, I'm not sure what ref_frame_map needs to contain
09:59airlied: if we only send two dpb references vs the old code which sends 8 slots with the same image in different indices
09:59airlied: so the old code and my hacks that work send referenceSlotCount = 8 pretty much always
09:59airlied: and in those 8 there are repeated slot indices
10:00airlied: and that fills out ref_frame_map all properly
10:00airlied: but if we don't send 8, but instead only send say 2, I'm having trouble working out the ref_frame_map contents that will work
10:01airlied: I'll try and attack it again tomorrow when I've had a coffee to see if I can at least work out a good description :-P
10:14tchar: airlied: yeah, you're right that the intention is only to send 2 dpb references in pReferenceSlots in that case, and you use the referenceNameSlotIndices to convey the "duplicates"
10:15tchar: i'll see if I can spot any issue in the meantime
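A hedged sketch of what tchar describes, based on the VK_KHR_video_decode_av1 structures (the two-unique-slot layout is invented for illustration): the seven AV1 reference names collapse onto unique DPB slots via referenceNameSlotIndices, so pReferenceSlots only carries the distinct pictures:

```c
#include <string.h>
#include <vulkan/vulkan.h>

/* Seven reference names (LAST..ALTREF) resolving to just two unique
 * reference pictures in DPB slots 0 and 1; the duplication lives in
 * this table instead of in repeated pReferenceSlots entries. */
static void fill_av1_reference_names(VkVideoDecodeAV1PictureInfoKHR *pic)
{
    const int32_t name_to_slot[VK_MAX_VIDEO_AV1_REFERENCES_PER_FRAME_KHR] =
        { 0, 0, 0, 1, 1, 1, 1 };

    memcpy(pic->referenceNameSlotIndices, name_to_slot,
           sizeof(name_to_slot));

    /* The enclosing VkVideoDecodeInfoKHR then sets
     * referenceSlotCount = 2, one entry per unique slot, rather than
     * 8 slots with repeated images. */
}
```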
12:06Calandracas: CounterPillow, that's awesome that panfrost is supported. I thought it wasn't because of this page: https://docs.mesa3d.org/drivers/panfrost.html
12:06Calandracas: "Other graphics APIs (Vulkan, OpenCL) are not supported at this time."
12:09Calandracas: Neat, I have hardware for all supported drivers
12:23Calandracas: rusticl is super awesome and a complete game changer. Now I can do OpenCL development on my pinebookpro
12:52randevouz: Yeah, the basic arithmetic with all the list/seq/predicate functionality is in the core of the W3C libraries; that confused me, so disambiguation there. It's an unlicensed GitHub project, and Turtle is a subset of N3: https://ruby-rdf.github.io/rdf-n3/RDF/N3/Algebra/Math/Negation.html. But I am lost, as I got tired. Not sure if I have to write my own stack; needs testing. And they have no logarithms. Fell asleep yesterday before inspecting the compression or doing any testing.
12:54aleasto: is there an environment variable to disable the egl zink fallback if it breaks an app?
12:54zmike: no
12:55aleasto: sad
13:02agd5f: airlied, we have a page table too, but we don't use it.
13:07bcheng: airlied: filling out referenceSlotCount = 8 with repeated slotIndex values is illegal, I believe.
13:10MrCooper: aleasto: LIBGL_ALWAYS_SOFTWARE=1 ?
13:11bcheng: airlied: the problem that the ref_frame_map filling code is working around is that the vulkan api only provides the references used by the frame, not all 8 codec-defined slots. But the FW was designed for other APIs, which are always given the state of the codec DPB; in that case, if a frame stops being seen in ref_frame_map, the FW drops the metadata for that slot
13:12bcheng: the code tries to fill the real references into ref_frame_map first, then, as a workaround, fills in the slots that were not specified by the app, so that the FW keeps the metadata alive
13:16mripard: we're moving the drm-misc repo to gitlab, expect disruption for some time
13:21randevouz: https://www.w3.org/TR/xpath-functions/ they have natural and base10 logarithms, but somehow the ruby project lacks them.
13:52mripard: the migration is done now
14:32karolherbst: the hell.. marge nuked my pipeline, which succeeded in ` 59 minutes 31 seconds, queued for 4 seconds` :')
14:59Lynne: airlied: added your dedup changes to my branch - https://github.com/cyanreg/FFmpeg/tree/av1dec
14:59Lynne: also rebased it to git master
15:16Lynne: on nvidia, it crashes in vkCreateVideoSessionParametersKHR, which is very weird
15:31Lynne: right, fixed, I forgot that the spec forbids empty av1 session params, which we use for flushing the decoder on startup
15:32Lynne: now both drivers crash during queue submission (though with radv, the kernel outright rejects the CS)
15:50MrCooper: jfalempe: AFAIK a cache flush doesn't flush WC buffers
16:06marex: robmur01: ack, thanks for the input. I need to dig into this first, then I'll come back (or probably won't, because this seems like a perfect fit for my purposes)
16:06mareko: DemiMarie: one of the Xen architects now works for AMD
16:09DemiMarie: pepp: does it support dGPUs or only iGPUs?
16:09Lynne: airlied: with f3ab454f0 reverted in mesa, it sorta works, I get an image!
16:10Lynne: it's not correct, but it's a start
16:11pepp: DemiMarie: both
16:11DemiMarie: pepp: does it handle paging of GPU memory and resizeable BAR?
16:12pepp: DemiMarie: I don't *think* ReBAR is supported. If "paging" means "moving memory around", then yes
16:14DemiMarie: No ReBAR could be a significant problem for broader use.
16:16pepp: DemiMarie: I don't see why?
16:16DemiMarie: pepp: because some GPU drivers simply require it
16:17DemiMarie: Intel Arc and NVK for example
16:26mareko: it's open source, people can change it
16:30DemiMarie: mareko: my concern is that Xen-specific code in Intel and Nvidia drivers will receive no testing in upstream CI, and users will be left with bugs that are nigh-impossible to track down, much less fix.
16:44DemiMarie: pepp: do AMD GPUs have ReBAR?
16:51pepp: DemiMarie: yes.
17:03mareko: DemiMarie: that's what SW development is for
17:13tchar: Lynne: I made a couple of commits; it sounds like you already fixed one of them: https://github.com/charlie-ht/FFmpeg/tree/av1-decode-wip
17:14tchar: I'm looking now at the rvp / rav mgmt, to try and better match what the spec ended up on
17:16Lynne: tchar: yes, all are fixed
18:37DemiMarie: pepp: would it be reasonable to write an email to both xen-devel and dri-devel to try to get this all sorted out?
18:40airlied: bcheng: i do wonder, shouldn't we, according to the spec, put the complete dpb state in begin video coding?
18:42DemiMarie: mareko: Neither I nor anyone else at Invisible Things Lab is a full-time GPU driver developer, and even if I was, I don’t have the resources to do the kind of testing that Intel and AMD do. If the same APIs that work outside of Xen also work under Xen, then Xen users benefit from the non-Xen testing for free.
18:44DemiMarie: mareko: Also, there are quite a few additional hypervisors, mostly in the embedded space. Having GPU drivers need to know about a hypervisor is a giant layering violation, IMO.
18:45DemiMarie: The GPU driver should just call the appropriate kernel APIs, and it should be the responsibility of the hypervisor interface to ensure that everything just works.
18:48alyssa: mareko: i'm planning to rereview opt_varyings today or tomorrow
18:48alyssa: fwiw
18:56eric_engestrom: karolherbst: marge's timeout starts when it finishes pushing the branch, and then gitlab starts processing, figures out it needs to create a pipeline and how, and then the pipeline is created. Depending on the load on gitlab, I've seen this take up to a couple of minutes, so I'm guessing this is where the extra 25+ seconds you are missing went
18:58eric_engestrom: it's really unfortunate though, and we've been discussing ways to address these "the pipeline is almost finished, please give me X extra time instead of the normal timeout" cases, but we don't have a solution yet
18:59eric_engestrom: (also, there's the obvious problem of abuse of extra time, but I believe this is a social problem that will require a social solution)
19:03tchar: airlied: isn't that currently the case? the complete dpb state being the current frame and all its unique dependent frames. The idea of putting the whole "VBI" in the API was rejected
19:04alyssa: eric_engestrom: I mean... given the 60m timeout and the 25m expected worst time, if we're hitting timeouts, stuff's already on fire, so engineering a "please give me extra time" mechanism doesn't seem too necessary?
19:05eric_engestrom: agreed, but until those who take too long stop taking too long, the users are left with nothing but the "womp womp, try again later" reassign button
19:06tchar: airlied: I had a look and started trying to move FFmpeg in the direction of the latest spec in https://github.com/charlie-ht/FFmpeg/commit/e426de63843123ec7cbd2bbb575f2a8901132bce
19:07jenatali: A "please give me 5 more minutes" seems better than "oh well, guess I'll have to take another ~hour by starting from scratch later"
19:07tchar: just in case it is of any use... I will take another look tomorrow, I didn't figure out how to handle the frame_id wrapping yet
19:08alyssa: jenatali: what I mean is, we're defaulting to "please give me 35 more minutes" on every pipeline
19:08alyssa: and if that's not enough... things are really on fire!
19:09jenatali: Oh I agree, but borrowing 5 minutes now can help get things back in shape sooner
19:16DemiMarie: airlied (and others): if a `FOLL_LONGTERM` pin of VRAM succeeds, does that mean there is a kernel bug?
19:17airlied: DemiMarie: just means the bar pages are pinned, doesn't mean what is behind the pages is
19:17airlied: agd5f: yeah that was my understanding; using that page table would definitely avoid having to move BOs, but with rebar there's probably not much point
19:18DemiMarie: airlied: what does that mean in practice? My understanding is that other subsystems (such as RDMA) also use `FOLL_LONGTERM` pins, and I don’t know if RDMA can handle faults.
19:21airlied: DemiMarie: they don't usually have stuff in device memory space
19:21DemiMarie: airlied: what happens if userspace tries?
19:22DemiMarie: Suppose userspace maps some VRAM with vkMapMemory and passes the resulting address to an RDMA verb.
19:23airlied: no idea
19:23airlied: tchar: interesting, definitely looks bigger than I'd considered
19:23airlied: Lynne: ^ you might want to start taking a look
19:23DemiMarie: would this be considered a userspace bug?
19:26airlied: probably, unless the kernel oops
19:27DemiMarie: Ouch
19:28DemiMarie: Does this mean that GPU acceleration under Xen will require either per-driver patches or recoverable page fault support in Xen?
19:31airlied: don't know maybe ask Xen developers
19:34Lynne: tchar: would you mind using my code
19:34Lynne: there's a lot of incomplete stuff in both of your branches
19:35DemiMarie: airlied: I am going to ask them, but first I need to know what the GPU drivers require.
19:35DemiMarie: From the kernel perspective, not Xen's.
19:36zamundaaa[m]: I've recently hit a GPU reset, which happened while a pageflip from the compositor was pending - because of the reset, that pageflip never happened and timed out.
19:36zamundaaa[m]: When I tried to work around that timeout in KWin, the result of the next commit was EBUSY because a pageflip was still pending on the kernel side...
19:37zamundaaa[m]: Afaict, there's no way for the kernel to signal to userspace that a commit failed. So when this happens, could / should the kernel signal the pageflip as completed, and allow commits to happen again?
19:37zamundaaa[m]: Because the GPU reset itself seemed successful, after restarting the compositor everything worked fine again
19:41DemiMarie: airlied: I think one might be able to trigger an oops with vmsplice.
20:01airlied: Lynne: I've got it to decode properly by using the API illegally with my code
20:01airlied: but I'll rebase on yours to see if I can hack it
20:19airlied: Lynne: your branch has inconsistent loop restoration
20:19airlied: at least with radv, but I'm not sure where the spec ended up
20:23airlied: removing dedup fixes rendering for me on radv
20:23airlied: if I comment out tchar's assert
20:31DemiMarie: airlied: I see. Does that mean that P2PDMA between an RNIC and a GPU isn’t currently supported upstream?
20:34airlied: DemiMarie: probably depends on the gpu driver cooperating
20:36Lynne: airlied: the loop restoration unit sizes?
20:36Lynne: they're supposed to be log2, but the spec misnamed them, there's a spec fix that should've been merged already
20:37DemiMarie: airlied: Same cooperation I need, as it turns out.
20:40airlied: Lynne: yeah the unit sizes
20:40airlied: Lynne: - .LoopRestorationSize[0] = frame_header->lr_unit_shift,
20:40airlied: - .LoopRestorationSize[1] = frame_header->lr_uv_shift,
20:40airlied: - .LoopRestorationSize[2] = frame_header->lr_uv_shift,
20:40airlied: + .LoopRestorationSize[0] = 1 + frame_header->lr_unit_shift,
20:40airlied: + .LoopRestorationSize[1] = 1 + frame_header->lr_unit_shift - frame_header->lr_uv_shift,
20:41airlied: + .LoopRestorationSize[2] = 1 + frame_header->lr_unit_shift - frame_header->lr_uv_shift,
20:42airlied: that at least gets things to render on current radv
20:42airlied: (the first frame)
20:51airlied: not sure even with tchar's fix the spec is all that clear :-P
21:00Lynne: airlied: the version in my branch should be the correct one
21:00Lynne: wait, what?
21:01Lynne: you have to add 1 and do the diff
21:01bcheng: When LoopRestorationSize (codec defined) = 256, log2(256) - 5 would be 3, but lr_unit_shift gives 2
21:03Lynne: you're right, even the sample program does that
21:03Lynne: updated my branch with that
21:04airlied: so with that, and the dedup reverted and the assert commented out, I get a proper video decode
21:05airlied: so now we just need to work out how to make things work like the spec intends
21:08bcheng: Seems like there's some issues with the dummy dpb addrs: https://gitlab.freedesktop.org/bcheng/mesa/-/commit/14eb7a417eee6bbd99b11cf7bce6e8fdf7b864c4
21:10bcheng: dpbArraySize needs to be 8 if the ref_frame_map is filled out like it is
21:11bcheng: but with dedup code, I get referenceSlotCount=0 even for the inter frame?
21:33Lynne: airlied: reverted the dedup
21:33Lynne: sadly that puts us back to where we started
21:34bcheng: good news is I got the dedup working :)
21:34bcheng: - for (int j = 0; j < AV1_NUM_REF_FRAMES; j++) {
21:34bcheng: + for (int j = 0; j < ref_count; j++) {
21:42airlied: bcheng: doh!
21:42airlied: Lynne: dedup with bcheng change seems to work for me
21:51Lynne: updated
21:52Lynne: and it works a little bit more than before - it works fine on nvidia too
22:04DemiMarie: Will pinning pages returned by e.g. vkMapMemory work as expected for iGPUs?
22:10Lynne: airlied: passes all my standard tests so far, no crashes
22:16DavidHeidelberg: mareko: gfxstrand robclark jljusten airlied would you be ok with creating a nine-tests namespace for testing gallium-nine? Code from Xnine (https://github.com/axeldavy/Xnine) would be hosted there and used in Mesa3D CI
22:18DavidHeidelberg: details: in general it's wine tests adapted to run directly on Linux + gallium-nine with GTest suite (for deqp-runner integration). Very small, much fast, much wow. Ofc MIT/LGPL licensed.
22:32Lynne: tchar: do you think frame_id is still currently incorrect?
22:42alyssa: DavidHeidelberg: been a while since i saw a doge meme. nice :3
23:00tchar: Lynne: it's possible I was mishandling them in my exploratory patch, it looks like the av1dec layer was taking care of the values not going over the maxDpbSlots value (9)
23:01tchar: where's the latest working stuff in terms of branches atm? I will test here in the morning too
23:04DavidHeidelberg: alyssa: I seriously miss them
23:09Lynne: tchar: https://github.com/cyanreg/FFmpeg/tree/av1dec
23:10Lynne: passes superres, film grain (not on intel), and the weird invalid files we found that crashed the old extension
23:10Lynne: it should be on par with the old extension
23:12Lynne: airlied: it still has the same 8/10bit flickering issues as before (and 10bit hevc), you should talk to jkqxz again
23:16DemiMarie: robclark: are there any Chromebooks with discrete GPUs?
23:25airlied: Lynne: cool, I really have to figure out how to reproduce that here
23:26airlied: do you see it on both navi2x and navi3x?
23:26Lynne: only 3x
23:27Lynne: not swapchain related, I've seen it in ffmpeg
23:28Lynne: latest status was that jkqxz thought that the 10-8 conversion was activated by uninitialized structs which radv wasn't using or filling in (but you said they are)
23:29Lynne: we did try zeroing every single piece of memory manually, but that didn't work, only RADV_DEBUG=zerovram helps
23:50bcheng: Lynne: is there a sample clip?
23:50bcheng: I can check on my end
23:51Lynne: no sample clip, because I've been able to reproduce it with everything, with the same consistency
23:52bcheng: just any random av1/hevc 10 bit clip?
23:53bcheng: do you do -vf "format=nv12" or something to get ffmpeg to do a 10-8 conversion?
23:53Lynne: no no, any bit depth for av1, but for hevc, only 10bit
23:53Lynne: no conversion either, just output whatever the file's pixel format is
23:54Lynne: happens randomly, more frequently than rarely, less frequently than often, and if a single process does RADV_DEBUG=zerovram, it globally stops happening for a random amount of time
23:55robclark: DemiMarie: currently no
23:55DemiMarie: robclark: so right now a major decision to be made is whether to pin all VRAM shared with guests.
23:56Lynne: bcheng: I recommend using my branch of ffmpeg, patching mpv with https://0x0.st/HhD5.diff to use the new extension, and opening and closing the same clip until it happens
23:56DemiMarie: With iGPUs this is a non-issue, with dGPUs it is a significant concern.
23:57bcheng: Lynne: thanks, will see if I can replicate
23:57bcheng: only on navi3x?