00:23 ngcortes: NB: intel mesa ci is back online (looks like it was a network outage). Thank you for your patience
00:33 DemiMarie: <robclark> "it's really just some sqe code..." <- Time for some clean room RE?
00:37 robclark: I mean, for anything other than dev boards we wouldn't be able to sign our own zap fw.. OTOH upstream only uses it to take the gpu out of secure state when coming out of power-collapse (suspend).. a stub zap fw that didn't really do anything might be useful for some limited cases (ie. CI on dev boards where we didn't want to redistribute the official fw as part of the CI infra)
00:38 robclark: writing our own sqe.fw would be more useful.. but also a big time suck and there are plenty of more important things to do
08:52 jani: tzimmermann: mripard: mlankhorst: ack for merging https://patchwork.freedesktop.org/patch/msgid/20230302081532.765821-2-arun.r.murthy@intel.com via drm-intel?
08:56 tzimmermann: jani, ok by me. Lyude? ^
09:05 mlankhorst: jani: ack, seems sane
09:06 jani: tzimmermann: mlankhorst: thanks!
09:49 dj-death: what notations would people use if we wanted to extend the max vector length from 16 to 32? :)
09:49 dj-death: would run out of letters at 26
09:50 dj-death: use capital letters? ;)
09:52 dj-death: cwabbott: I see you're awake :)
09:53 cwabbott: dj-death: well, I'm on european timezone so this is a normal time to be awake :)
09:53 dj-death: ah :)
09:54 dj-death: could use digits too
09:54 dj-death: don't know
09:56 dj-death: DG2 is able to load up to 64 dwords
09:56 dj-death: not sure we want to go up to that
09:56 dj-death: but 32 would be nice
10:18 glehmann: dj-death: couldn't you use a 64bit vec16 for 32 dwords?
10:20 glehmann: or maybe not because bit sizes are weird on intel iirc
10:25 dj-death: glehmann: 64bit data should work on DG2, but if I can avoid some massaging of the data types, it's nicer tbh
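(Aside: a minimal C sketch of the capital-letter scheme floated above, using lowercase letters for the first 26 components and capitals beyond that. The helper name is invented and this is not actual NIR printing code.)

    #include <assert.h>

    /* Map a vector component index to a printable swizzle character:
     * 0..25 -> 'a'..'z', 26..31 -> 'A'..'F'. Illustrative only. */
    static char
    comp_name(unsigned comp)
    {
       assert(comp < 32);
       return comp < 26 ? 'a' + comp : 'A' + (comp - 26);
    }

(With 26 lowercase plus 26 uppercase letters, a scheme like this would stretch to vec52, comfortably past the 32 components being discussed.)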
13:02 alyssa: zmike: zink really needs to get on the load_input/store_output train
13:02 alyssa: choo choo
13:03 alyssa: ("isn't that impossible though?" "CHOO CHOO!")
13:08 ccr: "all aboard the train!"
13:51 zmike: patches welcome
13:56 derRichard: I have a question about MIPI DSI: my panel enables its power supply in ->prepare() and does mipi_dsi_dcs_exit_sleep_mode() in ->enable(), but on the scope I see that display data is sent via MIPI DSI *before* ->prepare(). The data sheet of my display requires that power be enabled first. How can I control this?
15:34 DavidHeidelberg[m]: anholt: are these lines "deqp-vk[534]: segfault at 24 ip 0000556d8da75f42 sp 00007ffeea9cd920 error 4 in deqp-vk[556d8cedd000+35b3000] likely on CPU 2 (core 1, socket 0)" considered OK, or a serious issue with the HW/driver?
15:51 frieder: derRichard: I think this depends on how your DSI controller driver is implemented. If it is a bridge driver things should work as described here:
15:51 frieder: derRichard: https://docs.kernel.org/gpu/drm-kms-helpers.html?highlight=mipi+dsi+bridge+operation#mipi-dsi-bridge-operation
15:53 frieder: derRichard: You might need to use the recently added prepare_prev_first flag in your panel driver.
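(Aside: a minimal sketch of frieder's suggestion for a hypothetical panel driver. All "mypanel" names are invented; prepare_prev_first is the real struct drm_panel field available on kernels with the recent panel/bridge rework, and error handling/includes are trimmed down.)

    #include <drm/drm_mipi_dsi.h>
    #include <drm/drm_panel.h>

    struct mypanel {
        struct drm_panel panel;
        /* regulators, reset GPIO, ... */
    };

    static const struct drm_panel_funcs mypanel_funcs; /* prepare/enable/... */

    static int mypanel_probe(struct mipi_dsi_device *dsi)
    {
        struct mypanel *ctx;

        ctx = devm_kzalloc(&dsi->dev, sizeof(*ctx), GFP_KERNEL);
        if (!ctx)
            return -ENOMEM;

        drm_panel_init(&ctx->panel, &dsi->dev, &mypanel_funcs,
                       DRM_MODE_CONNECTOR_DSI);
        /* Ask the DSI host bridge to call our ->prepare() (which powers
         * the supplies) before it brings up the link and sends any data. */
        ctx->panel.prepare_prev_first = true;
        drm_panel_add(&ctx->panel);

        return mipi_dsi_attach(dsi);
    }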
16:39 karolherbst_: davinci resolve runs on mesa :3
16:42 HdkR: karolherbst_: Does it require OpenCL or something?
16:42 karolherbst_: yes
16:42 HdkR: Wacky
16:42 karolherbst_: it requires a few things even
16:42 karolherbst_: image2d_from_buffer
16:42 karolherbst_: gl_sharing
16:42 karolherbst_: proper workgroup info
16:42 karolherbst_: so it depends on 3 MRs atm :D
16:43 HdkR: whoa
16:43 karolherbst_: but it does run
16:43 karolherbst_: and the big thing is: apparently it doesn't work on Intel's official CL runtime :D
16:43 karolherbst_: but yeah
16:43 karolherbst_: it uses all those crazy features
16:43 HdkR: Just replace the proprietary Intel runtime with Rusticl, sounds good to me
16:44 HdkR: So how soon until I can run Davinci Resolve on my Snapdragon laptop? :P
16:44 karolherbst_: :D
16:45 karolherbst_: hopefully I can merge some stuff soon
16:45 HdkR: nice nice nice
16:47 anholt_: DavidHeidelberg[m]: segfaulting in deqp is probably just a driver bug. you probably have some xfails with crashes?
16:50 karolherbst_: yeah.. the image2d_from_buffer stuff is good to go, so with that merged there are just 2 MRs left, and I might remove the radeonsi commit from one so I can merge it :)
16:50 karolherbst_: the gl_sharing one might still require some work tho
16:53 HdkR: Well it works in my emulator, but I just don't have a real GPU apparently :P
16:53 karolherbst_: heh
17:56 MrCooper: karolherbst_: rusticl with radeonsi when though :P
18:31 karolherbst_: heh
18:31 karolherbst_: hopefully this month
18:53 DavidHeidelberg[m]: anholt_: I'll check xfails (in ~ 2hrs), but I'm afraid I didn't see anything in artifacts
20:31 airlied: anholt_, zmike : is there a writeup on cmd buffer usage diffs between angle/zink?
20:32 anholt_: I've talked about it in this chan or #zink before, I forget where
20:32 airlied: beyond the queries?
20:32 airlied: like you make it sound like a fundamental design problem in zink
20:32 airlied: I don't remember hearing that conversation at all
20:33 anholt_: basically: angle records its commands in a thing called a secondary, which is not actually a vk secondary. This lets it go in and edit renderpass stuff as it catches buffer invalidates, late clears, etc. Only at submit time does that secondary get turned into a vk cmdbuf
20:34 airlied: ah okay that is pretty fundamental, thanks!
20:34 anholt_: they don't love the overhead of doing so, but had to. so there's been some discussion of "how much could we move the late fixups to the vk driver, by emitting commands like 'whoops, actually make my load op a dont_care instead of a load for this render pass, thanks'"
20:34 zmike: this is what tc renderpass optimizing fixes in zink
20:35 anholt_: zmike: so, do you process arbitrary amounts of calls looking for the end of the rp in tc?
20:38 zmike: yeah something like that
20:38 zmike: all the attachment usage/invalidation is accumulated for use at the start of the renderpass
20:38 zmike: the overhead is pretty minimal, like 5% in the base drawoverhead case and otherwise unnoticeable
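(Aside: a toy sketch of the pattern anholt_ and zmike describe — record the pass first, then pick attachment load/store ops once the whole pass has been seen. All names are invented; this is neither ANGLE nor zink code.)

    #include <vulkan/vulkan.h>

    /* Accumulated while API commands are recorded into the driver's own
     * buffer instead of being encoded into a Vulkan cmdbuf right away. */
    struct pending_pass {
       VkAttachmentLoadOp  load;   /* provisionally LOAD_OP_LOAD */
       VkAttachmentStoreOp store;  /* provisionally STORE_OP_STORE */
    };

    /* A full clear at the start of the pass: old contents don't matter. */
    static void on_full_clear(struct pending_pass *p)
    {
       p->load = VK_ATTACHMENT_LOAD_OP_CLEAR;
    }

    /* An invalidate after the last draw: results need not be stored. */
    static void on_invalidate(struct pending_pass *p)
    {
       p->store = VK_ATTACHMENT_STORE_OP_DONT_CARE;
    }

    /* Only at submit time do the recorded commands become a real render
     * pass, with the load/store ops we now know to be optimal. */
    static void encode_pass(const struct pending_pass *p,
                            VkRenderingAttachmentInfo *att)
    {
       att->loadOp  = p->load;
       att->storeOp = p->store;
    }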
20:43 anholt_: hohoho. If this testing is to be trusted, with the 16-bit fix, and having switched my test device to adl, zink is -18% perf of iris on android, and angle is -22% of iris. couple of big wins for zink (probably some of the fixes we've written), but also a big chunk of angle sucking more on this hw for reasons I haven't investigated.
20:43 zmike: oooo
20:43 anholt_: this is my first time seeing zink pull ahead of angle in any set of tests, though. so pretty cool.
20:43 zmike: itshappening.gif
22:39 DemiMarie: How does Panfrost compare to the ARM Mali driver in terms of performance and features?
23:09 daniels: anholt_: ooh!
23:09 daniels: DemiMarie: yes
23:10 DemiMarie: daniels: yes?
23:10 emersion: "yes"
23:10 emersion: perfect answer :D
23:10 daniels: DemiMarie: https://youtu.be/xXaYrufaNRA
23:11 daniels: I mean, it's +/- 40% depending on which usecase or benchmark you select
23:11 daniels: same with all drivers
23:12 DemiMarie: emersion: are you saying it was a bad question? daniels: what about correctness (e.g. Khronos CTS)?
23:13 emersion: no, it's not a bad question
23:14 daniels: DemiMarie: on most generations it's every bit as conformant as the proprietary driver. it's an unanswerable question though, as is 'is Windows or Linux faster on my laptop?': Panfrost optimises for the usecases that are important to our userbase within the limitations of our driver architecture, and the proprietary driver does the same within theirs. there's no universally correct answer.
23:20 DemiMarie: emersion: “yes” is not really a *useful* answer, at least not without additional context. daniels: the reason I asked is that I was told that only the Mali and Adreno drivers have passed the Android CTS.
23:22 emersion: was more of a joke than anything else
23:22 emersion: nobody here will tell you that the blob is better ;)
23:24 daniels: DemiMarie: the Android CTS is different to the GL/VK/etc CTS; it's also not possible to give an answer generically for Panfrost because the Android CTS tests a bunch of platform-integration stuff which needs to be done for each SoC family
23:24 javierm: daniels: that video is so funny :D
23:24 daniels: DemiMarie: the Panfrost CTS pass results are visible within CI, as well as by going to khronos.org and looking for the list of conformant products
23:25 DemiMarie: daniels: What kind of stuff?
23:25 daniels: javierm: the full one is even better, as you can see his translator sweating profusely
23:25 daniels: DemiMarie: display, gralloc, etc
23:26 DemiMarie: daniels: I see. So this isn’t a Panfrost limitation then.
23:27 javierm: daniels: LOL
23:28 daniels: DemiMarie: not specifically, no
23:31 FireBurn: zmike: Have you done much testing with zink on any PRIME devices?
23:31 zmike: all my devices are prime 💪 💪 💪
23:32 zmike: but also no
23:34 FireBurn: I've an AMD/AMD setup here, and I'm seeing very similar FPS in the Unigine benchmarks for radeonsi and zink, but I notice that playing it under zink "feels" sluggish
23:35 HdkR: Throw some Mangohud at it to see 1% and 0.1% lows and see if it is stuttering more? :)
23:36 zmike: yeah you might not be getting the right device since zink does its own ordering
23:36 zmike: see also the 50 tickets open about it
23:38 robclark: DemiMarie: that is not true, freedreno (and intel and amd) all regularly pass android CTS.. any CTS fails are release blockers for ChromeOS
23:39 FireBurn: zmike: Pretty sure it's rendering to the correct one, as the APU is nowhere near as fast as the 6800M, just wondered if it was the copy to the APU that was "slow"
23:40 zmike: uhh
23:40 FireBurn: There were quite a few improvements to radv and radeonsi to use SDMA for the image copy so the GPU can get back to rendering
23:40 zmike: no idea tbh
23:40 zmike: I have no such machine, so it's not something I can examine
23:41 FireBurn: No worries, hopefully AMD decide to gift you one of their AMD Advantage laptops in the future, they really are killer :D
23:41 zmike: intriguing
23:41 zmike: maybe I'll get one
23:42 zmike: I need a rival for my old intel icelake
23:42 FireBurn: I previously had Intel/AMD
23:43 FireBurn: and Intel/AMD before that
23:43 FireBurn: Anything to avoid Nvidia