14:38 karolherbst: tagr: your tearing issue also got resolved with those patches, right? http://patchwork.ozlabs.org/project/linux-tegra/patch/20210302124445.29444-2-digetx@gmail.com/ I didn't get to test it yet, but I think those address the issue we both saw on the nano, no?
15:45 tagr: karolherbst: it has the potential to eventually help fix it
15:45 karolherbst: yeah.. my thoughts as well
15:45 tagr: that patch currently doesn't do anything on Tegra210 and later
15:45 karolherbst: ohh
15:45 karolherbst: people reported that it did help with tearing issues on at least some tegra devices
15:46 tagr: so if I remember correctly one way to make the tearing go away was to crank up the EMC frequency, though I think that doesn't help from the DC perspective, but rather from the GPU perspective
15:46 karolherbst: and I think I remember talking with somebody about it and that tegra210 isn't considered there yet :D
15:46 tagr: maybe on Tegra124 and earlier
15:46 karolherbst: right..
15:47 tagr: well, that'd be Tegra124 since earlier generations had a completely different GPU
15:47 tagr: I suspect that there's still an issue somewhere else, though
15:48 tagr: because bumping the EMC frequency only increases the memory bandwidth available to the DC and GPU, which means they'll go faster
15:48 karolherbst: could be
15:48 karolherbst: ahh yeah
15:48 tagr: but from what I can tell the DC already goes fast enough, otherwise it'll underrun, rather than tear
15:49 karolherbst: mhh, maybe the desktop load is too heavy for the nano and we can't go any faster anyway
15:49 tagr: if the GPU doesn't go fast enough to put out one frame per refresh rate, then double-buffering should still take care of the tearing
15:49 karolherbst: but yeah.. we have to sync somewhere, and I wasn't able to find the commit in i915 that fixed the issue for real
15:50 tagr: is this on X or Wayland?
15:50 karolherbst: there you also had a trade-off between speed and tearing (just less bad)
15:50 karolherbst: and they fixed it inside i915
15:50 karolherbst: tagr: uhmm.. I only checked wayland, because X is.. well...
15:50 karolherbst: I think X 1.21 was showing the same symptoms though
15:51 tagr: for X I think the tearing would be unavoidable, but for anything... "newer"... it should work
15:51 karolherbst: right.. but the tearing was more like tearing on the compositor side or so
15:51 karolherbst: you clearly saw the vertices
15:51 tagr: Wayland compositors should be fully double-buffered, right? in that case if we properly page-flip on vblank there should be no tearing, no matter how slow the GPU is
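A minimal sketch of the page-flip-on-vblank flow tagr describes, using libdrm's drmModePageFlip and the flip-completion event; the device path, CRTC id and framebuffer handle are placeholders standing in for an assumed mode setup:

    /* Queue a flip to the freshly rendered framebuffer and only reuse the
     * old one once the flip event arrives at vblank. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    static int flip_pending;

    static void page_flip_handler(int fd, unsigned int seq, unsigned int sec,
                                  unsigned int usec, void *data)
    {
        flip_pending = 0;   /* old buffer is off-screen now, safe to render into */
    }

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);   /* illustrative path */
        uint32_t crtc_id = 0, next_fb = 0;         /* filled in by mode setup */
        drmEventContext evctx = {
            .version = 2,
            .page_flip_handler = page_flip_handler,
        };

        /* render into next_fb here, then: */
        flip_pending = 1;
        drmModePageFlip(fd, crtc_id, next_fb, DRM_MODE_PAGE_FLIP_EVENT, NULL);

        while (flip_pending)
            drmHandleEvent(fd, &evctx);            /* completes on the next vblank */
        return 0;
    }

With this scheme the scanned-out buffer is never written to mid-frame, which is why a slow GPU should produce dropped frames rather than tearing.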
15:52 karolherbst: yeah, but I think something there is going very wrong
15:52 karolherbst: tegradrm needs to wait on the GPU to finish and there is a way to do that kernel side, which i915 did, but I really can't find it anymore :/
15:53 tagr: tearing on the compositor side? do you mean the compositor would be somehow rendering frames that it shouldn't be rendering?
15:53 karolherbst: more like you are in between frames. It's hard to explain. You see those two vertices used for blitting and a tear line across one of them.. or both or something
15:54 tagr: oh right... I suppose it's possible that the fence stuff could help with that
15:54 karolherbst: right, but that was adding a UAPI we don't need :p at least on i915+nouveau we don't
15:55 tagr: huh... well, not sure then
15:56 karolherbst: tagr: I think it's the drm_syncobj stuff
15:56 tagr: not sure I even understand what exactly the problem is that we're seeing, because basically the compositor will perform the render, then do an eglSwapBuffers() or some other equivalent, then page-flip to the new buffer
15:56 karolherbst: which tegra doesn't do anything with
15:57 tagr: after eglSwapBuffers(), the GPU should be done rendering, not touching the buffer anymore, so by the time we page-flip to the buffer, there should be no operations on it in flight
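One hedged way a compositor can make that "GPU is done" point explicit is EGL_ANDROID_native_fence_sync, which hands back a sync_file fd covering the queued rendering; this assumes the EGL driver exposes the extension, and error handling is trimmed:

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>

    /* Returns a sync_file fd that signals when the rendering queued so far
     * has finished, or -1 if the driver could not create one. */
    int fence_fd_for_frame(EGLDisplay dpy)
    {
        PFNEGLCREATESYNCKHRPROC create_sync =
            (PFNEGLCREATESYNCKHRPROC)eglGetProcAddress("eglCreateSyncKHR");
        PFNEGLDESTROYSYNCKHRPROC destroy_sync =
            (PFNEGLDESTROYSYNCKHRPROC)eglGetProcAddress("eglDestroySyncKHR");
        PFNEGLDUPNATIVEFENCEFDANDROIDPROC dup_fence =
            (PFNEGLDUPNATIVEFENCEFDANDROIDPROC)eglGetProcAddress("eglDupNativeFenceFDANDROID");

        /* create the fence before flushing so it covers all queued work */
        EGLSyncKHR sync = create_sync(dpy, EGL_SYNC_NATIVE_FENCE_ANDROID, NULL);
        glFlush();

        int fd = dup_fence(dpy, sync);   /* sync_file fd, or -1 */
        destroy_sync(dpy, sync);
        return fd;
    }

The resulting fd could then be attached to the page-flip (e.g. as the atomic IN_FENCE_FD property), so the display side waits for the GPU instead of relying on implicit synchronization.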
15:57 tagr: drm_syncobj stuff is really just "that fence stuff"
15:58 karolherbst: sure, but we already have the UAPIs for that on the drm level, no?
15:58 tagr: except that we can't use drm_syncobj directly because they are driver-private
15:58 tagr: in order to pass them to another driver (i.e. tegra-drm) you have to convert them into a syncfd
15:59 karolherbst: well.. we don't create any from inside nouveau though, mhhh
15:59 karolherbst: I wonder how all of that actually works
15:59 tagr: see DRM_SYNCOBJ_HANDLE_TO_FD
15:59 karolherbst: or maybe it's just crappy with nouveau and implementing that would benefit perf or whatever?
16:00 tagr: that's why we need that new UAPI, to create those fences
16:00 karolherbst: sure, but it does actually work without that, which keeps me wondering
16:01 tagr: it's possible that there's something we're missing on the Tegra side, something that doesn't exist on the dGPU
16:01 tagr: it's quite different for i915+nouveau because you basically have a built-in double-buffer there
16:01 karolherbst: I know that I pinpointed the commit in i915 like ages ago which fixed tearing with prime
16:01 karolherbst: but...
16:01 karolherbst: I kind of lost that information :D
16:02 karolherbst: tagr: ohh.. because of memory.. right
16:02 tagr: I mean, if you render on nouveau and then share that buffer with PRIME, you basically need to put that buffer into shared system memory, or perform a blit to a temporary system memory buffer
16:03 tagr: it's possible that we're missing some sort of synchronization that ensures the Tegra GPU has a) flushed all operations and b) flushed all caches
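For the sharing mechanics themselves (leaving aside where the memory lives and whether caches were flushed), the PRIME path is an export/import of the buffer between the two DRM devices; the handles and fds here are illustrative:

    #include <stdint.h>
    #include <xf86drm.h>

    /* Export the renderer's buffer (e.g. nouveau) as a dma-buf fd and
     * import it into the display driver (e.g. tegra-drm). */
    int share_buffer(int render_fd, uint32_t render_handle,
                     int display_fd, uint32_t *display_handle)
    {
        int dmabuf_fd;

        if (drmPrimeHandleToFD(render_fd, render_handle, DRM_CLOEXEC, &dmabuf_fd))
            return -1;

        return drmPrimeFDToHandle(display_fd, dmabuf_fd, display_handle);
    }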
16:05 karolherbst: I might have to dig into channel logs on my older laptops to find it :O
16:05 karolherbst: but yeah...
16:05 karolherbst: could also be something silly like this
16:06 karolherbst: and if adding that UAPI is the shiny new thing anyway, then that's the way to go regardless
20:04 tagr: might be worth rev'ing the UAPI and exporting drm_syncobjs instead of syncfds, then doing the conversion in userspace, or perhaps allowing either to be emitted so userspace can choose which one it wants
20:04 tagr: the extra HANDLE_TO_FD might be a bit much depending on the use-case
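If the UAPI ended up emitting only one of the two, the conversion in userspace is cheap either way; a sketch of the syncfd-to-syncobj direction (the reverse of HANDLE_TO_FD), with all handles and fds illustrative:

    #include <stdint.h>
    #include <xf86drm.h>

    /* Wrap a sync_file fd in a fresh drm_syncobj, e.g. for an API that
     * only accepts syncobj handles. */
    int syncfd_to_syncobj(int drm_fd, int sync_file_fd, uint32_t *handle)
    {
        if (drmSyncobjCreate(drm_fd, 0, handle))
            return -1;
        return drmSyncobjImportSyncFile(drm_fd, *handle, sync_file_fd);
    }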
21:10 karolherbst: dunno.. I'd check what other drivers are doing
22:00 gergo: Hello! Do you know if nvidia's consumer grade GPUs' 4 monitor limitation is imposed in the driver or in the firmware ? Would I be able to use more than 4 monitors with your driver ?
22:01 imirkin: gergo: the hardware only has 4 CRTCs
22:01 imirkin: so you can only display 4 images at once
22:02 imirkin: it's *conceivable* that you could drive more than 4 monitors if some of them shared identical scanout settings (and obviously the same image being displayed), but that hasn't been done in practice.
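For what it's worth, the CRTC count imirkin mentions can be read straight from the DRM device; a small sketch with an illustrative device path:

    #include <fcntl.h>
    #include <stdio.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        drmModeRes *res = drmModeGetResources(fd);

        if (res) {
            /* one CRTC per image that can be scanned out at the same time */
            printf("CRTCs: %d, connectors: %d\n",
                   res->count_crtcs, res->count_connectors);
            drmModeFreeResources(res);
        }
        return 0;
    }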
22:06 gergo: alright, thanks. so using 6 monitors would only be possible with an enterprise grade Nvidia GPU or an AMD GPU right ?
22:06 imirkin: i don't think any nvidia GPUs support > 4 monitors
22:06 imirkin: AMD does (or at least did) support up to 6 monitors
22:06 HdkR: Even the Quadros only support 4 monitors
22:07 imirkin: sometimes they glue 2 GPUs onto the same board
22:07 HdkR: Which means Wall displays need a wackload of them, bonded by the Quadro timing card :P
22:07 imirkin: and it can magically support 8 monitors, but it's not a great experience
22:07 HdkR: True
22:10 gergo: okay. then if I want to stick with Nvidia, only 2 GPUs would solve my problem ? (I'm not looking to have a wall display with synced video across them .. this is only for my PC, I just like monitors :D)
22:11 imirkin: assuming the problem is "have more than 4 monitors plugged in at once", then yes
22:11 HdkR: (Or AMD GPU)
22:11 imirkin: however the experience is much better if all monitors are on the same GPU
22:12 gergo: alright, thanks for clearing it up for me :)
22:12 imirkin: and the higher end AMD GPUs can do 6 displays
22:12 imirkin: although i don't know precisely which ones
22:13 gergo: imirkin, why is it worse with 2 GPUs ? I mean what issues could I face ?
22:13 HdkR: Caveat on AMD hardware is something like most of the displays need to be DP, only one of the six can be HDMI or something? They've not been super clear about it...
22:13 karolherbst: gergo: you have two GPUs
22:14 imirkin: gergo: data has to move from GPU to GPU
22:14 karolherbst: you have to copy data between them
22:14 karolherbst: and the likes
22:14 karolherbst: synchronisation issues
22:14 imirkin: the "remote" GPUs are likely to feel "laggy"
22:14 karolherbst: although in theory it shouldn't be terrible
22:14 imirkin: er, the monitors on the "remote" GPU
22:14 karolherbst: with nouveau it will be
22:14 gergo: even if I run separate apps on separate monitors ?
22:15 karolherbst: gergo: one GPU renders, the other displays
22:15 imirkin: gergo: usually only one GPU will be doing "the work", and then sending the image to the other GPU for display
22:15 gergo: alright, I didn't know that
22:15 karolherbst: essentially, dual GPU display setups are fundamentally broken in linux
22:15 karolherbst: sadly
22:15 karolherbst: there are plans to change that
22:15 karolherbst: but those are generally very messy
22:16 karolherbst: e.g. one problem is, what if you move one app to a display of another GPU
22:16 karolherbst: do you tear down the rendering context?
22:16 gergo: ah I'm on windows actually, I just thought you have the most knowledge about GPUs
22:16 karolherbst: or do you keep it and then have one GPU render the other display again?
22:16 karolherbst: gergo: windows _solved_ this problem :D
22:16 karolherbst: linux didn't
22:16 imirkin: i dunno that it's *solved* on windows
22:16 imirkin: but it's definitely better
22:17 karolherbst: well.. they can migrate their desktop to a different GPU
22:17 karolherbst: but yeah.. not fully sure what that looks like on multi GPU setups
22:17 karolherbst: if they have.. multiple rendering contexts or not
22:17 karolherbst: and then you always have to deal with applications not being able to recreate their rendering context as well
22:18 gergo: hmm it seems I've gotta switch to AMD then
22:19 karolherbst: well AMD has the same problems on Linux :p but if you are on windows it doesn't really matter as long as you use a GPU fitting your needs
22:19 karolherbst: but it does have fewer performance problems
22:19 karolherbst: on linux (unless you use the proprietary driver...)
22:19 karolherbst: imirkin: actually.. I know that on MacOS applications get transferred over if they indicate that they support switching GPUs
22:20 imirkin: yea
22:20 karolherbst: and some even claim it works on multi GPU setups and stuff...
22:20 karolherbst: this will be so messy to implement on linux
22:20 karolherbst: well...
22:20 karolherbst: the kernel bits are all there ¯\_(ツ)_/¯