00:05Mangix: looking at that pixel clock chart again, my DP monitor that supports DP 1.2 goes past 600 MHz, which is totally out of spec. I haven't had anything run into a problem with it, though.
08:28hell__: Mangix: 110 Hz is a weird refresh rate. out of curiosity, are you doing display overclocking?
18:08RSpliet: Mangix: IIRC the current code for setting a display mode tells the GPU "here's the width, height, refresh rate and some other params, oh and here's the requested display clock in kHz (or MHz or w/e)", and it then goes off and either calculates a PLL value to set itself, or computer says no.
18:09RSpliet: Which would be different from all the other PLLs in the GPU... which is why I keep a small reservation around it - my memory is Tesla/Fermi levels of old and so are the GPUs I looked at at the time
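For illustration, a toy sketch of the flow RSpliet describes: the driver asks for a pixel clock in kHz and the hardware either finds PLL coefficients for it or refuses. All names (calc_pll, pll_coeffs) and the clock range here are made up, not nouveau's actual API:

    /* Hypothetical sketch, not nouveau code: the GPU is handed a pixel
     * clock in kHz and either computes PLL coefficients or rejects it. */
    #include <stdio.h>

    struct pll_coeffs { int n, m, p; };   /* feedback/ref dividers, post-div */

    /* Toy stand-in: accept only clocks a made-up PLL range can hit. */
    static int calc_pll(int clock_khz, struct pll_coeffs *c)
    {
        if (clock_khz < 25000 || clock_khz > 400000)
            return -1;                    /* "computer says no" */
        c->n = clock_khz / 27000; c->m = 1; c->p = 1;  /* placeholder math */
        return 0;
    }

    int main(void)
    {
        struct pll_coeffs c;
        int requested_khz = 241500;       /* roughly 2560x1440@60 CVT-RB */
        if (calc_pll(requested_khz, &c) == 0)
            printf("PLL locked for %d kHz\n", requested_khz);
        else
            printf("mode rejected\n");
        return 0;
    }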
18:17RSpliet: There was an NVIDIA-released PDF around that listed several pixel clock limitations on HDMI. I recall first-gen Kepler having a max pixel clock of 297 MHz even though the HDMI spec allowed up to 340 MHz or sth. Can't find the PDF now obvs
18:20RSpliet: https://www.pny.com/file%20library/support/pny%20products/user%20guides%20and%20tutorials/quadro/quadro-and-nvs-display-resolution-support-da-07089-001_v02.pdf
18:22RSpliet: There's a few ifs and buts on that, but I recall running into that limitation on my own PC as well
19:22Mangix: hell__: yes. this display unfortunately cannot do 120Hz
19:23Mangix: RSpliet: those pixel clock limitations are bogus
19:23Mangix: I can go past 404MHz on Fermi just fine
19:24Mangix: going past certain pixel clocks just prevents the GPU from clocking down
19:24Mangix: same if the vertical blanking interval is too low
19:27hell__: hmmm, not sure how clocking in the GPU works, but maybe the display engine doesn't work well with high pixel clocks and low GPU clocks
19:28hell__: maybe limitations in clock domain crossing, but I'm speculating
19:28RSpliet: Mangix: not being able to clock down the rest of the GPU is because that's the only way to guarantee enough DRAM bandwidth for display scan-out. That was definitely a thing in the olde days; nowadays I suspect less so, as DRAM bandwidth has grown a lot quicker than display scan-out BW requirements
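A back-of-envelope illustration of RSpliet's point: scan-out continuously streams pixels from DRAM, so memory clocks can't drop below what sustains that rate. The numbers are illustrative only:

    /* Rough scan-out bandwidth estimate; figures are made up for the
     * example, not measured on any particular card. */
    #include <stdio.h>
    int main(void)
    {
        double pixel_clock_hz = 600e6;  /* e.g. Mangix's >600 MHz mode */
        double bytes_per_px   = 4.0;    /* 32bpp scan-out */
        printf("scan-out BW: %.1f GB/s\n",
               pixel_clock_hz * bytes_per_px / 1e9);  /* ~2.4 GB/s */
        return 0;
    }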
19:29Mangix: the actual bit of code is here: https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/tree/drivers/gpu/drm/nouveau/nouveau_connector.c?h=next-20220722#n1040
19:30Mangix: the DVI spec leaves the max pixel clock for dual link DVI unspecified
19:30RSpliet: Yep. Raising them could lead to the display engine failing to set a mode with an error that amounted to "PLL didn't lock"
19:30RSpliet: I'm sure the limitations here aren't perfect
19:30RSpliet: Also, some cards have external decoders, with different pixel clock limitations
19:30hell__: a comment says some limit is conservative
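A paraphrase of the shape of the check at the nouveau_connector.c link above (not a verbatim quote; the real code keys off DCB output types and per-chipset limits):

    /* Sketch of a per-output pixel clock ceiling, paraphrased from the
     * kind of logic in nouveau_connector_mode_valid(); names and the
     * dual_link_dvi flag are simplifications, not the driver's fields. */
    #include <stdio.h>

    enum mode_status { MODE_OK, MODE_CLOCK_HIGH };

    static enum mode_status mode_valid(int clock_khz, int dual_link_dvi)
    {
        int max_clock = 165000;               /* single-link TMDS cap, kHz */
        if (dual_link_dvi)
            max_clock = 330000;               /* the 330 MHz ceiling Mangix objects to */
        return clock_khz > max_clock ? MODE_CLOCK_HIGH : MODE_OK;
    }

    int main(void)
    {
        printf("%s\n", mode_valid(400000, 1) == MODE_CLOCK_HIGH
               ? "MODE_CLOCK_HIGH" : "MODE_OK");  /* 400 MHz dual-link: rejected */
        return 0;
    }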
19:31Mangix: notice the difference here: https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/tree/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c?h=next-20220722#n1178 <-- no upper bound is specified
19:32RSpliet: AMDGPU != NVIDIA
19:32RSpliet: They could have used a different type of PLL in their designs. The NVIDIA GPU has other PLLs in it (for DRAM, core clock etc) that go way over 297MHz
19:32RSpliet: NVIDIA's display pipeline could have been insufficiently pipelined to safely allow clocks exceeding that limit
19:32hell__: AFAIK, AMD is involved in AMDGPU development, whereas Nvidia isn't really involved in nouveau
19:32RSpliet: Loads of reasons
19:32RSpliet: Yes
19:32RSpliet: This is true
19:33Mangix: oh funny they have this comment: /* XXX check mode bandwidth */
19:33RSpliet: Well, NVIDIA contributed some bits to support Tegra GPUs a long time ago. But they have different display logic
19:33Mangix: In any case, amdgpu has a 340MHz limitation somewhere else in the code
19:34Mangix: I flashed an EDID with a resolution with a 399.99 MHz pixel clock to work around it
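For context, the pixel clock of an EDID detailed timing is just htotal x vtotal x refresh, so a mode can be tuned to land just under a driver cutoff. The timing numbers below are invented for illustration, not Mangix's actual mode:

    /* Pixel clock from mode timings; all figures here are made up to
     * land near the 399.99 MHz value mentioned above. */
    #include <stdio.h>
    int main(void)
    {
        long htotal = 2720, vtotal = 1475;    /* active + blanking, invented */
        double refresh = 99.698;              /* Hz, tuned to stay under 400 MHz */
        printf("pixel clock: %.2f MHz\n",
               htotal * vtotal * refresh / 1e6);  /* ~399.99 MHz */
        return 0;
    }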
19:35Mangix: anyway, notice how https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/tree/drivers/gpu/drm/nouveau/nouveau_connector.c?h=next-20220722#n1040 puts the max at 330MHz for no good reason
19:36RSpliet: Pretty sure there's a good reason, even if it isn't documented in the code
19:37Mangix: dual link DVI supports > 330MHz just fine
19:37hell__: yeah, I suspect there isn't enough buffer space somewhere to run high pixel clocks
19:37RSpliet: Yes, but again, that doesn't mean that NVIDIA GPUs support that
19:37hell__: and I know nothing about Nvidia GPUs
19:37Mangix: RSpliet: what makes you say that?
19:38RSpliet: A lot of this is hazy memory, but I was involved with nouveau dev a bit ~10 years ago.
19:38RSpliet: This kind of logic most likely was derived from trial and error with the blob. Like "with these GPUs the blob won't let us set pixel clocks over a certain clock, so let's not do that either"
19:39Mangix: ah funny you say that
19:39Mangix: the blob's restrictions got lifted around Maxwell time.
19:40Mangix: I remember NVIDIA advertising that, actually
19:40RSpliet: I only vaguely recall a hack that allowed 4K on Kepler at like 30Hz with YUV4:2:0, a party trick I don't think nouveau devs ever tried to figure out (or have we/they?)
19:41RSpliet: I've always suspected that such limitations would be described somewhere in the VBIOS, such that the OEM could set them based on their PCB's signal integrity, and also set them for external decoders should they choose to use those
19:41RSpliet: But I think we never found it, and I may have thus been wrong ;-)
19:42RSpliet: Anyway, I haven't been involved with nouveau for years. I lost my appetite for working on it around the time I started my PhD, I lost interest in NVIDIA when they started gaslighting the nouveau community around signed firmware, and now I'm a happy employee at a competing company
19:43RSpliet: So I've missed some of the more recent developments
19:46RSpliet: I'm sure that if you find the conditions under which nouveau could permit a higher pixel clock, your efforts would be mainlineable, but that will involve tracing the blob both at init time and at modesetting time
19:48RSpliet: Because there's a good chance that if NVIDIA used to have these limitations but now don't, it's for a reason
19:48RSpliet: Maybe they had to hack up init sequences to tweak a lot of random register values we don't know of
19:48RSpliet: Maybe they modeset in multiple phases, because locking a PLL is easier if the frequency diff between old and new is smaller.
19:48RSpliet: I don't know, just speculating
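Purely to illustrate the multi-phase speculation above (this is not known nouveau or blob behaviour), a toy version of stepping a PLL toward its target in bounded increments, so each relock covers only a small delta; program_pll_khz is a hypothetical stand-in:

    /* Hypothetical multi-phase modeset: nothing here reflects real
     * hardware, it only illustrates the idea of small PLL steps. */
    #include <stdio.h>

    static int program_pll_khz(int khz)   /* hypothetical helper */
    {
        printf("relock at %d kHz\n", khz);
        return 0;                         /* pretend the PLL locked */
    }

    int main(void)
    {
        int cur = 165000, target = 400000;
        const int max_step = 50000;       /* made-up 50 MHz per phase */
        while (cur != target) {
            int step = target - cur;
            if (step >  max_step) step =  max_step;
            if (step < -max_step) step = -max_step;
            cur += step;
            if (program_pll_khz(cur))
                return 1;                 /* a phase failed to lock */
        }
        return 0;
    }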
19:49RSpliet: But it might not be as easy as "raise the number and display clock goes brrrrr"
19:53RSpliet: And if it is, I'm betting the lead dev will have to see evidence of that too. And "it worked on this one card today" is probs not sufficient to convince, because production quality differences could mean things like "the other card in the wild only accepts this pixel clock when it's running under 50°C"
19:53RSpliet: Or something stupid like that
19:58hell__: or "this other card will accept the pixel clock, but it has a 1 in 10 chance of misbehaving when resuming from S3 (suspend to RAM)"
20:00RSpliet: Less likely, but stuff like that. If the blob changed their limits, we need to understand which hoops they jumped through to make that happen. No hoops would be the worst outcome of that research
20:01hell__: if the limits differ between chip generations, it could be that they improved the hardware
20:01RSpliet: Yes
20:01RSpliet: I assumed they differed between blob versions
20:01RSpliet: If they differ between HW gens, then yes, they improved HW
20:01RSpliet: Sorry, let me rephrase that
20:02RSpliet: I interpreted Mangix as saying they differ between blob versions on the same GPU. I may have misinterpreted :-)
20:58Mangix: RSpliet: aka too hard to mainline a pixel clock increase
20:59Mangix: as far as the blobs are concerned, I don't know if it's Maxwell and above where they increased the defaults.
20:59Mangix: I remember having to use https://www.monitortests.com/forum/Thread-NVIDIA-Pixel-Clock-Patcher . I no longer do.
21:00Mangix: Even on Fermi (550
21:02Mangix: "400/500-series cards will not reduce clock speeds if the pixel clock is greater than 404 MHz." <-- is wrong. this is true with Maxwell 1 as well
21:10RSpliet: Mangix: I can't judge your skills, it would be wrong for me to claim it's "too hard", but it's defo more work than a one-liner integer change. Feel free to change nouveau locally and build your own kernel (or out of tree driver, but that stuff can be tricky)
21:10RSpliet: worst comes to worst it doesn't work and you have to reboot into your old kernel to remove the new one
21:11RSpliet: open source gives you that freedom to mess about with your system and find out ;-)
21:34fodasso: Hello. I'm using Nouveau on a musl system with my RTX 2060. It hangs after several seconds/minutes of gameplay in a 3D game. It usually goes back to normal after some time (again, some seconds/minutes). Is it a known issue?
21:42Mangix: fodasso: sounds like a question for #dri-devel . as for musl, I think all the developers only test with glibc
21:43fodasso: OK, tyvm for the reference.
21:44fodasso: I will try uploading dmesg log for reference too
21:45RSpliet: fodasso, Mangix: nah, that's defo a question for the nouveau devs. There's just not that many of them available at the mo, so grab a bouncer and stick around
21:45fodasso: It is too bad we can't post here from Matrix
21:45RSpliet: And yes, I suspect musl is not tested much with nouveau, but it shouldn't matter
21:46RSpliet: is my non-expert opinion
21:46fodasso: Well, proprietary drivers are a no-go with musl, even though Nvidia promised them some months ago.
21:46Mangix: my favorite musl segfault: printf("%d\n", (time_t)x)
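For readers unfamiliar with the bug class Mangix is pointing at: %d expects an int, but time_t is 64-bit on musl 1.2+ even on 32-bit targets, so the call is undefined behaviour (it merely tends to "work" on common glibc setups). A portable version casts to a type with a matching format specifier:

    /* The arg/format mismatch above vs. the portable fix. */
    #include <stdio.h>
    #include <time.h>
    int main(void)
    {
        time_t now = time(NULL);
        printf("%lld\n", (long long)now);   /* OK everywhere */
        /* printf("%d\n", now);                UB: %d vs 64-bit time_t */
        return 0;
    }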
21:47fodasso: Not even their new FLOSS drivers work, because some component (OpenGL implementation?) is still closed source, if I understand correctly.
21:47RSpliet: fodasso: yeah... well... I think you could get an on-line IRC bouncer in some places (perhaps even free). I've got an old home server with ZNC to do the job for me
21:48fodasso: https://matrix-client.matrix.org/_matrix/media/r0/download/matrix.org/TgnnGlxoNDLKDLrfyJMwRqhi
21:48fodasso: My dmesg log
21:49fodasso: RSpliet: Matrix is supposed to be compatible with IRC channels natively.
21:49fodasso: You can view messages there, but sent messages are not being relayed properly, and Matrix users receive no error prompt as response.
21:49RSpliet: that's a lot of debugging info you're printing in your kernel
21:49fodasso: So they might even feel ignored
21:50fodasso: Well, I got them by following the instructions on nouveau.freedesktop.org.
21:51RSpliet: Sorry, I didn't mean that as a bad thing :-)
21:51fodasso: OK, ty ^^
21:51RSpliet: Anyway, an RTX 2060 is way out of my jurisdiction, afraid you'll have to wait for some of the current devs to help you out.
21:51fodasso: Just tried sending a message from Matrix, nothing received here :)
21:52RSpliet: If you want to do it asynchronously, you could try filing a bug in gitlab instead. Or... yeah I think that's where they check these days. I think...
21:52RSpliet: Hmm
21:53RSpliet: According to the bugs page it is :-)
22:01mynacol: fodasso: Weiss-Föder: You need to login to OFTC through the matrix bridge. Send a private message, e.g. `!help` to @oftc-irc:matrix.org to login.
22:04fodasso: tyvm for the tip. Will look into it later.