00:09imirkin: if someone's running blob, can you check if the piglit 'bin/nv_alpha_to_coverage_dither_control 8' passes for you?
02:51imirkin: interesting. it passes for 2x msaa, but not 4x or 8x
02:51imirkin: [with nouveau]
10:09karolherbst: imirkin: btw, I can switch to the blob at anytime :) no need to reboot or restart my desktop on a laptop
10:13karolherbst: imirkin: same with nvidia btw
10:23earnestly: I'd like a bit of advice before spending too much time on this: I'm trying to get nouveau and bbswitch working together on an optimus laptop. I have bbswitch loading initially so the card can be turned off. When modprobing nouveau the X11 session (sddm/plasma) restarts automatically and I have access to Intel and nouveau via xrandr --listproviders
10:23earnestly: But now I'd like to be able to unload nouveau, or otherwise regain bbswitch to turn it off again
10:24earnestly: I've somewhat managed to do this by setting an xorg.conf.d/intel.conf to use the intel driver instead of letting xorg determine it, that way I can rmmod nouveau and use bbswitch to turn the card off
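The xorg.conf.d override described above boils down to a short snippet like the following (the filename and Identifier are illustrative; the `Driver "intel"` line is what matters, since it stops Xorg from auto-claiming the NVIDIA GPU, leaving nouveau free to be rmmod'ed):

```
# /etc/X11/xorg.conf.d/20-intel.conf  (illustrative path and name)
Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
EndSection
```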
10:24earnestly: But after this point I can't use nouveau again
10:25earnestly: If I were to modprobe it again, remove that intel config, and restart sddm/plasma it doesn't end up being available via xrandr --listproviders
10:26earnestly: So what I'd like to know is if what I'm doing is sensible or possible; using nouveau on demand, but still allowing bbswitch to turn the card off as otherwise it sits spinning fans quite loudly while heating up
10:27earnestly: (Or is there a way for nouveau to turn off the card? It's an NVC1 435M)
10:27earnestly: (Dell L701X)
10:30earnestly:.oO(Perhaps the suggestion of using video=VGA-2:d or such applies here so that the nouveau card turns off automatically, I do have a phantom display as listed in <https://nouveau.freedesktop.org/wiki/Optimus/>)
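For reference, the wiki trick mentioned here is a DRM kernel command-line option that forces a connector's state; the `d` suffix means force-disabled. The connector name has to match what the kernel exposes on your machine — `VGA-2` is just the wiki's example:

```
# appended to the kernel command line in the bootloader config
video=VGA-2:d
```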
10:31RSpliet: earnestly: why would you want to use "bbswitch"? For "optimus" graphics off-loading to an NVIDIA GPU it's the wrong tool for the job.
10:32RSpliet: On any modern distribution, this works out of the box - including powering down the NVIDIA GPU when it's not in use
10:32karolherbst: RSpliet: well, we had the runpm bug, but yeah.. probably not on that system
10:32RSpliet: Gnome Shell even has a right-click menu option on applications "launch on discrete GPU"
10:32karolherbst: so yeah
10:32karolherbst: get rid of bumblebee
10:32karolherbst: _completely_
10:32karolherbst: it's not needed anymore
10:32karolherbst: not even with the nvidia driver
10:33karolherbst: well.. except for power savings :/
10:33karolherbst: but it's equally broken there
10:33karolherbst: so meh
10:33RSpliet: karolherbst: there's also the HDA intel bug I ran in to ;-) but we can solve them, as opposed to bumblebee problems
10:33karolherbst: bumblebee has the same bug though
10:34karolherbst: RSpliet: ahh, yeah, I verified that nvidia is hit by the same runpm bug nouveau was :)
10:34karolherbst: they enabled runpm on turing+, so I was able to investigate a little
10:35karolherbst: and bbswitch runs into the same problem as well
10:35karolherbst: it's all very funny as the PCI core handles the ACPI bits
10:35karolherbst: so the drivers aren't doing much
10:35karolherbst: the only "safe" way to runpm the GPU is by having no driver loaded
10:35karolherbst: or modules with workarounds applied
10:36karolherbst: so yeah, bumblebee has kind of outlived its usefulness
11:01earnestly: RSpliet: What used to happen is that the dGPU would start spinning fans constantly and heat up; so I used to blacklist nouveau and use bbswitch to turn the card off for good. But lately I needed the power of the card for video (ironlake just isn't quite enough even with vaapi). This led me to the situation of thinking I needed bbswitch to turn the card off when not in use
11:02karolherbst: earnestly: nouveau does that for you
11:02earnestly: However I recently tried with nouveau and found that it seems to power down the dGPU
11:02earnestly: karolherbst: Yeah, it looks good
11:02earnestly: I still followed the article's advice to get rid of that "phantom" VGA port
11:02karolherbst: ohh, but that shouldn't be needed though
11:02earnestly: But it powered down quite nicely regardless of that port being present or not
11:02karolherbst: yeah...
11:02karolherbst: but it's unfortunate we can't detect phantom ports
11:02earnestly: But now my xrandr output looks nice :D
11:02karolherbst: do you know if nvidia reported those ports?
11:03earnestly: I can't remember, it probably did
11:04earnestly: It's nice to be finally free of nvidia and bumblebee, not that I'm particularly bothered by nvidia, but my card is so old that I can't use its recent implementation of PRIME and so was stuck with bumblebee's awkwardness
11:04earnestly: And trying with some difficulty to get hdmi with intel-virtual-output faffery
11:04karolherbst: ohh :/
11:05karolherbst: earnestly: ohh, so you have a few ports on the GPU but the VGA one is fake?
11:05earnestly: Now I don't need any of that \o/, although I've yet to try HDMI (reverse prime?) with nouveau
11:05karolherbst: it might be that the VGA isn't fake then
11:05earnestly: karolherbst: It only exposes one HDMI (which is listed correctly)
11:05karolherbst: it would be interesting to know what happens with a passive HDMI to VGA adapter
11:06earnestly: That is, the laptop only has one hdmi port (and ~a~ hdmi port is listed in xrandr)
11:06karolherbst: because that might end up using the VGA port
11:06earnestly: Hm, I don't have such an adaptor
11:06karolherbst: yeah.. no worries
11:06karolherbst: just keep that in mind if you ever run into this situation :p
11:06karolherbst: probably not
11:06karolherbst: but who knows
11:06earnestly: But it's a fair point, I wouldn't even begin to know how it would detect such a contorted setup
11:07karolherbst: yeah.. it's all weird
11:07karolherbst: I also have the same thing on this laptop but with HDMI and DVI
11:07karolherbst: uhm...
11:07karolherbst: HDMI and DP
11:07earnestly: Do you think optimus was necessary to push X11 and co. into supporting multi-GPU systems?
11:07karolherbst: and depending on the adapter (active vs passive) a different output is used
11:07karolherbst: earnestly: probably
11:08earnestly: karolherbst: I have a thinkpad which lists HDMI even though it has none, but that's just Intel
11:08karolherbst: although nvidia supported it before that already
11:08karolherbst: earnestly: that laptop has probably a dock connector on the bottom, doesn't it?
11:08earnestly: Yeah, but xrandr and PRIME didn't exist. I know servers were gaining this HVEC(?) stuff, so I wonder if optimus, for all its pain, was a needed push
11:08earnestly: karolherbst: Yes indeed, that's a good point
11:08karolherbst: ;)
11:08earnestly: I didn't think of the dock
11:09karolherbst: yeah, so the dock also uses the GPU outputs
11:09earnestly: That makes sense
11:09karolherbst: these days they do DP-MST though
11:09karolherbst: but on older system they had to use native GPU ports
11:10earnestly: This is an x220 so it might classify as old
11:10earnestly: MST came out in 2009 apparently, so definitely wouldn't have it
11:10karolherbst: yeah.. but that doesn't matter
11:10earnestly: Er, no x220 is 2011
11:10karolherbst: vendors started to drop their prop. dock connectors in favour of TB3
11:10earnestly: Anyway
11:11karolherbst: so.. the change is quite new
14:26karolherbst: imirkin: mind checking the nouveau related MRs I created for libdrm? Those should be all very trivial https://gitlab.freedesktop.org/mesa/drm/-/merge_requests?scope=all&utf8=%E2%9C%93&state=opened&label_name[]=NVIDIA
17:25karolherbst: imirkin: fifo: fault 01 [WRITE] at 00000000044d0000 engine 00 [GR] client 1d [HUB/DFALCON] reason 02 [PTE] on channel 2 [00ffb26000 glcts[306416]] sooo.. apparently the address is not always the same
21:01karolherbst: why must the CTS be that crappy
21:11karolherbst: imirkin: mind checking if GTF-GL45.gtf21.GL.abs.abs_float_frag_xvary passes for you?
21:16RSpliet: karolherbst: I suspect in the coming few days I can finally push the OpenCL experimental set-up I used for my PhD experiments to github. I'm just adding output validation for the remaining kernels.
21:16RSpliet: Most of them are picked from rodinia/parboil - but made to always run with the same input data (thus generate ~the same output data). There's a few home-grown kernels in there as well. Is this useful to you?
21:16karolherbst: dunno
21:16karolherbst: I could try to make them run with Nouveau though :p
21:17RSpliet: I don't expect them to be useful for anything else ;-)
21:17RSpliet: Well, once they run they could serve as a first attempt at CI - until something more mature pops up, if it hasn't already
21:20imirkin: karolherbst: pass on GP108
21:20karolherbst: yeah....
21:20karolherbst: I expect some local screw up
21:20karolherbst: it also fails with nvidia for me :)
21:20karolherbst: just updated to fedora 32
21:20imirkin: and the GTF-GL33 variant passes on G84
21:20karolherbst: and there is like gcc 10 and stuff
21:20karolherbst: and I had to force c++11 on a subproject because otherwise gcc crashed
21:20imirkin: ah yeah, could be
21:20karolherbst: but.. maybe something is funky
21:21karolherbst: imirkin: what's the git commit of the gtf for you?
21:21karolherbst: this is the error I see: https://gist.githubusercontent.com/karolherbst/1a3da7e166d74255ed08b78487e37cd4/raw/d6660af30c3a7f74bb86b9c2bcef46a868dbf00d/gistfile1.txt
21:21karolherbst: I bet something is messed up with paths
21:24karolherbst: I mean.. it used to work just a week ago :)
21:24karolherbst: and I could swear it worked today before I did a clean build
21:24imirkin: gcc10 does introduce lots of weird
21:25imirkin: i have gcc 9.x
21:25karolherbst: let's see what strace says
21:25karolherbst: could also be something cmake related
21:25karolherbst: ehhh
21:25karolherbst: wait
21:25imirkin: seems less likely
21:25karolherbst: I have to define GLCTS_GTF_TARGET, right?
21:25karolherbst: :D
21:25imirkin: i mean, do any tests pass?
21:26karolherbst: some
21:26karolherbst: but I am sure it's the missing GLCTS_GTF_TARGET
21:26karolherbst: yep
21:26karolherbst: passes now
21:26imirkin: yay
21:27karolherbst: I had two regressions though
21:27karolherbst: wait...
21:27imirkin: if you have blob around
21:27imirkin: can you test that piglit?
21:27imirkin: i want to know if the test is picky, or nouveau is busted
21:27karolherbst: I already did
21:27karolherbst: and it fails there as well :p
21:27imirkin: oh, should i scroll up?
21:28karolherbst: switching to the blob is really no issue for me, takes me like 20 seconds :p
21:28imirkin: i've had a major headache this whole weekend so i haven't really caught up
21:29imirkin: karolherbst: so wait, did it fail? you didn't say that explicitly
21:29karolherbst: yes, it failed on nvidia
21:29karolherbst: for 4 and 8
21:29imirkin: with just "8" or "8 1 1" as well?
21:29karolherbst: I only tried 8 and 4
21:29imirkin: ah ok
21:29karolherbst: should I test more?
21:29imirkin: could you pass "8 1 1" as arguments?
21:30imirkin: (it's a weird test)
21:30karolherbst: "8 1 1" fails as well
21:31imirkin: ok cool
21:31imirkin: so then it's not just nouveau
21:31imirkin: i was afraid some blit-related thing was a pile of fail
21:33karolherbst: imirkin: btw, mind checking those very trivial MRs to libdrm? https://gitlab.freedesktop.org/mesa/drm/-/merge_requests?scope=all&utf8=%E2%9C%93&state=opened&label_name[]=NVIDIA
21:34imirkin: yeah, i saw that one, but haven't looked yet
21:57karolherbst: imirkin: any idea about KHR-GL45.direct_state_access.textures_buffer_errors and KHR-GL45.direct_state_access.textures_buffer_range_errors?
21:58karolherbst: those are failing for me
21:58karolherbst: ehh fails for intel as well
21:58karolherbst: wait...
21:59karolherbst: didn't I figure that out already?
22:00karolherbst: I guess not.. let's see
22:01imirkin: i saw some commits semi-recently about that
22:03karolherbst: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4759 :)
22:03karolherbst: and I thought I rebased today or so
22:03karolherbst: nope. still broken
22:08karolherbst: ohh wait
22:08karolherbst: Revert "Fix expected errors for some DSA functions" :) in my local CTS
22:13karolherbst: ahh
22:13karolherbst: KHR-GL30.transform_feedback.api_errors_test gives this invalid bitfield error
22:15karolherbst: imirkin: any idea what we should do if so->stride goes out of bounds, or how to prevent that?
22:17imirkin: is that what's happening?
22:17imirkin: so->stride can't really become a crazy value
22:17karolherbst: well. it is a crazy value :)
22:17imirkin: you sure?
22:17karolherbst: yes
22:18karolherbst: I checked with gdb
22:18karolherbst: the test does dumb stuff on purpose though
22:19karolherbst: but I can also check how the value goes through
22:19karolherbst: maybe an uninitialized value or crap like this.. let's see
22:27imirkin: yea
22:31karolherbst: imirkin: nvc0_tfb_validate is the only place where the stride is set, right?
22:31karolherbst: just making sure I didn't miss anything
22:32imirkin: iirc it's set in 2 places
22:32imirkin: but one place overwrites the other? something like that
22:32imirkin: i forget
22:32karolherbst: mmhhh
22:32karolherbst: let's see
22:34karolherbst: doesn't look like it tbh
22:34imirkin: maybe it's not stride then
22:34karolherbst: anyway, nvc0_so_target_create doesn't set it. So that would explain the random value
22:35karolherbst: let's test that theory
22:35karolherbst: ahh yeah
22:35karolherbst: so it's never set
22:36karolherbst: the thing is.. the CTS expects the call to fail
22:36karolherbst: but we still execute a draw
22:36karolherbst: so this is strange
22:36imirkin: ok, so then core is messing up?
22:36karolherbst: probably
22:37karolherbst: but well
22:37karolherbst: there is an error :)
22:37karolherbst: ../external/openglcts/modules/gl/gl3cTransformFeedbackTests.cpp:1386 is the call
22:42karolherbst: heh
22:42karolherbst: guess what
22:43karolherbst: we fail the test anyway
22:43karolherbst: INVALID_ENUM was not generated by DrawTransformFeedbackInstanced and DrawTransformFeedbackStreamInstanced when <mode> was invalid.
22:44imirkin:hopes that he didn't add it
22:44imirkin: ah no, probably not, i don't think i ever did draw + tf bringup stuff
22:45imirkin: gotta go get some groceries. bbiab
22:46karolherbst: ehhh.. that test is just broken
22:46karolherbst: or maybe that's fine actually
22:47karolherbst: ehhhh
22:47karolherbst: ctx->API == API_OPENGL_COMPAT
22:48karolherbst: ehhh
22:48karolherbst: why is API_OPENGL_COMPAT set :D
22:49karolherbst: oh well
22:49karolherbst: whatever
22:50karolherbst: don't care now
22:50karolherbst: only happens with the 30 and 31 version
22:50karolherbst: I just wanted to track down this random context crash anyway
22:52imirkin: maybe some ext is exposed that shouldn't be
22:52imirkin: or maybe the test is bogus, assuming something that's no longer true
22:52karolherbst: probably
22:52imirkin: (e.g. core-only for GL 3.2+)
22:52imirkin: where is it?
22:52imirkin: i'll have a look when i get back
22:53karolherbst: imirkin: _mesa_is_valid_prim_mode
22:54karolherbst: and mode is set to GL_QUADS