00:02RSpliet: (don't think I've ever seen it clocked below 33MHz, so just assume I'm talking to myself :-) )
00:07linkmauve1: Hmm, I actually plugged it as DP-1, is there another option there instead?
00:07linkmauve1: I still only see 720×480 at 60 Hz as the maximum.
00:10imirkin: is it DP -> HDMI -> monitor, or what?
00:11linkmauve1: Oh, when putting a much bigger value (380), it works!
00:12linkmauve1: Yes, but with an active DP → HDMI device it seems.
00:12imirkin: if it's an *active* adapter, it should present itself as a regular dp sink
00:12linkmauve1: Yes, it does.
00:12imirkin: and hdmimhz shouldn't matter for anything
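(For context: hdmimhz is nouveau's module parameter capping the assumed HDMI pixel clock. The value 380 that linkmauve1 mentions above would presumably be set like this; the file path is the conventional modprobe.d location, not something from this log:)

```
# /etc/modprobe.d/nouveau.conf
options nouveau hdmimhz=380
```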
00:12linkmauve1: Hmm, I’ll try again without the option.
00:12imirkin: more likely a plug/unplug cycle would fix it right up
00:14linkmauve1: Yup, it always works now, no idea why it was still limited earlier.
00:14imirkin: probably the adapter lied and didn't send a new hpd signal when it got updated info
00:14imirkin: or nouveau misprocessed the hpd
00:15imirkin: (hpd = hot plug detect)
00:15linkmauve1: Ah right, in both cases where it was working I had the display plugged in from boot, and in the case it wasn’t I plugged it in later.
00:19linkmauve1: Alright, it does depend on whether it was plugged in at boot or not.
00:19linkmauve1: If it was, then all fine.
00:19linkmauve1: If it wasn’t, or if the DP was plugged but the adapter didn’t get any power, then it will only show the low resolution.
00:22linkmauve1: So aside from these two small bugs, everything works perfectly, thanks!
00:23imirkin: i think we just show what the adapter is showing :)
00:25linkmauve1: I think it’s not the case, from the adapter’s point of view it doesn’t change anything whether it was plugged in before or after being attached to a computer, or does it?
00:26imirkin: i didn't design the hw
00:26imirkin: i could definitely imagine things being various weird
04:27AndrewR: imirkin, hi! you fixed GL Excess demo on nv43, but now most 3d demos/benches dies with "nouveau_buffer.c:761: nouveau_buffer_migrate: Assertion `new_domain != old_domain' failed."
04:29imirkin: can't win 'em all
04:29imirkin: AndrewR: what did i do to fix that demo?
04:29imirkin: can you point me to something concrete?
04:30AndrewR: imirkin, you pushed a fix into mesa preventing the gallium nv3/4 driver from creating mismatched color/depth configs (one moment, I'll point at the bug #)
04:30imirkin: oh hehe
04:30imirkin: no, i remember that change
04:31imirkin: wasn't sure it'd actually fix anything
04:31imirkin: but seemed like it could help
04:31imirkin: the buffer migration thing is odd... that shouldn't be getting called if the domains are the same =/
04:32AndrewR: so, for now it renders most scenes ok and hangs at the last one (city lights)... but this assertion from above kills 3d mark 2011, Final Reality, Half-Life (1)... anything using ddraw via wine, i think
04:33AndrewR: i'm still running old kernel - 4.2.0 may be this matters
04:33AndrewR: ow, 3dmark2001 (not 2011)
04:34imirkin: kernel doesn't matter
04:42imirkin: i wonder if skeggsb was right
04:42imirkin: i don't know that i saw his comment
04:42imirkin: about the restriction only mattering for swizzled surfaces
05:19imirkin: AndrewR: i just pushed another patch to relax the original restriction a bit... i think it may be helpful
05:36AndrewR: imirkin, GLExcess demo still renders correctly, but assertion kills it at last scene, and other direct3d apps too :/
05:37imirkin: AndrewR: can you get a backtrace, and also tell me what new_domain is?
05:46AndrewR: imirkin, for some reason it doesn't work (I tried to set a tracepoint with the function name nouveau_buffer_migrate)
05:46imirkin: AndrewR: just run it in gdb
05:46imirkin: and when it asserts, type "bt"
05:46imirkin: and pastebin the results
05:52imirkin: that's ... not extremely useful :(
05:52imirkin: oh, there's a "winegdb" thing iirc
05:52imirkin: which works better
05:52imirkin: er, make that "winedbg"
05:53imirkin: and you can use --gdb to get it to play nice with gdb? i dunno all the details
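The debugging workflow imirkin outlines, sketched as a session (the program names are placeholders, not from the log):

```shell
# native GL apps: run under gdb, then grab a backtrace when it asserts
gdb ./glexcess
# (gdb) run
# (gdb) bt        <- pastebin this output

# wine apps: winedbg can cooperate with gdb instead
winedbg --gdb program.exe
```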
05:58AndrewR: https://paste.fedoraproject.org/417678/ - this is what it prints by default to console ...
05:58imirkin: perfect thanks
05:59imirkin: it's the one i thought it might have been
06:00imirkin: still not seeing it, but ... thinking :)
06:02imirkin: ok, i see what's going on
06:02imirkin: give me a few to figure out the right way to fix it
06:06imirkin: yeah, i have all the info i need
06:06imirkin: i just need to think for a bit
06:06AndrewR: ok (for some reason I hoped the --enable-debug flag for mesa's configure would make a debuggable driver... but many vars are optimized out)
06:06imirkin: --enable-debug enables assertions
06:06imirkin: but it keeps optimizations on
06:06imirkin: but i see exactly what's going on...
06:08imirkin: AndrewR: http://hastebin.com/evevepufol.pl
06:08imirkin: does that fix it?
06:11imirkin: i'll have a better fix in just a minute
06:11imirkin: er actually, i guess i won't
06:18imirkin: AndrewR: i'm off for now, but let me know if it works, and i'll push it out
06:29AndrewR: imirkin, with this patch HL1 starts, 3Dmark2001SE starts but dies after the splashscreen with a different backtrace, GL Excess seems to work at the final scene, Final Reality just quietly quits now... so, situation improved!
14:40imirkin: AndrewR: happy to look at other backtraces
14:40imirkin: AndrewR: just pushed the other fix
15:00karolherbst: do I have to understand those?
15:02AndrewR: imirkin, the other segfault looks more like a wine bug? https://paste.fedoraproject.org/418353/
15:16imirkin: AndrewR: maybe maybe not
15:17imirkin: no clue what rlmfc is
15:21AndrewR: imirkin, but it starts (main window at least) with classical swrast (LIBGL_ALWAYS_SOFTWARE=1)
21:49waltercool: guys, question, I know there is currently a lot of work on the Maxwell family, but is there some way I can help? I mean, some trace or something? Or are the current Maxwell issues just because the Nvidia blobs are awful with regard to overclocking?
21:49karolherbst: waltercool: desktop gpu? and gm20x?
21:49imirkin_: waltercool: are there specific issues you're concerned about?
21:53waltercool: karolherbst: 980m, laptop
21:54karolherbst: waltercool: separate gpu fan?
21:54karolherbst: waltercool: I guess with such a GPU you are pretty much concerned about performance?
21:54waltercool: imirkin_: not really, I think since 4.7 and mesa 12 it's currently good, but I think it performs kinda bad for some tasks, so, if I can help, I'm glad to do it
21:55karolherbst: waltercool: do you happen to know if your GPU fan is EC controlled?
21:55waltercool: karolherbst: I know I will not get for now a very high performance, but improving it may be helpful
21:55waltercool: karolherbst: hmmm no idea
21:55karolherbst: we _can_ fully reclock those gpus, but the issue with maxwell2 is, no fan support
21:55karolherbst: if you don't mind roasting your gpu that is
21:56karolherbst: but on laptops the fans are usually controlled by the EC and not by the GPU
21:57waltercool: oh I see, so basically you can overclock, but it's not wanted for now until you can guarantee no gpu gets burnt, right?
21:57karolherbst: well if you don't mind, we could try to reclock your gpu to higher states, but the current way to do this is a little messy
21:57waltercool: let me check if my fan have split functionality
21:57karolherbst: something like that, yes
21:57karolherbst: nouveau doesn't clock down on high temp
21:58waltercool: oh, so it will always run overclocked?
21:59karolherbst: that wasn't the point
21:59imirkin_: waltercool: overclock is the wrong term to use
21:59karolherbst: nvidia lowers the clock if you hit a special threshold, by a rather big amount
22:00imirkin_: waltercool: changing clocks is what we need to do... a gpu will have many perf levels
22:00karolherbst: I think "boost" is the current term usually
22:00imirkin_: waltercool: right now we don't change between them, and they boot into the lowest ones.
22:00karolherbst: DVFS more technical
22:00imirkin_: waltercool: so you're getting the perf of the *lowest* perf level of the gpu, which is generally 1/10th or lower of the "max" perf.
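Manual reclocking on nouveau of this era goes through the pstate file in debugfs; a hedged sketch (needs root and a mounted debugfs, and the perf level names vary per GPU):

```shell
# list the vbios perf levels and which one is current
cat /sys/kernel/debug/dri/0/pstate
# ask nouveau to switch to a listed level, e.g. the highest one
echo 0f > /sys/kernel/debug/dri/0/pstate
```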
22:00waltercool: imirkin_: I know, something like nvidia does? I saw some perf levels on nvidia tools I think
22:01waltercool: karolherbst: is DVFS done by the firmware or the module?
22:02karolherbst: and it is pretty much figured out for the most part for kepler and maxwell
22:02waltercool: I see
22:02karolherbst: just those maxwell2 gpus need signed firmware
22:02karolherbst: otherwise we can't control the fans
22:03waltercool: I heard about that, pros and cons with that, the sad story is no fully opensource module, the good story is the firmware (official one) should be on par with the closedsource module
22:03waltercool: I meant, reverse engineering wouldn't be good for that
22:04karolherbst: we can write our own firmware
22:04karolherbst: and we do actually
22:04waltercool: so? It won't be compatible with maxwell2 gpus?
22:05karolherbst: we can't use it
22:05imirkin_: waltercool: if you're looking for good open-source support, look at AMD and Intel
22:05imirkin_: waltercool: they have teams of engineers, supporting fully-open-source drivers
22:05waltercool: Hahahha I know, but AMD isn't great on laptops, I used to have AMD for some years
22:06imirkin_: sure, but nvidia is bad everywhere (if you're looking at open-source support)
22:06waltercool: I know, I mean, it isn't bad support in the opensource world, I think you've been doing a very good job
22:06waltercool: is just Nvidia not helping em' all
22:07waltercool: they are doing just the opposite
22:07imirkin_: not enough people, no docs, and nvidia is making hw without providing redistributable firmware required to operate it
22:07imirkin_: so ... no ... nouveau isn't exactly in a great position.
22:07karolherbst: mhh, I really need to RE that sw thermal downclock table :/
22:08karolherbst: with that we could indeed just enable reclocking even if we can't control the fans, even though that would be rather painful
22:09waltercool: what do you mean by sw thermal? Just using the hwmon interfaces?
22:09karolherbst: the vbios contains a table which indicates when (temperature) the gpu has to be downclocked and how fast
22:10karolherbst: like if you go above 96°C reduce clocks by 30MHz every 0.2s or so
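The rule karolherbst describes could be sketched as a toy shell function (the threshold and step are the illustrative numbers from the message above, not the real vbios table format):

```shell
# toy model of the sw thermal downclock rule: one control tick (~0.2 s)
# drops the clock by step MHz while the temperature is over the threshold
downclock() {
    clock=$1 temp=$2 threshold=96 step=30
    if [ "$temp" -gt "$threshold" ]; then
        new=$((clock - step))
        [ "$new" -lt 0 ] && new=0
        echo "$new"
    else
        echo "$clock"
    fi
}

downclock 1000 97   # prints 970
downclock 1000 90   # prints 1000 (below threshold, clock untouched)
```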
22:11karolherbst: waltercool: what happens if you press Fn+1 ?
22:13waltercool: let me check
22:13waltercool: fan goes very fast
22:13karolherbst: also the gpu one?
22:13waltercool: ha! I never knew about that
22:13waltercool: let me check, I'm not quite sure
22:14karolherbst: yep, on clevo laptops Fn+1 is intercepted by the uefi and switches the fan mode between auto and full
22:14waltercool: I would say yes
22:14waltercool: haha well, I have a clevo based laptop
22:15waltercool: very noisy fan anyways
22:15karolherbst: well, if you don't mind we could checkout how reclocking would work out on your laptop
22:16karolherbst: because the fans are EC controlled, a big issue just disappears
22:16waltercool: yeah, that's true, do I need to compile nouveau in-kernel with some change?
22:16karolherbst: problem is, currently the procedure is quite messy, because there is some signed firmware stuff going on and nouveau doesn't really play well here
22:16karolherbst: *a lot changes
22:16karolherbst: or just out of tree and insmod
22:17karolherbst: waltercool: I guess all your video ports are on your intel gpu as well?
22:18karolherbst: if so, install the dummy X driver and use this Xorg conf for various reasons: https://gist.github.com/karolherbst/1f1bdd1a3822df74097f
22:18karolherbst: 1. you can rmmod nouveau while X is running
22:18karolherbst: 2. X doesn't freeze if nouveau messes up
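The linked gist's contents aren't reproduced in this log; as an assumption about its shape, a minimal config along those lines might pair the intel ddx/modesetting with xf86-video-dummy so X never grabs the nvidia card:

```
# sketch only -- defer to karolherbst's linked gist for the real config
Section "Device"
    Identifier "intel"
    Driver     "modesetting"
    Option     "DRI" "3"
EndSection

Section "Device"
    Identifier "discrete-placeholder"
    Driver     "dummy"        # xf86-video-dummy; keeps X off the nvidia gpu
EndSection
```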
22:18waltercool: that's nice!
22:19waltercool: I hate when nouveau becomes non-removeable
22:19waltercool: I have current Linux kernel (4.8 rc3), is that fine?
22:19karolherbst: well, you do have your intel gpu enabled, right?
22:20karolherbst: I doubt you can actually disable them anyway on clevo systems
22:20karolherbst: waltercool: uhh, should be, but we can try that stuff out on the weekend, before that I need to sleep :D and look into making the process less messy
22:20waltercool: I can switch between direct nvidia card or muxless one
22:21karolherbst: I see
22:21karolherbst: I guess you do use the muxless one currently?
22:21waltercool: hahah oh yeah
22:21waltercool: with bbswitch is much win
22:21waltercool: but the intel one is very buggy on skylake
22:23waltercool: karolherbst: oh ok, if you want we can try at weekend, no probs from my side
22:24karolherbst: awesome :)
22:28karolherbst: waltercool: by the way, with that X config I can switch from nouveau to using bumblebee in around 5 seconds :)
22:33waltercool: so, you can play between nouveau, intel and nvidia easily?
22:34waltercool: I can play with nvidia and intel, but nouveau keeps loaded as module currently
22:35waltercool: let me try
22:35karolherbst: make sure you installed the dummy driver
22:35waltercool: already compiled :D
22:35waltercool: pro/cons of gentoo
22:37urmet_: pro everything, con chrome/libreoffice takes forever to compile
22:37waltercool: that's why I don't use either of them :D
22:37karolherbst: chromium is like 30mins
22:38waltercool: I use qupzilla, is just a client, and I keep the webengine from Qt
22:38waltercool: qt webengine = snapshot of webkit
22:38karolherbst: yeah, cause qtwebkit compiles much faster!
22:38waltercool: qtwebengine >>> qtwebkit
22:39waltercool: qtwebkit was a port if I'm not wrong, qtwebengine is a snapshot with bindings
22:40waltercool: so, less overhead of maintenance and faster to keep webkit up to date, qtwebkit was quite old in past
22:41waltercool: going to restart X, wait
22:42waltercool: karolherbst: how do you use the dummy against nouveau?
22:43karolherbst: no difference
22:43karolherbst: you just skip the xrandr stuff
22:43karolherbst: because intel is set to dri3 now
22:43karolherbst: I hope you have dri3 enabled on the intel ddx?
22:43waltercool: I meant, I already had dri3 enabled
22:43karolherbst: I see
22:43karolherbst: well same way
22:43karolherbst: you don't need the nouveau ddx with dri3
22:43karolherbst: but if you don't declare dummy, modesetting will grab the card
22:44karolherbst: if you check lsmod, you should see that nouveau has no reference
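A quick way to run that check (the module line below is a captured sample; the live command is in the comment):

```shell
# live check:   lsmod | awk '$1 == "nouveau" { print $3 }'
# the third column of lsmod is the use count; 0 means "rmmod nouveau" is safe.
# same filter applied to a sample line:
echo "nouveau 2269184 0" | awk '$1 == "nouveau" { print $3 }'   # prints 0
```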
22:46waltercool: well, modprobing nouveau just killed my X
22:46waltercool: bad idea?
22:47karolherbst: mhh, it should work
22:47karolherbst: something inside x log?
22:48waltercool: oh dumb me, give me a sec
22:48waltercool: it works awesome, I'm sorry, the problem was with bbswitch
22:48karolherbst: uhh right
22:48waltercool: didn't load the dummy module
22:49karolherbst: usually you want to turn off/on your gpu before doing anything
22:49karolherbst: you can keep bumblebee and bbswitch running in the meantime, shouldn't matter
22:49karolherbst: the bad thing is just: using bumblebee while nouveau is loaded
22:49karolherbst: a lot of bad things can happen
22:50waltercool: with DRI_PRIME?
22:50waltercool: I meant, I tried DRI_PRIME last time with nouveau without any problem
22:50karolherbst: I meant, you can keep the bumblebee daemon on, and still do DRI_PRIME with nouveau
22:50waltercool: also, I think bumblebee is just a bad solution
22:50karolherbst: just have to be a little careful
22:51waltercool: oh yeah
22:51karolherbst: it is kind of, but currently the best possible
22:51karolherbst: except if you use nouveau
22:51waltercool: I would love in future to see a DRI_PRIME working as bumblebee, but everything at kernel side
22:51waltercool: or graphical server side
22:51karolherbst: mhh? what do you mean
22:52waltercool: DRI_PRIME=1 command -> turn on discrete -> run the command as discrete -> turn off discrete
22:52karolherbst: it actually does that
22:52karolherbst: or, it rather should do that
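The offload flow waltercool describes, as commands (glxinfo/glxgears are just example clients, and this assumes runtime PM is working):

```shell
# offload a single client to the discrete GPU; nouveau powers the card up
# for the client and back down once it goes idle:
#   DRI_PRIME=1 glxinfo | grep "OpenGL renderer"
#   DRI_PRIME=1 glxgears
# DRI_PRIME itself is only an environment variable the GL loader reads:
DRI_PRIME=1 sh -c 'echo "offload requested: $DRI_PRIME"'   # prints "offload requested: 1"
```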
22:52waltercool: so, bbswitch shouldn't be necessary?
22:52karolherbst: you need some kernel options for that though
22:53waltercool: I thought bbswitch was doing some hardware tuning to keep the videocard off
22:53karolherbst: you need to have VGA_SWITCHEROO enabled
22:53waltercool: that's muxed D:
22:53karolherbst: well, you should still use bbswitch with bumblebee
22:53karolherbst: doesn't matter
22:54karolherbst: you need that if you want nouveau to turn off your gpu
22:55waltercool: but bbswitch does the same as VGA_SWITCHEROO, but for muxless, or am I wrong?
22:55karolherbst: switcheroo is more advanced
22:55karolherbst: and it can do more
22:56karolherbst: you don't have to mess with the switcheroo file though
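For reference, the file in question is vga_switcheroo's debugfs interface; a sketch of poking it manually (root required; with runtime PM working you normally never need to):

```shell
# show which GPU is integrated (IGD) / discrete (DIS) and their power state
cat /sys/kernel/debug/vgaswitcheroo/switch
# force the inactive discrete GPU off, then back on
echo OFF > /sys/kernel/debug/vgaswitcheroo/switch
echo ON  > /sys/kernel/debug/vgaswitcheroo/switch
```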
22:56waltercool: didn't know that, I will keep switcheroo in my kernel
22:56waltercool: I didn't have switcheroo because I understood it was only for muxed environments
22:58karolherbst: it was, but it was improved
22:58waltercool: good to know, that's good for mesa team
22:59karolherbst: anyway, bed time :D
22:59waltercool: thanks for everything :)
22:59waltercool: let's see at weekend if I can help with some tests
23:00karolherbst: yeah, I am sure it will just work out and your gpu should stay rather cool (and with cool I mean below 80°C :O )
23:00karolherbst: but I actually managed to get my gpu at 102°C at max fans with nouveau
23:00waltercool: not critical enough, but dangerous
23:00waltercool: it depends of the laptop anyways
23:00karolherbst: no idea why, but those clevo laptops come with fans so crappy that I would rather have no fans
23:01karolherbst: ohh wait
23:01karolherbst: I have another thing for you
23:01waltercool: At least I can play with the nvidia module without roasting my legs, that's something good
23:01waltercool: with AMD I got always my legs roasted
23:02karolherbst: waltercool: https://gist.github.com/karolherbst/dc482d83f48ca3f8599c2e30b9f1a450
23:02waltercool: Gentoo user, huh?
23:02waltercool: thanks! I will try it
23:03karolherbst: it should pimp your sensors output a bit
23:03waltercool: I don't have any problem with my keys, but I will check if I can change the colour of the backlight
23:03karolherbst: although I am sure the gpu temperature reported is broken
23:03karolherbst: ohh wait :D
23:03karolherbst: ohh right, that module does that too
23:04waltercool: you should share that ebuild into zugaina :)
23:04karolherbst: waltercool: from "sensors": https://gist.github.com/karolherbst/00606bd060e3e0324dc56cdce7ef952a
23:04karolherbst: yeah I should :/
23:04waltercool: beautiful, quite better than my current temp monitor
23:05karolherbst: well, you want to have coretemp anyway
23:05waltercool: I hate using nvidia-smi to have knowledge of the temperature
23:05karolherbst: as I said: I am sure the gpu temperature is wrong
23:05karolherbst: for me it is 6°C above the real one
23:05karolherbst: for others, other amounts
23:05karolherbst: I've added the hwmon bits to that module, and this little issue bugs me somewhat
23:06waltercool: I don't have anything to measure that hehe
23:06karolherbst: nouveau also adds hwmon stuff
23:06waltercool: but it's quite nice to have all the info in a single module :)
23:06karolherbst: my full "sensors" output: https://gist.github.com/karolherbst/76b84907764cd5a38c41db91fbf0de50
23:06waltercool: is there any problem if I add your ebuild to my overlay?
23:07karolherbst: no issues with that
23:07waltercool: which command gives you that info?
23:07karolherbst: you don't need the daemon
23:08waltercool: woah, great
23:08karolherbst: sensors just reads the files from hwmon
23:08waltercool: didn't know that
23:08waltercool: it's easy and beautiful, I like the idea of knowing the GPU power consumption
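Concretely, those hwmon files are plain integers: temp*_input is in millidegrees Celsius and power*_input in microwatts, and "sensors" just reads and formats them. A small sketch of the conversions, using values like the 16.63 W reading later in this log:

```shell
# live reads would look like:
#   cat /sys/class/hwmon/hwmon*/temp1_input
#   cat /sys/class/hwmon/hwmon*/power1_input
# the formatting sensors applies is just unit scaling:
mdeg_to_c() { awk -v v="$1" 'BEGIN { printf "%.1f", v / 1000 }'; }
uw_to_w()   { awk -v v="$1" 'BEGIN { printf "%.2f", v / 1000000 }'; }

mdeg_to_c 61000      # prints 61.0
echo
uw_to_w 16630000     # prints 16.63
```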
23:08waltercool: and again, I think nvidia-smi is a nasty one
23:08karolherbst: nouveau before 4.9 will have issues reading the power
23:09karolherbst: with 4.9 it is considered "reliable", at least by me :D
23:09waltercool: 4.9? D:
23:09waltercool: 4.9 alpha?
23:09karolherbst: well, ain't there yet
23:09waltercool: I know, should be trunk
23:11karolherbst: anyway, I am off to bed for sure now
23:12waltercool: haha OK, I will stress the GPU with ARK
23:12waltercool: but I don't really believe in: power1: 16.63 W
23:14waltercool: thanks for everything