03:49xpue: Hello nouveau devs. I edited the BIOS of a fermi card with nibitor and changed all pstate clocks to max, but it seems the boot clocks are taken from somewhere else. Is there a way to edit the BIOS so that nouveau will work with max clocks?
04:20RSpliet: xpue: that's not going to work
04:20RSpliet: unless you know all the register values and are able to hack up the initialisation script
04:20RSpliet: and well... if you do, you might as well implement it in nouveau :p
04:25xpue: RSpliet: Thanks. No, nibitor is a standard overclocking tool, I don't know how it works internally.
04:26RSpliet: xpue: I'm not talking about nibitor
04:28RSpliet: drivers are responsible for changing the clocks to whatever PState is appropriate. NVIDIA can do it, nouveau can't
04:29RSpliet: the BIOS has a script that sets the default script, and nouveau executes that the same way NVIDIA does
04:29RSpliet: so: alter the script, and you alter the clocks in nouveau
04:29RSpliet: _but_, we have no idea* what's in the script, so can't help you with that
04:29RSpliet: *default script->default clock
04:30RSpliet: (plus, it's not so easy to alter scripts, as the length will likely change; and since we don't have a full VBIOS compiler that's bound to fail)
08:14jgarrett: I think I need to set my performance mode on my GF119...
08:15imirkin_: jgarrett: no code exists to enable that, unfortunately
08:16imirkin_: you get whatever it boots into
08:16jgarrett: well maybe there is another way to solve my issue.
08:16imirkin_: than "step 1: spend 6 months to make reclocking work on fermi"? hopefully.
08:17jgarrett: I have an Intel card running with 3 monitors, and I want to use the GF119 as a 4th, but the refresh is like 10 minutes.
08:17imirkin_: how are you configuring things?
08:18imirkin_: GF119 at even the slowest clock speed is more than capable of scanning out an image
08:18imirkin_: to even the largest of screens :)
08:18jgarrett: That's what I thought...
08:19imirkin_: pastebin xorg log + xrandr output
08:19jgarrett: and that screen is only 1280x1024... that card should be yawning....
08:19imirkin_: oh, and dmesg, just in case
08:20imirkin_: although due warning -- it may well be that the limitation is in the reverse prime mechanism
08:20imirkin_: i guess you have at least a haswell if you're scanning out 3 screens off the intel?
08:20jgarrett: xrandr -q -- http://pastebin.com/JeMg3QgP
08:21jgarrett: intel card is running the i915 module.
08:21jgarrett: 00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller (rev 06)
08:22jgarrett: 01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 620 OEM] (rev a1)
08:22imirkin_: if i didn't know better, i'd say that your screen is hooked up via DP *and* VGA??
08:22jgarrett: Nope, VGA only.
08:23imirkin_: good thing i knew better :)
08:23jgarrett: well, the Intel, is using 2 dp ports (detected as HDMIs) and VGA.
08:23jgarrett: the GF119, is VGA only though.
08:23jgarrett: I did try DP first... same issue.
08:24imirkin_: yeah, i'm sure that has nothing to do with it
08:24imirkin_: the way reverse prime works
08:24imirkin_: is that the primary gpu (intel in this case) renders the image
08:24imirkin_: and then the offloaded-to gpu scans it out to the relevant output
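[editor's note: the reverse-PRIME setup being described is usually wired up with xrandr's provider commands. A minimal sketch follows; the provider names `Intel` and `nouveau` are the typical ones but vary per system, and `VGA-1-1` is a hypothetical output name — check `xrandr -q` for the real ones.]

```
# List render/scanout providers; names differ per system.
xrandr --listproviders

# Reverse PRIME: the nouveau provider scans out images
# rendered by the Intel provider.
xrandr --setprovideroutputsource nouveau Intel

# The slave GPU's outputs now show up in xrandr; place one.
xrandr --output VGA-1-1 --auto --right-of HDMI1
```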
08:24jgarrett: maybe the Intel card is too taxed to render 4 screens?
08:24imirkin_: then all 4 screens would be slow
08:25imirkin_: do note that it uses a sw cursor for offloaded screens
08:25jgarrett: good point.
08:25imirkin_: (it == Xorg)
08:25jgarrett: well, that is nice to know; artifacts worsen when the mouse is on the screen, and the refresh slows
08:27jgarrett: xorg log http://pastebin.com/5Tn3duZA
08:30jgarrett: from this conversation, i think i've found a bug in reverse prime... awesome :)
08:31imirkin_: hm, i see you're also using dri3
08:32imirkin_: you might try disabling that in the intel ddx...
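[editor's note: disabling DRI3 in the intel ddx is done with the `DRI` option in xorg.conf; `xf86-video-intel` supports `Option "DRI"` to pick the DRI level. A minimal sketch — the `Identifier` value is hypothetical:]

```
Section "Device"
    Identifier "intel"
    Driver     "intel"
    Option     "DRI" "2"    # fall back from DRI3 to DRI2
EndSection
```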
08:32imirkin_: that's your xorg log?
08:32imirkin_: there is no mention of the nvidia controller in there
08:32imirkin_: (iow, that's not the right log)
08:33jgarrett: /etc/Xorg.0.log... only one i've got...
08:33imirkin_: usually /var/log/Xorg.0.log
08:33imirkin_: but if you're using some new-fangled user-confusing systemd-based system, it's in ~/.local somewhere
08:34imirkin_: from that file: [ 18.185] (==) Log file: "/var/log/Xorg.0.log", Time: Thu Jul 17 15:59:58 2014
08:34imirkin_: so... probably not the current one, unless date is way off on your system
08:34jgarrett: yeah... just noticed that too.
08:34jgarrett: hold on a sec... stupid systemd...
08:37jgarrett: fyi it was in ~/.local/share/xorg
08:37imirkin_: that seems more plausible
08:38imirkin_: [ 471.783] adjust shatters 0 6560
08:40mlankhorst: that's a lot :p
08:41jgarrett: my 3 monitors aren't small...
08:41jgarrett: same issue if I move the screen to '--above' my other 3.
08:42imirkin_: oh wow, gentoo finally landed the ABI_X86 stuff by default. awesome!
08:53jgarrett: is 6560 too big?
08:54imirkin_: no, i was mostly remarking on the word 'shatters'
08:58jgarrett: I didn't think so... but thought i would ask lol
09:09jgarrett: i'm chalking it up to reverse Prime weirdness... think it might work with an older card?
09:09jgarrett: all i have is other NV cards though.
09:11imirkin_: it's unlikely to be connected to the fact that it's a NV card...
09:12imirkin_: reverse prime is something a lot of people have various trouble with
09:12imirkin_: the other solution, btw, is to use ZaphodHeads to split up the multi-screen devices into individual single-screen devices, at which point you can use Xinerama to recombine them
09:12imirkin_: that loses you things like xrandr and direct rendering though
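[editor's note: a rough sketch of the ZaphodHeads + Xinerama layout just described. All identifiers, output names, and BusIDs below are placeholders; each Intel output would get its own Device/Screen pair, and Xinerama stitches the resulting screens (including the nouveau one) back together.]

```
# One Device section per Intel output (repeat for each head):
Section "Device"
    Identifier "intel-head0"
    Driver     "intel"
    BusID      "PCI:0:2:0"
    Option     "ZaphodHeads" "HDMI1"
    Screen     0
EndSection

Section "Device"
    Identifier "nv-head"
    Driver     "nouveau"
    BusID      "PCI:1:0:0"
EndSection

Section "ServerFlags"
    Option "Xinerama" "on"
EndSection
```

As noted in the discussion, this trades away RandR reconfiguration and direct rendering.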
09:13mlankhorst: works for me, partly. :P
09:13imirkin_: well, lots of people come in complaining of various issues
09:13imirkin_: sadly i know little about prime or reverse prime
09:14imirkin_: but i do know xinerama -- outside of the obvious drawbacks it works great
09:14imirkin_: but things like direct rendering don't matter to a lot of people, nor does the ability to dynamically reconfigure screens
09:15jgarrett: well dang.. all those drawbacks are enough for me to say "geez i guess 3 screens are enough for now."
09:15imirkin_: your other alternative is to go get a single gpu that can do 4 screens
09:16imirkin_: any nvidia kepler or semi-recent amd board should work
09:17jgarrett: yeah... that requires buying more 'stuff' i already have a stack of old nvidia cards... tis a shame.
09:17imirkin_: 'tis indeed
09:17jgarrett: Thanks for the help though... at least i'm not doing it wrong.
09:17imirkin_: it's weird though... reverse prime works just fine for a bunch of people
09:17imirkin_: oh, are you using a compositor?
09:18imirkin_: things might improve if you use one
09:18jgarrett: just bare i3 and xorg
09:18imirkin_: or enable TearFree in your xorg.... oh but then reverse prime won't work
09:18imirkin_: i dunno if xcompmgr is enough
09:18imirkin_: you might need something fancier... mlankhorst probably knows the right term for the fanciness required
09:21jgarrett: i had thought about compton
09:22jgarrett: but never actually tried because there really isn't anything for it to render... i have almost 0 ui.
09:23imirkin_: yeah, but it'd be reducing the number of images forwarded on for rendering
09:25Karlton: jgarrett: I use dwm and compton but I also don't multi-monitors. I just use it to prevent screen tearing and also to have transparent windows
09:27jgarrett: I'll give compton a shot, will see how it goes. Thanks for all the help. I will return with an update if all goes well... if not, I'm just yanking the NV card for now.
09:38jgarrett: actually, compton compiled faster than expected. -- no change.
09:39imirkin_: i guess it's a question of getting someone with the proper knowledge to try out the specific configurations and figure out wtf is going on
09:39imirkin_: unfortunately the quantity of those people is small, and they tend to have other things to do
09:40jgarrett: truer words never spoken, you can't bribe with booze over the interwebz
09:40imirkin_: you could bribe with booze futures
09:42jgarrett: yeah, i don't see that as working well.
10:35mmturk: i'm using xf86-video-intel and xf86-video-nouveau on an optimus laptop
10:36mmturk: with dri3 the nvidia card should get turned off but it doesn't
10:36mmturk: also DRI_PRIME=0 and DRI_PRIME=1 both show intel as the opengl vendor
10:36imirkin_: mmturk: see http://nouveau.freedesktop.org/wiki/Optimus/ for some debug steps
10:38mmturk: how do i do that? i don't have vgaswitcheroo but i can show you my xrandr output
10:39imirkin_: while switcheroo isn't required
10:39imirkin_: it would certainly be good to turn it on
10:39imirkin_: as well as PM_RUNTIME
10:39mmturk: this is xrandr output: https://gist.github.com/anonymous/181c77179a05b8fbcbb2
10:39imirkin_: mmturk: did you do "xrandr --setprovideroffloadsink nouveau Intel"?
10:40mmturk: imirkin_, no i didn't
10:40imirkin_: well like i said, read the page.
10:40imirkin_: the page talks about how to set it all up and how to debug it
10:40mmturk: i followed steps on that exact page actually
10:40imirkin_: skip the vgaswitcheroo bits, they don't apply to you
10:40mmturk: i thought offloadsink didn't apply to dri3 setup
10:40imirkin_: does your intel ddx support dri3?
10:40imirkin_: it's turned off by default...
10:41mmturk: well i'm not sure, but i'm on archlinux which always uses latest stable packages
10:42mmturk: well setting output provider offload sink makes DRI_PRIME=1 work
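[editor's note: the working sequence here, per the Optimus wiki page referenced above, is render offload (the opposite direction from reverse PRIME). A sketch, with the usual caveat that provider names come from `xrandr --listproviders`:]

```
# Register nouveau as a render-offload sink for the Intel provider.
xrandr --setprovideroffloadsink nouveau Intel

# Rendered on the default (Intel) GPU:
DRI_PRIME=0 glxinfo | grep "OpenGL vendor"

# Offloaded to the NVIDIA card via nouveau:
DRI_PRIME=1 glxinfo | grep "OpenGL vendor"
```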
10:43imirkin_: build that and run it -- should tell you what your dri3 situation is
10:44imirkin_: this is how i just built it: gcc -lX11 -lX11-xcb -lxcb-dri3 -ldrm -I /usr/include/libdrm -o dri3info dri3info.c
10:45mmturk: "Unable to connect to DRI3 on display ':0'"
10:45imirkin_: so... no DRI3 :)
10:46whompy: Note the lack of DRI3 being enabled in the configure line.
10:47whompy: I think the mesa-git repo has it configured that way, but that's getting into distro talk.
10:48mmturk: i'll request dri3 enabled package
10:49imirkin_: heh, well there's a reason it's off by default :)
10:49imirkin_: dri3 is not ready for prime-time
10:49whompy: Yeah, it's buggy. You can always rebuild and try it out
10:50mmturk: ahh i see, i'll switch off nvidia card from bios until then
10:51mmturk: thanks for the help guys
10:52imirkin_: i dunno that dri3 will ever become a thing tbh
10:52imirkin_: dri2 works fine though...
11:38mmturk: how do i check the power state of the nvidia card? i guess issuing lspci turns the card on
11:39imirkin_: mmturk: it's all in the wiki i pointed you at
11:39imirkin_: please read that entire page
11:40mmturk: ok, sorry
11:40imirkin_: (you need vgaswitcheroo)
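[editor's note: with vga_switcheroo built in and debugfs mounted, the power state can be read from its debugfs file. The sample output below is illustrative, not from this user's machine.]

```
# Requires CONFIG_VGA_SWITCHEROO and a mounted debugfs.
cat /sys/kernel/debug/vgaswitcheroo/switch
# Illustrative output (IGD = integrated, DIS = discrete,
# DynOff = runtime-suspended):
#   0:IGD:+:Pwr:0000:00:02.0
#   1:DIS: :DynOff:0000:01:00.0
```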
11:44buhman: 17:52:29 imirkin_ i dunno that dri3 will ever become a thing tbh
11:45imirkin_: buhman: it's broken and i dunno that anyone's working towards fixing it
11:45buhman: imirkin_: you can fix it
11:45buhman: I believe in you
11:45imirkin_: i have no idea what dri is in the first place, so unlikely
12:03joi: imirkin_: it seems I misunderstood what is the meaning of screen->fence.current - it's a fence which is *going* to be emitted in the near future, so I can attach to it instead of creating a new one
12:03imirkin_: joi: i'd rather you create a new one than use the current fence :)
12:03imirkin_: the current one has funny semantics
12:03joi: so... take a look at http://people.freedesktop.org/~mslusarz/scratch/ and tell me which one do you like more
12:05joi: but it's going to be emitted in the future, it doesn't matter when exactly
12:07imirkin_: fun, didn't know you could declare a struct inside a struct like that and still have it be globally visible
12:17imirkin_: joi: i like v3, but just change the scratch.runout to always use REALLOC? perhaps it's not easy, in which case it's fine as-is
12:18imirkin_: joi: also s/unref_bos/nouveau_scratch_unref_bos/ or something
12:18joi: REALLOC has stupid 2nd parameter, which is not easy to calc
12:18imirkin_: well it always used REALLOC before...
12:18joi: which is not even used now
12:19imirkin_: so just hand it 0 and move on ;)
12:21imirkin_: #define os_realloc( _ptr, _old_size, _new_size ) \
12:21imirkin_: debug_realloc( __FILE__, __LINE__, __FUNCTION__, _ptr, _old_size, _new_size )
12:21imirkin_: what a waste of energy
12:56joi: imirkin_: v4 at the same address
12:58imirkin_: joi: perfect. i haven't actually tested it... will do that tonight, and assuming no major explosions, will push
12:58imirkin_: [or you probably have push access... run it against a few things... i'm particularly interested in whether it fixes Heaven]
13:01joi: I can push it
13:01joi: imirkin_: "reviewed by"?
13:02imirkin_: joi: yep
13:02imirkin_: joi: also cc stable maybe?
13:19joi: it does not fix Heaven
13:20imirkin_: o well. long shot.
13:27bonbons: pmoreau: I updated bug #82714, 3.19 behaves as earlier kernels, 4.0-rc6 crashes already in nouveau worker thread (within evo_wait)
13:29bonbons: there definitely are things that should be initialized to sane values but are not (32bit offset with a value of 0xffffffff/4 seems bad and fails)
13:42joi: imirkin_: anything else to test?
13:42imirkin_: joi: UE4 would be nice if you have it nearby, if not, then wtvr
13:42joi: i'll rerun piglit in a moment
13:43imirkin_: joi: UE4 == https://wiki.unrealengine.com/Linux_Demos
13:45joi: imirkin_: any particular one?
13:45imirkin_: joi: nope
13:46imirkin_: they're visually pretty awesome though... good heavy test of all sorts of stuff
13:46imirkin_: dunno if it'd hit the runout case though
23:20airlied: imirkin_: I just looked at trying to make xorg.conf slave devs work again, I nearly understand why I gave up the first few times