01:42 Guest44736: mhhhh
01:43 Guest44736: I can't connect to IRC anymore with my client :/
01:45 pq: Guest44736, yes, freenode is breaking up. Maybe someone is DoS'ing.
01:47 Guest44736: mhhh
01:47 Guest44736: connect by IP works
01:47 Guest44736: I think dns is just messed up
01:48 karolherbst1: and why can't I use my "normal" nick :/
01:48 karolherbst1: meh
01:49 pq: existing connections lag too
01:50 pq: have nickserv kick your ghost out - though until the network weather clears, not much point
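For reference, the usual way to reclaim a registered nick held by a stale connection is NickServ's GHOST command (the placeholders stand for the actual nick and password):

```
/msg NickServ GHOST <nick> <password>
```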
01:50 karolherbst1: well, but I am already logged in and I can't ghost the nick :/
01:50 karolherbst1: yeah I guess we have to wait then
03:19 karolherbst: well at least webchat works without issues
03:36 Tom^: karolherbst: i just need to complete the SC 2 Legacy of the Void campaign, then im installing arch again, and most likely getting nouveau with your repo. so uh saturday or sunday and im up for various tests, changes.
03:36 Tom^: even if my 780ti burns up, its for the greater good! =D
03:42 karolherbst: yeah it won't :D
03:42 karolherbst: your gpu will run at +0.05V at most
03:42 karolherbst: and this doesn't change that much
03:44 Tom^: karolherbst: i studied this a bit but i guess you already know all of this but i found it quite interesting http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/4
03:44 Tom^: karolherbst: explains a bit about how the gpu boost works from their point of view and findings
03:44 karolherbst: I saw it
03:44 karolherbst: mupuf has means of generating nice tables for stuff like that though
03:45 Tom^: so to me it sounds like if i just smack a beefier water cooler on the gpu it can boost well above what it currently is as long as its under the power budget
03:45 karolherbst: but it is useless on my gpu
03:45 karolherbst: Tom^: yeah, this is well known
03:45 karolherbst: Tom^: you can "fake" a temperature with envytools
03:45 Tom^: ah ok
03:45 karolherbst: and then you see the clock going down the nearer you come to 97°C
03:45 karolherbst: and if you hit 97° there is an emergency clock down to lowest speed
03:47 Tom^: its a quite interesting concept, and i mean wouldnt this be quite possible to implement on other gpus that didnt even have it in the first place
03:48 Tom^: just not use the vbios steps but rather have your own self made table for it :P
03:48 karolherbst: this is implemented in the driver anyway
03:48 karolherbst: mhh
03:48 karolherbst: we should respect the vbios somehow
03:48 karolherbst: but also some stuff is just calculated
03:49 karolherbst: basically what you want to have (as a user) is the gpu clocking to the highest _needed_ speed with stable setup (voltage, other parameters)
03:49 karolherbst: but without draining too much power
03:49 karolherbst: so you have to somehow respect the temperature and the power draw of the gpu
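A minimal sketch of that policy, with made-up names, thresholds and level table (this is not the nouveau implementation): clock as high as the load needs, then back off while the temperature or power readings exceed their budgets.

```c
/* Illustrative sketch only: names, thresholds and the level table are
 * invented, not taken from nouveau. It expresses the policy described
 * above: pick the lowest clock level that still covers the requested
 * speed, then back off while power or temperature exceed their limits. */
struct level { int mhz; int est_mw; };

static int pick_level(const struct level *lv, int n, int needed_mhz,
		      int temp_c, int temp_limit_c,
		      int power_mw, int power_budget_mw)
{
	int i = 0;

	/* lowest level that still satisfies the requested clock */
	while (i < n - 1 && lv[i].mhz < needed_mhz)
		i++;

	/* back off while the thermal or power budget is exceeded */
	while (i > 0 && (temp_c >= temp_limit_c ||
			 power_mw > power_budget_mw ||
			 lv[i].est_mw > power_budget_mw))
		i--;

	return i;
}
```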
03:49 Tom^: mm
03:51 karolherbst: okay, the article also says that the "base" clock we found in that table is the clock used when at full load, but not boosting
03:52 karolherbst: Tom^: also important: chip quality
03:52 karolherbst: as the article states
03:52 karolherbst: thing is, my chip seems to be of really good quality
03:52 karolherbst: so I can't test stuff I do in the "normal" case
03:53 karolherbst: my "base clock" is 705MHz, my "boost clock" is 797 MHz, but even nvidia drivers that gpu at 862MHz all the time at full load
03:53 karolherbst: and 862 MHz is the highest clock stated in my vbios
03:54 karolherbst: but even then, I can overclock this card by +135MHz to 997MHz, and it still works fine most of the time
03:54 Tom^: indeed
03:57 karolherbst: "Accordingly, the boost clock is intended to convey what kind of clockspeeds buyers can expect to see with the average GTX 680"
03:57 karolherbst: this is what I already figured
03:57 karolherbst: RSpliet: maybe I should say "expected boost clock" in the table?
03:57 karolherbst: but we should really try to find out what that number does
03:58 karolherbst: ohhhh
03:58 karolherbst: nvidia says what it is
03:58 karolherbst: The “Boost Clock” is the average clock frequency the GPU will run under load in many typical non-TDP apps that require less GPU power consumption. On average, the typical Boost Clock provided by GPU Boost in GeForce GTX 680 is 1058MHz, an improvement of just over 5%. The Boost Clock is a typical clock level achieved running a typical game in a typical environment
03:59 karolherbst: so yeah, marketing stuff
03:59 karolherbst: :D
03:59 karolherbst: and TDP clock then means the clock when they hit the power budget
04:04 karolherbst: mhh but I feel a bit bad about pushing a change which will cap nouveau to the "base clock" and then get serious performance regressions, but I think there is no other way around that for now
04:21 RSpliet: karolherbst: that's fine
04:22 RSpliet: safety > performance
04:24 karolherbst: yeah I know
04:24 karolherbst: this just means I will hurry up with the boost stuff :D
04:25 karolherbst: but for this we need the power sensors changes
04:25 karolherbst: but this is partly ready anyway
04:25 Tom^: cant you expose a variable in debugfs that lets us set the clock to various boost levels manually for now?
04:25 Tom^: :P
04:27 RSpliet: Tom^: well, we can always expose source code that you can change to set the clock to various boost levels manually ;-)
04:27 karolherbst: Tom^: I think I will print the clocks at load time
04:27 karolherbst: RSpliet: yeah, I already have patches where you can manually set the cstate
04:27 karolherbst: and clock to whatever you want
04:48 karolherbst: RSpliet: there is no problem displaying stuff read out from the vbios in a subdev_ctor, right?
05:32 jarnos: Hello, I get occasional display corruption with Nvidia Quadro FX570M when using a single display; the card has 256MB video memory and turbocache. Can the driver utilize TurboCache, and is the corruption due to running out of memory?
05:38 karolherbst: jarnos: what do you mean by turbocache?
05:39 karolherbst: ohh is turbocache this thingy, where you also get system ram?
05:39 Tom^: yea
05:40 jarnos: karolherbst, https://en.wikipedia.org/wiki/TurboCache
05:40 Tom^: old card tho :p
05:43 jarnos: When using the NVIDIA binary driver, it sometimes can not run Mythfrontend while Chrome is running. Chrome takes a lot of video memory. I have not experienced such issues with any other hardware. Even some old mini notebook with an Intel atom does not complain about not enough resources, but it is slower of course.
05:44 karolherbst: jarnos: and yes, nouveau doesn't handle the situation nicely yet where there is not enough vram
05:44 jarnos: karolherbst, :(
05:45 jarnos: How do you know how much of the video ram is in use? Can nouveau use Turbocache?
05:45 RSpliet: oh god, never expected to hear that term again
05:46 karolherbst: uhhh there is a patent for turbocache :D
05:47 jarnos: karolherbst, meaning it can not be utilized by open source driver?
05:47 karolherbst: it can, it is in fact easier to RE in such a case
05:47 RSpliet: TurboCache is neither turbo nor a cache... well, we can discuss that latter one, but meh
05:47 karolherbst: :D
05:47 karolherbst: right
05:47 karolherbst: a lot of marketing bs
05:48 RSpliet: jarnos: I expect nouveau not to implement stolen system ram
05:49 jarnos: RSpliet, oh, but there is much more system ram than video ram in my laptop.
05:49 RSpliet: that's true on every machine
05:50 jarnos: http://www8.hp.com/h20195/v2/getpdf.aspx/c04142133.pdf?ver=10 advertises it has "512 MB TurboCache"
05:50 RSpliet: yeah, sorry, I'm not the most knowledgeable on TurboCache... I don't quite understand the difference between GART and TurboCache
05:54 pq: GART is system memory made available to the gfx card, and Turbocache is... system memory made available to the gfx card - but in the latter case CPU access to that memory is more... difficult? *shrug* :-P
05:56 RSpliet: pq: so conceptually they're the same thing I'd say, but does TC have additional architectural support on the GPU side compared to GART?
05:56 pq: https://en.wikipedia.org/wiki/Graphics_address_remapping_table talks about an IOMMU
05:57 pq: but https://en.wikipedia.org/wiki/TurboCache does not
05:57 pq: FWIW, depending on how much you trust wikipedia here
05:57 RSpliet: jarnos: you might want to share your issues on the mailinglist rather, the two developers most likely to be of assistance here are 1) an Australian and 2) someone who doesn't like IRC :-P
06:00 karolherbst: :D
06:01 karolherbst: pq I bet there is some sort of memory controller of the gpu, which just maps addresses to local memory and system memory
06:01 karolherbst: *on
06:02 pq: karolherbst, so an IOMMU with the GPU (TC) vs. IOMMU with the motherboard chipset (GART)?
06:03 karolherbst: maybe
06:04 pq: makes me wonder if TC memory must be physically contiguous and maybe even at a fixed address... GART didn't have to be physically contiguous, did it?
06:04 karolherbst: pq: it seems that with TC the total system memory reported is lower
06:04 karolherbst: at least for windows
06:05 pq: right, stolen away for good, not re-usable for anything else
06:05 karolherbst: and it seems the controller can do both at the same time (handle requests to local memory and to system memory)
06:05 karolherbst: which would explain the performance benefits
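A conceptual sketch of what such a controller could be doing, with made-up types that are neither nouveau code nor hardware-accurate: GPU addresses below the VRAM size are served locally, everything above is remapped into system-RAM pages carved out for the GPU.

```c
/* Conceptual illustration only (invented types, not real hardware or
 * nouveau code): a GPU address either falls inside local VRAM or is
 * remapped through a page list into system RAM set aside for the GPU. */
#include <stddef.h>
#include <stdint.h>

struct gpu_mem {
	uint8_t  *vram;       /* small on-board memory */
	size_t    vram_size;
	uint8_t **sys_pages;  /* system-RAM pages mapped for the GPU */
	size_t    page_size;
};

static uint8_t *resolve(struct gpu_mem *m, uint64_t addr)
{
	if (addr < m->vram_size)
		return m->vram + addr;                /* local memory hit */

	addr -= m->vram_size;                         /* spill to system RAM */
	return m->sys_pages[addr / m->page_size] + addr % m->page_size;
}
```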
06:06 pq: makes TC feel either like a more crude and simple solution, or maybe trying to avoid AGP insanity :-P
06:06 karolherbst: its pcie
06:06 pq: oh, like using VRAM as a cache for the TC memory?
06:06 pq: what, AGP GART?
06:06 karolherbst: mhh?
06:07 karolherbst: those gpus are pcie gpus
06:07 karolherbst: what does it have to do with agp?
06:07 pq: oh you meant as TC advantage
06:07 pq: but doesn't GART exist also on pcie?
06:07 karolherbst: I don't know, you started to mention AGP ;)
06:07 karolherbst: there are AGP 6200 gpus though
06:08 karolherbst: but a "GeForce 6200 TC" is PCIe only
06:08 pq: alright
06:08 karolherbst: and it seems the onboard memory amount is a bit lower
06:10 pq: so one might say that the turbocache is actually the VRAM, and stolen system RAM is the "normal memory" for the GPU ;-D
06:10 karolherbst: don't think so
06:10 karolherbst: but might be
06:11 pq: too small but faster memory - isn't that the definition of cache? :-)
06:11 karolherbst: I don't know if it's faster
06:12 karolherbst: but it might be
06:12 karolherbst: how fast is 5.6GB/s for nv44 era?
06:13 pq: not just throughput but latency counts too
06:13 karolherbst: yeah right
06:13 karolherbst: but I think those TC gpus had DDR
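For a rough sense of where a figure like 5.6 GB/s can come from (which board configuration it actually refers to is not established here, so the numbers below are only an assumption): memory bandwidth is bus width times effective transfer rate, and one combination yielding exactly that figure is a 64-bit DDR interface at an effective 700 MT/s:

$$\frac{64\ \text{bit}}{8\ \text{bit/byte}} \times 700 \times 10^{6}\ \text{T/s} = 5.6\ \text{GB/s}$$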
06:13 pq: and I've no idea, I didn't even think pcie was a thing yet at that time
06:13 karolherbst: ohh the FX 570M seems to have gddr3
06:13 karolherbst: pq it was the start
06:14 karolherbst: most of the gpus were available with AGP and PCIe
06:14 karolherbst: there are even 5xxx gpus with pcie
06:14 karolherbst: okay and a NV18B from 2004
06:15 pq: and the funny agp-pcie bridges going both ways depending
06:49 Tom^: karolherbst: which of your branches are the one most ahead ?
06:49 karolherbst: Tom^: depends on how stable you want it
06:49 karolherbst: master_karol_no_touchy is bleeding edge stuff
06:50 karolherbst: but well
06:50 karolherbst: there is a bit missing
06:50 karolherbst: wait a sec
06:50 Tom^: i'll stick to that then. :p
06:50 karolherbst: or not?
06:50 karolherbst: mhhh
06:50 karolherbst: well
06:51 karolherbst: this branch can be totally unstable
06:51 karolherbst: there is dynamic reclocking stuff in it
06:51 karolherbst: and it may mess up your gpu
06:51 Tom^: yea i dont want dynamic
06:51 Tom^: yet :p
06:51 karolherbst: mhhh
06:51 karolherbst: then I don't have anything that good sadly :/
06:52 karolherbst: though
06:52 karolherbst: you could just revert this commit: https://github.com/karolherbst/nouveau/commit/3e4741302fe518d3d323a8c578699d5b5bfafe92
06:52 Tom^: 3$ for nouveau=dynamyreclock=0
06:52 karolherbst: mhhh
06:52 Tom^: *dynamic
06:53 karolherbst: this will be difficult
06:53 karolherbst: possible, but a bit difficult
07:07 Tom^: dont you respect max voltage anyways?
07:07 Tom^: doubt it will mess up my gpu then, or am i missing something
07:10 john_cephalopoda: Hi
07:10 Tom^: i guess i'll stick with my very own branch until you feel ready for burn testing my 780 :p
07:17 karolherbst: Tom^: I respect max voltage, but this is only because we can't go over it
07:17 karolherbst: reclocking just fails
07:17 karolherbst: well you can test this branch
07:17 karolherbst: you should just revert the dyn reclocking commit
07:18 Tom^: well then, going above stable mhz would simply just freeze the card imo :p
07:18 Tom^: but thats just from my overclocking experience ^_^
07:19 Tom^: but fine i'll revert it
07:22 karolherbst: yeah it will just freeze the system
07:30 gryffus_: any news on the https://bugs.freedesktop.org/show_bug.cgi?id=93004 bug?
08:03 nchauvet_: karolherbst, I have this ACPI Warning when I try to load bbswitch with load_state=-1 unload_state=1
08:04 nchauvet_: http://fpaste.org/295186/48640104/
08:09 nchauvet_: I still only have one provider when rebooting to an intel/nouveau-only setup
08:21 pmoreau: nchauvet_: Looks like the regular ACPI warning from Nouveau. Nothing to worry about. :-)
08:23 nchauvet_: hum, I'm not using bbswitch with nouveau, but with intel/nvidia. Now the problem is that xrandr --listproviders doesn't output a nouveau provider (only the intel one) on my intel/nouveau setup
08:24 pmoreau: I have no idea how bbswitch is working, but could you try to provide the output from dmesg please?
08:25 nchauvet_: http://paste.fedoraproject.org/295192/41541144
08:27 nchauvet_: pmoreau, I only have one provider here:
08:28 pmoreau: It doesn't look like Nouveau is listed when the card is sleeping
08:30 pmoreau: Or rather, if the card was put to sleep before X loaded.
08:32 nchauvet_: well, the card isn't supposed to be sleeping, bbswitch reports the card is ON at least
08:33 pmoreau: From your dmesg, the card has been oscillating between sleeping and resuming.
08:34 nchauvet_: you mean here: [ 12.915024] nouveau 0000:01:00.0: DRM: suspending kernel object tree...
08:34 nchauvet_: [ 17.995099] nouveau 0000:01:00.0: DRM: resuming kernel object tree..
08:34 nchauvet_: ?
08:34 pmoreau: Yes
08:35 nchauvet_: because that was during boot, not something like systemctl suspend or similar
08:37 pmoreau: My guess is: Nouveau auto-suspends by default as you have an Optimus setup "pci 0000:01:00.0: optimus capabilities: enabled, status dynamic power", and something (X, bbswitch?) pings it from time to time, which causes it to wakeup.
08:44 pmoreau: You could boot with "nouveau.runpm=0" to prevent this from happening / be sure this is the problem (or not).
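The same setting can also be applied persistently through a modprobe option file instead of the kernel command line; a minimal example (the file name is just a common convention):

```
# /etc/modprobe.d/nouveau.conf
options nouveau runpm=0
```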
08:56 karolherbst: nchauvet_: ignore this warning
08:57 karolherbst: nchauvet_: and you shouldn't use bbswitch with nouveau; actually you can't, because bbswitch won't do anything if it detects a driver being loaded for the gpu
08:59 nchauvet_: karolherbst, yeah, I'm only using bbswitch with nvidia, well understood, but once I've rebooted to an intel/nouveau-only setup, xrandr --listproviders only outputs my intel provider and not nouveau anymore
09:01 karolherbst: nchauvet_: does dmesg look suspicious?
09:01 nchauvet_: so something like DRI_PRIME=1 vdpauinfo doesn't use the nouveau vdpau backend (which is correctly installed) but the intel one
09:02 karolherbst: imirkin: "With the Unigine Valley tech demo the Nouveau performance is reported to be greater than the closed-source NVIDIA driver, but there was a difference in rendering quality. With NVIDIA's Linux driver the rendering quality was much greater than shown by the Nouveau driver." mhhh
09:02 nchauvet_: karolherbst, pmoreau http://paste.fedoraproject.org/295202/44864372 (with runpm=0) which indeed prevent the suspend/resume
09:05 karolherbst: nchauvet_: okay that looks fine
09:05 karolherbst: mhh
09:05 karolherbst: then the Xorg log
09:09 nchauvet_: http://paste.fedoraproject.org/295205/14486441
09:11 nchauvet_: Unknown chipset: NV117, weird, my chipset was known previously
09:12 nchauvet_: lspci -d 10de:*
09:12 nchauvet_: 01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 850M] (rev a2)
09:12 nchauvet_: even: 01:00.0 3D controller [0302]: NVIDIA Corporation GM107M [GeForce GTX 850M] [10de:1391] (rev a2)
09:14 nchauvet_: this is xf86-drv-nouveau 1.0.12 (pre-release from October 8th this year)
09:14 karolherbst: mhhh
09:14 karolherbst: ahh right
09:14 karolherbst: you can't use the nouveau ddx
09:14 karolherbst: you have to use modesetting ddx
09:15 karolherbst: cause thats first gen maxwell
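For anyone preferring an explicit configuration over removing the nouveau ddx package, an untested sketch of a Device section that forces the modesetting driver for this card (the BusID matches the 01:00.0 address quoted above):

```
Section "Device"
    Identifier "nouveau-gpu"
    Driver     "modesetting"
    BusID      "PCI:1:0:0"
EndSection
```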
09:15 nchauvet_: okay, so this means the discard mechanism doesn't work? because I don't see modesetting being loaded here?
09:16 nchauvet_: reboot with nouveau ddx removed
09:31 nchauvet_: it doesn't seem to show modesetting being enabled
09:33 karolherbst: nchauvet_: do you have a file "modesetting_drv.so" installed?
09:33 nchauvet_: karolherbst, yes, bundled in Xorg server 1.18
09:35 karolherbst: okay, nice
09:35 karolherbst: do you have any xorg.conf file?
09:36 nchauvet_: karolherbst, no, only xorg.conf.d files related to input
09:36 karolherbst: mhhhhhhh
09:37 karolherbst: nchauvet_: would you like to enable dri3 on the intel ddx?
09:37 karolherbst: because then you can use prime offloading without all that xrandr crap
09:38 karolherbst: but then reverse prime doesn't work :/
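Enabling DRI3 on the intel ddx is normally a one-option change; a sketch of the relevant xorg.conf snippet, assuming xf86-video-intel is the driver in use:

```
Section "Device"
    Identifier "intel-gpu"
    Driver     "intel"
    Option     "DRI" "3"
EndSection
```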
09:40 imirkin: gnurou: ah, that's interesting. if you make me an mmt trace of the blob, i can try to stare at it to see what it does differently from nouveau.
09:41 karolherbst: I never used prime offloading with the modesetting ddx myself though
09:43 nchauvet_: karolherbst, with DRI3 enabled, DRI_PRIME=1 glxinfo outputs the appropriate nouveau backend, but not vdpauinfo!
09:44 nchauvet_: that being said, I haven't extracted the video firmware for maxwell, so I expect it won't work because of that (and maybe other reasons)
09:45 imirkin: nchauvet_: vdpau doesn't do dri3 for now, i think
09:45 imirkin: nchauvet_: oh, also there's no vdpau for maxwell
09:46 imirkin: no one's even looked at it afaik
09:46 imirkin: beyond determining that it's totally changed from earlier gens
09:47 mlankhorst: maybe not, depends a bit..
09:48 karolherbst: nchauvet_: yes, you need the firmware files for video accel
09:48 karolherbst: ohh
09:48 karolherbst: then it doesn't even work :D
09:50 karolherbst: imirkin: did you mean to answer me or did I miss something gnurou told you?
09:51 imirkin: nchauvet_ was talking about vdpau for maxwell, that's what i was answering.
09:54 mlankhorst: we haven't really looked at maxwell, maybe fifo is the same but changed slightly, then again maybe not..
09:54 karolherbst: imirkin: I meant this: "gnurou: ah, that's interesting. if you make me an mmt trace of the blob, i can try to stare at it to see what it does differently from nouveau."
09:55 imirkin: karolherbst: yes, that was in reference to something he said earlier
09:55 imirkin: mlankhorst: well, the engines are all different... there's one engine now instead of 3
09:55 imirkin: mlankhorst: but yeah, the actual encoding of stuff could be identical
09:57 karolherbst: imirkin: k
11:13 imirkin: skeggsb_: did you actually test nouveau_vieux with your patches?
13:05 karolherbst: anybody any idea what those unk values could mean? hex and dec parsing of the same data: https://gist.github.com/karolherbst/28fcfc36013873249077
13:05 karolherbst: could be mW * 10, but well
13:06 imirkin_: clearly a high/low of some sort
13:07 karolherbst: yeah
13:07 karolherbst: and something related to the clocks
13:07 karolherbst: they decrease as well
13:07 imirkin_: what are those clocks? core?
13:07 karolherbst: yeah
13:07 karolherbst: values like in the cstate table
13:07 karolherbst: so /2 => real clock
13:07 karolherbst: but I think only I have such values there
13:08 karolherbst: all other vbios have only 0 for both
13:08 karolherbst: ohh no
13:08 karolherbst: found a second one
13:10 karolherbst: added output from the second bios: https://gist.github.com/karolherbst/28fcfc36013873249077
13:11 karolherbst: and the other card has a power budget of 100W
13:11 karolherbst: mine has 80W
13:11 imirkin_: could they be clocks for e.g. vdec?
13:11 imirkin_: /10 of course
13:11 karolherbst: mhhh
13:11 karolherbst: don't think so
13:11 karolherbst: this table is in like every kepler vbios
13:11 karolherbst: but why only two of them have these?
13:12 imirkin_: ah... dunno
13:12 karolherbst: it looks power related
13:12 karolherbst: my highest entry: 74.99W
13:12 karolherbst: power budget: 80W
13:12 karolherbst: the other card: 92.40W, budget: 100W
13:12 karolherbst: could make sense
13:13 imirkin_: could :)
13:13 karolherbst: could be the estimated power consumption
13:13 karolherbst: first: everything at full load, second: only core at full load
13:13 karolherbst: or something like that
13:14 karolherbst: the other is also a gk104
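If the "mW * 10" guess above is right, decoding is just a division by 100, and both boards land plausibly just under their budgets (speculative; the raw values 7499 and 9240 are inferred from the quoted watt figures, not read from the gist):

$$7499 \times 10\ \text{mW} = 74.99\ \text{W} \le 80\ \text{W}, \qquad 9240 \times 10\ \text{mW} = 92.40\ \text{W} \le 100\ \text{W}$$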
13:24 imirkin_: jkucia: btw, you might be interested to know that your program *totally* doesn't work on i965
13:24 imirkin_: jkucia: haven't checked into why. i think it thinks that the texture is incomplete... width = 1, height = 0, levels = 0
13:24 imirkin_: jkucia: both on mesa 10.3.7 and master
13:58 Tom^: karolherbst: hm yea the dyn clock commit doesnt revert because of your respect max voltage commit
13:59 Tom^: clk->ucstate is the first thing i noticed that is added in dynclock and used in respect volt :p
13:59 karolherbst: ohh right
13:59 karolherbst: meh, crap
14:00 karolherbst: Tom^: then checkout 4e2ef5f700bf2912f71eefe877e9e9d1d4f40e73
14:00 karolherbst: and cherry-pick a2d3c9bd8988118acfd2f569c90336651aeb0297
14:06 karolherbst: I think over the weekend I will clean up my repository :D
14:06 karolherbst: there is a lot out of sync
14:07 Tom^: and a Tom branch that isnt needed any more :p
14:08 jkucia: imirkin_: I know that it doesn't work, thanks
14:08 jkucia: imirkin_: and thanks for reverse bisecting Bug 93110
14:08 imirkin_: np. should hopefully be in the next 11.0.x release
14:12 jkucia: imirkin_: also I have yet another problem on radeonsi but it is not directly related to this test program ;)
14:12 imirkin_: jkucia: try #radeon for that :) but having a sample prog/trace is going to be equally helpful there.
14:14 jkucia: I will, but I haven't had time to investigate the exact source of the issue yet.
14:15 imirkin_: well, we do on occasion debug entire games, so even if you don't have a *trivial* repro, that may be good enough.
14:16 imirkin_: [that's where apitraces come in _really_ useful]
14:17 Tom^: karolherbst: can nouveau monitor gpu temp yet?
14:17 imirkin_: Tom^: run 'sensors'
14:18 Tom^: hm ok good, gonna have to keep an eye on it since im a bit above the normal clocks :p
14:20 jkucia: imirkin_: I hope to find the time to investigate the issue. Otherwise I will file a bug with an apitrace.
14:21 imirkin_: jkucia: sounds good. there are actual paid devs working on radeonsi, so you should be able to get competent answers :)
14:21 jkucia: I expect it to be something less surprising than the bug with textureSize() but who knows :)
14:25 imirkin_: let's hope so
14:25 koz_: just purchased himself a GTX 680.
14:25 koz_: Let's see what happens...
14:29 Tom^: karolherbst: i think im on too low a gpu core voltage on 0f, according to sensors its 1.14v while i recall it being 1.175 on the blob and windows, it also made X freeze https://gist.github.com/anonymous/adc869aab263add1c7b7
14:30 karolherbst: Tom^: which clock?
14:30 Tom^: 1177 core, 6999 mem
14:31 Tom^: i guess it could be the clock being too high also, but the voltage according to sensors is low too.
14:31 karolherbst: let me check
14:32 karolherbst: Tom^: core clock being too high doesn't really matter
14:32 karolherbst: it just depends on the voltage
14:33 Tom^: mk
14:33 karolherbst: okay, it is min 1.14V according to the vbios
14:33 karolherbst: min voltage
14:33 Tom^: or does the volt dynamically change?
14:34 Tom^: because im reading it at idling
14:34 karolherbst: and 1.28V max voltage
14:34 karolherbst: no
14:34 karolherbst: but the right value could also be 1.23 max voltage
14:35 karolherbst: and 1.175 is somewhere in the middle of that
14:35 karolherbst: mhhh
14:35 Tom^: 1.175 was also on the 1097 clock
14:35 karolherbst: ohh right
14:35 Tom^: so i would assume 1177 has a higher volt according to the vbios
14:36 karolherbst: 1.03V min and 1.27 max voltage then
14:37 karolherbst: has 1097 MHz
14:37 karolherbst: nouveau will drive at 1.03V with 1097 MHz
14:37 karolherbst: cstate 35
14:37 karolherbst: Tom^: do you want to test if the 35th cstate is more stable?
14:38 Tom^: sure
14:39 Tom^: still sounds like its gonna run quite lean :p
14:41 Tom^: ya it froze
14:42 karolherbst: k
14:42 karolherbst: then we really need a higher voltage
14:42 karolherbst: but without temperature monitoring and stuff I don't think we should do it yet
14:42 karolherbst: there is a way to downclock the gpu automatically when it hits a specific temperature
14:43 karolherbst: but I don't know how to do that
14:43 Tom^: ah ok
14:43 karolherbst: mupuf_: knows :p
14:44 karolherbst: Tom^: you could try something out though
14:44 karolherbst: Tom^: this is the function which calculates the real voltage out of the voltage map table id: https://github.com/karolherbst/nouveau/blob/master_karol_no_touchy/drm/nouveau/nvkm/subdev/volt/base.c#L69-L88
14:45 karolherbst: vmap evaluates to true for your gpu
14:45 karolherbst: so you could again replace info.min with info.max
14:45 karolherbst: or do something smart: (info.min + info.max) / 2
14:45 Tom^: or just set it to a value myself?
14:45 Tom^: :P
14:45 karolherbst: this will have an influence on the available cstates
14:45 karolherbst: no
14:45 karolherbst: because then every cstate evaluates to the same voltage
14:46 karolherbst: also the really high ones
14:46 karolherbst: which aren't possible on your gpu
14:46 karolherbst: yeah, maybe change this line: "return info.min;"
14:46 karolherbst: and just return info.max
14:46 karolherbst: and then, above, info.min + ret has to become info.max + ret
14:47 karolherbst: this should work as we already tried that I guess
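A minimal sketch of the kind of change being discussed, using an illustrative entry type rather than the actual nouveau structures from nvkm/subdev/volt/base.c; the point is simply returning the top (or the midpoint) of a voltage-map range instead of its bottom:

```c
/* Illustrative only: the struct and function names do not match the real
 * nouveau code; they just mirror the experiment described above. */
struct vmap_entry { int min_uv; int max_uv; };

static int vmap_voltage(const struct vmap_entry *info, int ret_uv)
{
	/* original behaviour: lowest voltage of the range */
	/* return info->min_uv + ret_uv; */

	/* experiment: top of the range ... */
	return info->max_uv + ret_uv;

	/* ... or, as a compromise, the midpoint:
	 * return (info->min_uv + info->max_uv) / 2 + ret_uv;
	 */
}
```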
14:47 Tom^: but wont that set the volt to the max volt that table exposes?
14:47 karolherbst: no
14:47 karolherbst: info is a row entry
14:47 karolherbst: check nvbios vbios.rom
14:47 karolherbst: Voltage map table
14:47 karolherbst: the ids are the voltage values in the cstates
14:48 Tom^: will do
14:48 karolherbst: so if a cstate has voltage = 20, look at voltage map table id 20
14:48 karolherbst: there you have the min/max values
14:48 karolherbst: -- ID = 20, link: 47, voltage_min = 825000, voltage_max = 887500 [µV] --
14:48 karolherbst: link means, you have to add the values of id = 0x47 (hex value)
14:48 karolherbst: so this one: -- ID = 71, link: 6c, voltage_min = 0, voltage_max = 12500 [µV] --
14:48 karolherbst: and so on
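A rough sketch of the lookup just described, assuming an illustrative table layout rather than the real vbios parser: start at the id referenced by the cstate and keep adding the min/max of every linked entry until an entry without a link is reached.

```c
/* Illustrative sketch, not the real vbios parser: resolve a voltage map
 * id by summing min/max along the "link" chain, e.g. id 20 links to
 * 0x47, 0x47 links to 0x6c, and so on. The end-of-chain marker used
 * here is an assumption. */
#include <stdint.h>

#define NO_LINK 0xff

struct vmap_row { uint8_t link; uint32_t min_uv; uint32_t max_uv; };

static void vmap_resolve(const struct vmap_row *table, uint8_t id,
			 uint32_t *min_uv, uint32_t *max_uv)
{
	*min_uv = 0;
	*max_uv = 0;

	while (id != NO_LINK) {
		*min_uv += table[id].min_uv;
		*max_uv += table[id].max_uv;
		id = table[id].link;
	}
}
```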
14:49 karolherbst: anyway, I will be off for today ;)
14:49 karolherbst: I would try out the /min/max/ replacement first
14:49 Tom^: roger.
14:49 karolherbst: otherwise you might get mapping issues later and volting just fails
14:49 karolherbst: you will get a much lower clock then
14:50 karolherbst: maybe
14:50 karolherbst: maybe you get the same one
14:50 karolherbst: but with 1.21V then
14:50 karolherbst: 1.22
14:50 karolherbst: but that doesn't matter
14:50 karolherbst: okay
14:50 karolherbst: will be gone now :p
14:50 karolherbst: cya
19:35 mupuf_: tadam! The jetson finally has an ssd!
19:36 mupuf_: that was a lot of work as I had to move all the data out of it to my new drives on the desktop
19:36 mupuf_: but hopefully, I will now get decent IO performance!
19:38 mupuf_: hmm, not sure it is much better :s
19:39 mupuf_: well, it is much better, but it still is bad. Once it is cached, everything is fine
19:41 mupuf_: to give you a sense of scale, make on nouveau OOT spends about 1 to 2 minutes checking all the dependencies
19:47 gnurou: imirkin: I will try and get permission to send a fix for this; we found the cause to be a change in the texture descriptor format.
19:49 mupuf_: well, 3m20.908s vs 0m6.886s on my i7-4790K to compile nouveau :D
19:49 gnurou: gm1 can still use Kepler's, but gm2 has its own
19:50 mupuf_: gnurou: I wonder why we never saw the problem before ... oh right :D
19:50 gnurou: mupuf_: don't get me started :)
19:51 mupuf_: I wonder if we should try to find the pdaemon command that initiates the DMA transfer
19:51 gnurou: but hey, we are making progress with the fw too
19:52 gnurou: even though only the final result will be visible to the community
19:52 mupuf_: this way, devs could at least get access to the fw before nvidia's lawyers feel like risking the exposure
19:52 mupuf_: but good to hear!
22:00 koz_: OK, I seem to be able to get my 680 to run at 0a, but nothing higher.
23:09 koz_: If my 680 will only reclock to 0a, is there anything I can do to find out what's going wrong at higher pstates?