01:43Guest44736: I can't connect to IRC anymore with my client :/
01:45pq: Guest44736, yes, freenode is breaking up. Maybe someone is DoS'ing.
01:47Guest44736: connect by IP works
01:47Guest44736: I think dns is just messed up
01:48karolherbst1: and why can't I use my "normal" nick :/
01:49pq: existing connections lag too
01:50pq: have nickserv kick your ghost out - though until the network weather clears, not much point
01:50karolherbst1: well, but I am already logged in and I can't ghost the nick :/
01:50karolherbst1: yeah I guess we have to wait then
03:19karolherbst: well at least webchat works without issues
03:36Tom^: karolherbst: i just need to complete the SC2 Legacy of the Void campaign, then im installing arch again, and most likely getting nouveau with your repo. so uh saturday or sunday and im up for various tests, changes.
03:36Tom^: even if my 780ti burns up, it's for the greater good! =D
03:42karolherbst: yeah it won't :D
03:42karolherbst: your gpu will run at +0.05V at most
03:42karolherbst: and this doesn't change that much
03:44Tom^: karolherbst: i studied this a bit but i guess you already know all of this but i found it quite interesting http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/4
03:44Tom^: karolherbst: explains a bit about how the gpu boost works from their point of view and findings
03:44karolherbst: I saw it
03:44karolherbst: mupuf has means of generating nice tables for stuff like that though
03:45Tom^: so to me it sounds like if i just smack on a beefier water cooler on the gpu it can boost well above what it currently is as long as its under the power budget
03:45karolherbst: but it is useless on my gpu
03:45karolherbst: Tom^: yeah, this is well known
03:45karolherbst: Tom^: you can "fake" a temperature with envytools
03:45Tom^: ah ok
03:45karolherbst: and then you see the clock going down the nearer you come to 97°C
03:45karolherbst: and if you hit 97° there is an emergency clock down to lowest speed
03:47Tom^: its quite an interesting concept, and i mean wouldnt this be quite possible to implement on other gpus that didnt even have it in the first place
03:48Tom^: just not use the vbios steps but rather have your own self made table for it :P
03:48karolherbst: this is implemented in the driver anyway
03:48karolherbst: we should respect the vbios somehow
03:48karolherbst: but also some stuff is just calculated
03:49karolherbst: basically what you want to have (as a user) is the gpu clocking to the highest _needed_ speed with stable setup (voltage, other parameters)
03:49karolherbst: but without draining too much power
03:49karolherbst: so you have to somehow respect the temperature and power draw of the gpu
03:51karolherbst: okay, the article also says that the "base" clock we found in that table is the clock used when at full load, but not boosting
03:52karolherbst: Tom^: also important: chip quality
03:52karolherbst: as the article states
03:52karolherbst: thing is, my chip seems to have a really good quality
03:52karolherbst: so I can't test stuff I do in the "normal" case
03:53karolherbst: my "base clock" is 705MHz, my "boost clock" is 797 MHz, but even nvidia drives that gpu at 862MHz all the time at full load
03:53karolherbst: and 862 MHz is the highest clock stated in my vbios
03:54karolherbst: but even then, I can overclock this card by +135MHz to 997MHz, and it still works fine most of the time
03:57karolherbst: "Accordingly, the boost clock is intended to convey what kind of clockspeeds buyers can expect to see with the average GTX 680"
03:57karolherbst: this is what I already figured
03:57karolherbst: RSpliet: maybe I should say "expected boost clock" in the table?
03:57karolherbst: but we should really try to find out what that number does
03:58karolherbst: nvidia says what it is
03:58karolherbst: The “Boost Clock” is the average clock frequency the GPU will run under load in many typical non-TDP apps that require less GPU power consumption. On average, the typical Boost Clock provided by GPU Boost in GeForce GTX 680 is 1058MHz, an improvement of just over 5%. The Boost Clock is a typical clock level achieved running a typical game in a typical environment
03:59karolherbst: so yeah, marketing stuff
03:59karolherbst: and TDP clock then means when they hit the power budget
04:04karolherbst: mhh but I feel a bit bad about pushing a change which will cap nouveau to the "base clock" and then get serious performance regressions, but I think there is no way around that for now
04:21RSpliet: karolherbst: that's fine
04:22RSpliet: safety > performance
04:24karolherbst: yeah I know
04:24karolherbst: this just means I will hurry up with the boost stuff :D
04:25karolherbst: but for this we need the power sensors changes
04:25karolherbst: but this is partly ready anyway
04:25Tom^: cant you expose a variable in debugfs that lets us set the clock to various boost levels manually for now?
04:27RSpliet: Tom^: well, we can always expose source code that you can change to set the clock to various boost levels manually ;-)
04:27karolherbst: Tom^: I think I will print the clocks at load time
04:27karolherbst: RSpliet: yeah, I have already patches where you can manually set the cstate
04:27karolherbst: and clock to whatever you want
04:48karolherbst: RSpliet: there is no problem displaying stuff read out from the vbios in a subdev_ctor, right?
05:32jarnos: Hello, I get occasional display corruption with Nvidia Quadro FX570M when using single display; the card has 256MB video memory and turbocache. Can driver utilize Turbocache and is the corruption due to running out of memory?
05:38karolherbst: jarnos: what do you mean by turbocache?
05:39karolherbst: ohh is turbocache this thingy, where you also get system ram?
05:40jarnos: karolherbst, https://en.wikipedia.org/wiki/TurboCache
05:40Tom^: old card tho :p
05:43jarnos: When using the NVIDIA binary driver, I sometimes can not use Mythfrontend while Chrome is running. Chrome takes a lot of video memory. I have not experienced such issues with any other hardware. Even some old mini notebook with an Intel atom does not complain about not enough resources, but it is slower of course.
05:44karolherbst: jarnos: and yes, nouveau doesn't handle the situation nicely yet where there is not enough vram
05:44jarnos: karolherbst, :(
05:45jarnos: How do you know how much of the video ram is in use? Can nouveau use Turbocache?
05:45RSpliet: oh god, never expected to hear that term again
05:46karolherbst: uhhh there is a patent for turbocache :D
05:47jarnos: karolherbst, meaning it can not be utilized by open source driver?
05:47karolherbst: it can, it is in fact easier to RE in such a case
05:47RSpliet: TurboCache is neither turbo nor a cache... well, we can discuss that latter one, but meh
05:47karolherbst: a lot of marketing bs
05:48RSpliet: jarnos: I expect nouveau not to implement stolen system ram
05:49jarnos: RSpliet, oh, but there is much more system ram than video ram in my laptop.
05:49RSpliet: that's true on every machine
05:50jarnos: http://www8.hp.com/h20195/v2/getpdf.aspx/c04142133.pdf?ver=10 advertises it has "512 MB TurboCache"
05:50RSpliet: yeah, sorry, I'm not the most knowledgeable on TurboCache... I don't quite understand the difference between GART and TurboCache
05:54pq: GART is system memory made available to the gfx card, and Turbocache is... system memory made available to the gfx card - but in the latter case CPU access to that memory is more... difficult? *shrug* :-P
05:56RSpliet: pq: so conceptually they're the same thing I'd say, but does TC have additional architectural support on the GPU side compared to GART?
05:56pq: https://en.wikipedia.org/wiki/Graphics_address_remapping_table talks about an IOMMU
05:57pq: but https://en.wikipedia.org/wiki/TurboCache does not
05:57pq: FWIW, how much you trust wikipedia here
05:57RSpliet: jarnos: you might want to share your issues on the mailinglist rather, the two developers most likely to be of assistance here are 1) an Australian and 2) someone who doesn't like IRC :-P
06:01karolherbst: pq I bet there is some sort of memory controller of the gpu, which just maps addresses to local memory and system memory
06:02pq: karolherbst, so an IOMMU with the GPU (TC) vs. IOMMU with the motherboard chipset (GART)?
06:04pq: makes me wonder if TC memory must be physically contiguous and maybe even at a fixed address... GART didn't have to be physically contiguous, did it?
06:04karolherbst: pq: it seems with TC the total system memory reported is lower
06:04karolherbst: at least for windows
06:05pq: right, stolen away for good, not re-usable for anything else
06:05karolherbst: and as it seems, the controller can do both at the same time (handle requests to local memory and system memory)
06:05karolherbst: which would explain performance benefits
06:06pq: makes TC feel either like a more crude and simple solution, or maybe trying to avoid AGP insanity :-P
06:06karolherbst: its pcie
06:06pq: oh, like using VRAM as a cache for the TC memory?
06:06pq: what, AGP GART?
06:07karolherbst: those gpus are pcie gpus
06:07karolherbst: what does it have to do with agp?
06:07pq: oh you meant as TC advantage
06:07pq: but doesn't GART exist also on pcie?
06:07karolherbst: I don't know, you started to mention AGP ;)
06:07karolherbst: there are AGP 6200 gpus though
06:08karolherbst: but a "GeForce 6200 TC" is PCIe only
06:08karolherbst: and it seems the onboard memory amount is a bit lower
06:10pq: so one might say that the turbocache is actually the VRAM, and stolen system RAM is the "normal memory" for the GPU ;-D
06:10karolherbst: don't think so
06:10karolherbst: but might be
06:11pq: too small but faster memory - isn't that the definition of cache? :-)
06:11karolherbst: I don't know if it's faster
06:12karolherbst: but it might
06:12karolherbst: how fast is 5.6GB/s for nv44 era?
06:13pq: not just throughput but latency counts too
06:13karolherbst: yeah right
06:13karolherbst: but I think those TC gpus had DDR
06:13pq: and I've no idea, I didn't even think pcie was a thing yet at that time
06:13karolherbst: ohh the FX 570M seems to have gddr3
06:13karolherbst: pq it was the start
06:14karolherbst: most of the gpus were available with AGP and PCIe
06:14karolherbst: there are even 5xxx gpus with pcie
06:14karolherbst: okay and a NV18B from 2004
06:15pq: and the funny agp-pcie bridges going both ways depending
06:49Tom^: karolherbst: which of your branches is the one most ahead?
06:49karolherbst: Tom^: depends on how stable you want it
06:49karolherbst: master_karol_no_touchy is bleeding edge stuff
06:50karolherbst: but well
06:50karolherbst: there is a bit missing
06:50karolherbst: wait a sec
06:50Tom^: il stick to that then. :p
06:50karolherbst: or not?
06:51karolherbst: this branch can be totally unstable
06:51karolherbst: there is dynamic reclocking stuff in it
06:51karolherbst: and it may mess up your gpu
06:51Tom^: yea i dont want dynamic
06:51Tom^: yet :p
06:51karolherbst: then I don't have anything that good sadly :/
06:52karolherbst: you could just revert this commit: https://github.com/karolherbst/nouveau/commit/3e4741302fe518d3d323a8c578699d5b5bfafe92
06:52Tom^: 3$ for nouveau.dynamicreclock=0
06:53karolherbst: this will be difficult
06:53karolherbst: possible, but a bit difficult
07:07Tom^: dont you respect max voltage anyways?
07:07Tom^: doubt it will mess up my gpu then, or am i missing something
07:10Tom^: i guess il stick with my very own branch until you feel ready for burn testing my 780 :p
07:17karolherbst: Tom^: I respect max voltage, but this is only because we can't go over it
07:17karolherbst: reclocking just fails
07:17karolherbst: well you can test this branch
07:17karolherbst: you should just revert the dyn reclocking commit
07:18Tom^: well then, going above stable mhz would simply just freeze the card imo :p
07:18Tom^: but thats just from my overclocking experiences ^_^
07:19Tom^: but fine il revert it
07:22karolherbst: yeah it will just freeze the system
07:30gryffus_: any news on the https://bugs.freedesktop.org/show_bug.cgi?id=93004 bug?
08:03nchauvet_: karolherbst, I have this ACPI Warning when I try to load bbswitch with load_state=-1 unload_state=1
08:09nchauvet_: I still only have one provider when rebooting to a intel/nouveau only setup
08:21pmoreau: nchauvet_: Looks like the regular ACPI warning from Nouveau. Nothing to fear about it. :-)
08:23nchauvet_: hum, I'm not using bbswitch with nouveau, but with intel/nvidia, now the problem is that xrandr --listproviders doesn't output a nouveau provider (only the intel one) on my intel/nouveau setup
08:24pmoreau: I have no idea how bbswitch is working, but could you try to provide the output from dmesg please?
08:27nchauvet_: pmoreau, I only have one provider here:
08:28pmoreau: It doesn't look like Nouveau is listed when the card is sleeping
08:30pmoreau: Or rather, if the card was put to sleep before X loaded.
08:32nchauvet_: well, the card isn't supposed to be sleeping, bbswitch reports the card is ON at least
08:33pmoreau: From your dmesg, the card has been oscillating between sleeping and resuming.
08:34nchauvet_: you mean here: [ 12.915024] nouveau 0000:01:00.0: DRM: suspending kernel object tree...
08:34nchauvet_: [ 17.995099] nouveau 0000:01:00.0: DRM: resuming kernel object tree..
08:35nchauvet_: because that was during boot, not something like systemctl suspend or else
08:37pmoreau: My guess is: Nouveau auto-suspends by default as you have an Optimus setup "pci 0000:01:00.0: optimus capabilities: enabled, status dynamic power", and something (X, bbswitch?) pings it from time to time, which causes it to wakeup.
08:44pmoreau: You could boot with "nouveau.runpm=0" to prevent this from happening / be sure this is the problem (or not).
08:56karolherbst: nchauvet_: ignore this warning
08:57karolherbst: nchauvet_: and you shouldn't use bbswitch with nouveau, actually you can't, because bbswitch won't do anything if it detects a driver being loaded for the gpu
08:59nchauvet_: karolherbst, yeah, I'm only using bbswitch with nvidia, well understood, but once I've rebooted to a intel/nouveau only setup, xrandr --listproviders only output my intel provider and not nouveau anymore
09:01karolherbst: nchauvet_: does dmesg look suspicious?
09:01nchauvet_: so something like DRI_PRIME=1 vdpauinfo doesn't use the nouveau vdpau backend (which is correctly installed) but the intel one
09:02karolherbst: imirkin: "With the Unigine Valley tech demo the Nouveau performance is reported to be greater than the closed-source NVIDIA driver, but there was a difference in rendering quality. With NVIDIA's Linux driver the rendering quality was much greater than shown by the Nouveau driver." mhhh
09:02nchauvet_: karolherbst, pmoreau http://paste.fedoraproject.org/295202/44864372 (with runpm=0) which indeed prevent the suspend/resume
09:05karolherbst: nchauvet_: okay that looks fine
09:05karolherbst: then the Xorg log
09:11nchauvet_: Unknown chipset: NV117, weird, my chipset was known previously
09:12nchauvet_: lspci -d 10de:*
09:12nchauvet_: 01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 850M] (rev a2)
09:12nchauvet_: even: 01:00.0 3D controller : NVIDIA Corporation GM107M [GeForce GTX 850M] [10de:1391] (rev a2)
09:14nchauvet_: this is xf86-drv-nouveau 1.0.12 (pre-release from October 8th this year)
09:14karolherbst: ahh right
09:14karolherbst: you can't use the nouveau ddx
09:14karolherbst: you have to use modesetting ddx
09:15karolherbst: cause thats first gen maxwell
09:15nchauvet_: okay, so does this mean the discard mechanism doesn't work? because I don't see modesetting being loaded here?
09:16nchauvet_: reboot with nouveau ddx removed
09:31nchauvet_: it doesn't seem to show modesetting being enabled
09:33karolherbst: nchauvet_: do you have a file "modesetting_drv.so" installed?
09:33nchauvet_: karolherbst, yes, bundled in Xorg server 1.18
09:35karolherbst: okay, nice
09:35karolherbst: do you have any xorg.conf file?
09:36nchauvet_: karolherbst, no, only xorg.conf.d files related to input
09:37karolherbst: nchauvet_: would you like to enable dri3 on the intel ddx?
09:37karolherbst: because then you can use prime offloading without all that xrandr crap
09:38karolherbst: but then reverse prime doesn't work :/
09:40imirkin: gnurou: ah, that's interesting. if you make me an mmt trace of the blob, i can try to stare at it to see what it does differently from nouveau.
09:41karolherbst: I never used prime offloading with the modesetting ddx myself though
09:43nchauvet_: karolherbst, with DRI3 enabled, DRI_PRIME=1 glxinfo outputs the appropriate nouveau backend but not vdpauinfo!
09:44nchauvet_: that being said, I haven't extracted the video firmware for maxwell, so I expect it won't work because of that (and maybe other reasons)
09:45imirkin: nchauvet_: vdpau doesn't do dri3 for now, i think
09:45imirkin: nchauvet_: oh, also there's no vdpau for maxwell
09:46imirkin: no one's even looked at it afaik
09:46imirkin: beyond determining that it's totally changed from earlier gens
09:47mlankhorst: maybe not, depends a bit..
09:48karolherbst: nchauvet_: yes, you need the firmware files for video accel
09:48karolherbst: even then it doesn't work :D
09:50karolherbst: imirkin: did you mean to answer me or did I miss something gnurou told you?
09:51imirkin: nchauvet_ was talking about vdpau for maxwell, that's what i was answering.
09:54mlankhorst: we haven't really looked at maxwell, maybe fifo is the same but changed slightly, then again maybe not..
09:54karolherbst: imirkin: I meant this: "gnurou: ah, that's interesting. if you make me an mmt trace of the blob, i can try to stare at it to see what it does differently from nouveau."
09:55imirkin: karolherbst: yes, that was in reference to something he said earlier
09:55imirkin: mlankhorst: well, the engines are all different... there's one engine now instead of 3
09:55imirkin: mlankhorst: but yeah, the actual encoding of stuff could be identical
09:57karolherbst: imirkin: k
11:13imirkin: skeggsb_: did you actually test nouveau_vieux with your patches?
13:05karolherbst: anybody any idea what those unk values could mean? hex and dec parsing of the same data: https://gist.github.com/karolherbst/28fcfc36013873249077
13:05karolherbst: could be mW * 10, but well
13:06imirkin_: clearly a high/low of some sort
13:07karolherbst: and something related to the clocks
13:07karolherbst: they decrease as well
13:07imirkin_: what are those clocks? core?
13:07karolherbst: values like in the cstate table
13:07karolherbst: so /2 => real clock
13:07karolherbst: but I think only I have such values there
13:08karolherbst: all other vbios have only 0 for both
13:08karolherbst: ohh no
13:08karolherbst: found a second one
13:10karolherbst: added output from the second bios: https://gist.github.com/karolherbst/28fcfc36013873249077
13:11karolherbst: and the other card has a power budget of 100W
13:11karolherbst: mine has 80W
13:11imirkin_: could they be clocks for e.g. vdec?
13:11imirkin_: /10 of course
13:11karolherbst: don't think so
13:11karolherbst: this table is in like every kepler vbios
13:11karolherbst: but why only two of them have these?
13:12imirkin_: ah... dunno
13:12karolherbst: it looks power related
13:12karolherbst: mine highest entry: 74.99W
13:12karolherbst: power budgets: 80W
13:12karolherbst: the other card: 92.40W, budget: 100W
13:12karolherbst: could make sense
13:13imirkin_: could :)
13:13karolherbst: could be the estimated power consumption
13:13karolherbst: first: everything at full load, second: only core at full load
13:13karolherbst: or something like that
13:14karolherbst: the other is also a gk104
13:24imirkin_: jkucia: btw, you might be interested to know that your program *totally* doesn't work on i965
13:24imirkin_: jkucia: haven't checked into why. i think it thinks that the texture is incomplete... width = 1, height = 0, levels = 0
13:24imirkin_: jkucia: both on mesa 10.3.7 and master
13:58Tom^: karolherbst: hm yea the dyn clock commit doesnt revert because of your respect max voltage commit
13:59Tom^: clk->ucstate is the first thing i noticed that is added in dynclock and used in respect volt :p
13:59karolherbst: ohh right
13:59karolherbst: meh, crap
14:00karolherbst: Tom^: then checkout 4e2ef5f700bf2912f71eefe877e9e9d1d4f40e73
14:00karolherbst: and cherry-pick a2d3c9bd8988118acfd2f569c90336651aeb0297
14:06karolherbst: I think over the weekend I will clean up my repository :D
14:06karolherbst: there is a lot out of sync
14:07Tom^: and a Tom branch that isnt needed any more :p
14:08jkucia: imirkin_: I know that it doesn't work, thanks
14:08jkucia: imirkin_: and thanks for reverse bisecting Bug 93110
14:08imirkin_: np. should hopefully be in the next 11.0.x release
14:12jkucia: imirkin_: also I have yet another problem on radeonsi but it is not directly related to this test program ;)
14:12imirkin_: jkucia: try #radeon for that :) but having a sample prog/trace is going to be equally helpful there.
14:14jkucia: I will, but I haven't had time to investigate the exact source of the issue yet.
14:15imirkin_: well, we do on occasion debug entire games, so even if you don't have a *trivial* repro, that may be good enough.
14:16imirkin_: [that's where apitraces come in _really_ useful]
14:17Tom^: karolherbst: can noveau monitor gpu temp yet?
14:17imirkin_: Tom^: run 'sensors'
14:18Tom^: hm ok good, gonna have to keep a look on it since im a bit above the normal clocks :p
14:20jkucia: imirkin_: I hope to find the time to investigate the issue. Otherwise I will file a bug with an apitrace.
14:21imirkin_: jkucia: sounds good. there are actual paid devs working on radeonsi, so you should be able to get competent answers :)
14:21jkucia: I expect it to be something less surprising than the bug with textureSize() but who knows :)
14:25imirkin_: let's hope so
14:25koz_: just purchased himself a GTX 680.
14:25koz_: Let's see what happens...
14:29Tom^: karolherbst: i think im on too low a gpu core voltage on 0f, according to sensors its 1.14v while i recall it being 1.175 on blob and windows, it also made X freeze https://gist.github.com/anonymous/adc869aab263add1c7b7
14:30karolherbst: Tom^: which clock?
14:30Tom^: 1177 core, 6999 mem
14:31Tom^: i guess it could be the clock being too high also, but the voltage according to sensors is low too.
14:31karolherbst: let me check
14:32karolherbst: Tom^: core clock being too high doesn't really matter
14:32karolherbst: it just depends on the voltage
14:33karolherbst: okay, it is min 1.14V according to the vbios
14:33karolherbst: min voltage
14:33Tom^: or does the volt dynamicly change?
14:34Tom^: because im reading it at idling
14:34karolherbst: and 1.28V max voltage
14:34karolherbst: but the right value could also be 1.23 max voltage
14:35karolherbst: and 1.175 is somewhere in the middle of that
14:35Tom^: 1.175 was also on the 1097 clock
14:35karolherbst: ohh right
14:35Tom^: so i would assume 1177 has a higher volt according to the vbios
14:36karolherbst: 1.03V min and 1.27 max voltage then
14:37karolherbst: has 1097 MHz
14:37karolherbst: nouveau will drive at 1.03V with 1097 MHz
14:37karolherbst: cstate 35
14:37karolherbst: Tom^: do you want to test if the 35th cstate is more stable?
14:39Tom^: still sounds its gonna run quite lean :p
14:41Tom^: ya it froze
14:42karolherbst: then we really need a higher voltage
14:42karolherbst: but without temperature monitoring and stuff I don't think we should do it yet
14:42karolherbst: there is a way to automatically downclock the gpu when it hits a specific temperature
14:43karolherbst: but I don't know how to do that
14:43Tom^: ah ok
14:43karolherbst: mupuf_: knows :p
14:44karolherbst: Tom^: you could try something out though
14:44karolherbst: Tom^: this is the function which calculates the real voltage out of the voltage map table id: https://github.com/karolherbst/nouveau/blob/master_karol_no_touchy/drm/nouveau/nvkm/subdev/volt/base.c#L69-L88
14:45karolherbst: vmap evaulates to true for your gpu
14:45karolherbst: so you could again replace info.min with info.max
14:45karolherbst: or do something smart: (info.min + info.max) / 2
14:45Tom^: or just set it to a value myself?
14:45karolherbst: this will have an influence on the available cstates
14:45karolherbst: because then every cstate evaluates to the same voltage
14:46karolherbst: also the real high ones
14:46karolherbst: which aren't possible on your gpu
14:46karolherbst: yeah, maybe change this line: "return info.min;"
14:46karolherbst: and just return info.max
14:46karolherbst: and then above info.min +ret has to be info.max + ret
14:47karolherbst: this should work as we already tried that I guess
14:47Tom^: but wont that set the volt to the max volt that table exposes?
14:47karolherbst: info is a row entry
14:47karolherbst: check nvbios vbios.rom
14:47karolherbst: Voltage map table
14:47karolherbst: the ids are the voltage values in the cstates
14:48Tom^: will do
14:48karolherbst: so if a cstate has voltage = 20, look at voltage map table id 20
14:48karolherbst: there you have the min/max values
14:48karolherbst: -- ID = 20, link: 47, voltage_min = 825000, voltage_max = 887500 [µV] --
14:48karolherbst: link means, you have to add the values of id = 0x47 (hex value)
14:48karolherbst: so this one: -- ID = 71, link: 6c, voltage_min = 0, voltage_max = 12500 [µV] --
14:48karolherbst: and so on
14:49karolherbst: anyway, I will be off for today ;)
14:49karolherbst: I would try out the /min/max/ replacement first
14:49karolherbst: otherwise you might get mapping issues later and volting just fails
14:49karolherbst: you will get a much lower clock then
14:50karolherbst: maybe you get the same one
14:50karolherbst: but with 1.21V then
14:50karolherbst: but that doesn't matter
14:50karolherbst: will be gone now :p
19:35mupuf_: tadam! The jetson finally has an ssd!
19:36mupuf_: that was a lot of work as I had to move all the data out of it to my new drives on the desktop
19:36mupuf_: but hopefully, I will now get a decent IO performance!
19:38mupuf_: hmm, not sure it is much better :s
19:39mupuf_: well, it is much better, but it still is bad. Once it is cached, everything is fine
19:41mupuf_: to give you a sense of scale, make on nouveau OOT spends about 1 to 2 minutes checking all the dependencies
19:47gnurou: imirkin: I will try and get permission to send a fix for this, we found the cause to be the texture descriptor format that has changed.
19:49mupuf_: well, 3m20.908s vs 0m6.886s on my i7-4790K to compile nouveau :D
19:49gnurou: gm1 can still use Kepler's, but gm2 has its own
19:50mupuf_: gnurou: I wonder why we never saw the problem before ... oh right :D
19:50gnurou: mupuf_: don't get me started :)
19:51mupuf_: I wonder if we should try to find the pdaemon command that initiates the DMA transfer
19:51gnurou: but hey, we are making progress with the fw too
19:52gnurou: even though only the final result will be visible to the community
19:52mupuf_: this way, devs could at least get access to the fw before nvidia's lawyers feel like risking the exposure
19:52mupuf_: but good to hear!
22:00koz_: OK, I seem to be able to get my 680 to run at 0a, but nothing higher.
23:09koz_: If my 680 will only reclock to 0a, is there anything I can do to find out what's going wrong at higher pstates?