07:16mupuf: quick count, pgraph itself has 8k clock gating domains
07:16mupuf: and 1.4k power gating domains
07:16mupuf: that is seriously fine-grained!
07:17mupuf: specing: yes
07:19specing: what is that?
07:19mupuf: oh, the main engine doing the 2d/3d/compute acceleration
07:19mupuf: what people would define as: THE_GPU(tm)
07:20specing: I'm not that surprised, since marketing tells us they have 1000+ "cores"
07:20mupuf: Intel has 8 cores and only one clock for all the cores
07:21mupuf: I am talking about the cpu here
07:27mupuf: I guess I am just surprised because there is one counter per domain
07:27mupuf: 8-bit counters, most probably
07:27mupuf: but still
07:30specing: Intel doesn't have 8 cores...
07:31specing: plus I am sure each core is gated separately
07:32mupuf: specing: there is one with 8 cores
07:32mupuf: worth $999
07:32imirkin: *worth* $999?
07:32imirkin: or that's how much they're charging?
07:32imirkin: also iirc there was an 18-core one...
07:33imirkin: but i bet all the cores run off the same clock with an independent clock multiplier
07:33mupuf: the Xeon Phi is also crazy (60 cores IIRC)
07:33imirkin: and it's also a pentium 1 architecture :)
07:33imirkin: with AVX512 thrown onto it -- really anachronistic :)
07:33mupuf: off the same clock with an independent clock multiplier --> That's a bit stupid since all computers do that
07:33mupuf: and this is called a PLL
07:34mupuf: all the clocks are derived from one crystal
07:34mupuf: two if you include the GPU's
07:34specing: lol "worth"
07:34specing: Intel's margin is like 1000%
07:34imirkin: specing: dunno if *that*'s true, you have to amortize it over the R&D costs
07:35imirkin: those 14nm processes don't develop themselves
07:35mupuf: and the hw engineers designing the hw
07:35specing: AMD's 8-core parts are $200, so...
07:35mupuf: and the sw engineers writing the drivers
07:35imirkin: i pay your salary!
07:36mupuf: imirkin: when was the last time you bought an intel product?
07:36imirkin: directly from intel? never
07:36imirkin: i suspect the smallest quantity they're even willing to talk about is a "tray", aka 1000 units
07:36imirkin: indirectly about a month ago
07:37mupuf: oh, so you did contribute to my salary then
07:37mupuf: I have not contributed to my salary yet
07:37imirkin: not really, i'm sure the parts had been bought like a year ago
07:37imirkin: (this is from dell)
07:37mupuf: I doubt Dell buys stuff that much in advance
07:38mupuf: that would be utterly stupid
07:38mupuf: especially when they have such a high volume
07:38imirkin: yeah, dunno
07:38imirkin: nor do i really care tbh
07:39mupuf: anyway, to come back to the original topic, nvidia really has very fine-grained cg
07:39mupuf: but pg is also impressive!
07:39mupuf: especially since pg means losing the context
07:40mupuf: maybe not at this scale though
07:41mupuf: or it just does not pg while processing, which makes a lot of sense
07:43mupuf: or maybe I am counting the same thing too many times
10:42mlankhorst: clock gating and power gating?
11:15mupuf: mlankhorst: you wonder what this means?
11:16mupuf: clock gating = not allowing the clock to reach a certain hw block
11:16mupuf: power gating = cutting the supply voltage to a certain hw block
11:16imirkin_: the difference of an electrified fence ("power gate"), and one that just opens at certain times ("clock gate")
11:17mupuf: the first one makes sure that there is no unneeded changes in the block which would consume power needlessly
11:17mupuf: and the latter is to prevent the transistors from leaking their charge
11:18mupuf: as in, electrons tunneling from one side of the transistor to the other, even when the transistor is switched off
11:18mupuf: and also from the gate to the drain
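The two mechanisms described above can be caricatured as a toy software model (purely illustrative: real gating happens in silicon, and the struct and field names here are made up):

```c
#include <stdbool.h>

/* Toy model of a hardware block that can be clock gated or power gated.
 * Clock gating stops switching activity (dynamic power) but keeps state;
 * power gating also cuts the supply (leakage), which loses the context. */
struct hw_block {
    bool clock_enabled;  /* clock gate: does the clock reach the block? */
    bool power_enabled;  /* power gate: is the supply voltage connected? */
    int state;           /* internal context (registers, SRAM, ...) */
};

/* Only an un-gated block does any work on a clock edge. */
static void tick(struct hw_block *b)
{
    if (b->clock_enabled && b->power_enabled)
        b->state++;  /* flip-flops toggle, consuming dynamic power */
}

/* Power gating wipes the context, so it must be saved first if needed. */
static void power_gate(struct hw_block *b)
{
    b->power_enabled = false;
    b->state = 0;  /* supply cut: leakage stops, but context is gone */
}
```

This is why (as noted below) power gating implies losing the context, while a merely clock-gated block can resume instantly with its state intact.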
11:18imirkin_: silly quantum mechanics
11:19imirkin_: newtonian is so much better... should just use that one
11:20mwk: mmm newtonian GPUs
11:20mwk: heh, actually nv probably *does* plan a Newton GPU after Volta sometime :p
11:22glennk: put enough newtons in it and it goes FTL
11:22imirkin_: first two layers? :p
11:25mupuf: glennk: nice :D
11:33mupuf: ok, so, my count was stupid since there is more register space allocated than there are real units
12:12gremlink: Which kernel does the GM206 work with?
12:12imirkin_: 4.0-rcN should provide basic modesetting for it
12:13gremlink: imirkin_: No version before that?
12:14imirkin_: gremlink: you could apply patches that enable support for it to earlier versions that already have GM204 support
12:14imirkin_: btw, just want to make sure -- you understand what "basic modesetting" means, right?
12:14gremlink: imirkin_: OK. I take it that modesetting isn't supported before 4.0?
12:15gremlink: imirkin_: Basically. Don't know the technical details, but I know what it means. :-)
12:15imirkin_: GM206 support was added in a patch that was only merged in 4.0-rc... 2 or so
12:15imirkin_: commit 7e547adcea7b
12:16imirkin_: (with a follow-on fix to DCB4.1 processing)
12:16gremlink: OK. I'll have to decide between playing with that and using the Nvidia blob with a non-rt kernel.
12:17imirkin_: modesetting, btw, is enough to light up the screens, but you get no acceleration
12:18gremlink: Yeah. I don't really care about acceleration. This GPU does its real work in Windows. On Linux, I just need it to throw up a display.
12:18Yoshimo: users of nouveau should be used to no acceleration
12:19imirkin_: Yoshimo: accel works fine on all gpu lines except GM20x...
12:20imirkin_: (out-of-the-box for GM107 is not mainline yet, but it does work if you supply your own firmware)
13:39joe9: I am trying to configure a 3-monitor setup: 2 monitors (HDMI and DVI) on a radeon card and another VGA on a GeForce 5200 (nouveau) card. I stumbled upon this thread, but I could not figure out what "tyler_K" meant by "turn on the output of the second adapter". What is the xrandr command to turn on the output of an adapter?
13:39joe9: would this commit make it easier to setup monitors on different gpu's? http://cgit.freedesktop.org/xorg/app/xrandr/commit/?id=d06730e94320175d40ff6f2bb38dce55312f2e54
13:40imirkin_: joe9: the monitor stuff is to deal with multi-tile displays
13:40imirkin_: like some 4K displays and also some DSI panels
13:40joe9: imirkin: oh, cool, not for multi monitor setup then. Thanks for the response.
13:41joe9: http://codepad.org/PdplhkxF is xrandr --listproviders on my system
13:41imirkin_: joe9: your options for multi-card in X are the provider offload thing, and ZaphodHeads + xinerama
13:41imirkin_: joe9: so you could run 'xrandr --setprovideroutputsource 1 0'
13:42joe9: imirkin: yes i did that.
13:42imirkin_: and then the nvidia card's outputs should appear in 'xrandr' output
13:43imirkin_: [do they not?]
13:43joe9: I did that and now when I do xrandr --query, I get a lot of output. How do I check "xrandr" output?
13:43imirkin_: pastebin 'xrandr' output
13:43joe9: I mean is there an option to check xrandr output?
13:44imirkin_: not sure what you mean by "check"
13:44joe9: ok, will do.
13:45joe9: http://dpaste.com/1CBT3D4 imirkin
13:46imirkin_: great, so i guess the VGA monitor is plugged in on the nvidia card's DVI port?
13:46imirkin_: so you can just do like
13:47imirkin_: xrandr --output DVI-I-1-2 --left-of DVI-0 --auto
13:47joe9: imirkin: the GeForce FX5200 is an old PCI card. It has only VGA outputs.
13:47joe9: imirkin: it does not have a physical DVI port
13:47imirkin_: well, nouveau thinks it's a DVI port :)
13:48joe9: imirkin: oh, ok. thanks.
13:49imirkin_: did that command have the desired effect?
13:49joe9: imirkin: When I run the command, xrandr --output DVI-I-1-2 --left-of DVI-0 --auto, the current DVI monitor screen flickers. Nothing on the VGA monitor screen though.
13:50joe9: I tried with s/DVI-0/HDMI-0/ and the same thing happened.
13:50imirkin_: can you pastebin the output of 'xrandr' again?
13:51joe9: http://dpaste.com/2GBTV93 this is xrandr --query
13:51imirkin_: well, nouveau *thinks* that the screen is on
13:51joe9: imirkin: xrandr has the screen resolution of the VGA / DVI-I-1-2 correct.
13:51imirkin_: you're sure it's off? :)
13:52buhman: imirkin_: how do you know 'DVI-I-1-2' is a different card?
13:52buhman: or nouveau specifically
13:52joe9: imirkin, good question, I do not.
13:52imirkin_: buhman: the extra -1 in there indicates it's a GPU screen
13:52buhman: what's a not-GPU screen?
13:53imirkin_: buhman: soooo... each driver names its screens differently
13:53imirkin_: so through various clues, you can determine which drivers runs which screen
13:53imirkin_: for example radeon numbers things from 0
13:53imirkin_: and calls its outputs e.g. DVI
13:53imirkin_: instead of DVI-I
13:53imirkin_: intel doesn't put a - between the connector type and the number
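Those naming clues can be sketched as a small heuristic (an assumption based only on the conventions listed above, not on any official RandR API; the function names are invented):

```c
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/* Count how many trailing dash-separated segments of an output name are
 * purely numeric.  Examples of the conventions discussed above:
 *   nouveau: DVI-I-1  -> 1    radeon: DVI-0 -> 1    intel: HDMI1 -> 0
 * A provider-offload ("GPU screen") output gets an extra index appended,
 * e.g. DVI-I-1-2 -> 2. */
static int trailing_numeric_segments(const char *name)
{
    int count = 0;
    size_t end = strlen(name);

    while (end > 0) {
        size_t start = end;
        while (start > 0 && name[start - 1] != '-')
            start--;
        bool numeric = (start < end);
        for (size_t i = start; i < end; i++)
            if (!isdigit((unsigned char)name[i]))
                numeric = false;
        if (!numeric || start == 0)
            break;
        count++;
        end = start - 1;  /* skip past the dash to the previous segment */
    }
    return count;
}

/* Heuristic: two trailing indices means the output belongs to a secondary
 * provider exposed through the offload machinery. */
static bool is_gpu_screen_output(const char *name)
{
    return trailing_numeric_segments(name) >= 2;
}
```

So `DVI-I-1-2` classifies as a GPU screen, while the master's own `DVI-0` or `HDMI1` do not.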
13:53buhman: I didn't realize xrandr enumerated all screens in no particular order from all devices
13:54buhman: imirkin_: that sounds annoying yet useful
13:54imirkin_: buhman: if you've set up the provider stuff, it's all there
13:54joe9: This is my xrandr --listproviders http://codepad.org/NKkQcAuK
13:54imirkin_: joe9: wait, you're not sure whether the screen is on or not?
13:55joe9: imirkin: Yes, I do not know whether the screen is on or not. I know that it is powered on. But, it is just blank.
13:55imirkin_: hmmm... usually you can tell if it's getting a signal or not
13:55joe9: and the mouse is not moving to that screen.
13:56imirkin_: is this an LCD panel or CRT?
13:56joe9: LCD panel
13:56* buhman wants to see imirkin_'s A+ randr config
13:56imirkin_: buhman: huh?
13:56joe9: I can see the light is bright green. so, I think it is getting something from the card. When I switch off the cpu, the led turns to a dim green.
13:57imirkin_: joe9: hm ok... can you plug it into the other port on the nvidia card?
13:57imirkin_: that one's detected as VGA so might work better
13:58joe9: ok, will do. I will also restart my system and will be back in a few minutes. Will you be around? imirkin>
14:02joe9: imirkin: http://dpaste.com/1FQRZ89 I just moved the VGA connection to the other port
14:02joe9: now, the led light has turned dim green.
14:03joe9: imirkin: should I run some command to ensure that xrandr reads the configuration from the video card again?
14:03imirkin_: joe9: xrandr --output DVI-I-1-2 --off --output VGA-1-2 --left-of DVI-0 --auto
14:04joe9: imirkin: xrandr --output DVI-I-1-2 --off --output VGA-1-2 --left-of DVI-0 --auto
14:04joe9: imirkin: http://dpaste.com/3PZD1K9
14:04joe9: imirkin, please ignore the first line.
14:05imirkin_: joe9: did that make it work? :)
14:05joe9: this is the output from xrandr --query after running the above command.
14:05joe9: Still nothing on the screen.
14:05joe9: not even a flicker.
14:05imirkin_: well, nouveau sure *thinks* it's displaying
14:05imirkin_: let's rule out some sort of other fail... what if you do
14:06joe9: the led light has turned bright green, so, it is probably displaying a blank screen or something.
14:06imirkin_: joe9: xrandr --output VGA-1-2 --same-as DVI-0
14:07joe9: imirkin: with the last command, I got a flicker on the VGA screen, but no output; still blank.
14:07imirkin_: joe9: can you pastebin your dmesg? want to make sure there's nothign funny in there
14:07joe9: could it have something to do with my window manager configuration?
14:08joe9: imirkin: that is a good idea. I will do that. btw, will you be around for a few minutes so I can restart my machine and come back?
14:08joe9: imirkin: dmesg is full of these messages: http://codepad.org/x5F0pUyo
14:09imirkin_: ah yeah, that's bad
14:09imirkin_: the implication is that it doesn't have enough vram to do the thing it's trying to do
14:09joe9: oh, ok.
14:09imirkin_: but i'm not sure why that would happen... FX5200's came with at least like 128MB of ram
14:10imirkin_: anyways, where those messages started would be interesting
14:10joe9: http://codepad.org/f0QKFBcr is from lspci -v
14:10imirkin_: perhaps X managed to wedge itself somehow
14:10joe9: http://codepad.org/UEJieyGW using sudo lspci -v
14:10imirkin_: i'd try restarting at least X
14:11imirkin_: not sure what you want me to look for in that pci output
14:11joe9: just that it has the memory.
14:11joe9: I will restart and be back. Thanks a lot.
14:11imirkin_: that's not visible from there...
14:11imirkin_: you just get the BAR sizes there, which have little to nothing to do with VRAM
14:12joe9: one quick question, I have an onboard HDMI port, radeon PCIE card with HDMI, DVI and VGA ports. and another PCI GeForce 5200 with 2 VGA ports.
14:12joe9: Instead of messing around with the PCI GeForce card, can I just use the onboard HDMI port + the DVI and VGA ports of the radeon card?
14:13imirkin_: what gpu powers the onboard hdmi port?
14:13joe9: When I tried it by setting the onboard video card as the default in BIOS, I could see output on the onboard monitor until Linux started, and then it just stopped
14:14joe9: imirkin: this is the onboard gpu: http://codepad.org/CXN9CEIf
14:14imirkin_: joe9: yeah, that should work fine... way better than the fx5200 :)
14:15imirkin_: (and also like 10 years newer)
14:15joe9: oh, cool. Thanks a lot.
14:16joe9: for the intel onboard graphics card above, I would need this kernel option: CONFIG_DRM_I915, correct?
14:17joe9: or, is there something else?
14:17imirkin_: yes, you want the i915 drm driver
14:18joe9: How about CONFIG_DRM_I915_KMS?
14:18joe9: ok, thanks.
14:18imirkin_: how is that even an option... it should be required
14:18imirkin_: are you using an ancient kernel?
14:19joe9: Linux master 3.12.38 #117 SMP PREEMPT Wed Apr 8 12:33:28 EDT 2015 x86_64 Intel(R) Pentium(R) CPU G620 @ 2.60GHz GenuineIntel GNU/Linux
14:19joe9: is my uname -a
14:19imirkin_: not that ancient, but probably old enough to still support non-kms intel :)
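For reference, a minimal sketch of the relevant options in a 3.12-era kernel config (CONFIG_DRM_I915_KMS still existed then; it only controlled whether kernel modesetting was the default for i915, and was removed from later kernels once non-KMS support was dropped):

```
CONFIG_DRM=y
CONFIG_DRM_I915=y
# pre-4.x kernels only: make KMS the default for i915
CONFIG_DRM_I915_KMS=y
```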
14:21joe9: I do not want to get into more recent kernels as I heard horror stories of how even the stable kernels can mess up things pretty badly. I am sticking with the 2nd longterm-stable release.
14:22imirkin_: skeggsb: did nv3x support the NV20_3D class?
14:22imirkin_: skeggsb: if so, that might be a nice way of testing nouveau_vieux's nv20 support :)
14:45joe9: imirkin: I think this is a bug somewhere. I keep getting these messages in dmesg: http://codepad.org/U3lU7n8D again.
14:46joe9: imirkin: http://codepad.org/X078xu4X are more messages from my dmesg related to nouveau
14:47imirkin_: joe9: i thought you said you were going to unplug the fx5200 and use onboard
14:47joe9: imirkin: yes, just wanted to try to see if I can get this card to work, So, I can add another HDMI monitor to what I currently have.
14:48joe9: [ 13.463466] nouveau W[ PTIMER][0000:06:00.0] unknown input clock freq -- could this be the issue?
14:48imirkin_: no. please include your full dmesg until the repeating messages start up
14:48joe9: imirkin: Do you think this might be something simple to debug?
14:53joe9: imirkin: http://dpaste.com/3BA1VVV does this help?
14:53joe9: I saw a linux kernel setting to increase the debugging level of nouveau. Please let me know if that could help.
14:55imirkin_: nothing odd in there :(
14:56imirkin_: it's having trouble getting the fb into vram
14:56imirkin_: that might be just because pre-nv50 cards don't support the fb sharing stuff that you need for optimus to work
14:56imirkin_: yeah.... that sounds familiar.
14:57imirkin_: i guess your only option if you want to use that card is to use ZaphodHeads + Xinerama
14:57joe9: oh, ok. Thanks a lot for your help. I will try the onboard HDMI port now.
15:00joe9: imirkin: one last question, could it have anything to do with the memory that I allocate in BIOS to video RAM?
15:00joe9: ok, thanks.
16:13joe9: imirkin: sorry, this is not a nouveau question. but, I am wondering if you could be kind enough to direct me in the correct direction given your knowledge of this stuff. http://dpaste.com/2SB5N17 is my xrandr --query now.
16:13joe9: I have HDMI - onboard video, DVI and VGA - radeon card
16:14joe9: When I did the setprovider source and enabled the monitors on VGA and DVI, X crashed.
16:15joe9: http://dpaste.com/0EQ8Q5F is the Xorg.0.log when X crashed.
16:16joe9: Just want to check if you might have any suggestions. Sorry for the bother.
16:19joe9: http://dpaste.com/0RD7X2Q this is xrandr --query after --setprovidersource 1 0
16:33joe9: imirkin: sorry, X crashed again and hence I got logged out.
16:33joe9: imirkin: I am not sure if you replied while I was away.
16:33joe9: I removed the GeForce 5200 card and still X crashed.
16:34joe9: Xorg.0.log when X crashed : http://dpaste.com/2BY483T
22:54gnurou: I am giving a first review to Deepak's PMU v2 btw
22:55gnurou: I wonder if a lot of this code would not be better if moved to base.c - suggestions for how to handle this are welcome
22:56gnurou: ... or maybe not since Nouveau's PMU firmware is not necessarily compatible with NVIDIA's?
22:59imirkin: well, i see no issue with nouveau adopting the nvidia abi -- i presume the nvidia one is better thought out and covers more use-cases than the nouveau one
23:03gnurou: problem with the NVIDIA ABI is that it is potentially changing
23:03imirkin: well, that won't work for upstream kernel
23:03imirkin: you'll have to version your firmware with abi versions
23:04imirkin: so that you can bump it up as you change things around
23:04imirkin: but i suspect they tend to be minor changes
23:04imirkin: like "add field X"
23:04imirkin: rather than "redo how the whole thing works"
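The versioning scheme being suggested could look something like this (a hypothetical sketch: the header layout, field names, and version numbers are all invented, not NVIDIA's actual PMU ABI):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical firmware header carrying the ABI version.  Convention:
 * bump the major on incompatible changes ("redo how the whole thing
 * works"), the minor on backward-compatible ones ("add field X"). */
struct pmu_fw_header {
    uint8_t abi_major;
    uint8_t abi_minor;
};

/* Versions the driver was written against (made-up numbers). */
#define DRIVER_ABI_MAJOR 2
#define DRIVER_ABI_MINOR 1

/* Accept firmware with the same major version and a minor version at
 * least as new as the fields the driver relies on. */
static bool pmu_fw_compatible(const struct pmu_fw_header *hdr)
{
    if (hdr->abi_major != DRIVER_ABI_MAJOR)
        return false;  /* incompatible redesign */
    return hdr->abi_minor >= DRIVER_ABI_MINOR;  /* older minors lack fields we use */
}
```

With a check like this, the driver can keep shipping against a moving firmware ABI and fail loudly instead of misparsing an incompatible blob.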
23:04gnurou: and that will likely make it difficult to share PMU code
23:05gnurou: yeah, I expect changes to be rare and backward-compatible
23:05gnurou: but having no control over them, that's not something I can guarantee
23:05imirkin: in which case it may be reasonable to either (a) use the current one as a base or (b) adjust nouveau one when nvidia makes changes
23:05imirkin: basically.... don't make the foolish assumption that the current nouveau ppwr code is well-thought-out
23:06imirkin: it works for what it currently does, but there's lots more stuff it could do, and i presume the nvidia code does a lot of it
23:07gnurou: yeah, as Deepak explained, power management, and more importantly, secure firmware loading
23:07imirkin: well, the secure firmware loading is less important for the GPUs that nouveau supports :)
23:07gnurou: not for Maxwell+ :)
23:08imirkin: right, which nouveau doesn't support... coz of secure firmware :p
23:08gnurou: that's the main point for pushing this
23:08gnurou: although since the code initially comes from nvgpu, it might require some thinking to fit within Nouveau
23:09gnurou: we removed a lot of crap already, still
23:09imirkin: yes, i glanced over this submission briefly
23:09imirkin: seems a lot more reasonable
23:11gnurou: glad to read that :)