09:19skeggsb: so, i just pushed the atomic/mst code to my tree
09:19skeggsb: master is now based on airlied's current drm-next, and i'm not 100% sure my porting to that tree was successful
09:19skeggsb: a tiled 4k dell monitor no longer works on my laptop, but can't be sure if it was my rebase or some other change that busted it yet
09:20skeggsb: i've also pushed another branch, devel-kms, which is based on 4.9 still, and confirmed to work on another system with the same monitor (and its mst dock, with a few other monitors)
09:20skeggsb: any testing/feedback of (either) branch would be very very welcome
09:21pmoreau: Might be worth sending a mail about it on the ML, as it could attract some more users to give it a try.
09:24skeggsb: i'd planned on sending it to airlied for drm-next already (plenty of time to fix issues before the merge window, and, i don't expect there'll be any major ones), but, i want to figure out what's going on on my laptop first
09:24skeggsb: that can wait until monday
09:24skeggsb:didn't much feel like carrying a monitor home on the train with him
09:26karolherbst: skeggsb: I had issues with my tesla and my 4k display too, even on fullhd
09:26skeggsb: well, at some point, this monitor *did* work with my laptop
09:26karolherbst: ohh wait, was also using 4.8 nouveau… I thought I was using master
09:26skeggsb: i'm currently uncertain if some fixes/cleanups i did busted it, or my rebase on drm-next did
09:26karolherbst: will test older kernels then
09:27karolherbst: but it was also over an active DP1.2 -> HDMI 2.0 adapter and the tesla can only do DP 1.0 or 1.1a
09:28karolherbst: wasn't able to get a 4k resolution running, although that card should support 4k@30Hz…
09:28karolherbst: (in theory)
09:29skeggsb: hm, that monitor i mention provides a 30hz mode in sst mode, and it JustWork(tm)s
09:29skeggsb: *though* if it is a mst monitor, i've seen some of those come up half-configured for mst, so nouveau won't work
09:29karolherbst: well, my tesla is also a mcp79 ;)
09:29skeggsb: that'll be fixed with current code too
09:29karolherbst: ohh nice
09:29karolherbst: so I should test master before anything else?
09:29pmoreau: Oh wait, I do have a 4k monitor sitting in a box, which I could also use for the testing. I don’t think it is an MST one though; I’ll have to check.
09:30karolherbst: how can I check if a display supports mst?
09:30skeggsb: devel-kms would be fine too, that's more tested anyway as there were some changes for drm-next that i did somewhat hastily.. no idea if i've missed anything
09:30skeggsb:is using it currently on his laptop though, seems ok so far...
09:31karolherbst: the display I am talking about is HDMI only
09:35skeggsb: well, the entire kms part of the driver was rewritten for atomic basically, so, *everything* is worth testing, not just DP
09:37karolherbst: I see
09:38karolherbst: I can only test on my Tesla anyway and it has a miniDP and a miniDVI port
09:39karolherbst: skeggsb: mind picking up the two patches to fix a bunch of nouveau compile warnings when building with W=1?
09:40karolherbst: you did already
09:41karolherbst: skeggsb: still want to build some CI system for nouveau, in fact, I have something working already, would be nice if I could simply enable W=1 without it printing out too much crap
09:42karolherbst: skeggsb: https://github.com/skeggsb/nouveau/commit/44297cd88e324e013144b2349b487f928559cbf8#diff-94e18c7e8b5bab8a1215265cdb5ca1f4
09:42karolherbst: you might want to remove that
09:47skeggsb: karolherbst: ?
09:49karolherbst: your sed commands
09:50karolherbst: the Makefile potentially modifies source files, which is bad
09:50karolherbst: ohhh wait
09:52karolherbst: no, it is fine, my mistake
09:53karolherbst: skeggsb: by the way, I have some patches to parse out the max and crit power cap on like 75% of all kepler and newer gpus, I would like to get this merged for 4.10, so that we can catch wrong reports before actually depending on it later
10:09mupuf: karolherbst: sounds good
11:02karolherbst: mupuf: maybe you have an idea about the reqPower and reqSlowdownPower fields in the vpstate table?
11:05mupuf: karolherbst: what do you mean by idea?
11:05mupuf: it looks perfectly fine by me
11:05karolherbst: I meant what nvidia might do with those fields
11:05mupuf: reqPower == power envelope for selected base clock
11:05mupuf: they enforce it
11:06karolherbst: so you mean that current power should always be above reqPower, otherwise the vpstate isn't enforced?
11:06mupuf: and reqSlowdownPower, as imirkin_ said, this is when we have over temperature, we need to slow down the gpu (done by the FSRM) but we also need to make sure the power goes down too
11:06mupuf: to the wanted value
11:06mupuf: no, absolutely not
11:07mupuf: I think this is a cap
11:07karolherbst: this is my table: https://gist.github.com/karolherbst/43880879d8b02bb4330923778f19f11f
11:08karolherbst: I would agree with you, if the values were swapped for reqPower and reqSlowdownPower
11:09karolherbst: because even with nvidia the gpu might consume more power
11:09mupuf: when slowdown is asserted, our budget goes down from 34 to 23.3W when on battery
11:09mupuf: does not sound too crazy to me
11:09karolherbst: or our power reading is flawed
11:09karolherbst: under idle nvidia clocks down on battery already
11:09mupuf:thinks this is a cap, the driver needs to enforce it
11:09karolherbst: so it doesn't matter how much power is consumed
11:10karolherbst: ohh, wait
11:10karolherbst: I think I see what you mean, mhh
11:10mupuf: to me, this is how I understand this table
11:10mupuf: on battery, select entry 15
11:10karolherbst: this is the easy part
11:11mupuf: entry 15 says that the base clock (minimum clock) will be 810 MHz
11:11karolherbst: nvidia caps the clock to this on battery
11:11mupuf: but we have a budget of 34W, so you may increase the clock until you reach this power usage
11:11karolherbst: nope, nvidia doesn't do that
11:11mupuf: ok, then all the values are cap
11:11mupuf: up to 810 Mhz or up to 34W
11:11karolherbst: I could retest against that, maybe I missed something
11:12karolherbst: thing is, under full load, nvidia doesn't care if the GPU consumes more than 75W
11:12mupuf: or up to 23.3W if it is overheating
11:12karolherbst: the power budget of the GPU is 80W
11:12karolherbst: and it caps to 80W
11:13karolherbst: but well, I wasn't really able to have a reliable enough test for this and I did this in a different context
11:13mupuf: yeah, try again your tests
11:13karolherbst: well, the other thing is, those values are filled on like <5% of all vbios
11:14mupuf: yes, so?
11:15karolherbst: I would rather spend more time on the power budget things and only try to understand those values, but not really implement those things inside nouveau (for now)
11:15mupuf: yes, I fully agree with this plan
11:15karolherbst: there are other nice fields in the header though
11:15karolherbst: but every single one is optional....
11:15karolherbst: I think only boost is set for every gpu
11:16karolherbst: and then there are funny things like "mid_point"
11:16karolherbst: no clue what this means
11:16mupuf: well, this makes sense
11:16mupuf: at worst, you only care about the maximum speed
11:16karolherbst: there is also a "over_current" one, which sounds nice at first
11:17karolherbst: nvidia has a list of caps with activation criteria
11:17karolherbst: and priorities
11:18mupuf: yep, which is a really nice design
11:18mupuf: at least in principle, no idea if they screwed it up
11:18karolherbst: but to understand this table will help me a lot with the power budget stuff, cause I can eliminate this one to have nvidia do less capping
11:18mupuf:is so happy about how much we learnt about all this
11:19mupuf: and thanks a lot for your work!
11:19karolherbst: yeah, this is helpful
11:19karolherbst: well Lekensteyn was the main reason we got to this :p
11:19karolherbst: I really hope we can get a bit more information out of nvidia-smi on quadro cards
11:19karolherbst: maybe on a titan too
11:19karolherbst: mupuf: I need your titan again :p
11:20mupuf: yeah, I can do that
11:20mupuf: Lekensteyn: I agree, this is wonderfully valuable!
11:20karolherbst: and I think we slowly need a real quadro card
11:20karolherbst: now that nvidia-smi becomes a useful tool for REing
11:21mupuf: or we need to work around nvidia's limitation
11:22karolherbst: right, there was this hack for older drivers
11:22karolherbst: but then again, maybe the quadro vbios exposes a lot more stuff, who knows
11:30mupuf: possibly ;)
11:40karolherbst: mupuf: ohh, by the way, interested in looking into a 4k@30Hz issue I have with my hsw gpu :p
11:44mupuf: karolherbst: no, sorry
11:44mupuf:already is interested by too many things :D
11:44mupuf: and right now, my #1 interest is this fan issue
11:44karolherbst: ahh right
11:44karolherbst: but I meant more like if you get bored at work :p
11:46mupuf: ah ah ah ah
11:46mupuf: never gonna happen
11:46mupuf: or not in the coming months, at least!
11:46karolherbst: it's about an intel gpu :p
11:46karolherbst: but if you got no time :(
11:55mupuf: oh, it is an intel bug
11:55mupuf: then #intel-gfx
11:56karolherbst: yeah I already opened a bug and everything, but this is maybe the kind of bug where you need to test what windows does or just need to know the right people to answer questions...
11:57karolherbst: vsyrjala looked into it, but didn't find out anything useful
11:58karolherbst: mupuf: I just thought, maybe you know somebody who has some knowledge about HDMI 1.4 ports and how to get the max pixel clock supported from the hardware, if the adapter doesn't expose it. It's an on-motherboard adapter from DP to HDMI
12:00mupuf: karolherbst: if ville looked into it, then I cannot do anything better than summon the great daemon, mlankhorst
12:01karolherbst: fine by me
12:02karolherbst: I think mine is the worst case you can get, because the adapter doesn't advertise anything, but the hardware can do 540MHz actually. drm itself falls back to 165MHz, so yeah, that's the issue basically
12:06mupuf: are you sure you cannot force the max pixel clock in a kernel parameter?
12:07karolherbst: well, maybe, I can also hack drm to set the max to 300MHz, but that isn't exactly user friendly
12:07karolherbst: if there is a way to detect that 300MHz is supported, the best way would be to know how to detect this
12:08karolherbst: I can also add the rejected modelines manually through xrandr and they get picked up and enabled
12:08karolherbst: and they work without problems
12:11mupuf: well, don't you think the problem lies in the fact that your hw contains an external encoder which intel knows nothing about?
12:11mupuf: so it defaults to the safest possible clock
12:11karolherbst: something like that
12:11karolherbst: it's basically a DP to HDMI adapter on the board
12:12mupuf: anyways, no idea how these are detected
12:12karolherbst: the display also gets listed as HDMI in xrandr (not so much with my active adapter, where the HDMI display is listed as DP)
12:12mupuf: and I will leave it to the kernel guys
12:13mupuf: this is hw stupidity, isn't it?
12:14karolherbst: nope, spec not enforcing stuff
12:15karolherbst: well, also hw stupidity, but the hw is stupid, because it doesn't need to be smart...
12:19karolherbst: but drm limits all Type 1 adapters (mine is HDMI Type 1) to 165MHz anyway without even trying to detect what the adapter can do
12:19mupuf: can it?
12:20karolherbst: what can it?
12:20pmoreau: *maybe* I can get my hands on a quadro card. What information would be useful?
12:21karolherbst: pmoreau: use Lekensteyn stuff to dump the nvidia-smi debug log
12:21pmoreau: Do you have a link to it please?
12:22karolherbst: if the irc log page would work, yes…
12:22Lekensteyn: pmoreau: https://gist.github.com/Lekensteyn/c8d41c02d118aa40bc100020efde3696
12:22pmoreau: Lekensteyn: Thanks
12:22Lekensteyn: I've some friends here with quadro GPUs too, does the model matter?
12:23karolherbst: it has to be fermi or newer I think
12:23karolherbst: but the more the better
16:09imirkin_: skeggsb: did you consider DP-MST + audio?
16:27pmoreau: imirkin_: Just render the audio waves as an overlay! :-D
16:28imirkin_: pmoreau: heh. well you can do audio over DP, just like you can with HDMI. it becomes more fun when MST is involved, i think
16:29pmoreau: But who needs to hear the audio when you can see it! :-p
16:30pmoreau: imirkin_: BTW, any updates for the MUL/MAD 64bit -> 32bit split patch?
16:31pmoreau: One day maybe? :-D
16:32imirkin_: it feels like people want me to do an increasing amount of stuff, coinciding with me having less time to do that stuff =/
16:33pmoreau: No problem, I’ll wait and see if anyone has time to review it
16:34imirkin_: the (unfortunate) fact is that the patch isn't *that* important right now
16:35pmoreau: True, and it won't be any time before Ian's patches land
16:45pmoreau: I was hoping that Phoronix would relay that, apart from Ben's patches landing in a tree, he was also seeking testers for those…
19:29karolherbst: mhhh: https://github.com/karolherbst/nouveau/commit/49e82d31fac08a6874c944f6652bb51338ecefcc#diff-9090ba340763465baa0e723aca0cb125
19:29karolherbst: either in line 161 the author wanted to use chan or that call is indeed not used at all
19:31barteks2x: I'm trying to get this working: https://nouveau.freedesktop.org/wiki/Optimus/ but DRI_PRIME=1 has no effect for me. Is this the right place to ask for help with that?
19:32karolherbst: barteks2x: it sure is
19:32karolherbst: barteks2x: I guess your system doesn't use DRI3
19:32karolherbst: so you have to either enable dri3 or do that xrandr thing
19:32karolherbst: otherwise dmesg would be helpful
19:32barteks2x: I do the xrandr thing, because I saw that it doesn't use dri3
19:32karolherbst: do you have a 900m series gpu?
19:33barteks2x: I have nvidia geforce gt 740M
19:33karolherbst: mhh okay
19:33karolherbst: then output of dmesg please
19:33barteks2x: this is output of dmesg from right now
19:34karolherbst: okay, this looks fine
19:34karolherbst: then /var/log/Xorg.0.log
19:35karolherbst: check DRI_PRIME=0 glxinfo
19:35barteks2x: which part of it exactly?
19:36barteks2x: or this? DRI_PRIME=0 glxinfo | grep "OpenGL vendor string"
19:36barteks2x: it shows OpenGL vendor string: Intel Open Source Technology Center
19:36barteks2x: same as with DRI_PRIME=1
19:37karolherbst: what does xrandr --listproviders print?
19:38karolherbst: are you 100% sure you did "xrandr --setprovideroffloadsink nouveau Intel" ?
19:39barteks2x: actually, I have xrandr --setprovideroutputsource nouveau modesetting, I'm 100% sure xrandr --listproviders showed modesetting before
19:39karolherbst: well, it doesn't anymore
19:39barteks2x: starting another X session with that changed to Intel
19:40barteks2x: it's still the same
19:40imirkin_: that xorg log has driver intel
19:40imirkin_: whereas you're saying you're using driver modesetting
19:41imirkin_: barteks2x: LIBGL_DEBUG=verbose DRI_PRIME=1 glxinfo > /dev/null
19:43barteks2x: this is the output I get
19:43imirkin_: ok, so it's not even *trying* to load nouveau
19:43imirkin_: xrandr --setprovideroffloadsink nouveau Intel
19:43imirkin_: you ran that?
19:43barteks2x: I have it in .xinitrc
19:43imirkin_: can i see the exact command?
19:44barteks2x: I literally have this line in .xinitrc: xrandr --setprovideroutputsource nouveau Intel
19:44imirkin_: that's output source.
19:44imirkin_: not the command i gave.
19:44imirkin_: that's for offloading outputs that you might have on the nvidia to the intel gpu
19:44barteks2x: I probably got it confused from previous setup with nvidia drivers
19:45barteks2x: which one should I have?
19:45imirkin_: depends what you want to do
19:45imirkin_: read the instructions :)
19:45imirkin_: i'm guessing the "Offloading 3D" section is particularly relevant to you
19:46barteks2x: it works now after I changed --setprovideroutputsource to --setprovideroffloadsink, I would have never found the difference
19:46imirkin_: yeah, it's a common typo... the keys are like right next to each other :)
19:47imirkin_: set provider something something
19:47barteks2x: as I said, I had that from previous setup with nvidia driver and the command looked exactly the same
19:47imirkin_: i agree :)
19:47imirkin_: and i have a hard time remembering which one does which thing
19:48barteks2x: anyway, thanks for help
19:48karolherbst: they should have named them rcruostuptuoredivorptes and knisdaolfforedivorptes obviously
19:48barteks2x: doing something like setProvrOutputSource/Sink would be enough
19:49barteks2x: it would be much easier to see the difference
19:49karolherbst: now even fewer letters would be different
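For later readers, the distinction that tripped things up here, as a .xinitrc-style fragment (the provider names vary per system — use whatever `xrandr --listproviders` actually reports):

```
# "Offloading 3D": render on the NVIDIA GPU via nouveau, display on the
# Intel GPU -- this is the one DRI_PRIME=1 needs:
xrandr --setprovideroffloadsink nouveau Intel

# "Output Slaves": the opposite direction -- drive outputs wired to the
# NVIDIA GPU while rendering on the Intel GPU:
xrandr --setprovideroutputsource nouveau Intel

# verify the offload path:
DRI_PRIME=1 glxinfo | grep "OpenGL vendor string"
```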
19:52barteks2x: is it normal that glxgears says "Running synchronized to the vertical refresh" while getting >2000 fps with DRI_PRIME=1?
19:53karolherbst: with dri2
19:53karolherbst: basically dri2 sucks if you prime offload :p
19:54barteks2x: how could I switch to dri3? based on what the wiki says I have everything required for that
19:54karolherbst: (I don't base that on _any_ technical reasons, I just say it, because I use dri3 and don't like dri2, because 2 < 3)
19:54karolherbst: uhh, you have to enable it for the intel ddx
19:54karolherbst: with a little xorg config file
19:55barteks2x: it's the xf86-video-intel? (it's gentoo and I think all I need to do is set use flag for it)
19:56karolherbst: then it is easy
19:56karolherbst: just enable the dri3 use flag
20:00barteks2x: I already had it enabled for mesa, just didn't know I also need that for intel
20:00imirkin_: karolherbst: does that just build support for it, or does that cause it to get flipped on by default?
20:00karolherbst: flipped on by default
20:01karolherbst: I think I even reported that one :D
20:01karolherbst: somebody removed that at some point and I had a local ebuild like for ever
20:01karolherbst: yeah, it still defaults to dri3 now
20:02karolherbst: imirkin_: and with dri set, dri3 support is compiled in already
20:02karolherbst: so the dri3 USE flag changes the default thing
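The "little xorg config file" mentioned above would look roughly like this — a sketch based on the intel DDX's documented "DRI" option (the Identifier string is arbitrary):

```
Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
    Option     "DRI" "3"
EndSection
```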
20:03barteks2x: so with dri3 I can remove that part of .xinitrc?
20:03imirkin_: barteks2x: you may want to avail yourself of the reclocking functionality if you want that GK208 to be faster than the intel chip
20:04karolherbst: well, he might want to wait for 4.10 for that for now
20:04barteks2x: I think I read about that somewhere while trying to solve another issue
20:04imirkin_: he has a DDR3 GK208. i think it'll be fine.
20:04imirkin_: mine works like a charm, at least
20:04karolherbst: well, you are lucky then
20:04imirkin_: [after i added the timings in...]
20:05karolherbst: mhh there is nothing really bad which can happen for yours
20:05karolherbst: at most, 5% undervolting
20:05karolherbst: which is like nothing
20:05imirkin_: well, i had 2 actually
20:05imirkin_: i had a GK208 prior, and now i have a GK208B
20:06karolherbst: sure, but with my vbios, the wrong nouveau code undervolted by more than 10%
20:06imirkin_: it's desktop
20:06imirkin_: i think that's less of an issue there
20:06karolherbst: same thing
20:07barteks2x: if it can't permanently destroy anything, I can try it
20:07karolherbst: my gpu was also fine with that much of undervoltage for most things
20:07karolherbst: but it still crashed occasionally
20:08karolherbst: it really depends on the _real_ chip speedo and not the one nvidia thinks your card has
20:08karolherbst: uhh, what an idea, speedo overwriting for a smart boost based OC functionality :D
20:08karolherbst: barteks2x: nah, never heard of anybody destroying an nvidia gpu here
20:08karolherbst: actually, I am quite sure somebody managed, but never told anybody!
20:11barteks2x: did I find the right thing? http://www.phoronix.com/scan.php?page=news_item&px=Linux-4.5-Nouveu-PState-HowTo
20:11karolherbst: barteks2x: you really just need to go into the debugfs directory
20:11karolherbst: one thing though
20:11karolherbst: you can't change the clocks while the gpu is suspended
20:11karolherbst: barteks2x: https://nouveau.freedesktop.org/wiki/KernelModuleParameters/
20:11karolherbst: see pstate
20:12karolherbst: last sentence
20:12karolherbst: this is confusing...
20:15barteks2x: There seems to be emptiness in /sys/kernel/debug
20:15karolherbst: you need to mount debugfs first
20:15barteks2x: oh, that would explain it
20:17barteks2x: none is already mounted or /sys/kernel/debug busy -> I don't think it should show when I mount it
20:17karolherbst: well you shouldn't have your shell in it
20:17barteks2x: I don't?
20:18karolherbst: no idea, why is it busy else?
20:18barteks2x: the second line is none is already mounted on /run/user/1000
20:18barteks2x: What I did is based on this https://www.kernel.org/doc/Documentation/filesystems/debugfs.txt
20:22barteks2x: debugfs seems to be already mounted
20:22karolherbst: maybe you didn't enable it in the kernel?
20:22barteks2x: and the directory structure is there somehow. It wasn't there before
20:23barteks2x: so i found it
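Assuming the steps above, the reclocking procedure looks roughly like this (the card number under dri/ and the available pstate ids vary per system — check what your own pstate file lists before writing anything):

```
# mount debugfs if it is not already mounted:
mount -t debugfs none /sys/kernel/debug

# list the available performance levels (as root):
cat /sys/kernel/debug/dri/0/pstate

# select one of the listed levels, e.g. the highest:
echo 0f > /sys/kernel/debug/dri/0/pstate
```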
20:25karolherbst: mupuf: mhh, storing the power budget table in the subdev makes the entire thing a little bit ugly. I would need to store the power_budget struct inside the iccsense struct + a bool flag for parsing success or I call kmalloc and have a pointer
20:26mupuf: ah ah
20:27mupuf: why not store the conditions, initialized to -1?
20:27mupuf: and then store the list of entries
20:27karolherbst: well, I need also stuff from the table header
20:27mupuf: this way, you get your "parsed correctly"
20:27mupuf: yeah, they are the conditions, aren't they?
20:27karolherbst: and the tables are usually parsed into stack stored structs
20:28karolherbst: conditions? more like entry pointers
20:28mupuf: yeah, they are entry pointers ... selected on a certain condition
20:28mupuf: aren't they?
20:29karolherbst: well, the only condition is, they are in the vbios table or not
20:29karolherbst: or what do you mean?
20:31karolherbst: I could set the table pointer to the cap entry to -1
20:34karolherbst: anyway, I am not aware of any bios parse function which deals with the situation that parsing the table fails, they just return an error and leave the struct alone
20:37karolherbst: mhh, will think about it and maybe I find a way which is not ugly and deals with your suggestions
21:35barteks2x: I'm getting some weird graphical glitches when running with DRI_PRIME=1 and vblank_mode=0, but so far I saw it only with minecraft. It seems like sometimes it shows some old frame instead of newly rendered one. Any idea what may be wrong?
21:39karolherbst: yeah, it is a known problem
21:39karolherbst: barteks2x: try to enable vsync inside the game as well
21:39karolherbst: and let your compositor do vsyncing too
21:39karolherbst: this helps with most things
21:39barteks2x: the problem is with vsync disabled, not enabled
21:40karolherbst: ohh right, you said it
21:40karolherbst: yeah, it is a problem with the prime offloading
21:40imirkin_: is disabling vsync a good idea?
21:41barteks2x: when you are getting <60fps, limiting fps with vsync doesn't seem good for performance (and yes, sometimes I may get less than 60fps)
21:41barteks2x: unless I'm wrong and vsync doesn't affect performance at all
21:41karolherbst: it doesn't matter
21:41imirkin_: did you reclock btw?
21:42barteks2x: no, I didn't do that yet
21:42karolherbst: vsyncing has like no impact on perf if you do prime offloading
21:42imirkin_: reclocking should get you more fps
21:42karolherbst: with prime offloading you have to sync anyway, otherwise you get your graphical glitches
21:42imirkin_: (usually a lot more... at least 2x)
21:43barteks2x: it's not screen tearing, that problem I would understand and expect with no vsync
21:43karolherbst: it's the same thing basically
21:43karolherbst: the rendered image has to be copied from the nvidia GPU to the intel GPU
21:43karolherbst: and if the buffer access isn't synced -> glitches
21:44barteks2x: the problem is it looks more like some weird jitter, the camera jumping around
21:44karolherbst: right, there is still no other solution than to enable vsyncing
21:45barteks2x: that's really weird issue...
21:46barteks2x: I have no idea what code could cause that kind of effect
21:46karolherbst: silly buffer locking?
21:46karolherbst: anyway, it is kind of a known problem
21:47barteks2x: and I guess it's also low priority because it's only when not using vsync, right?
21:47karolherbst: well, the code is being pretty much reworked anyway
21:48karolherbst: there is no benefit in not vsyncing on modern systems
21:48karolherbst: and I don't mean compositor vsyncing with this
21:48karolherbst: if the compositor syncs stuff, it can still impact performance if your display GPU had to render something heavy, which it doesn't
21:50barteks2x: one reason to not vsync: to see max fps you can get. But if it doesn't really affect performance than ok, I can leave it on.
21:51karolherbst: well, going above what your display is able to display is a waste of energy anyway ;) but yeah, for benchmarking it makes sense
21:51karolherbst: also a hotter GPU leads to lower performance, cause the driver will use slightly lower clocks
21:51barteks2x: a lot of people would argue otherwise (about wasting energy with higher fps)
21:52karolherbst: well, if your display can only display 60 fps
21:52karolherbst: you can let your GPU render one or two frames which never get displayed at all, but I don't see the point
21:53barteks2x: there was a quite convincing argument (no idea if true) but I can't remember exact explanation and I don't think I can easily find it now
21:58imirkin_: karolherbst: poorly written software can end up blocking while it's waiting to display a frame
21:58karolherbst: right, but that's the software being silly
21:59karolherbst: and I indeed see some of them
21:59imirkin_: doesn't mean that it doesn't exist ;)
21:59karolherbst: there is even a worse kind of software
21:59karolherbst: limiting to 30 fps if it detects the driver is too slow for 60 fps :O
22:00karolherbst: and then drop to 15, if 30 is too much
22:22pmoreau: I know about the `Instruction::fixed` attribute, but is there a way to specify in NV50 IR that some operations should not be reordered?
22:24imirkin_: pmoreau: what does that mean :)
22:24imirkin_: e.g. if you have 2 instructions
22:24imirkin_: a and b
22:24imirkin_: and a can't be reordered but b can
22:24imirkin_: then what are the valid orders
22:24pmoreau: ab or ba
22:25imirkin_: in practice, we don't reorder
22:25imirkin_: so things like 'fixed' have limited meaning
22:25imirkin_: but in theory, i believe 'fixed' is for the 'no touch!' situation
22:25pmoreau: That was my understanding as well for fixed
22:26pmoreau: I was looking at pointer aliasing, which I have been completely ignoring in my code, but since I am going through the memory management code, I thought, why not give it some thought
22:27imirkin_: i believe MemoryOpt assumes that pointers don't alias
22:27imirkin_: which could be problematic for buffers, but ... o well
22:27imirkin_: it'll never happen =]
22:28pmoreau: And, still some time left before seeing it happen
22:28imirkin_: who in their right mind would bind 2 overlapping buffers, and write via one and read via the other.
22:28pmoreau: No clue!
22:29imirkin_: anyways, bbl
22:45quiliro: I am looking for a solution for my freezing nVidia 9800gt. it freezes only when I connect a generic projector model lz-h80 to the second video card output. when I disconnect the projector, it unfreezes
22:46quiliro: what should I look for?
22:46quiliro: I have searched the web but cannot find anything related to my problem
23:00quiliro: will this help?
23:05quiliro: but I am running gnome
23:06quiliro: that section says to use that configuration or gnome-control-center
23:06quiliro: so I have gnome-control-center
23:06quiliro: what do you suggest?
23:13Lekensteyn: quiliro: it could be a bug, have you observed anything strange in your dmesg or /var/log/Xorg.0.log?