01:34imirkin: ptx0: pastebin dmesg + xorg log
01:36ptx0: i've actually given up in the end but i can provide you these things
01:36ptx0: http://sprunge.us/ihFQ here is my Xorg log
01:36ptx0: http://sprunge.us/hiJC http://sprunge.us/bPFO
01:37ptx0: that's all of it
01:37imirkin: ah. GTX 280 device?
01:37ptx0: yeah exactly
01:37imirkin: over dual-link dvi? or hdmi?
01:37ptx0: that's why i gave up, its HDMI output doesn't want to push this clock rate but it took forever to figure this out
01:37imirkin: well, you can force it to try if you want
01:38ptx0: i am open to the idea but i'm sending the monitor back anyway
01:38ptx0: this 21:9 thing is kind of stupid
01:38imirkin: you can boot with e.g. nouveau.hdmimhz=300
01:38ptx0: the 25UM58 monitor from LG.. not sure who its target audience is. people who don't do anything but look at documents side by side, i guess.
01:38imirkin: which will allow up to a 300mhz pixclock over hdmi
01:39ptx0: imirkin: will that need add'l tweaking for discovery of the mode? or modeline etc needed
01:39imirkin: now, that won't magically make the hardware support it, but it'll let you try :)
01:39imirkin: no - the modelines were just being pruned out since they were > 165MHz
01:39imirkin: which is the max pixclk of hdmi 1.2 (or 1.3? i forget)
01:39imirkin: Modeline "2560x1080" 185.58 2560 2624 2688 2784 1080 1083 1093 1111 -hsync -vsync #(66.7 kHz eP)
01:39imirkin: that's a 185MHz pixclock
01:40imirkin: so it might just work
01:40imirkin: maybe do nouveau.hdmimhz=200 :)
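A minimal sketch of how the nouveau.hdmimhz override mentioned above might be applied on a GRUB-based system; the parameter and the 200 value are from the conversation, but the file path and update command are distro-dependent assumptions:

```shell
# Assumed GRUB setup: append the parameter to the kernel command line
# in /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet nouveau.hdmimhz=200"
# then regenerate the config and reboot:
#   sudo update-grub && sudo reboot
# (For a one-off test, the same parameter can be appended to the kernel
# line from the bootloader's edit menu instead.)
# Verify after boot that the parameter was picked up:
grep -o 'nouveau.hdmimhz=[0-9]*' /proc/cmdline
```

Note this only lifts nouveau's software cap on mode pruning; as imirkin says, the TMDS hardware may still fail to drive the higher rate cleanly.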
01:41pmoreau: airlied: Hum, yeah, I misread the quote.
01:46imirkin: hakzsam: how hard will it be to add bindless to nouveau?
03:37orbea: i was using retroarch and my system froze up, keyboard and all. So far it seems like a one-time occurrence, but i don't really understand this dmesg trace, is it something bad? http://dpaste.com/38MCW2C

03:39imirkin: it's definitely not good ...
03:39imirkin: should be fixed in 4.11 i believe
03:39orbea: okay, cool, I'm on 4.10.0 still, I'll update soon
03:43orbea: btw, if you are interested, that pcsx2 issue from the other day is a confirmed and now fixed gcc regression. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80799
03:44imirkin: nice one
04:13ptx0: imirkin: is 144Hz too much for this card?
04:13imirkin: ptx0: the issue is the pixel clock, not the refresh rate
04:13ptx0: i have no idea how they relate
04:14imirkin: see that modeline?
04:14imirkin: the first number (after the name) is the pixel clock in mhz
04:14imirkin: the max supported by older hdmi adapters is 165mhz
04:15imirkin: but it's not a hard limit - you can always try to drive more, it's just that the hdmi electronics weren't designed for it, so you may get some signal lossage
04:15ptx0: what /is/ the pixel clock? is it how much bw is there?
04:15imirkin: it's a digital connection
04:15imirkin: you send 1's and 0's
04:15imirkin: it's the rate at which those 1's and 0's are sent
04:15ptx0: frequency you mean
04:16imirkin: frequency is an analog concept, and e.g. having a transition from 0 to 1 can generate a lot of high-frequency noise
04:16imirkin: even if that transition is happening back and forth at 1hz
04:16ptx0: right, the line is analogue in the end
04:16ptx0: it's electricity
04:16imirkin: indeed it is
04:17imirkin: but everything i'm talking about is about digital rate of bits, not the analog frequency
04:17ptx0: fair enough
04:17ptx0: it is late here :)
04:20imirkin: you actually end up with all kinds of filters to get rid of the high-frequency noise as well
04:21imirkin: to meet FCC regulations and such
04:21imirkin: but trying to reason about digital bitrate in terms of analog frequency will drive anyone to madness
04:31ptx0: well, not anyone
04:31ptx0: just most people
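The relation imirkin is describing can be sketched with the totals from the modeline he quoted: the pixel clock is just htotal × vtotal × refresh rate (a rough sanity check at a nominal 60 Hz, ignoring the fractional refresh):

```shell
#!/bin/sh
# Pixel clock = total horizontal pixels * total vertical lines * refresh.
# Totals come from the "2560x1080" modeline quoted above
# (last horizontal value 2784, last vertical value 1111).
htotal=2784
vtotal=1111
vrefresh=60
pixclk_khz=$(( htotal * vtotal * vrefresh / 1000 ))
echo "${pixclk_khz} kHz"   # 185581 kHz, i.e. ~185.58 MHz: over the 165 MHz cap
```

This also shows why the refresh rate alone isn't the issue: a higher resolution at the same 60 Hz raises the pixel clock just as surely as a higher refresh rate at the same resolution.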
05:14ptx0: if 165MHz is the limit for HDMI original revision then why does 1080p@60Hz = 172.80 ?
05:15imirkin: there's a reduced blanking mode which gets you under
05:16imirkin: you can fit 1920x1200 into 165MHz
05:16imirkin: e.g. Modeline "1920x1200R" 154.00 1920 1968 2000 2080 1200 1203 1209 1235 +hsync -vsync
05:16imirkin: most monitors support such modes
05:16ptx0: what's that mean in practice? reduced quality?
05:16imirkin: more like "won't work on a CRT"
05:17imirkin: the electron gun needs time to move back to the top-left
05:17imirkin: (or something)
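A quick check of the savings, using the 1920x1200R totals from the modeline imirkin pasted; the full-blanking totals (2592x1245) are assumed from a standard CVT 1920x1200@60 modeline, not taken from the log:

```shell
#!/bin/sh
# Reduced blanking keeps the same 1920x1200 active area but shrinks the
# blanking intervals, so htotal*vtotal*refresh drops under 165 MHz.
rb_mhz=$(( 2080 * 1235 * 60 / 1000000 ))    # totals from the 1920x1200R modeline
cvt_mhz=$(( 2592 * 1245 * 60 / 1000000 ))   # assumed standard CVT totals
echo "reduced blanking: ${rb_mhz} MHz"      # 154 MHz, fits under the limit
echo "full blanking:    ${cvt_mhz} MHz"     # 193 MHz, would be pruned over HDMI
```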
05:18ptx0: how can i find out what version of hdmi my gpu supports
05:19imirkin: it's the one before they upped the frequency
05:19imirkin: either way, nouveau limits it to 165mhz on your gpu, unless you set nouveau.hdmimhz which takes that limit instead
05:29ptx0: i have some weird video tearing in vlc, hm
05:31ptx0: only on my 2nd monitor though
05:31imirkin: vsync is hard. X tears.
05:32ptx0: this LG monitor's colour is actually quite nice, the contrast ratio is very deep even without dynamic stuff enabled.. but its small size totally ruins it
05:39ptx0: imirkin: is there a way to force use of composition pipeline?
05:41imirkin: what composition pipeline?
05:41ptx0: like this https://bbs.archlinux.org/viewtopic.php?id=199445
05:43imirkin: that's a nvidia driver feature
05:43ptx0: yeah, i know
05:43ptx0: this happens with xfwm4 composition enabled
05:44ptx0: if i disable it, the video doesn't tear so horribly
05:44imirkin: do you have any non-default settings in xorg.conf?
05:44imirkin: ok, so ... solution ... "don't do that"
05:44ptx0: glxvblank => true
05:44ptx0: i need composition enabled
05:44imirkin: don't set params you don't fully understand
05:45ptx0: GL rendering uses vblank
05:45imirkin: other than some kind of default rotation for monitors, or relative monitor placements, your xorg.conf should be empty.
05:45ptx0: no, it gives me two 1024x768 screens then
05:46imirkin: that means you have something going horribly wrong
05:46imirkin: a blank xorg.conf should Just Work (tm)
05:46ptx0: yeah my GPU sucks at handling EDID
05:46ptx0: the AMD one works fine but i use it for VFIO
05:46imirkin: do you have full resolution on the console?
05:47ptx0: well, on one of the monitors
05:47ptx0: it actually blanks when it switches, if they are the same (with a proper xorg.conf) then it doesn't change resolution on the one monitor and i can access X right away
05:48ptx0: i've been trying to make it work all day
05:48imirkin: ok, well if you want me to help with stuff, happy to. also happy to leave it in whatever setup you have now.
05:49ptx0: now i'm just beginning to recognize the limits of this GTX280 even for simple 2D work
05:49imirkin: try clocking it up
05:49imirkin: reclocking on the G200 chip inside your board should work fine
05:49imirkin: cat /sys/kernel/debug/dri/0/pstate to see the available levels
05:49imirkin: echo the relevant level into the file to switch to it
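The reclocking steps imirkin lists can be sketched as a short session; the debugfs path is the one he names, but the level id 0f and the sample output are illustrative, not guaranteed for any particular board:

```shell
# List available performance levels (requires root; nouveau debugfs):
sudo cat /sys/kernel/debug/dri/0/pstate
# illustrative output:
#   03: core 300 MHz memory 100 MHz
#   0f: core 602 MHz memory 1107 MHz
# Switch to a level by echoing its id back into the same file:
echo 0f | sudo tee /sys/kernel/debug/dri/0/pstate
```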
05:50ptx0: i'd rather just replace it with something that can handle higher pixclock rates, i don't like how it smells when it runs hot for too long
05:50ptx0: any suggestions on a cheap replacement?
05:51imirkin: anything made by amd :)
05:51ptx0: i would but i don't know how it will play along with my other card and driver conflicts with VFIO.. dont want to deal with it
05:52skeggsb: if you're ok with nvidia, and prefer to use nouveau, gm10x would be the best bet i reckon
05:54imirkin: i don't think there are any fanless ones, unfortunately
05:54imirkin: but they do make some with small fans, single slot, etc
05:55ptx0: i need a 1kw GPU just for word processing though!
05:55ptx0: (I type very fast)
06:13imirkin: you need to clock those GPUs up for the higher resolution stuff... i think G200 default clocks can be low
06:35imirkin: skeggsb: i'll try to give your patches a whirl over the weekend, although tbh i don't really have a gaming setup here either. you should be able to ask feral to give you access to a ton of games, i suspect they'd give it to you.
06:36imirkin: skeggsb: https://www.feralinteractive.com/en/news/752/
06:36imirkin: skeggsb: and there's a similar one for Valve's games
06:47ptx0: well i put an order in for an RX580 8GB
06:47imirkin: skeggsb: https://hastebin.com/halejihuhe.pas =/
06:47ptx0: i'll swap my 460 for that and use the 460 for the host OS i guess
06:51imirkin: skeggsb: last-second edit error, i'm guessing. or you were building without debug...
06:54imirkin: glretrace: codegen/nv50_ir_emit_gk110.cpp:783: void nv50_ir::CodeEmitterGK110::emitSHLADD(const nv50_ir::Instruction*): Assertion `imm' failed.
06:54imirkin: didn't someone have a fix for that?
06:55ptx0: thanks for all the help and wisdom imirkin
06:59imirkin: oh hm, right, patches were posted but i guess nothing ever got pushed. perhaps coz no one ever implemented it properly... oh well. it's a 1-line fix.
07:09imirkin: [544280.884869] nouveau 0000:02:00.0: X: Unknown handle 0x01895f08
07:09imirkin: skeggsb: what does that mean?
07:09imirkin: (in validate_init)
07:10imirkin: i guess it means we've already GEM_CLOSE'd the handle before we send the pushbuf? :(
07:10imirkin: [this is with glretrace, so concurrency is out of the question]
12:17mupuf: karolherbst: mini.karolherbst.de’s server DNS address could not be found.
12:21karolherbst: mupuf: it's IPv6 only
12:22mupuf: ah, I disabled ipv6 on my modem. I was getting more problems than anything
12:22karolherbst: yeah, that IPv6 only thing is a problem
12:22karolherbst: but anything else is super painful as well
12:23mupuf: no, it is not ipv6, it's just the modem that randomly crashes when it is enabled
12:23karolherbst: I meant more that it's a problem that this machine is only accessible through a 6to4 tunnel
12:23mupuf: karolherbst: as for the architecture for reator and all, I have been working at work on the multi node support for ezbench
12:23karolherbst: like letsencrypt was already annoyed
12:24mupuf: you can queue work from the command line on the machines you want
12:24karolherbst: sounds interesting
12:24mupuf: and query the state, and fetch the report through git
12:24AndrewR: strange, but after updating mesa to git-61d8f33 I can't force mplayer to use old mpeg2 hw on my nv92 like I did in the past (NOUVEAU_PMPEG=1 mplayer). And default run produces those lines in dmesg, never saw them before: [260548.595043] nouveau 0000:01:00.0: fb: trapped read at 0000000000 on channel -1 [17864000 unknown] engine 08 [PMSPPP] client 06 [PMSPPP] subclient 04  reason 0000000f [DMAOBJ_LIMIT]
12:25mupuf: karolherbst: with this in place, it will be really easy to turn on the machines when necessary, make the particular run and then turn off the machines :)
12:25karolherbst: mupuf: can it be also done in a way, that every node fetches the job? (so that more machines can be plugged without the client having to know)
12:25mupuf: then we can couple this to patchwork, and have automated testing of mesa patch series
12:26mupuf: karolherbst: yes, I am thinking about how to do this. But yes, adding machines to a job should be fine. I think I would like to have in the configuration file of the DUT what topics it would like to test (mesa only, kernel)
12:27mupuf: and the access control model is also going to be interesting ;)
12:33karolherbst: kind of, although there is not much damage done if random servers just "register" and fetch tasks and make them available to the main server
12:35karolherbst: I think it would be nice to have something like that, where everybody can just add their own servers
12:52mupuf: karolherbst: yes, it is the goal :)
12:52mupuf: I would like to have an instance running on fd.o
14:50dboyan_: hakzsam: Do the blob's profiling tools on linux have measurements for instruction issuing efficiency equivalent to those perf counters in nouveau?
14:51dboyan_: I haven't managed to use its "visual profiler" on my laptops
14:53hakzsam: dboyan_: yes, LGD https://developer.nvidia.com/linux-graphics-debugger
14:54hakzsam: imirkin: probably not easy :)
14:54hakzsam: I could look into it at some point if you don't plan to add support for it
14:55karolherbst: LGD has/had a broken ssh implementation afaik, did they fix it?
14:56imirkin: AndrewR: odd, i'll take a look. i'm updating my kernel now, but i have a nv92 plugged in.
14:57imirkin: AndrewR: fyi, the PMSPPP is an artifact of how printing works, but it's really PMPEG.
14:57imirkin: (they both map to the same index, and the printing function doesn't distinguish based on chipset)
14:57AndrewR: imirkin, thanks ...
14:59pmoreau: karolherbst: I think so, I managed to use it, though there were some things I needed to tweak on the ssh config side.
14:59karolherbst: I see
14:59karolherbst: last time I looked at it, they only supported the v1 protocol or so
15:00pmoreau: Maybe that was it, I don't remember.
15:01karolherbst: I hope not
15:01pmoreau: The main problem I had when I initially tried it, was that it wouldn't recognise the OpenGL libs from the blob I had installed.
15:02hakzsam: this was an issue related to libglvnd
15:02hakzsam: I have never hit the SSH issue
15:02karolherbst: hakzsam: then your openssh probably still supports the v1 protocol?
15:04hakzsam: no, 2
15:04karolherbst: well, they all support v2
15:04karolherbst: maybe they fixed it indeed
15:05karolherbst: I mentioned it to somebody from nvidia and that person was like "the hell?"
15:05hakzsam: are you sure the issue wasn't on your side? ;)
15:05karolherbst: I've hard disabled v1
15:06dboyan_: well I failed to use lgd on my laptop, with optimus setup
15:07dboyan_: not sure if it's about libglvnd
15:07hakzsam: this issue has been reported like one year ago
15:07hakzsam: probably fixed now, dunno
15:08hakzsam: dboyan_: for perf counters you can also use cupti
15:09dboyan_: I tried lgd just a few days ago, and it said it couldn't recognize my gl lib.
15:09hakzsam: still unfixed so...
15:10hakzsam: you have to downgrade your blob version
15:10hakzsam: cupti is easier to use though, and it will work for sure
15:10dboyan_: Anyway, I have access to a desktop setup now, which doesn't have glvnd, maybe I can try it there
15:12dboyan_: I'm just hoping to find a way to compare nouveau with the blob under similar situations, so i guess graphics will be more suitable here
15:14pmoreau: hakzsam: They did fix the libglnvd issue, I did manage to use it, a few months ago
15:15pmoreau: could be a new issue though
15:16pmoreau: I haven't tried the latest version, 2.1: https://developer.nvidia.com/linux-graphics-debugger-21-released-will-support-fedora-25
15:24hakzsam: dboyan_: well, yes and no. You can also write simple compute shaders with CUDA/CL, translate them to GL compute shaders and use cupti/AMD_perf_monitor
15:26dboyan_: yeah, that's possible
15:27hakzsam: the main problem is that LGD is a GUI, and NVIDIA doesn't expose any API like PerfKit on Windows
19:01imirkin: hakzsam: i haven't looked at your patches yet, but are there any that document precisely what's expected of the driver?