04:29haasn: Is 10-bit color (“Depth 30”) support on the roadmap at all?
04:30haasn: If not, which components would be involved in getting true, working 10-bit rendering and output (via DisplayPort)? Are they all open source?
04:31haasn: I bought an nvidia card thinking it would support depth 10, which it does, but a related bug in the nvidia driver prevents me from properly using it (and nvidia's Linux team has proven hostile and unwilling to acknowledge the issue). I would be interested in contributing towards any efforts to get this feature working via the free AMD drivers instead
04:37zgreg_: isn't it already supported?
04:39zgreg_: AFAIR there was some work going on to support 30 bpp a few months or a year ago
04:41zgreg_: my memory is hazy, though.
04:41haasn: zgreg_: That would be interesting to try out. Do you happen to know if this works on an SI-architecture card as well?
04:41haasn: I have one lying around (HD 7950) that I could plug in and test out
04:42zgreg_: let me actually verify this
04:43haasn: zgreg_: try eg. setting Depth 30 and DefaultDepth 30 in your xorg.conf's screen section, in the past that would just fail to start X
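For reference, the xorg.conf fragment haasn is describing would look something like this (the Identifier value is a placeholder; only the Depth/DefaultDepth lines matter here):

```
Section "Screen"
    Identifier   "Screen0"
    DefaultDepth 30
    SubSection "Display"
        Depth 30
    EndSubSection
EndSection
```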
04:47zgreg_: you have to explicitly enable it in the kernel driver. some monitors have trouble with it.
04:48haasn: Both of my monitors are capable of 10-bit operation, although only one of them is plugged in via DisplayPort (the other's via HDMI)
04:48zgreg_: parm: deep_color:Deep Color support (1 = enable, 0 = disable (default)) (int)
04:49zgreg_: yeah, I think the issue is that some monitors announce support for 10 bpc but don't work correctly with it enabled
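Going by the modinfo output zgreg_ pasted, the parameter could be enabled persistently with a modprobe.d entry along these lines (assuming the parameter belongs to the radeon kernel module; the file name is arbitrary):

```
# /etc/modprobe.d/radeon.conf
options radeon deep_color=1
```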
04:49haasn: zgreg_: Does this just concern the output (ie. link to the display), or does it actually let me create an OpenGL window with a 30-bit FB format, render a 30-bit gradient in this fashion, and have it output 1:1 to the monitor as a 30-bit scan-out, with no forced dithering or clamping by the driver?
04:50haasn: on nvidia cards, I can enable 30-bit scan-out, and render a 30-bit window, but it gets clamped to 8 bit (not even dithered) somewhere within the display chain..
04:50zgreg_: sorry, I don't know.
04:50haasn: Seems I will have to try it out, then. Does this work for the SI generation or is it only for the newer AMDGPU stuff?
04:51zgreg_: it should work for SI/CIK as far as I can see.
04:53zgreg_: hm, looks like support in the X DDX etc. still isn't complete :(
04:54glennk: thought i saw some incomplete patch for that floating around a while back
04:55haasn: Who's in charge of those and are there any plans to get this into the official releases etc.?
04:55haasn: I would really, really love 30-bit support. It would be the killer feature for me, since right now the only way to get working 30-bit opengl support is to spend 2000€+ on an overpriced Quadro card
04:56glennk: would be interesting to know what sort of application you have that requires 30 bit output?
04:56zgreg_: well, I believe most developers don't even have monitors with deep color support
04:56haasn: glennk: mpv's vo_opengl. I can have it dither to 8-bit depth, but it produces noticeable 8-bit dithering patterns in some types of gradients
04:56zgreg_: but the feature isn't magic
04:57spreeuw: what use is it?
04:57haasn: spreeuw: high quality video output
04:57specing: haasn: hit the used market?
04:57haasn: specing: Where can I get a used GPU with 30-bit support and high texture performance (ie. at least 100 GT/s fill rate)?
04:57specing: no idea, ebay?
04:57haasn: Most of the ones that would be within budget would have like 1/5th of that at best
04:58spreeuw: so via opengl you cant do 32bpp color now?
04:58zgreg_: haasn: all non-ancient GPUs support deep color / 30 bpp
04:58spreeuw: it would be nicer to focus on 4k support
04:59zgreg_: spreeuw: why? it works already
04:59glennk: spreeuw, don't mix up 32 bits per pixel (eg RGBA8888) with 30 bit color depth (RGB10A2)
05:00spreeuw: so the latter doesnt affect the displayed color?
05:01glennk: most systems dither it down to 8 bits or less per component somewhere along the display signal chain
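glennk's distinction can be made concrete with a quick sketch: both formats occupy 32 bits per pixel, but RGB10A2 spends them differently (the bit layout below is illustrative; real hardware layouts vary):

```python
def pack_rgba8888(r, g, b, a):
    # 8 bits per component: 256 levels each, 32 bits total
    assert all(0 <= v < 256 for v in (r, g, b, a))
    return (a << 24) | (b << 16) | (g << 8) | r

def pack_rgb10a2(r, g, b, a):
    # 10 bits per color component: 1024 levels each, plus 2-bit alpha
    assert all(0 <= v < 1024 for v in (r, g, b)) and 0 <= a < 4
    return (a << 30) | (b << 20) | (g << 10) | r

# Same storage cost, four times the tonal resolution per channel
print(hex(pack_rgba8888(255, 255, 255, 255)))  # 0xffffffff
print(hex(pack_rgb10a2(1023, 1023, 1023, 3)))  # 0xffffffff
```

Both "white" values fill all 32 bits; the difference is how many distinct gray levels fit between black and white (256 vs 1024 per channel), which is exactly what matters for gradient banding.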
05:03glennk: haasn, bigger issue is probably making sure mpv uses sRGB framebuffer and dithers from 10 bits taking that into account, that would be my guess where you are seeing the banding from
05:06haasn: glennk: I am 100% confident that you are mistaken
05:06haasn: glennk: You can verify my methodology here: https://devtalk.nvidia.com/default/topic/771081/linux/30-bit-depth-with-linux-driver-does-not-produce-30-bit-output-on-monitor/post/4461957/#4461957
05:07haasn: I have done raw captures of the X frame buffer to ensure it actually holds a continuous 30-bit gradient. I have verified to make sure the programs are actually using 10-bit depth fbconfigs. I have tried bypassing the rendering step entirely and directly modifying the GPU's 3x1DLUTs
05:08glennk: the visual numbers are a bit arbitrary, you need to score them based on the component bit counts to pick the right one
05:08haasn: I have used many methods including generating a 30-depth window with raw X11 calls
05:08haasn: The fbconfig they use lists “10 10 10” for the R/G/B depth.
05:08haasn: (as per glxinfo)
05:09haasn: Furthermore, this is actually a regression in nvidia's hardware
05:09haasn: It works fine on some older cards, but anything in the GTX 9xx generation is broken
05:09haasn: But nvidia is ignoring my bug, despite my providing *ample* amounts of information, proof, verification, testing methodology, even contacting customer support
05:09haasn: I am willing to try all of the same on AMD's hardware, if you think the situation will be any better
05:11haasn: glennk: The fact of the matter is, nvidia's display hardware doesn't actually dither correctly either - it *clips* the numbers to 8 bit precision, but only in GTX 9xx-series GPUs. If you enable dithering on top of that, it actually clips first, and then dithers the clipped result down to your configured depth (eg. if you configure depth 6, you see a clipped-and-dithered result)
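The clipping-vs-dithering distinction haasn is drawing can be sketched with a simplified model (this is purely illustrative, not what any particular driver does):

```python
def clip_10_to_8(v10):
    # Clipping/truncation: drop the low 2 bits outright.
    # Every group of 4 adjacent 10-bit codes collapses onto one
    # 8-bit code, which is what produces visible banding.
    return v10 >> 2

def dither_10_to_8(v10, phase):
    # Ordered dither: add a 0..3 offset that varies per pixel
    # before truncating, so the *average* over nearby pixels
    # still carries the low 2 bits of information.
    return min(255, (v10 + phase) >> 2)

v10 = 514  # 514/4 = 128.5, halfway between two 8-bit codes
clipped  = [clip_10_to_8(v10) for _ in range(4)]
dithered = [dither_10_to_8(v10, p) for p in range(4)]
print(clipped, sum(clipped) / 4)    # [128, 128, 128, 128] 128.0
print(dithered, sum(dithered) / 4)  # [128, 128, 129, 129] 128.5
```

Clipping loses the half-step entirely; dithering recovers it in the spatial average, which the eye integrates into a smooth gradient.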
05:11specing: haasn: did you return the card?
05:11haasn: specing: No, it was not a new purchase by that point. Though I may re-sell it to a friend
05:11glennk: what are you using to render the gradient?
05:12haasn: glennk: Have you read my post on the nvidia devtalk forum?
05:12haasn: I pasted the source code of several test programs I have written for this purpose
05:13specing: haasn: ah, used
05:13glennk: haasn, sorry, not seeing the link to your mpv modifications
05:14zgreg_: glamor also seems to lack 30 bpp support
05:15glennk: haasn, this one? http://sprunge.us/PQAX
05:15zgreg_: I guess it shouldn't be too hard to add the new formats, though
05:18haasn: glennk: mpv does not need any modifications, it works fine with 30-bit fbconfigs out of the box...
05:18haasn: glennk: that is a standalone GLUT program, but should work just as well, yes
05:19glennk: one thing i'll note to watch out for with that standalone program is that the precision of the interpolants for vertex-interpolated colors is not required to be 10 bits
05:20glennk: may be a better test to compute the gradient in a pixel shader manually
05:20glennk: the mpv source video would presumably come straight from the decoder so no interpolation there
05:22glennk: then for mpv, which decoder is being used?
05:26haasn: glennk: One of my tests was using mpv with a fragment shader that sets each position to vec4(gl_FragCoord.xxx, 1.0)
05:27haasn: glennk: Another one of my tests was generating a 16-bit gradient PNG with imagemagick, VERIFYING that it's a 16-bit gradient by inspecting the raw pixel values of the resulting PNG, and then opening this file in mpv (which decodes via libavcodec, generating a 16-bit texture)
05:28glennk: i'll assume you meant gl_FragCoord.xxx * something to normalize the output to 0-1 range
05:29haasn: I have then played with mpv's built-in dither settings. At dither-depth=8 it produces an 8-bit approximation (via dithering) of a smooth gradient. At dither-depth=10 it produces a 10-bit approximation (again via dithering) of a smooth gradient, but I see a banded result on the display. Taking a raw, 16-bit screenshot of this window and inspecting the colors, I have verified that it is actually a 10-bit dither
05:29haasn: pattern that *should*, assuming the nvidia driver was not buggy, result in a smooth gradient on-screen
05:30glennk: okay, great
05:30haasn: glennk: The texture coordinates are in the range [0-1] to begin with
05:30haasn: I'm not sure why it takes such a monumental amount of effort to convince you that 1. yes, I know what the fuck I'm doing, and 2. no, the bug is NOT in my application
05:31haasn: (I think I'm going to take a break for a while, this is making me unexpectedly angry)
05:31haasn: (Sorry if I came across as too negative)
05:32glennk: you are presuming i'm disagreeing with you, i'm not, i'm trying to isolate where the issue is
05:33haasn: Also, one more tidbit of information: I can isolate the monitor and GPU hardware as the cause of the issue because I can get a working 10-bit signal using a full-screen DirectX buffer in exclusive mode on Windows
05:33haasn: And this produces a smooth gradient on-screen
05:33haasn: (With no visible dither pattern - and before you ask, I *can* see dither patterns at 8-bit within certain luminance ranges)
05:34zgreg_: but doesn't nvidia support 10 bpc only on quadros on windows?
05:34zgreg_: maybe it's still dithering down to 8 bpc
05:36glennk: haasn, last question, running a compositor or not and if so which one?
05:37glennk: (so many opportunities across the stack to ruin precision...)
05:39haasn: glennk: I am not running a compositor, other than the nvidia driver itself. (Just a small tiling WM to spawn windows and stuff) There is a “ForceFullCompositionPipeline” setting, and I have replicated all of my experiments with it set to both On and Off. There *is* a noticeable difference between those two settings (so it was actually getting turned on/off), but those just affected vsync behavior - the
05:39haasn: outcome of these color experiments was identical across the board
05:40haasn: glennk: I have tried other nvidia hardware, including a 9800 GT, and while that card does *not* have a DisplayPort output (and thus can't produce a true 30-bit signal), it *does* dither correctly. That is, if I set it to “8 bit” depth in the control panel and generate a 30-bit signal with one of my test programs, then it dithers down to 8 bits
05:40haasn: With the GTX 970, this was *not* the case. It clipped to 8 bits instead
05:40haasn: So I have reason to believe there is some sort of regression here
05:40haasn: I have also tried unplugging my secondary monitor, with no difference
05:40haasn: I have also tried replacing the entire X root window with my rendered window, no difference
05:41haasn: glennk: Also, as a follow-up to “precision of the interpolants for vertex interpolated colors are not required to be 10 bits” <- this is ruled out because I took a raw dump of the X11 screen and inspected the pixel values, which formed a continuous, smooth 10 bit gradient
05:41haasn: In fact, this rules out OpenGL as a possible source completely IMO
05:41haasn: Because the OpenGL bit works fine. I can render a 30-bit image using OpenGL just fine
05:41haasn: eg. if I take a screenshot of the window
05:42haasn: It's only the output bit (something between X.org and the display) that clips
05:43glennk: well, with for instance radeonsi you still have glamor which sits on top of GL and is double buffered, so that could introduce another step which can dither or clip
05:43glennk: i don't know what nvidia do in their ddx blob driver
05:48glennk: maybe you want to talk to the #nouveau devs if they know something about the display pipe on nvidia, and we can discuss 30 bits on radeon here?
05:48haasn: I'm fine with that. I don't care about the nvidia hardware at all anymore, I would prefer just to get it working on AMD :)
05:50haasn: I'm just verifying my methodology here
05:51haasn: (Anyway, I'll try it later, maybe with some of the patches that have been posted)
05:53glennk: there's also catalyst which might support 10 bit output if the patches don't work?
05:54haasn: glennk: The last time I used catalyst it completely failed to start in Depth 30 mode; although there was maybe a similar kernel driver option?
05:54glennk: also i'm not sure if they don't do the same "market segmenting" and only support it on firepro etc
05:55haasn: glennk: They absolutely do
05:55haasn: In fact, for a long time it used to be the case that the output hardware was identical, but 10-bit operation was simply locked via software if you were using the radeon drivers
05:55haasn: Specifically, I have managed to hex edit the firepro drivers in order to get them to think my HD 4890 was a valid FirePro V8700 (iirc on the model numbers)
05:55haasn: And then I got working 10-bit output
05:56haasn: (but since the HD 5xxx series they have diverged too much for that to be an option)
05:56glennk: yeah, similar hacks long ago unlocked AA lines on a geforce2mx :-|
05:56haasn: This whole situation is just milking the cash out of the “pro” video/photo market
05:56haasn: By making them pay 10x as much for a software unlock
05:57haasn: And I guess that's why both AMD and nvidia, at least officially, are so reluctant to even acknowledge these bugs as a valid issue
05:57haasn: Even though they both have started “advertising” 10-bit support on their consumer cards
05:57haasn: (eg. the nvidia linux driver changelog lists “10-bit mode is now enabled on geforce cards as well” as a change)
05:58haasn: (and both let you choose ‘10 bit’ depth for your display, which does generate a 30-bit scanout. Of course, both fail to mention the fact that this is just placebo since it only affects the output depth, but all rendered images are still 8 bit :))
05:59haasn: What's even worse is that it works in DirectX fullscreen exclusive mode but not OpenGL, simply because photoshop etc. use OpenGL - so they have nothing to “lose” from supporting it for DirectX
05:59haasn: (which is what eg. the madVR video renderer uses)
05:59haasn: Unfortunately, I am on Linux and have no access to DirectX :)
05:59haasn: and afaik this is the same for both AMD and nvidia cards
06:01glennk: well, looking at the radeon kernel code it looks like it at least recognizes the formats (same patch author)
06:20haasn: something unrelated that I'm wondering about: with the advent of the ‘amdgpu’ kernel driver, does using an AMD card still require non-free code at all? What about microcode blobs?
06:20zgreg_: yes, firmware is still required
07:35sarnex: wow they are actually fixing the bioshock infinite issue
07:57agd5f: haasn, you need to fix glamor and gbm to support 10 bit surfaces
07:58agd5f: I think it always uses 8888 now
08:01fredrikh: that could potentially open a can of worms, since X lets you reinterpret any pixmap as having any format
08:14glennk: fredrikh, not sure how opening a can of worms in a sea of snakes makes things worse :-p
09:06Black_Prince: so, r600 is now renamed to amdgpu in llvm?
09:10agd5f: Black_Prince, yes
09:10Black_Prince: great, thanks
09:13Black_Prince: also, might not be the right place to ask - but I've been comparing llvm-3.6.2 with llvm-3.7.0rc2 and compiler-rt doesn't install address sanitizer runtime
09:13Black_Prince: anyone knows what happened?
11:43Black_Prince: configure: error: LLVM R600 Target not enabled. You can enable it when building the LLVM
11:43Black_Prince: was it renamed or just a new target is added?
11:45Black_Prince: or is it just that mesa-10.6 doesn't support llvm-3.7 yet?
11:49spreeuw: theres presently a compile bug with llvm
11:49spreeuw: for the opencl stuff
13:29tstellar: agd5f: Are there any issues you know of with VCE on Oland?
13:33tstellar: agd5f: hm, I think it might be an issue with my system. I have an Oland and a Tonga. The Tonga isn't even showing up with lspci, and I get VCE init errors when the radeon driver loads for oland.
13:57agd5f: tstellar, not that I know of off hand
14:08spstarr: spreeuw: seems clover breaks a lot with LLVM trunk
14:13spreeuw: I dont mean the mesa llvm driver if that exists
14:13spreeuw: but just r600g with opencl
14:18sarnex: imirkin_: is it expected changing GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT from 4 to 256 to break stuff? BI launches now but the menu is corrupted
14:19imirkin_: sarnex: i would definitely not expect that
14:19imirkin_: anything that's aligned to 256 is also aligned to 4
14:19sarnex: its likely broken someone else too then
14:19imirkin_: did you change the PIPE_CAP?
14:20imirkin_: yeah ok, that should be fine
14:20imirkin_: probably unnecessary to change the texture buffer one but wtvr
14:20sarnex: too lazy lol
14:21imirkin_: yeah, no harm done
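imirkin_'s point that 256-byte alignment implies 4-byte alignment is easy to sanity-check; a typical align-up helper for UBO offsets looks like this (a sketch, not the actual driver code):

```python
def align_up(offset, alignment):
    # Round offset up to the next multiple of alignment
    # (works for any positive alignment; power-of-two values
    # like GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT also allow a
    # bitmask form)
    return (offset + alignment - 1) // alignment * alignment

# An offset aligned to 256 is automatically aligned to 4, since
# 256 is a multiple of 4 -- so raising the reported alignment
# cannot by itself break an application that honors it.
off = align_up(100, 256)
print(off, off % 4 == 0)  # 256 True
```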
14:21sarnex: this is the screen now https://i.imgur.com/ffbr16z.jpg
14:21imirkin_: are you using my copy image thing?
14:21sarnex: the main menu
14:21imirkin_: or are you using the cmdline flag?
14:21imirkin_: almost right ;)
14:22sarnex: did you push your patch?
14:22imirkin_: i did
14:22imirkin_: but you have to use an ext override
14:22sarnex: ok let me find the envvar
14:24sarnex: wow it works
14:25sarnex: thanks for the help
15:12spstarr: mannerov: PlayOnLinux doesnt have recent wine with nine patched in
15:12spstarr: they should have nine for each build they make really
15:12sarnex: spstarr: the POL guy has to manually build wine and upload it. i dont want to annoy him with that every wine release
15:13spstarr: no automated way?
15:13sarnex: no since it requires recent mesa and libx*, and hes on debian stable
15:13sarnex: so i wrote a script to create a chroot and do everything
15:13spstarr: why won't the wine devs just stop being asses and let this be added
15:13spstarr: its getting very stupid now
15:14sarnex: well i guess they dont want the duplicated code or code thats platform specific
15:14spstarr: since wine is Nine platform specific?
15:14spstarr: when is
15:14spstarr: wine runs on *BSD etc and THEY use Mesa too
15:14sarnex: well it wont work on mac
15:14spstarr: can the code be separated into its own file ?
15:14sarnex: yes there were some staging guys working on it but i think its on hold
15:15spstarr: i see
15:15sarnex: i would do it if i knew enough
15:18mannerov: and I think we should find a way to have it compile even if mesa headers are not there
15:18sarnex: that would be ideal
15:18mannerov: duplicating headers is a solution, but there's probably better
15:18sarnex: my POL build hacks the headers into the wine source but i dont know if you can do that
15:19mannerov: Everyone should be able to compile even without Mesa installed
15:20sarnex: i agree
15:20sarnex: but i dont know a real solution
15:25imirkin_: compile what? wine?
15:25sarnex: yeah right now you need mesa headers to compile wine with gallium nine
15:26imirkin_: oh that stinks!
15:26imirkin_: sounds like the wrong API is being exposed
15:27mannerov: sarnex meant mesa gallium nine headers
15:27spstarr: someone else seeing corruption in ARK
15:28spstarr: it is same
15:28mannerov: currently you need mesa installed before compiling patched wine
15:28spstarr: airlied: this is the same texture corruption I noticed
15:28imirkin_: mannerov: you need to ship a public ABI header, installed as part of mesa
15:28imirkin_: mannerov: similar to GL.h
15:29spstarr: if its Catalyst then this means it is not Mesa rendering this wrong
15:29spstarr: unsure yet.. asking them to run glxinfo to tell me renderer being used
15:29imirkin_: mannerov: and then only enable nine in the wine build if that header is found
15:30mannerov: have a separate repo with just the headers ?
15:30mannerov: I see the idea
15:30mannerov: too late for Mesa 11 :-)
15:30imirkin_: mannerov: not a separate repo... a separate install artifact
15:31imirkin_: mannerov: no, mesa 11 is just about to be branched, not released
15:31imirkin_: you can always add it in
15:31mannerov: what do you mean by separate install artifact ?
15:31imirkin_: when you run "make install"
15:32imirkin_: it will put e.g. GL.h into /usr/include/GL/GL.h (or something liek that)
15:32imirkin_: similarly you should have a nine.h
15:32imirkin_: which has your library's API
15:32mannerov: I think that's what happens at make install
15:32imirkin_: and wine should build against that
15:32mannerov: but it still means that you build mesa before having the headers
15:33imirkin_: same as for everything else
15:33mannerov: which is the problem here, we'd like wine to build on any setup (very old mesa, or no mesa at all)
15:33imirkin_: i don't see the issue
15:34imirkin_: you check for the header at configure time
15:34imirkin_: like any other optional dependency
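The configure-time check imirkin_ describes would look roughly like this in configure.ac (the header path and conditional name are illustrative, not the actual wine/nine patch):

```
AC_CHECK_HEADER([d3dadapter/d3dadapter9.h],
                [have_nine=yes],
                [have_nine=no])
AM_CONDITIONAL([BUILD_NINE], [test "x$have_nine" = "xyes"])
```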
15:35mareko: or copy the header into wine
15:35imirkin_: that seems... very unusual
15:35mannerov: I think copying the header into wine is more sane
15:35imirkin_: you do that for runtime detection of dependencies which is a very rare use-case
15:35mannerov: because if we add a function to the wine - Mesa interface we end up requiring Mesa git to build wine git with nine
15:36imirkin_: mannerov: right, if you make an API, you have to stick to it ;)
15:36mannerov: which means a lot of difficulties to distribute to PlayOnLinux for example
15:36imirkin_: mannerov: you've been in this land where things are done in lock-step, but a grown-up project has to have a versioned API
15:36imirkin_: with so-names, etc
15:37mannerov: our API has major/minor versions
15:37mannerov: and the headers are installed at mesa make install
15:37imirkin_: ok, so why can't wine just depend on those headers and use them?
15:37mannerov: a practical example is what sarnex was raising
15:37imirkin_: i.e. how is this different than *every other* dependency?
15:38mannerov: PlayOnLinux builds its wine on old debian stable
15:38mannerov: building patched wine needs recent Mesa
15:38mannerov: recent Mesa needs a lot of recent packages not in debian stable
15:38mannerov: whereas you just need to have the headers in wine to solve all that
15:39imirkin_:has lost interest in this conversation
15:39imirkin_: basically you can either do what every other project in the linux ecosystem does
15:39imirkin_: or you can go your own way
15:39imirkin_: doesn't really matter to me, tbh
15:48funfunctor: oh my, we didn't branch yet?
15:49imirkin_: i think xexaxo said at 23:00 GMT
15:52funfunctor: will we be seeing AoA making it in?
15:52imirkin_: do you need AoA for anything?
15:53funfunctor: yea, I was planning just to toy with it in some demo GL apps i'm working on
15:54spstarr: you can still use git master :)
15:54spstarr: releases mean nothing if we use git anyway
15:54imirkin_: funfunctor: then merge it into your local tree and enjoy :)
15:54funfunctor: spstarr: yea but I like to test the app on 'stable' ;)
15:55funfunctor: just wondering how ready AoA is on Intel
15:55mareko: done AFAIK
15:55funfunctor: mareko: GL_ARB_texture_view its just that one piglit test that is holding it up right?
15:55spstarr: and they did the piglits also?
15:55mareko: funfunctor: yes
15:56spstarr: so we just need the radeonsi part
15:56mareko: AoA only needs a few changes in st/mesa
15:56spstarr: for radeonsi to use it?
15:56mareko: for all gallium drivers
15:57funfunctor: mareko: which test was that again?
15:57imirkin_: funfunctor: the unwritten one ;)
15:58funfunctor: it was texture on texture 2d or something like that
15:58mareko: funfunctor: a 2D view into a 2D array texture, first_layer > 0
15:58funfunctor: ah ok thanks, i'll have a go on it *now*
15:59imirkin_: didn't my cube array one do that?
16:00imirkin_: oh no. mine did 2d array on cube array.
16:00mareko: a similar test for cube vs cube arrays would be nice too :)
16:00mannerov: are these arrays of arrays efficient for hardware? It looks like a false good idea
16:00imirkin_: mannerov: it's a purely software feature
16:01mareko: mannerov: it's syntactic sugar in GLSL
16:01imirkin_: it's as good of an idea as an array is -- i.e. not a very good one :)
16:02funfunctor: is there a close piglit test I can fork from?
16:02mannerov: this just seems to hit the common gl concerns that you have many ways of doing things and it's hard to know the good ones for performance
16:03mareko: yeah, vulkan is the solution to all our gl issues
16:03mannerov: haven't seen vulkan, I can't say
16:04imirkin_: unfortunately it'll also create vulkan issues ;)
16:04imirkin_: funfunctor: look at mine... sampling-from-...
16:04mannerov: I think khronos should stop working that much in secret
16:05mannerov: if there are issues in the spec, that way more people could see it and warn
16:07mannerov: but from the slides I have seen it looks cool
16:07fredrikh: we find issues in just about every extension spec we implement in mesa, so that sounds about right
16:07imirkin_: funfunctor: http://cgit.freedesktop.org/piglit/tree/tests/spec/arb_texture_view/sampling-2d-array-as-cubemap.c
16:07mannerov: just do not like promises like 'Yeah it will work with Wayland and with X via DRI3'. There are so many detail issues with these two every time that I do not trust people behind closed doors to think of all of them
16:08imirkin_: that one casts a 2darray as cubemap. sounds like mareko wanted 2darray as 2d
16:08funfunctor: imirkin_: thx
16:08nathanhi: hi all. i'm currently trying to get mesa working on my old powerbook (BE PPC), but unfortunately i'm out of luck. i'm getting the following output from glxinfo: http://paste.debian.net/hidden/98461d06/. i already built mesa from git (latest greatest) and applied the patches from http://lists.freedesktop.org/archives/mesa-dev/2013-December/050218.html (patching them directly failed of course after two years, but I patched it manually).. any tips
16:08nathanhi: on how to debug or fix this?
16:09mareko: imirkin_: does it use first_level > 0 ?
16:09imirkin_: nathanhi: there was a patch...
16:09imirkin_: mareko: yeah, but only cubemap
16:09mareko: not first_level
16:09imirkin_: /* the texture view starts at layer 2, so face 1 (-X) will have green */
16:10imirkin_: i actually don't know that i have anything for levels... i think there's stuff in there but i forget
16:10imirkin_: i did it so long ago
16:10nathanhi: imirkin_, you mean the one I already patched or another one?
16:10mareko: khronos don't work in secret, anybody can join them
16:11imirkin_: nathanhi: http://patchwork.freedesktop.org/patch/56756/
16:11imirkin_: nathanhi: i guess it might be the one you patched in? dunno
16:11airlied: mannerov: working with wayland will be driver dependent anyways
16:12imirkin_: nathanhi: it's a bit of a sad situation... from my (passive) observation of the matter, the people who have the hw can't fix it themselves, and the people who can fix it themselves don't have the hw
16:12mareko: I'm really surprised what institutions join khronos these days... companies I've never heard of, schools...
16:12funfunctor: mareko: airlied would it be such a bad idea to get that fp64 code in so at least its there?
16:12airlied: it's not like nvidia vulkan driver is suddenly going to start working with dri3/wayland
16:12airlied: when their GL driver doesn't
16:13mareko: funfunctor: what code?
16:14funfunctor: mareko: 1sec, i'll clean up and rebase the branch now
16:14nathanhi: imirkin_, yes, thats a dilemma.. thanks for the patch - that's the one I already patched
16:15mareko: it wouldn't be wise to merge things 5 seconds before it's branched
16:15airlied: yeah that always ends well :-P
16:16imirkin_: it's not like things can't be fixed on a branch
16:17mareko: there is another release in 3 months, it'll be here before you know it
16:19funfunctor: mareko: https://github.com/victoredwardocallaghan/mesa-GLwork/commits/arb_gpu_shader_fp64_cayman
16:19nathanhi: imirkin_, any tips on how to debug the "no matching fbconfigs or visuals found"? i guess the fbconfig list is generated/filled directly by X?
16:19imirkin_: nathanhi: i like to use gdb.
16:19mareko: nathanhi: yes and no
16:20imirkin_: as for the reason... it might be due to some highly annoying issues, like there legitimately not being any appropriate formats supported by the card
16:20mareko: nathanhi: the dri driver determines what the list is (src/gallium/state_trackers/dri)
16:20imirkin_: and it all used to work because everyone agreed on the same wrong color ordering
16:20imirkin_: and now you have to figure out how to continue the lie in a more structured way
16:21nathanhi: sounds like fun
16:21nathanhi: mareko, thanks
16:21imirkin_: but i dunno what gpu you have
16:21mareko: nathanhi: then the x server loads the driver when it starts, later an app loads the driver for itself, and the list of visuals is the intersection of the app's and x's list
16:21imirkin_: nor whether this is really the issue
16:22mareko: nathanhi: so when you change the list in mesa, always restart the x server
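mareko's point about the visual list being an intersection can be sketched abstractly (purely illustrative; the actual negotiation is done per-fbconfig, not per-depth):

```python
# Each side advertises the formats it can handle; only formats
# both agree on become usable GLX visuals.  If the app links an
# older Mesa without depth-30 support, the 30-bit visuals vanish
# even though the X server offers them.
server_visual_depths = {24, 30}  # what the X server's driver exposes
client_visual_depths = {24}      # what the app's (older) driver built

usable = server_visual_depths & client_visual_depths
print(usable)  # {24}
```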
16:23nathanhi: thanks, that makes sense. I guess I have something to start with!
16:24spstarr: mannerov: I think there's probably more going on than we know, if Wayland will be Vulkan ready, someone from our community is at the table.. somewhere
16:25mareko: st/dri is spaghetti code too, it interacts with 4 components in various ways, so it's sometimes difficult to tell what's going on there (the components are: libGL/libEGL, st/mesa, dri_utils, the gallium driver)
18:51slicksam: nevermind about the PLL sharing issue I mentioned yesterday, I think it was just the HDMI->VGA adapter being crappy
18:52slicksam: it would cut out with certain patterns on the screen
19:09spstarr:runs piglit on today's LLVM/mesa build
19:37spstarr: " In the news at winfuture they said AMD will gain 200% Performance Boost because Nvidia cannot handle a lot of Drawcallbacks therefore Nvidia users will only gain 5%"
19:38spstarr: does this mean whenever we have Vulkan this will mean Linux will be smokingly fast with AMD and not nvidia? =)
19:59spstarr: piglit so far 4 crashes, 2 warnings doing ALL tests
19:59spstarr: whats 'expected' crashes/warnings right now?
21:16spstarr: funfunctor: piglit can take hours to run full tests
21:17spstarr: im running today's build of LLVM/Mesa, 4 crashes 2 warnings so far 492 failures
21:39funfunctor: spstarr: ok..? slow card/cpu
21:39funfunctor: man, the tgsi API *sucks*
21:39spstarr: the bonaire isn't slow
21:39spstarr: just takes long to run
21:40funfunctor: X is slow
21:40spstarr: glean isn't multithreaded and kwin is eating 100% CPU
21:40funfunctor: so the last few tests do lots of forks() or whatever also
21:40spstarr: kwin is broken right now but glean is using about 20% CPU on one core
21:40spstarr: its on number:
21:40funfunctor: spstarr: do your tests without KDE ;)
21:40spstarr: 29367 of 29775
21:40funfunctor: Xmonad will do as a WM
21:41funfunctor: never crashes ever
21:41spstarr: i have 4 cores (8 with hyperthreading)
21:41funfunctor: yea, use Xmonad for your piglit tests not KDE/kwin
21:41spstarr: Xmonad ? lemme install it
21:42spstarr: its a whole blob
21:42spstarr: maybe i'll just switch to gnome's WM
21:42funfunctor: tgsi_iterate_shader() is the closest thing to generic
21:43funfunctor: tgsi_scan_shader() seems to not be generic enough for things like r600g unless I am mistaken
21:43funfunctor: spstarr: whole blob what do you mean?
21:43spstarr: running mutter now instead
21:43funfunctor: spstarr: gnome is just as bloated as kde, much bigger 'blobs'
21:44spstarr: glean is still using 20-30% CPU now even w/o kwin so its just running tests
21:44spstarr: it takes *hours* to run piglit fully
21:46funfunctor: WTF?! https://github.com/shadowsocks/shadowsocks-iOS/issues/124#issuecomment-133630294
21:47funfunctor: spstarr: takes me about 30min
21:47funfunctor: your bloated window manager sucks then
21:47spstarr: im using mutter now
21:47spstarr: 30 mins?
21:47spstarr: im confused why so fast
21:48spstarr: you are Running *ALL* tests?
21:48spstarr: or just GPU
21:48spstarr: maybe im running it wrong?
21:48spstarr: what is your command invocation
21:54funfunctor: i'm just repeating myself to you..
21:56spstarr: i run piglit like this
21:56spstarr: ./piglit run all results/all
21:56spstarr: im using git master piglit too
21:58funfunctor: any way..... any folks around to talk tgsi API with me please?
21:59spstarr: the AMD folks usually rest on weekends, except sunday when the other side of the globe (yours) resumes the work week
22:03funfunctor: spstarr: try awesomewm as a window manager, probably easier for you to configure
22:03funfunctor: spstarr: takes up like ~1M of RAM at most, your tests should run significantly faster
22:07spstarr: its not the WM
22:08spstarr: still on #29367 i can see a pixel in the window changing
22:08spstarr: glean is running but this test is taking forever
22:08spstarr: one tiny white pixel in a small window in corner...
23:01spstarr: still running
23:01spstarr: i call foul
23:02spstarr: looks like ./piglit run tests/all i should be using
23:02spstarr: oh yes
23:02spstarr: [ 3019.907731] [TTM] Illegal buffer object size
23:02spstarr: [ 3019.907965] [drm:radeon_gem_object_create [radeon]] *ERROR* Failed to allocate GEM object (0, 2, 4096, -22)
23:02spstarr: [ 3021.510028] [drm:radeon_cs_ioctl [radeon]] *ERROR* Failed to parse relocation -12!
23:03spstarr: looks like i stuck GPU
23:03spstarr: and shader_runner coredumped 3 times
23:04spstarr: funfunctor: killed the glean now it's finishing
23:14funfunctor: spstarr: some of the last few tests crash for me also
23:14funfunctor: I only really care about the shader tests tbh
23:15funfunctor: spstarr: I am currently just enjoying people core dumping from the Ashley Madison leak, lol.
23:17funfunctor: all those sorts of people who don't care about the NSA, use Facebook and say stupid things like "you are a conspiracy theorist, I have nothing to hide" just got d0xed
23:18spstarr: im rerunning with html output so i can report this
23:18funfunctor: spstarr: figure out which test crashes and i'll take a look at it
23:18funfunctor: if you compile with -g you can create a backtrace
23:19spstarr: im looking how you get it to dump to HTML
23:19spstarr: then we'll know which test
23:20spstarr: oh you run it then summary after
23:20spstarr: so piglit run --> piglit summary html results/all
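Spelled out, the two-step workflow being described (paths are examples; exact arguments depend on the piglit version in use):

```
./piglit run all results/all                   # run the test profile
./piglit summary html summary/all results/all  # write an HTML report
```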
23:26spstarr: i have it capturing a lot now
23:26spstarr: we'll know whats going on
23:27funfunctor: "(16:03:30) spstarr: and shader_runer coredumped 3 times" <~ paste the backtrace of the core file
23:28funfunctor: https://bugs.freedesktop.org/show_bug.cgi?id=91687 could be related for you, no idea?
23:30funfunctor: intel really have been pushing hard on OpenGLES 3.1
23:30funfunctor: its really good
23:32spstarr: we'll find out
23:32spstarr: its processing
23:32spstarr: if it hangs i will kill glean, which will start the next test, and mark that one as bad
23:35spstarr: currently running glean@pixelformats which is taking time but no dmesg and i see it doing something
23:37spstarr: security@initialized-vbo flagged as warn #1
23:40spstarr: its stuck
23:40spstarr: GEM BUSY shows strace
23:50funfunctor: spstarr: which test exactly
23:51spstarr: waiting for this to finish its going slower now with mutter but its running them properly it seems
23:51spstarr: 1 just threw dmesg error
23:53spstarr: how many failures did you end up with in end?