04:29 haasn: Is 10-bit color (“Depth 30”) support on the roadmap at all?
04:30 haasn: If not, which components would be involved in getting true, working 10-bit rendering and output (via DisplayPort)? Are they all open source?
04:31 haasn: I bought an nvidia card thinking it would support depth 10, which it does, but a related bug in the nvidia driver prevents me from properly using it (and nvidia's Linux team has proven hostile and unwilling to acknowledge the issue). I would be interested in contributing towards any efforts to get this feature working via the free AMD drivers instead
04:37 zgreg_: isn't it already supported?
04:39 zgreg_: AFAIR there was some work going on to support 30 bpp a few months or a year ago
04:41 zgreg_: my memory is hazy, though.
04:41 haasn: zgreg_: That would be interesting to try out. Do you happen to know if this works on SI-architecture cards as well?
04:41 haasn: I have one lying around (HD 7950) that I could plug in and test out
04:42 zgreg_: let me actually verify this
04:43 haasn: zgreg_: try eg. setting Depth 30 and DefaultDepth 30 in your xorg.conf's screen section, in the past that would just fail to start X
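For reference, the xorg.conf change being described would look roughly like this (the Identifier and Device names are placeholders and will differ per setup):

    Section "Screen"
        Identifier   "Screen0"
        Device       "Card0"
        DefaultDepth 30
        SubSection "Display"
            Depth    30
        EndSubSection
    EndSection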
04:47 zgreg_: you have to explicitly enable it in the kernel driver. some monitors have trouble with it.
04:48 haasn: Both of my monitors are capable of 10-bit operation, although only one of them is plugged in via DisplayPort (the other's via HDMI)
04:48 zgreg_: parm: deep_color:Deep Color support (1 = enable, 0 = disable (default)) (int)
04:49 zgreg_: yeah, I think the issue is that some monitors announce support for 10 bpc but don't work correctly with it enabled
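The module parameter zgreg_ quotes appears to come from the radeon kernel driver; assuming that module name, enabling it would look something like this:

    # one-off, on the kernel command line
    radeon.deep_color=1
    # or persistently, as root, via modprobe configuration
    echo "options radeon deep_color=1" > /etc/modprobe.d/radeon-deep-color.conf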
04:49 haasn: zgreg_: Does this just concern the output (ie. link to the display), or does it actually let me create an OpenGL window with a 30-bit FB format, render a 30-bit gradient in this fashion, and have it output 1:1 to the monitor as a 30-bit scan-out, with no forced dithering or clamping by the driver?
04:50 haasn: on nvidia cards, I can enable 30-bit scan-out, and render a 30-bit window, but it gets clamped to 8 bit (not even dithered) somewhere within the display chain..
04:50 zgreg_: sorry, I don't know.
04:50 haasn: okay
04:50 haasn: Seems I will have to try it out, then. Does this work for the SI generation or is it only for the newer AMDGPU stuff?
04:51 zgreg_: it should work for SI/CIK as far as I can see.
04:53 zgreg_: hm, looks like support in the X DDX etc. still isn't complete :(
04:54 glennk: thought i saw some incomplete patch for that floating around a while back
04:54 zgreg_: yeah
04:54 zgreg_: http://people.freedesktop.org/~fredrik/depth30/
04:55 haasn: interesting
04:55 haasn: Who's in charge of those and are there any plans to get this into the official releases etc.?
04:55 haasn: I would really, really love 30-bit support. It would be the killer feature for me, since right now the only way to get working 30-bit opengl support is to spend 2000€+ on an overpriced Quadro card
04:56 glennk: would be interesting to know what sort of application you have that requires 30 bit output?
04:56 zgreg_: well, I believe most developers don't even have monitors with deep color support
04:56 haasn: glennk: mpv's vo_opengl. I can have it dither to 8-bit depth, but it produces noticeable 8-bit dithering patterns in some types of gradients
04:56 zgreg_: but the feature isn't magic
04:57 spreeuw: what use is it?
04:57 haasn: spreeuw: high quality video output
04:57 specing: haasn: hit the used marked?
04:57 specing: market*
04:57 haasn: specing: Where can I get a used GPU with 30-bit support and high texture performance (ie. at least 100 GT/s fill rate)?
04:57 specing: no idea, ebay?
04:57 haasn: Most of the ones that would be within budget would have like 1/5th of that at best
04:58 spreeuw: so via opengl you cant do 32bpp color now?
04:58 zgreg_: haasn: all non-ancient GPUs support deep color / 30 bpp
04:58 spreeuw: it would be nicer to focus on 4k support
04:59 zgreg_: spreeuw: why? it works already
04:59 glennk: spreeuw, don't mix up 32 bits per pixel (eg RGBA8888, 8 bits per channel) with 30 bit color depth (RGB10A2, 10 bits per color channel - both are 32 bpp in memory)
05:00 spreeuw: so the latter doesnt affect the displayed color?
05:01 glennk: most systems dither it down to 8 bits or less per component somewhere along the display signal chain
05:03 glennk: haasn, bigger issue is probably making sure mpv uses sRGB framebuffer and dithers from 10 bits taking that into account, that would be my guess where you are seeing the banding from
05:06 haasn: glennk: I am 100% confident that you are mistaken
05:06 haasn: glennk: You can verify my methodology here: https://devtalk.nvidia.com/default/topic/771081/linux/30-bit-depth-with-linux-driver-does-not-produce-30-bit-output-on-monitor/post/4461957/#4461957
05:07 haasn: I have done raw captures of the X frame buffer to ensure it actually holds a continuous 30-bit gradient. I have verified to make sure the programs are actually using 10-bit depth fbconfigs. I have tried bypassing the rendering step entirely and directly modifying the GPU's 3x1DLUTs
05:08 glennk: the visual numbers are a bit arbitrary, you need to score them based on the component bit counts to pick the right one
05:08 haasn: I have used many methods including generating a 30-depth window with raw X11 calls
05:08 haasn: The fbconfig they use lists “10 10 10” for the R/G/B depth.
05:08 haasn: (as per glxinfo)
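A minimal sketch of the kind of verification haasn describes, querying the per-channel sizes of matching GLXFBConfigs to confirm a 10/10/10 (depth 30) visual; error handling and cleanup are omitted, and the attribute list is just one plausible way to filter:

    /* build: cc check30.c -o check30 -lX11 -lGL */
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <GL/glx.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        int attribs[] = { GLX_RED_SIZE, 10, GLX_GREEN_SIZE, 10, GLX_BLUE_SIZE, 10,
                          GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT, None };
        int n = 0;
        GLXFBConfig *cfgs = glXChooseFBConfig(dpy, DefaultScreen(dpy), attribs, &n);
        for (int i = 0; i < n; i++) {
            int r, g, b;
            glXGetFBConfigAttrib(dpy, cfgs[i], GLX_RED_SIZE,   &r);
            glXGetFBConfigAttrib(dpy, cfgs[i], GLX_GREEN_SIZE, &g);
            glXGetFBConfigAttrib(dpy, cfgs[i], GLX_BLUE_SIZE,  &b);
            printf("fbconfig %d: R%d G%d B%d\n", i, r, g, b);
        }
        return 0;
    }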
05:09 haasn: Furthermore, this is actually a regression in nvidia's hardware
05:09 haasn: It works fine on some older cards, but anything in the GTX 9xx generation is broken
05:09 haasn: But nvidia is ignoring my bug, despite my providing *ample* amounts of information, proof, verification, testing methodology, even contacting customer support
05:09 haasn: I am willing to try all of the same on AMD's hardware, if you think the situation will be any better
05:11 haasn: glennk: The fact of the matter is, nvidia's display hardware doesn't actually dither correctly either - it *clips* the numbers to 8 bit precision, but only in GTX 9xx-series GPUs. If you enable dithering on top of that, it actually clips first, and then dithers the clipped result down to your configured depth (eg. if you configure depth 6, you see a clipped-and-dithered result)
05:11 specing: haasn: did you return the card?
05:11 haasn: specing: No, it was not a new purchase by that point. Though I may re-sell it to a friend
05:11 glennk: what are you using to render the gradient?
05:12 haasn: glennk: Have you read my post on the nvidia devtalk forum?
05:12 haasn: I pasted the source code of several test programs I have written for this purpose
05:13 specing: haasn: ah, used
05:13 glennk: haasn, sorry, not seeing the link to your mpv modifications
05:14 zgreg_: glamor also seems to lack 30 bpp support
05:15 glennk: haasn, this one? http://sprunge.us/PQAX
05:15 zgreg_: I guess it shouldn't be too hard to add the new formats, though
05:18 haasn: glennk: mpv does not need any modifications, it works fine with 30-bit fbconfigs out of the box...
05:18 haasn: glennk: that is a standalone GLUT program, but should work just as well, yes
05:19 glennk: one thing i'll note to watch out for with that standalone program is that the precision of the interpolants for vertex-interpolated colors is not required to be 10 bits
05:20 glennk: may be a better test to compute the gradient in a pixel shader manually
05:20 glennk: the mpv source video would presumably come straight from the decoder so no interpolation there
05:22 glennk: then for mpv, which decoder is being used?
05:26 haasn: glennk: One of my tests was using mpv with a fragment shader that sets each position to vec4(gl_fragCoord.xxx, 1.0)
05:27 haasn: glennk: Another one of my tests was generating a 16-bit gradient PNG with imagemagick, VERIFYING that it's a 16-bit gradient by inspecting the raw pixel values of the resulting PNG, and then opening this file in mpv (which decodes via libavcodec, generating a 16-bit texture)
05:28 glennk: i'll assume you meant gl_fragCoord.xxx * something to normalize the output to 0-1 range
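What haasn describes, with the normalization glennk is pointing out, would be roughly this fragment shader (the 1920.0 divisor is a placeholder for the actual window width):

    #version 130
    out vec4 color;
    void main()
    {
        /* horizontal grey ramp; gl_FragCoord.x must be scaled into [0,1] */
        float v = gl_FragCoord.x / 1920.0;
        color = vec4(v, v, v, 1.0);
    }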
05:29 haasn: I have then played with mpv's built-in dither settings. At dither-depth=8 it produces an 8-bit approximation (via dithering) of a smooth gradient. At dither-depth=10 it produces a 10-bit approximation (again via dithering) of a smooth gradient, but I see a banded result on the display. Taking a raw, 16-bit screenshot of this window and inspecting the colors, I have verified that it is actually a 10-bit dither
05:29 haasn: pattern that *should*, assuming the nvidia driver was not buggy, result in a smooth gradient on-screen
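The mpv test described above can be reproduced along these lines (gradient16.png stands in for the 16-bit gradient mentioned earlier, and the exact option syntax may differ between mpv versions):

    # dither the 30-bit render down to 10 bits per channel on output
    mpv --vo=opengl-hq:dither-depth=10 gradient16.png
    # for comparison, an 8-bit dither should show a visible dither pattern up close
    mpv --vo=opengl-hq:dither-depth=8 gradient16.png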
05:30 glennk: okay, great
05:30 haasn: glennk: The texture coordinates are in the range [0-1] to begin with
05:30 haasn: I'm not sure why it takes such a monumental amount of effort to convince you that 1. yes, I know what the fuck I'm doing, and 2. no, the bug is NOT in my application
05:31 haasn: (I think I'm going to take a break for a while, this is making me unexpectedly angry)
05:31 haasn: (Sorry if I came across as too negative)
05:32 glennk: you are presuming i'm disagreeing with you, i'm not, i'm trying to isolate where the issue is
05:33 haasn: Also, one more tidbit of information: I can isolate the monitor and GPU hardware as the cause of the issue because I can get a working 10-bit signal using a full-screen DirectX buffer in exclusive mode on Windows
05:33 haasn: And this produces a smooth gradient on-screen
05:33 haasn: (With no visible dither pattern - and before you ask, I *can* see dither patterns at 8-bit within certain luminance ranges)
05:34 zgreg_: but doesn't nvidia support 10 bpc only on quadros on windows?
05:34 zgreg_: maybe it's still dithering down to 8 bpc
05:36 glennk: haasn, last question, running a compositor or not and if so which one?
05:37 glennk: (so many opportunities across the stack to ruin precision...)
05:39 haasn: glennk: I am not running a compositor, other than the nvidia driver itself. (Just a small tiling WM to spawn windows and stuff) There is a “ForceFullCompositionPipeline” setting, and I have replicated all of my experiments with it set to both On and Off. There *is* a noticeable difference between those two settings (so it was actually getting turned on/off), but those just affected vsync behavior - the
05:39 haasn: outcome of these color experiments was identical across the board
05:40 haasn: glennk: I have tried other nvidia hardware, including a 9800 GT, and while that card does *not* have a DisplayPort output (and thus can't produce a true 30-bit signal), it *does* dither correctly. That is, if I set it to “8 bit” depth in the control panel and generate a 30-bit signal with one of my test programs, then it dithers down to 8 bits
05:40 haasn: With the GTX 970, this was *not* the case. It clipped to 8 bits instead
05:40 haasn: So I have reason to believe there is some sort of regression here
05:40 haasn: I have also tried unplugging my secondary monitor, with no difference
05:40 haasn: I have also tried replacing the entire X root window with my rendered window, no difference
05:41 haasn: glennk: Also, as a follow-up to “precision of the interpolants for vertex interpolated colors are not required to be 10 bits” <- this is ruled out because I took a raw dump of the X11 screen and inspected the pixel values, which formed a continuous, smooth 10 bit gradient
05:41 haasn: In fact, this rules out OpenGL as a possible source completely IMO
05:41 haasn: Because the OpenGL bit works fine. I can render a 30-bit image using OpenGL just fine
05:41 haasn: eg. if I take a screenshot of the window
05:42 haasn: It's only the output bit (something between X.org and the display) that clips
05:43 glennk: well, with for instance radeonsi you still have glamor which sits on top of GL and is double buffered, so that could introduce another step which can dither or clip
05:43 glennk: i don't know what nvidia do in their ddx blob driver
05:48 glennk: maybe you want to talk to the #nouveau devs if they know something about the display pipe on nvidia, and we can discuss 30 bits on radeon here?
05:48 haasn: I'm fine with that. I don't care about the nvidia hardware at all anymore, I would prefer just to get it working on AMD :)
05:50 haasn: I'm just verifying my methodology here
05:51 haasn: (Anyway, I'll try it later, maybe with some of the patches that have been posted)
05:53 glennk: there's also catalyst which might support 10 bit output if the patches don't work?
05:54 haasn: glennk: The last time I used catalyst it completely failed to start in Depth 30 mode; although there was maybe a similar kernel driver option?
05:54 glennk: also i'm not sure if they don't do the same "market segmenting" and only support it on firepro etc
05:55 haasn: glennk: They absolutely do
05:55 haasn: In fact, for a long time it used to be the case that the output hardware was identical, but 10-bit operation was simply locked via software if you were using the radeon drivers
05:55 haasn: Specifically, I have managed to hex edit the firepro drivers in order to get them to think my HD 4890 was a valid FirePro V8700 (iirc on the model numbers)
05:55 haasn: And then I got working 10-bit output
05:56 haasn: (but since the HD 5xxx series they have diverged too much for that to be an option)
05:56 glennk: yeah, similar hacks long ago unlocked AA lines on a geforce2mx :-|
05:56 haasn: This whole situation is just milking the cash out of the “pro” video/photo market
05:56 haasn: By making them pay 10x as much for a software unlock
05:57 haasn: And I guess that's why both AMD and nvidia, at least officially, are so reluctant to even acknowledge these bugs as a valid issue
05:57 haasn: Even though they both have started “advertising” 10-bit support on their consumer cards
05:57 haasn: (eg. the nvidia linux driver changelogs lists “10-bit mode is now enabled on geforce cards as well” as a change)
05:58 haasn: (and both let you choose ‘10 bit’ depth for your display, which does generate a 30-bit scanout. Of course, both fail to mention the fact that this is just placebo since it only affects the output depth, but all rendered images are still 8 bit :))
05:59 haasn: The even worse thing is that it works in DirectX fullscreen exclusive mode but not OpenGL, simply because photoshop etc. use OpenGL - but they have nothing to “lose” from supporting it for DirectX
05:59 haasn: (which is what eg. the madVR video renderer uses)
05:59 haasn: Unfortunately, I am on Linux and have no access to DirectX :)
05:59 haasn: and afaik this is the same for both AMD and nvidia cards
06:01 glennk: well, looking at the radeon kernel code it looks like it at least recognizes the formats (same patch author)
06:20 haasn: something unrelated that I'm wondering about: with the advent of the ‘amdgpu’ kernel driver, does using an AMD card still require non-free code at all? What about microcode blobs?
06:20 zgreg_: yes, firmware is still required
07:35 sarnex: wow they are actually fixing the bioshock infinite issue
07:57 agd5f: haasn, you need to fix glamor and gbm to support 10 bit surfaces
07:58 agd5f: I think it always uses 8888 now
08:01 fredrikh: that could potentially open a can of worms, since X lets you reinterpret any pixmap as having any format
08:14 glennk: fredrikh, not sure how opening a can of worms in a sea of snakes makes things worse :-p
08:14 fredrikh: hehe
09:06 Black_Prince: so, r600 is now renamed to amdgpu in llvm?
09:10 agd5f: Black_Prince, yes
09:10 Black_Prince: great, thanks
09:13 Black_Prince: also, might not be the right place to ask - but I've been comparing llvm-3.6.2 with llvm-3.7.0rc2 and compiler-rt doesn't install address sanitizer runtime
09:13 Black_Prince: anyone knows what happened?
11:43 Black_Prince: configure: error: LLVM R600 Target not enabled. You can enable it when building the LLVM
11:43 Black_Prince: was it renamed or just a new target is added?
11:45 Black_Prince: or is it just that mesa-10.6 doesn't support llvm-3.7 yet?
11:49 spreeuw: theres presently a compile bug with llvm
11:49 spreeuw: for the opencl stuff
11:49 spreeuw: clover
13:29 tstellar: agd5f: Are there any issues you know of with VCE on Oland?
13:33 tstellar: agd5f: hm, I think it might be an issue with my system. I have an Oland and a Tonga. The Tonga isn't even showing up with lspci, and I get VCE init errors when the radeon driver loads for oland.
13:57 agd5f: tstellar, not that I know of off hand
14:08 spstarr: spreeuw: seems clover breaks a lot with LLVM trunk
14:13 spreeuw: I dont mean the mesa llvm driver if that exists
14:13 spreeuw: but just r600g with opencl
14:18 sarnex: imirkin_: is it expected that changing GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT from 4 to 256 would break stuff? BI launches now but the menu is corrupted
14:19 imirkin_: sarnex: i would definitely not expect that
14:19 imirkin_: anything that's aligned to 256 is also aligned to 4
14:19 sarnex: its likely broken someone else too then
14:19 imirkin_: did you change the PIPE_CAP?
14:19 sarnex: http://pastebin.com/4JDfVUrG
14:20 imirkin_: yeah ok, that should be fine
14:20 imirkin_: probably unnecessary to change the texture buffer one but wtvr
14:20 sarnex: too lazy lol
14:21 imirkin_: yeah, no harm done
14:21 sarnex: this is the screen now https://i.imgur.com/ffbr16z.jpg
14:21 imirkin_: are you using my copy image thing?
14:21 sarnex: the main menu
14:21 imirkin_: or are you using the cmdline flag?
14:21 imirkin_: almost right ;)
14:22 sarnex: did you push your patch?
14:22 imirkin_: i did
14:22 imirkin_: but you have to use an ext override
14:22 sarnex: ok let me find the envvar
14:22 imirkin_: MESA_EXTENSION_OVERRIDE=GL_ARB_copy_image
14:22 sarnex: thanks
14:24 sarnex: wow it works
14:24 imirkin_: cool
14:25 sarnex: thanks for the help
15:12 spstarr: mannerov: PlayOnLinux doesnt have recent wine with nine patched in
15:12 spstarr: they should have nine for each build they make really
15:12 sarnex: spstarr: the POL guy has to manually build wine and upload it. i dont want to annoy him with that every wine release
15:13 spstarr: heh
15:13 spstarr: no automated way?
15:13 sarnex: no since it requires recent mesa and libx*, and hes on debian stable
15:13 spstarr: ugh
15:13 sarnex: so i wrote a script to create a chroot and do everything
15:13 spstarr: why won't the wine devs just stop being asses and let this be added
15:13 spstarr: its getting very stupid now
15:14 sarnex: well i guess they dont want the duplicated code or code thats platform specific
15:14 spstarr: since wine is Nine platform specific?
15:14 spstarr: when is
15:14 spstarr: wine runs on *BSD etc and THEY use Mesa too
15:14 sarnex: well it wont work on mac
15:14 spstarr: can the code be separated into its own file ?
15:14 sarnex: yes there were some staging guys working on it but i think its on hold
15:15 spstarr: i see
15:15 sarnex: i would do it if i knew enough
15:18 mannerov: and I think we should find a way to have it compile even if mesa headers are not there
15:18 sarnex: that would be ideal
15:18 mannerov: duplicating headers is a solution, but there's probably better
15:18 sarnex: my POL build hacks the headers into the wine source but i dont know if you can do that
15:19 mannerov: Everyone should be able to compile even without Mesa installed
15:20 sarnex: i agree
15:20 sarnex: but i dont know a real solution
15:25 imirkin_: compile what? wine?
15:25 sarnex: yeah right now you need mesa headers to compile wine with gallium nine
15:26 imirkin_: oh that stinks!
15:26 imirkin_: sounds like the wrong API is being exposed
15:27 mareko: OpenGL?
15:27 mannerov: sarnex meant mesa gallium nine headers
15:27 spstarr: someone else seeing corruption in ARK
15:28 spstarr: it is same
15:28 mannerov: currently you need mesa installed before compiling patched wine
15:28 spstarr: https://imgur.com/QvV0YnP
15:28 spstarr: airlied: this is the same texture corruption I noticed
15:28 imirkin_: mannerov: you need to ship a public ABI header, installed as part of mesa
15:28 imirkin_: mannerov: similar to GL.h
15:29 spstarr: if its Catalyst then this means it is not Mesa rendering this wrong
15:29 spstarr: unsure yet.. asking them to run glxinfo to tell me renderer being used
15:29 imirkin_: mannerov: and then only enable nine in the wine build if that header is found
15:30 mannerov: have a separate repo with just the headers ?
15:30 mannerov: I see the idea
15:30 mannerov: too late for Mesa 11 :-)
15:30 imirkin_: mannerov: not a separate repo... a separate install artifact
15:31 imirkin_: mannerov: no, mesa 11 is just about to be branched, not released
15:31 imirkin_: you can always add it in
15:31 mannerov: what do you mean by separate install artifact ?
15:31 imirkin_: when you run "make install"
15:32 imirkin_: it will put e.g. GL.h into /usr/include/GL/GL.h (or something like that)
15:32 imirkin_: similarly you should have a nine.h
15:32 imirkin_: which has your library's API
15:32 mannerov: I think that's what happens at make install
15:32 imirkin_: exactly.
15:32 imirkin_: and wine should build against that
15:32 mannerov: but it still means that you build mesa before having the headers
15:33 imirkin_: same as for everything else
15:33 mannerov: which is the problem here, we'd like wine to build on any setup (very old mesa, or no mesa at all)
15:33 imirkin_: right....
15:33 imirkin_: i don't see the issue
15:34 imirkin_: you check for the header at configure time
15:34 imirkin_: like any other optional dependency
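The configure-time check imirkin_ is suggesting would be a sketch along these lines in wine's configure.ac; the header path and macro names here are assumptions about what mesa installs for nine, not something the wine patches necessarily use:

    dnl optional dependency: only enable the nine backend if the mesa header is present
    AC_CHECK_HEADER([d3dadapter/d3dadapter9.h],
                    [AC_DEFINE([HAVE_D3DADAPTER_H], [1],
                               [Define if the gallium nine adapter header is available])],
                    [AC_MSG_NOTICE([nine adapter header not found, building without nine])])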
15:35 mareko: or copy the header into wine
15:35 imirkin_: that seems... very unusual
15:35 mannerov: I think copying the header into wine is more sane
15:35 imirkin_: you do that for runtime detection of dependencies which is a very rare use-case
15:35 mannerov: because if we add a function to the wine - Mesa interface we end up requiring Mesa git to build wine git with nine
15:36 imirkin_: mannerov: right, if you make an API, you have to stick to it ;)
15:36 mannerov: which means a lot of difficulties to distribute to PlayOnLinux for example
15:36 imirkin_: mannerov: you've been in this land where things are done in lock-step, but a grown-up project has to have a versioned API
15:36 imirkin_: with so-names, etc
15:37 mannerov: our API has major/minor versions
15:37 mannerov: and the headers are installed at mesa make install
15:37 imirkin_: ok, so why can't wine just depend on those headers and use them?
15:37 mannerov: a practical example is what sarnex was raising
15:37 imirkin_: i.e. how is this different than *every other* dependency?
15:38 mannerov: PlayOnLinux builds its wine on old debian stable
15:38 mannerov: building patched wine needs recent Mesa
15:38 mannerov: recent Mesa needs a lot of recent packages not in debian stable
15:38 mannerov: etc
15:38 mannerov: whereas you just need to have the headers in wine to solve all that
15:39 imirkin_: has lost interest in this conversation
15:39 imirkin_: basically you can either do what every other project in the linux ecosystem does
15:39 imirkin_: or you can go your own way
15:39 imirkin_: doesn't really matter to me, tbh
15:39 mannerov: :-(
15:48 funfunctor: oh my, we didn't branch yet?
15:49 imirkin_: i think xexaxo said at 23:00 GMT
15:52 funfunctor: will we be seeing AoA making it in?
15:52 mareko: unlikely
15:52 imirkin_: do you need AoA for anything?
15:53 funfunctor: yea, I was planning just to toy with it in some demo GL apps i'm working on
15:54 spstarr: you can still use git master :)
15:54 spstarr: releases mean nothing if we use git anyway
15:54 spstarr: heh
15:54 imirkin_: funfunctor: then merge it into your local tree and enjoy :)
15:54 funfunctor: spstarr: yea but I like to test the app on 'stable' ;)
15:55 funfunctor: just wondering how ready AoA is on Intel
15:55 mareko: done AFAIK
15:55 funfunctor: mareko: GL_ARB_texture_view, it's just that one piglit test that is holding it up, right?
15:55 spstarr: and they did the piglits also?
15:55 mareko: funfunctor: yes
15:56 spstarr: so we just need the radeonsi part
15:56 mareko: AoA only needs a few changes in st/mesa
15:56 spstarr: for radeonsi to use it?
15:56 mareko: for all gallium drivers
15:57 funfunctor: mareko: which test was that again?
15:57 imirkin_: funfunctor: the unwritten one ;)
15:57 funfunctor: lol
15:58 funfunctor: it was texture on texture 2d or something like that
15:58 mareko: funfunctor: a 2D view into a 2D array texture, first_layer > 0
15:58 funfunctor: ah ok thanks, i'll have a go on it *now*
15:59 imirkin_: didn't my cube array one do that?
16:00 imirkin_: oh no. mine did 2d array on cube array.
16:00 mareko: a similar test for cube vs cube arrays would be nice too :)
16:00 mannerov: are these arrays of arrays efficient for hardware? It looks like one of those ideas that sounds good but isn't
16:00 imirkin_: mannerov: it's a purely software feature
16:01 mareko: mannerov: it's syntactic sugar in GLSL
16:01 imirkin_: it's as good of an idea as an array is -- i.e. not a very good one :)
16:01 mannerov: ok
16:02 funfunctor: is there a close piglit test I can fork from?
16:02 mannerov: this just seems to hit the common gl concern that there are many ways of doing things and it's hard to know which ones are good for performance
16:03 mareko: yeah, vulkan is the solution to all our gl issues
16:03 mannerov: haven't seen vulkan, I can't say
16:04 imirkin_: unfortunately it'll also create vulkan issues ;)
16:04 imirkin_: funfunctor: look at mine... sampling-from-...
16:04 mannerov: I think khronos should stop working that much in secret
16:05 mannerov: if there are issues in the spec, that way more people could see it and warn
16:07 mannerov: but from the slides I have seen it looks cool
16:07 fredrikh: we find issues in just about every extension spec we implement in mesa, so that sounds about right
16:07 imirkin_: funfunctor: http://cgit.freedesktop.org/piglit/tree/tests/spec/arb_texture_view/sampling-2d-array-as-cubemap.c
16:07 mannerov: just don't like promises like 'Yeah it will work with Wayland and with X via DRI3'. There are always so many detail issues with these two that I don't trust people behind closed doors to think of all of them
16:08 imirkin_: that one casts a 2darray as cubemap. sounds like mareko wanted 2darray as 2d
16:08 funfunctor: imirkin_: thx
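A sketch of the missing piglit case mareko describes, creating a 2D view into a 2D array texture with first_layer > 0; GL context setup, layer uploads, and error checks are omitted, and the sizes/format are arbitrary:

    /* requires ARB_texture_view; the original texture must use immutable storage */
    GLuint tex, view;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, 64, 64, 4);
    /* ... upload a distinct color to each of the 4 layers ... */
    glGenTextures(1, &view);
    glTextureView(view, GL_TEXTURE_2D, tex, GL_RGBA8,
                  0, 1,   /* minlevel, numlevels */
                  2, 1);  /* minlayer = 2 (first_layer > 0), numlayers */
    /* sampling the view as GL_TEXTURE_2D should now return layer 2's contents */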
16:08 nathanhi: hi all. i'm currently trying to get mesa working on my old powerbook (BE PPC), but unfortunately i'm out of luck. i'm getting the following output from glxinfo: http://paste.debian.net/hidden/98461d06/. i already built mesa from git (latest greatest) and applied the patches from http://lists.freedesktop.org/archives/mesa-dev/2013-December/050218.html (patching them directly failed of course after two years, but I patched it manually).. any tips
16:08 nathanhi: on how to debug or fix this?
16:09 mareko: imirkin_: does it use first_level > 0 ?
16:09 imirkin_: nathanhi: there was a patch...
16:09 imirkin_: mareko: yeah, but only cubemap
16:09 mareko: first_layer
16:09 mareko: not first_level
16:09 imirkin_: /* the texture view starts at layer 2, so face 1 (-X) will have green */
16:10 mareko: ok
16:10 imirkin_: i actually don't know that i have anything for levels... i think there's stuff in there but i forget
16:10 imirkin_: i did it so long ago
16:10 nathanhi: imirkin_, you mean the one I already patched or another one?
16:10 mareko: khronos don't work in secret, anybody can join them
16:11 imirkin_: nathanhi: http://patchwork.freedesktop.org/patch/56756/
16:11 imirkin_: nathanhi: i guess it might be the one you patched in? dunno
16:11 airlied: mannerov: working with wayland will be driver dependent anyways
16:12 imirkin_: nathanhi: it's a bit of a sad situation... from my (passive) observation of the matter, the people who have the hw can't fix it themselves, and the people who can fix it themselves don't have the hw
16:12 mareko: I'm really surprised what institutions join khronos these days... companies I've never heard of, schools...
16:12 funfunctor: mareko: airlied would it be such a bad idea to get that fp64 code in so at least it's there?
16:12 airlied: it's not like nvidia vulkan driver is suddenly going to start working with dri3/wayland
16:12 airlied: when their GL driver doesn't
16:13 mareko: funfunctor: what code?
16:14 funfunctor: mareko: 1sec, i'll clean up and rebase the branch now
16:14 nathanhi: imirkin_, yes, thats a dilemma.. thanks for the patch - that's the one I already patched
16:15 mareko: it wouldn't be wise to merge things 5 seconds before it's branched
16:15 airlied: yeah that always ends well :-P
16:16 imirkin_: it's not like things can't be fixed on a branch
16:17 mareko: there is another release in 3 months, it'll be here before you know it
16:19 funfunctor: mareko: https://github.com/victoredwardocallaghan/mesa-GLwork/commits/arb_gpu_shader_fp64_cayman
16:19 nathanhi: imirkin_, any tips on how to debug the "no matching fbconfigs or visuals found"? i guess the fbconfig list is generated/filled directly by X?
16:19 imirkin_: nathanhi: i like to use gdb.
16:19 mareko: nathanhi: yes and no
16:20 imirkin_: as for the reason... it might be due to some highly annoying issues, like there legitimately not being any appropriate formats supported by the card
16:20 nathanhi: =D
16:20 mareko: nathanhi: the dri driver determines what the list is (src/gallium/state_trackers/dri)
16:20 imirkin_: and it all used to work because everyone agreed on the same wrong color ordering
16:20 imirkin_: and now you have to figure out how to continue the lie in a more structured way
16:21 nathanhi: sounds like fun
16:21 nathanhi: mareko, thanks
16:21 imirkin_: but i dunno what gpu you have
16:21 mareko: nathanhi: then the x server loads the driver when it starts, later an app loads the driver for itself, and the list of visuals is the intersection of the app's and x's list
16:21 nathanhi: r300
16:21 imirkin_: nor whether this is really the issue
16:22 mareko: nathanhi: so when you change the list in mesa, always restart the x server
16:23 nathanhi: thanks, that makes sense. I guess I have something to start with!
16:24 spstarr: mannerov: I think there's probably more going on than we know, if Wayland will be Vulkan ready, someone from our community is at the table.. somewhere
16:25 mareko: st/dri is spaghetti code too, it interacts with 4 components in various ways, so it's sometimes difficult to tell what's going on there (the components are: libGL/libEGL, st/mesa, dri_utils, the gallium driver)
16:26 imirkin_: sometimes?
16:26 nathanhi: =D
18:51 slicksam: nevermind about the PLL sharing issue I mentioned yesterday, I think it was just the HDMI->VGA adapter being crappy
18:52 slicksam: it would cut out with certain patterns on the screen
19:09 spstarr: runs piglit on today's LLVM/mesa build
19:37 spstarr: " In the news at winfuture they said AMD will gain 200% Performance Boost because Nvidia cannot handle a lot of Drawcallbacks therefore Nvidia users will only gain 5%"
19:37 spstarr: DX12
19:38 spstarr: does this mean whenever we have Vulkan this will mean Linux will be smokingly fast with AMD and not nvidia? =)
19:59 spstarr: piglit so far 4 crashes, 2 warnings doing ALL tests
19:59 spstarr: what's the 'expected' number of crashes/warnings right now?
21:16 spstarr: funfunctor: piglit can take hours to run full tests
21:17 spstarr: im running today's build of LLVM/Mesa, 4 crashes 2 warnings so far 492 failures
21:39 funfunctor: Hi
21:39 funfunctor: spstarr: ok..? slow card/cpu
21:39 funfunctor: ?
21:39 funfunctor: man, the tgsi API *sucks*
21:39 spstarr: the bonaire isn't slow
21:39 spstarr: just takes long to run
21:40 funfunctor: X is slow
21:40 spstarr: glean isn't multithreaded and kwin is eating 100% CPU
21:40 funfunctor: so the last few tests do lots of forks() or whatever also
21:40 spstarr: kwin is broken right now but glean is using about 20% CPU on one core
21:40 spstarr: its on number:
21:40 funfunctor: spstarr: do your tests without KDE ;)
21:40 spstarr: 29367 of 29775
21:40 funfunctor: Xmonad will do as a WM
21:41 funfunctor: never crashes ever
21:41 spstarr: i have 4 core (8 with hyperthreading
21:41 funfunctor: yea, use Xmonad for your piglit tests not KDE/kwin
21:41 spstarr: Xmonad ? lemme install it
21:42 spstarr: ugh
21:42 spstarr: its a whole blob
21:42 spstarr: maybe i'll just switch to gnome's WM
21:42 funfunctor: tgsi_iterate_shader() is the closest thing to generic
21:43 funfunctor: tgsi_scan_shader() seems to not be generic enough for things like r600g unless I am mistaken
21:43 funfunctor: spstarr: whole blob what do you mean?
21:43 spstarr: running mutter now instead
21:43 funfunctor: spstarr: gnome is just as bloated as kde, much bigger 'blobs'
21:44 spstarr: glean is still using 20-30% CPU now even w/o kwin so it's just running the test
21:44 spstarr: it takes *hours* to run piglit fully
21:46 funfunctor: WTF?! https://github.com/shadowsocks/shadowsocks-iOS/issues/124#issuecomment-133630294
21:47 funfunctor: spstarr: takes me about 30min
21:47 funfunctor: your bloated window manager sucks then
21:47 spstarr: im using mutter now
21:47 spstarr: 30 mins?
21:47 spstarr: im confused why so fast
21:48 spstarr: you are Running *ALL* tests?
21:48 spstarr: or just GPU
21:48 spstarr: maybe im running it wrong?
21:48 spstarr: what is your command invocation
21:54 funfunctor: i'm just repeating myself to you..
21:56 spstarr: i run piglit like this
21:56 spstarr: ./piglit run all results/all
21:56 spstarr: im using git master piglit too
21:58 funfunctor: any way..... any folks around to talk tgsi API with me please?
21:59 spstarr: the AMD folks usually take a rest on weekends, except sunday when the other side of the globe (yours) resumes its work week
22:03 funfunctor: spstarr: try awesomewm as a window manager, probably easier for you to configure
22:03 funfunctor: spstarr: takes up like ~1M of RAM at most, your tests should run significantly faster
22:07 spstarr: its not the WM
22:08 spstarr: still on #29367 i can see a pixel in the window changing
23:08 spstarr: clean running but this test is taking forever
22:08 spstarr: glean
22:08 spstarr: one tiny white pixel in a small window in corner...
23:01 spstarr: still running
23:01 spstarr: i call foul
23:02 spstarr: looks like ./piglit run tests/all is what i should be using
23:02 spstarr: oh yes
23:02 spstarr: [ 3019.907731] [TTM] Illegal buffer object size
23:02 spstarr: [ 3019.907965] [drm:radeon_gem_object_create [radeon]] *ERROR* Failed to allocate GEM object (0, 2, 4096, -22)
23:02 spstarr: [ 3021.510028] [drm:radeon_cs_ioctl [radeon]] *ERROR* Failed to parse relocation -12!
23:03 spstarr: looks like i stuck GPU
23:03 spstarr: and shader_runner coredumped 3 times
23:04 spstarr: funfunctor: killed the glean now it's finishing
23:14 funfunctor: spstarr: some of the last few tests crash for me also
23:14 funfunctor: I only really care about the shader tests tbh
23:15 funfunctor: spstarr: I am currently just enjoying people core dumping from the Ashley Madison leak, lol.
23:17 funfunctor: all those sorts of people who don't care about the NSA, use Facebook and say stupid things like "you are a conspiracy theorist, I have nothing to hide" just got d0xed
23:17 funfunctor: eats popcorn
23:17 spstarr: ;)
23:18 spstarr: im rerunning with html output so i can report this
23:18 funfunctor: spstarr: figure out which test crashes and i'll take a look at it
23:18 funfunctor: if you compile with -g you can create a backtrace
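Getting that backtrace would look roughly like this; the binary and core file paths are placeholders, and shader_runner is assumed to be the piglit binary that crashed:

    # rebuild mesa/piglit with debug info (-g) first, then:
    gdb /path/to/piglit/bin/shader_runner /path/to/core
    (gdb) bt full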
23:19 spstarr: im looking how you get it to dump to HTML
23:19 spstarr: then we'll know which test
23:20 spstarr: oh you run it then summary after
23:20 spstarr: so piglit run --> piglit summary html results/all
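Put together, the invocation spstarr arrives at is roughly the following (directory names as used above; the summary subcommand's argument order may vary between piglit versions):

    # run the full profile, writing results to results/all
    ./piglit run all results/all
    # then generate an HTML report from those results
    ./piglit summary html summary/all results/all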
23:26 spstarr: i have it capturing a lot now
23:26 spstarr: we'll know whats going on
23:27 funfunctor: "(16:03:30) spstarr: and shader_runer coredumped 3 times" <~ paste the backtrace of the core file
23:28 funfunctor: https://bugs.freedesktop.org/show_bug.cgi?id=91687 could be related for you, no idea?
23:30 funfunctor: intel really have been pushing hard on OpenGLES 3.1
23:30 funfunctor: its really good
23:32 spstarr: we'll find out
23:32 spstarr: its processing
23:32 spstarr: if it hangs i will kill glean, which will start the next test, and mark that one as bad
23:35 spstarr: currently running glean@pixelformats which is taking time but no dmesg and i see it doing something
23:37 spstarr: security@initialized-vbo flagged as warn #1
23:40 spstarr: its stuck
23:40 spstarr: strace shows GEM BUSY
23:50 funfunctor: spstarr: which test exactly
23:51 spstarr: waiting for this to finish its going slower now with mutter but its running them properly it seems
23:51 spstarr: 1 just threw dmesg error
23:53 spstarr: how many failures did you end up with in end?