00:40ajax: how terrible would it be if <GL/internal/dri_interface.h> began to require <vulkan/vulkan.h> ?
00:41ajax: it's kind of already gross that it includes <GL/gl.h> imho
00:47airlied: yeah, unlikely vulkan.h would make it much grosser
00:47airlied: ajax: who else uses that file, the X server?
00:48ajax: yeah, glx dri driver loaders
00:49ajax: if it really matters i can just make a new header for the vtables in question i guess
01:28pinchartl: very newbie question about weston: I'm trying to run it with the swrast mesa driver, and I'm getting a "failed to initialize display" error, which I believe is caused by "libEGL warning: did not find extension DRI_DRI2 version 2" when attempting to load the swrast driver
01:28pinchartl: first of all, is weston + swrast supported ?
01:29ajax: depends what you mean by swrast i guess
01:29ajax: the classic thing that never learned shaders, probably not
01:30ajax: llvmpipe or softpipe, maybe
01:30pinchartl: I'm not sure what I need :-) my platform doesn't have a usable GPU, and I'd like to run weston with full software rendering
01:30ajax: i'd imagine those two would work for weston on x11, maybe not for on the raw dri device
01:31pinchartl: but I'm really not sure how to proceed
01:31ajax: i mean. you could try to use the software gl renderers but it'll be quite slow. is the pixman backend not good enough?
01:32pinchartl: it may be. I have pretty much 0 knowledge of the graphics stack above the kernel...
01:32pinchartl: so I have no idea how swrast, llvmpipe and pixman interact with weston and at what level of the stack they sit
01:33pinchartl: (full log available at http://paste.debian.net/1170002/)
01:36ajax: try 'weston-launch -u root -- --use-pixman'
01:37ajax: weston should probably try to recover more gracefully there
01:37pinchartl: thanks for the tip
01:38pinchartl: it's getting one step further, complaining about no event device
01:38pinchartl: that I can try to solve :-)
02:57imirkin: not that it's _really_ news to anyone here, but potentially of interest: https://arstechnica.com/gadgets/2020/11/intel-enters-the-laptop-discrete-gpu-market-with-xe-max/
03:16pinchartl: ajax: it works, thanks a lot
07:27linkmauve: “22:30:54 jadahl> swick: Lyude what I mean is "hdr" on, but sdr luminance, that's assumed to eat more power than the same luminance but without hdr mode turned on”, I know very little about HDMI, but isn’t this a case where you transmit half-floats instead of 8-bit integers, and thus will at least increase bandwidth?
07:27linkmauve: That’s one area which could result in higher consumption.
07:30jadahl: linkmauve: hdr can be 10bpc (instead of 8bpc) too afaiu. it doesn't need to be fp16
07:31jadahl: 10bpc in a 32 bit word, in contrast to 16bpc in a 64 bit word
07:40emersion: but the simplest solution would be fp16+linear, and blit into a 10bpc+non-linear buffer for scanout?
07:40emersion: (if display driver doesn't support fp16)
07:41emersion: (and appropriate gamma things)
07:42emersion: linkmauve: i'm pretty sure drivers supporting fp16 don't send fp16 on the wire
07:43emersion: the bpc on the wire is controlled by: https://drmdb.emersion.fr/properties/3233857728/max%20bpc
07:44emersion: ah, but i guess you still provide fp16 buffers to the display engine, which increases bw usage inside the GPU
07:48linkmauve: emersion, my non-cursor planes do accept XRGB16161616F, so that’s at least more bandwidth in the display controller during encoding.
07:49emersion: yup, indeed
07:53jadahl: emersion: is there any reason to involve fp16 at all if the display doesn't support it?
07:53emersion: it's simpler
07:54jadahl: for who?
07:54emersion: the compositor
07:55jadahl: I would suspect compositors would need to be able to blend both with 10 bpc and 16 bpc though. for fp16, last time I had to use a mesa branch from somewhere for basic support
07:55emersion: (that was my understanding, i haven't worked as much on CM as pq though, so may be wrong)
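The buffer sizes under discussion are easy to put in numbers. A back-of-envelope sketch (the 4k@60 mode is an arbitrary example, not taken from the discussion):

```python
# Rough per-frame memory cost of the two formats discussed above.
# XRGB2101010: 10 bpc packed in a 32-bit word -> 4 bytes/pixel.
# FP16 (e.g. XRGB16161616F): 16 bpc in a 64-bit word -> 8 bytes/pixel.

def frame_bytes(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel

w, h, hz = 3840, 2160, 60  # example 4k@60 mode
b10 = frame_bytes(w, h, 4)
fp16 = frame_bytes(w, h, 8)

print(f"10bpc/32-bit: {b10 / 2**20:.1f} MiB/frame, "
      f"{b10 * hz / 2**30:.2f} GiB/s at {hz} Hz")
print(f"fp16/64-bit:  {fp16 / 2**20:.1f} MiB/frame, "
      f"{fp16 * hz / 2**30:.2f} GiB/s at {hz} Hz")
```

So fp16 doubles the internal bandwidth relative to a packed 10 bpc buffer, which is the tradeoff jadahl is asking about.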
08:02austriancoder: jekstrand: what do you think about https://gitlab.freedesktop.org/austriancoder/mesa/-/commit/b5edcb747b6bf49685bbe4137284cb3c0802cb88 ?
09:04glennk: swick, it's also often a tradeoff between color accuracy and range, srgb content displayed on most hdr monitors looks better when the monitor is running in srgb mode vs hdr mode
09:13pq: glennk, what do you mean by "handling" differing bandwidth requirements of sdr vs. hdr?
09:18glennk: pq, say you have a 4k@120hz display running srgb at 8 bits, to run it at say 10 bits that may exceed the available bandwidth for the particular displayport or hdmi version supported by the monitor/cable/gpu
09:18HdkR: Can confirm, I run srgb content in hdr mode quite often on accident and oops, looks hecka oversaturated
09:20glennk: HdkR, that's not a precision issue but rather a range mapping issue, precision issues appear as banding artifacts
09:20HdkR: Yea. there are lots of gotchas there. Easier to just change the display mode :)
09:25pq: jadahl, emersion is correct. FP16 per se is not a requirement, but if your compositor uses an intermediate buffer in linear-light values, that buffer needs to have more bits per pixel than "normal" to reach even the same quality as "normal". I think 10 bpc integer is minimum for SDR, and if you want HDR, 10 bpc is probably not enough.
09:26pq: jadahl, IOW, if you ever need to store linear-light values, you need more bits than you are used to. It doesn't matter if that's the final FB or an intermediate buffer.
09:28glennk: if you don't need destination alpha rgb9e5 is pretty good
09:29pq: jadahl, FYI, swick has written code for weston to do linear-light blending without using a buffer in linear-light values. The FB is non-linear, and when any fragment is blended in, the existing value in the FB is read, run through the EOTF, blended, run through the inverse EOTF, and written back.
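The scheme pq describes can be sketched for a single channel, assuming sRGB as the transfer function (weston's actual implementation is shader code; this is just the idea):

```python
def srgb_eotf(v):
    # sRGB EOTF: non-linear encoded value -> linear light
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_inv_eotf(l):
    # inverse EOTF: linear light -> non-linear encoded value
    return l * 12.92 if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

def blend_over(fb_encoded, src_linear, src_alpha):
    # Read the non-linear FB value, linearize it, blend in linear
    # light, then re-encode before writing back -- so the FB itself
    # never has to store linear-light values.
    dst_linear = srgb_eotf(fb_encoded)
    out_linear = src_alpha * src_linear + (1 - src_alpha) * dst_linear
    return srgb_inv_eotf(out_linear)
```

Blending with alpha 0 or 1 round-trips through the transfer function, and intermediate alphas mix in linear light rather than in the encoded values.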
09:31pq: glennk, what do you mean with srgb content "looks better" in sdr mode than hdr mode if not the over-saturation/brightness? If it's not over-saturated, then what is doing the range mapping?
09:32pq: glennk, I see that swick already answered you about the sdr vs. hdr bandwidth handling question.
09:33pq: I wonder what display controller hardware supports rgb9e5 scanout...
09:35pq: glennk, or did you mean rgb9e5 for a linear-light buffer?
09:36emersion: so rgb9e5 is 9bpc and a shared exponent of 5 bits?
09:36pq: I guess so
09:37glennk: pq, yeah linear light buffer for composition, i know at least navi supports blending in that format, not sure about direct scanout support
09:37pq: though I wonder... if you have a pixel dominated by blue, that dictates the exponent, but does it then leave too little precision for green?
09:39glennk: well if you have a large value of green that perceptually masks the blue value
09:39pq: I mean the opposite, large value of blue, and a small but clearly observable amount of red/green
09:40pq: as blue contributes very little to overall brightness
09:41glennk: masking works in that direction too, you can try this with a rgb led
09:42pq: sure, but it works less, right? so I'm wondering if it works enough
09:43pq: well, that's something to be tried out one day :-)
09:47glennk: there's also the 11/11/10 float format (R11G11B10F) with independent exponents, but precision is roughly equivalent to 8 bit srgb
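pq's precision question comes down to the shared exponent being set by the largest channel, so the quantization step for every channel scales with the maximum. A toy sketch of rgb9e5 encode/decode following the GL_EXT_texture_shared_exponent rules (not production code):

```python
import math

N, B = 9, 15                            # mantissa bits, exponent bias
MAX = (2**N - 1) / 2**N * 2**(31 - B)   # largest representable value

def encode_rgb9e5(r, g, b):
    # The shared exponent is chosen to fit the largest channel, so the
    # step size 2^(exp - B - N) applies to all three channels.
    maxc = max(min(max(c, 0.0), MAX) for c in (r, g, b))
    exp = max(-B - 1,
              math.floor(math.log2(maxc)) if maxc > 0 else -B - 1) + 1 + B
    scale = 2.0 ** (exp - B - N)
    if int(maxc / scale + 0.5) == 2**N:  # rounding overflowed the mantissa
        exp += 1
        scale *= 2.0
    return exp, tuple(int(min(max(c, 0.0), MAX) / scale + 0.5)
                      for c in (r, g, b))

def decode_rgb9e5(exp, mantissas):
    scale = 2.0 ** (exp - B - N)
    return tuple(m * scale for m in mantissas)
```

With a channel near 1.0 dominating, the other channels get quantized in steps of roughly 1/256 of the maximum, which is the "too little precision for green" concern.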
09:49glennk: pq, as for the bandwidth question, i think what i really wanted to ask is who decides which mode to use, is it a manual user setting or is the compositor supposed to guess somehow when to enable hdr?
09:51pq: glennk, IMO, it is an end-user setting in their desktop.
09:51emersion: that'd be my guess too, but it also could be hdr as default and fallback to sdr if something goes wrong
09:51emersion: (e.g. failed to light up output)
09:52pq: defaults are something to be argued about :-)
09:53glennk: i think the other OS defaulting to sdr mode is probably a hint about which is least problematic for now
09:53pq: a compositor could also do on-demand automatic mode switching if it wants to
09:54pq: but as swick said, changing modes changes calibrations, which is a no-go if you profiled your monitor - or then you need to have a profile per mode
09:56glennk: well yeah for a calibrated setup thats all user specified, don't automagic anything
09:56pq: therefore I would recommend compositors to stick to one mode, and let the user decide if they want a different one
09:57pq: there is also the question of whether you run a monitor in a "standard HDR" mode, like BT.2020 stuff, or in... what would you call it... "direct HDR" mode where the monitor makes no magic tone mapping to the image.
09:58glennk: the acronym alphabet soup will only gain more variants over time :-|
09:59pq: DRM UAPI has enums for these I think
10:02glennk: color_encoding/range property on a plane?
10:02pq: not those...
10:03pq: DRM_MODE_COLORIMETRY_* are kernel internal values for something, but that's not it...
10:04glennk: colorspace on the connector?
10:05emersion: this? https://drmdb.emersion.fr/properties/3233857728/Colorspace
10:05pq: no... maybe it was the HDR metadata struct
10:05pq: emersion, that's the COLORIMETRY thing
10:06pq: i.e. tell the monitor to do magic tone mapping
10:06pq: or color space mapping
10:07pq: struct hdr_metadata_infoframe::eotf
10:08pq: that's the one, I'm pretty sure, but of course the values for that field are not documented with the field
10:08pq: they come from CTA 861.G
10:10glennk: ah, that's a property set by user space and sent as a hdmi/dp infoframe blob?
10:11pq: IIRC two of the possible values are "traditional SDR" meaning the normal stuff, and "traditional HDR" where signal covers the full HDR range of the monitor, not only the SDR part of it, but without magic tone mapping.
10:11glennk: so userspace reads out what the monitor is capable of from... displayid blob?
10:11pq: yes, or EDID
10:13pq: if the monitor lies in the blob, whatchagonnado, you certainly won't know that what you are feeding it doesn't work like you expect
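For reference, the blob behind the field pq tracked down is the DRM connector's HDR output metadata. A hedged sketch of packing the infoframe payload: the field layout follows my reading of struct hdr_metadata_infoframe in the DRM UAPI, and the eotf values mirror the kernel's enum hdmi_eotf (from CTA-861-G) — verify against drm_mode.h before relying on this:

```python
import struct

# CTA-861-G EOTF values, as mirrored by the kernel's enum hdmi_eotf
EOTF_TRADITIONAL_GAMMA_SDR = 0
EOTF_TRADITIONAL_GAMMA_HDR = 1
EOTF_SMPTE_ST2084 = 2          # PQ
EOTF_BT_2100_HLG = 3

def pack_hdr_metadata_infoframe(eotf, primaries, white_point,
                                max_dml, min_dml, max_cll, max_fall):
    """Pack the hdr_metadata_infoframe payload (little-endian, type 1).

    primaries: three (x, y) chromaticity pairs, white_point: one (x, y)
    pair, luminance fields in CTA-861-G units.  Layout assumed from the
    uapi struct: u8 eotf, u8 metadata_type, then 12 u16 fields.
    """
    flat = [c for xy in primaries for c in xy] + list(white_point)
    return struct.pack("<BB12H", eotf, 1,  # static metadata type 1
                       *flat, max_dml, min_dml, max_cll, max_fall)
```

Userspace would wrap a blob like this in struct hdr_output_metadata and attach it to the connector's HDR_OUTPUT_METADATA property.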
14:04zmike: so I've got an update on the intel device lost thing from last week: I rebooted and now it's happening again consistently on a shader test that does a long calc loop
14:06swick: glennk: if you get banding on SDR content in HDR mode then something in the chain is not allocating enough bits for a certain range but I also do suspect that HDR mode is generally less accurate regarding colors (at least in the extremes)
14:07swick: with current displays HDR certainly is a tradeoff
14:08pq: swick, is it because of not enough bits on the cable (e.g. 10 bpc), or is it inherent to the panel tech as well?
14:09swick: the 10bpc on the cable might not be enough for the EOTF in use but the panel is also always free to make lots of dumb decisions
14:10pq: swick, the panel, or the panel controller?
14:12swick: good question. if we compare HDR vs SDR mode then I would guess that the panel itself can't be the culprit for a worse image.
16:09kusma: anholt: After MR 6054, I'm seeing some failures with the D3D12 driver (still out-of-tree)... The problem is that D3D12 requires all declared output system-values to have all elements written. So I'm trying to figure out how to solve this without breaking your optimization... Do you have any good ideas?
16:11kusma: I could of course just make an option that drops the undef-mask dance, but that feels a bit meh.
16:13kusma: Alternatively, I could insert a pass that writes all of these system-values to undef-values and run that after nir_opt_undef, but I'm a bit worried about inefficient code in that case...
16:20anholt: output system values? is that what you meant to say?
16:20anholt: it sounds like your backend should be extending NIR's writes to the full length of the value with whatever padding you need to add.
16:28kusma: anholt: it's a DX-ism. In this particular case, it's about clip-distances.
16:28danvet: daniels, sanity check: for egl_protected_surface implementation, reasonable to report to userspace through arb_robustness reset notification when the hw tossed the keys and its all gone?
16:29kusma: anholt: but yeah, output system values. I believe that's what I actually said also ;)
16:29danvet: egl extension doesn't spec what's illegal and what happens, so I think we can just do whatever we feel like anyway
16:29anholt: kusma: I had no idea what "output system values" meant, "system values" means input shader payload bits to me.
16:30kusma: anholt: Right. Well, I believe opengl (and nir) has really taken this term from DX10, so... yeah. It might be a confusing term to some.
16:31kusma: Anyway, thanks for the input, I'll see if that's an easy fix.
16:32anholt: really curious how a test that has valid behavior would have an undef write to a clip distance.
16:32daniels: yeah, just abort or SIGILL is totally valid - but sure, reset works too
16:33kusma: anholt: it's a few piglits that try to only write to a single component from the GLSL source
16:33kusma: "spec@ext_transform_feedback@builtin-varyings gl_clipdistance-no-subscript" et al
16:34kusma: So transform-feedback
16:34kusma: I guess that makes sense.
16:35anholt: so I'm guessing that you've got that array translated to a vec4?
16:35kusma: anholt: kinda. These are two separate vec4 values in D3D12
16:35anholt: ok, sure
16:36anholt: and does d3d12 have separate controls for which clip distances are enabled?
16:36kusma: So we don't have a problem with the second half, it simply doesn't get emitted, and the runtime is happy
16:36kusma: anholt: no
16:36kusma: Actually, perhaps. Let me double check.
16:36anholt: sounds like you need nir_lower_clip_disable()
16:37kusma: I might be confusing this with zink, where we need that
16:37anholt: and the fix to make that write 0s instead of undef
16:38kusma: anholt: Hmm, yeah. That actually sounds much better.
16:38kusma: That being said, I'm afraid that this applies to more than just clipping... But perhaps that's just in theory.
16:39kusma: I mean, that validation rule is generic, it's not specific to clip-distances. I'm not sure if this would apply to anything else, like secondary colors etc...
16:40anholt: you don't get any guarantee from GL that all the components get written. So, I guess you might be getting lucky by lower_io_to_temporaries() causing undefs to get written out for the unused channels.
16:40kusma: Right, but it seems like until that change we in practice got that guarantee, at least enough for our needs.
16:42anholt: yeah, feels unintentional rather than something the API was guaranteeing. I'm fine with a flag that disables the behavior, though.
16:42kusma: But in either case, I think I'll just give your suggestion a try. Perhaps it's enough for now, and I can look more into this later.
16:42anholt: (but, not knowing your backend, I suspect there's an easy fix on that side)
16:42kusma: anholt: I really like the behavior, that's kinda why I don't want to disable it ;)
16:42kusma: it feels like the right thing to do.
16:43anholt: @'ed you on the MR to fix the clip_disable pass, you're going to need that if you don't have API-level clip distance flags
16:44kusma: anholt: thanks.
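anholt's suggestion — extend partial writes to the full width with padding instead of leaving channels undefined — can be illustrated with a toy model of a masked vec4 store (this is not NIR code, just the idea):

```python
def pad_store(components, write_mask, num_components=4, fill=0.0):
    # components: the values for the channels whose bit is set in
    # write_mask (low bit = channel 0).  Returns a full-width store
    # where every unwritten channel gets `fill` instead of undef,
    # satisfying a "all declared elements must be written" rule.
    out, it = [], iter(components)
    for ch in range(num_components):
        out.append(next(it) if write_mask & (1 << ch) else fill)
    return out
```

A shader that writes only gl_ClipDistance[0] would then still emit a full vec4 with zeros in the remaining slots, which is what nir_lower_clip_disable-style lowering with a zero fill achieves.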
16:46tavvva: Hello guys.
16:46tavvva: I'd love to ask for help with DRI troubleshooting
16:48tavvva: After getting a new company notebook with NVIDIA card I'm unable to start evolution via TightVNC
16:49tavvva: I'm getting the following error: i965_dri.so does not support the 0xffffffff PCI ID. and then it segfaults
16:50tavvva: is there any way how to disable DRI for VNC displays in .drirc ?
16:51tavvva: the application works correctly on a real display
16:52ajax: tavvva: that's quite strange then, the Xvnc you're displaying to (i assume) should have DRI disabled automatically
16:52ajax: tavvva: what happens if you try to run glxinfo instead of evolution?
16:52tavvva: ajax: Hello
16:53tavvva: glxinfo is kinda borked too .... I get X Error of failed request: BadValue
16:53tavvva: and the output only has 6 lines
16:54ajax: can you pastebin the whole output from glxinfo somewhere? also the result of xdpyinfo against the Xvnc server
16:54tavvva: the new notebook apparently has 2 graphics adapters
16:58jenatali: kusma, anholt: Just FYI, the D3D spec differentiates between "system generated values" (inputs) and "system interpreted values" (outputs), but the docs just call them both "system values"
16:59tavvva: ajax: seems nopaste is broken on ubuntu now .... gimme few minutes
16:59kusma: jenatali: thanks for that tidbit :)
17:04tavvva: ajax: http://tavvva.net/files/dri/xdpyinfo.txt http://tavvva.net/files/dri/glxinfo.txt
17:05tavvva: ajax: broken NVIDIA drivers? :]
17:06ajax: i mean. maybe. are you using glvnd?
17:06tavvva: ajax: unfortunately nouveau is much worse
17:07tavvva: ajax: sorry, out of my knowledge .... how can I check?
17:07ajax: oh, ubuntu. you should be, then. hm.
17:09tavvva: at least a workaround with magical env var
17:10tavvva: ...would help a lot
17:10ajax: maybe 'LIBGL_ALWAYS_SOFTWARE=1 evolution' ?
17:10tavvva: doesn't help, I've tried
17:11ajax: to be clear, what do you mean by 'via TightVNC'? that xdpyinfo looks like it's of a server that isn't using nvidia's driver, but that _should_ mean it's using llvmpipe (the software mesa driver), which should work
17:11tavvva: the above var was the first I tried ... it helped many times in the past with strange HW
17:12tavvva: ajax: yeah, it's TightVNC with virtual display
17:12tavvva: ajax: so ... not bound to physical displays at all
17:13tavvva: ajax: starting with vncserver :61
17:13danvet: daniels, well because of the hw, the key loss happens a bit too often
17:13danvet: so context reset seemed like the best option
17:13danvet: (anytime an output changes hdcp state, any output even)
17:14tavvva: ajax: my physical display is :0
17:15ajax: (one moment, need to take the dog out)
17:16tavvva: ajax: np, I'll wait
17:20daniels: danvet: hmmm
17:22danvet: daniels, the protected_surface extension doesn't really allow you to write super portable code anyway ...
17:22danvet: more wondering whether I should ping authors to get this added as an official Q&A
17:36ajax: tavvva: can you pastebin 'LIBGL_DEBUG=verbose glxinfo' against the virtual display?
17:43tavvva: ajax: it's the same as without the env
17:44ajax: right, i think i get it now
17:45ajax: vendor release number: 11804000
17:46ajax: so the code to enable libGL on the client side to pick either mesa or nvidia only went in in 1.19, and your Xvnc is based on 1.18.4
17:46ajax: so Xvnc is initializing GLX correctly, but (because you have nvidia's drivers installed) your libGL is nvidia's, which doesn't know how to load mesa's software renderer
17:46ajax: so everything falls over
17:47ajax: i imagine ubuntu had some workaround for this to pick up mesa's libGL through LD_LIBRARY_PATH or something, but i don't know what that would be, you'd need to ask an ubuntu support channel
17:47ajax: your other option is to upgrade to (an ubuntu with) xserver 1.19 or later
17:51tavvva: ajax: cool, thanks for the info
17:52tjaalton: 1.19.6 is available on bionic/16.04 via the hwe stack
17:52tjaalton: err, xenial
17:53tjaalton: going EOL in six months anyway
18:29tavvva: I tried the hwe packages and now it doesn't show the error with PCI IDs, but it still crashes with segv
18:33tavvva: BUT .... I tried to set the LD_LIBRARY_PATH to mesa libGL and IT WORKS!!!
18:33tavvva: so .... thank you!!!
18:35ajax: np, glad we could figure it out
18:39tavvva: putting export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/mesa to ~/.vnc/xstartup script seems to be a good workaround
20:07ajax: ffs. how is piglit still this bad at utf8
20:08ajax: piglit, python, whatever
20:16zmike: mareko: heya, I think your multi_draw work has actually given me a significant boost to my fps stabilization in some cases on zink
20:16zmike: really nice work 👍
21:47cwabbott: kusma: I was looking at clip distances for turnip, and I think writing undef like that just isn't right
21:47kusma: cwabbott: how so?
21:47cwabbott: that is, if I'm thinking of the same pass
21:48zmike: see !6563
21:48kusma: Oh, this isn't a pass... It's a dxil requirement. And it's really about xfb, not clipping as such
21:49cwabbott: ah, nvm then
22:15anholt: cwabbott: you're probably thinking of the fix in !6563 that just landed