00:40 ajax: how terrible would it be if <GL/internal/dri_interface.h> began to require <vulkan/vulkan.h> ?
00:41 ajax: it's kind of already gross that it includes <GL/gl.h> imho
00:47 airlied: yeah, unlikely that vulkan.h would make it much grosser
00:47 airlied: ajax: who else uses that file, the X server?
00:48 ajax: yeah, glx dri driver loaders
00:49 ajax: if it really matters i can just make a new header for the vtables in question i guess
01:28 pinchartl: very newbie question about weston: I'm trying to run it with the swrast mesa driver, and I'm getting a "failed to initialize display" error, which I believe is caused by "libEGL warning: did not find extension DRI_DRI2 version 2" when attempting to load the swrast driver
01:28 pinchartl: first of all, is weston + swrast supported ?
01:29 ajax: depends what you mean by swrast i guess
01:29 ajax: the classic thing that never learned shaders, probably not
01:30 ajax: llvmpipe or softpipe, maybe
01:30 pinchartl: I'm not sure what I need :-) my platform doesn't have a usable GPU, and I'd like to run weston with full software rendering
01:30 ajax: i'd imagine those two would work for weston on x11, maybe not on the raw dri device
01:31 pinchartl: but I'm really not sure how to proceed
01:31 ajax: i mean. you could try to use the software gl renderers but it'll be quite slow. is the pixman backend not good enough?
01:32 pinchartl: it may be. I have pretty much 0 knowledge of the graphics stack above the kernel...
01:32 pinchartl: so I have no idea how swrast, llvmpipe and pixman interact with weston and at what level of the stack they sit
01:33 pinchartl: (full log available at http://paste.debian.net/1170002/)
01:36 ajax: try 'weston-launch -u root -- --use-pixman'
01:37 ajax: weston should probably try to recover more gracefully there
01:37 pinchartl: thanks for the tip
01:38 pinchartl: it's getting one step further, complaining about no event device
01:38 pinchartl: that I can try to solve :-)
02:57 imirkin: not that it's _really_ news to anyone here, but potentially of interest: https://arstechnica.com/gadgets/2020/11/intel-enters-the-laptop-discrete-gpu-market-with-xe-max/
03:16 pinchartl: ajax: it works, thanks a lot
03:20 ajax: np
07:27 linkmauve: “22:30:54 jadahl> swick: Lyude what I mean is "hdr" on, but sdr luminance, that's assumed to eat more power than the same luminance but without hdr mode turned on”, I know very little about HDMI, but isn’t this a case where you transmit half-floats instead of 8-bit integers, and thus will at least increase bandwidth?
07:27 linkmauve: That’s one area which could result in higher consumption.
07:30 jadahl: linkmauve: hdr can be 10bpc (instead of 8bpc) too afaiu. it doesn't need to be fp16
07:31 jadahl: 10bpc in a 32 bit word, in contrast to 16bpc in a 64 bit word
07:31 linkmauve: Yeah.
07:40 emersion: but the simplest solution would be fp16+linear, and blit into a 10bpc+non-linear buffer for scanout?
07:40 emersion: (if display driver doesn't support fp16)
07:41 emersion: (and appropriate gamma things)
07:42 emersion: linkmauve: i'm pretty sure drivers supporting fp16 don't send fp16 on the wire
07:43 emersion: the bpc on the wire is controlled by: https://drmdb.emersion.fr/properties/3233857728/max%20bpc
07:44 emersion: ah, but i guess you still provide fp16 buffers to the display engine, which increases bw usage inside the GPU
07:48 linkmauve: emersion, my non-cursor planes do accept XRGB16161616F, so that’s at least more bandwidth in the display controller during encoding.
07:49 emersion: yup, indeed
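A minimal sketch of the knob emersion links above, assuming the legacy (non-atomic) libdrm property path and eliding error handling: the wire depth is capped per connector through the "max bpc" property, independently of the buffer format, while an fp16 buffer at 8 bytes per pixel costs roughly twice the internal bandwidth of a 10bpc 32-bit format (about 4 GB/s vs. 2 GB/s at 4K@60). The helper name is hypothetical.

    #include <stdint.h>
    #include <string.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    /* Cap the per-connector wire depth via the "max bpc" property. */
    static int set_max_bpc(int fd, uint32_t connector_id, uint64_t bpc)
    {
        drmModeObjectProperties *props =
            drmModeObjectGetProperties(fd, connector_id,
                                       DRM_MODE_OBJECT_CONNECTOR);
        if (!props)
            return -1;

        int ret = -1;
        for (uint32_t i = 0; i < props->count_props; i++) {
            drmModePropertyRes *prop = drmModeGetProperty(fd, props->props[i]);
            if (prop && !strcmp(prop->name, "max bpc"))
                ret = drmModeObjectSetProperty(fd, connector_id,
                                               DRM_MODE_OBJECT_CONNECTOR,
                                               prop->prop_id, bpc);
            drmModeFreeProperty(prop);
        }
        drmModeFreeObjectProperties(props);
        return ret;
    }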
07:53 jadahl: emersion: is there any reason to involve fp16 at all if the display doesn't support it?
07:53 emersion: it's simpler
07:54 jadahl: for who?
07:54 emersion: the compositor
07:55 jadahl: I would suspect compositors would need to be able to blend both with 10bpc and 16bpc though. for fp16, last time I had to use a mesa branch from somewhere for basic support
07:55 emersion: (that was my understanding, i haven't worked as much on CM as pq though, so may be wrong)
08:02 austriancoder: jekstrand: what do you think about https://gitlab.freedesktop.org/austriancoder/mesa/-/commit/b5edcb747b6bf49685bbe4137284cb3c0802cb88 ?
09:04 glennk: swick, it's also often a tradeoff between color accuracy and range, srgb content displayed on most hdr monitors looks better when the monitor is running in srgb mode vs hdr mode
09:13 pq: glennk, what do you mean by "handling" differing bandwidth requirements of sdr vs. hdr?
09:18 glennk: pq, say you have a 4k@120hz display running srgb at 8 bits, to run it at say 10 bits that may exceed the available bandwidth for the particular displayport or hdmi version supported by the monitor/cable/gpu
09:18 HdkR: Can confirm, I run srgb content in hdr mode quite often on accident and oops, looks hecka oversaturated
09:20 glennk: HdkR, that's not a precision issue but rather a range mapping issue, precision issues appear as banding artifacts
09:20 HdkR: Yea. there are lots of gotchas there. Easier to just change the display mode :)
09:25 pq: jadahl, emersion is correct. FP16 per se is not a requirement, but if your compositor uses an intermediate buffer in linear-light values, that buffer needs to have more bits per pixel than "normal" to reach even the same quality as "normal". I think 10 bpc integer is minimum for SDR, and if you want HDR, 10 bpc is probably not enough.
09:26 pq: jadahl, IOW, if you ever need to store linear-light values, you need more bits than you are used to. It doesn't matter if that's the final FB or an intermediate buffer.
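To see why linear-light storage needs the extra bits, here is a standalone check (using the sRGB EOTF from the spec) of the step between the two darkest 8-bit sRGB codes once decoded to linear light:

    #include <math.h>
    #include <stdio.h>

    /* sRGB EOTF: non-linear code value in [0,1] -> linear light */
    static double srgb_eotf(double c)
    {
        return c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
    }

    int main(void)
    {
        double step = srgb_eotf(2.0 / 255.0) - srgb_eotf(1.0 / 255.0);
        /* step is ~3e-4, i.e. ~3300 levels (~12 bits) are needed near
         * black in a linear integer buffer to avoid banding that 8-bit
         * non-linear sRGB encodes without trouble. */
        printf("linear step near black: %g -> %.1f bits\n",
               step, log2(1.0 / step));
        return 0;
    }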
09:28 glennk: if you don't need destination alpha rgb9e5 is pretty good
09:29 pq: jadahl, FYI, swick has written code for weston to do linear-light blending without using a buffer in linear-light values. The FB is non-linear, and when any fragment is blended in, the existing value in the FB is read, run through the EOTF, blended, run through EOTF^-1, and written back.
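The shape of that technique, as a sketch with hypothetical names (the real code is in swick's weston work, and srgb_eotf here stands in for whatever EOTF the output uses): the FB stays non-linear, and the round trip through the EOTF happens per blend instead of requiring a wide intermediate buffer.

    #include <math.h>

    static double srgb_eotf(double c)      /* non-linear -> linear light */
    {
        return c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
    }

    static double srgb_eotf_inv(double l)  /* linear light -> non-linear */
    {
        return l <= 0.0031308 ? l * 12.92 : 1.055 * pow(l, 1.0 / 2.4) - 0.055;
    }

    /* Blend one fragment into a non-linear FB, doing the math in linear. */
    static double blend_in_linear(double fb, double src_linear, double alpha)
    {
        double dst_linear = srgb_eotf(fb);                 /* decode FB */
        double out = src_linear * alpha + dst_linear * (1.0 - alpha);
        return srgb_eotf_inv(out);                         /* re-encode */
    }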
09:31 pq: glennk, what do you mean with srgb content "looks better" in sdr mode than hdr mode if not the over-saturation/brightness? If it's not over-saturated, then what is doing the range mapping?
09:32 pq: glennk, I see that swick already answered you about the sdr vs. hdr bandwidth handling question.
09:33 pq: I wonder what display controller hardware supports rgb9e5 scanout...
09:35 pq: glennk, or did you mean rgb9e5 for a linear-light buffer?
09:36 emersion: so rgb9e5 is 9bpc and a shared exponent of 5 bits?
09:36 pq: I guess so
09:37 glennk: pq, yeah linear light buffer for composition, i know at least navi supports blending in that format, not sure about direct scanout support
09:37 pq: alright
09:37 pq: though I wonder... if you have a pixel dominated by blue, that dictates the exponent, but does it then leave too little precision for green?
09:39 glennk: well if you have a large value of green that perceptually masks the blue value
09:39 pq: I mean the opposite, large value of blue, and a small but clearly observable amount of red/green
09:40 pq: as blue contributes very little to overall brightness
09:41 glennk: masking works in that direction too, you can try this with a rgb led
09:42 pq: sure, but it works less, right? so I'm wondering if it works enough
09:43 pq: well, that's something to be tried out one day :-)
09:47 glennk: there's also the 11:11:10 float format with independent exponents, but precision is roughly equivalent to 8 bit srgb
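A toy quantizer, simplified from the shared-exponent scheme (edge cases like zero and clamping omitted, helper name made up), to poke at pq's question above: the dominant channel dictates the exponent, so every channel ends up with a step of roughly max/512, which is brutal for a dim green riding on a bright blue.

    #include <math.h>
    #include <stdio.h>

    /* Quantize one channel with the shared exponent dictated by the
     * largest channel and a 9-bit mantissa. */
    static double rgb9e5_quantize(double c, double maxc)
    {
        double step = pow(2.0, floor(log2(maxc)) + 1.0 - 9.0);
        return floor(c / step + 0.5) * step;
    }

    int main(void)
    {
        double g = 0.004, b = 1.0;   /* dim green on a bright blue pixel */
        printf("g: %g -> %g\n", g, rgb9e5_quantize(g, b));
        /* the step is ~0.004 here, leaving g barely one level above
         * zero -- exactly the precision worry raised above */
        return 0;
    }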
09:49 glennk: pq, as for the bandwidth question, i think what i really wanted to ask is who decides which mode to use, is it a manual user setting or is the compositor supposed to guess somehow when to enable hdr?
09:51 emersion: well…
09:51 pq: glennk, IMO, it is an end user setting in his desktop.
09:51 emersion: that'd be my guess too, but it also could be hdr as default and fallback to sdr if something goes wrong
09:51 emersion: (e.g. failed to light up output)
09:52 pq: defaults are something to be argued about :-)
09:53 glennk: i think the other os defaulting to sdr mode is probably a hint about which choice is least problematic for now
09:53 pq: a compositor could also do on-demand automatic mode switching if it wants to
09:54 pq: but as swick said, changing modes changes calibrations, which is a no-go if you profiled your monitor - or else you need a profile per mode
09:56 glennk: well yeah, for a calibrated setup that's all user-specified, don't automagic anything
09:56 pq: therefore I would recommend compositors to stick to one mode, and let the user decide if he wants a different one
09:57 pq: there is also the question of whether you run a monitor in a "standard HDR" mode, like BT.2020 stuff, or in... what would you call it... "direct HDR" mode where the monitor makes no magic tone mapping to the image.
09:58 glennk: the acronym alphabet soup will only gain more variants over time :-|
09:59 pq: DRM UAPI has enums for these I think
10:02 glennk: color_encoding/range property on a plane?
10:02 pq: not those...
10:03 pq: DRM_MODE_COLORIMETRY_* are kernel internal values for something, but that's not it...
10:04 glennk: colorspace on the connector?
10:05 emersion: this? https://drmdb.emersion.fr/properties/3233857728/Colorspace
10:05 pq: no... maybe it was the HDR metadata struct
10:05 pq: emersion, that's the COLORIMETRY thing
10:06 pq: i.e. tell the monitor to do magic tone mapping
10:06 pq: or color space mapping
10:07 pq: struct hdr_metadata_infoframe::eotf
10:08 pq: that's the one, I'm pretty sure, but of course the values for that field are not documented with the field
10:08 pq: they come from CTA-861-G
10:10 glennk: ah, that's a property set by user space and sent as a hdmi/dp infoframe blob?
10:10 pq: yeah
10:11 pq: IIRC two of the possible values are "traditional SDR" meaning the normal stuff, and "traditional HDR" where signal covers the full HDR range of the monitor, not only the SDR part of it, but without magic tone mapping.
10:11 glennk: so userspace reads out what the monitor is capable of from... displayid blob?
10:11 pq: yes, or EDID
10:13 pq: if the monitor lies in the blob, whatchagonnado, you certainly won't know that what you are feeding it doesn't work like you expect
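For reference, a sketch of pushing that EOTF to the monitor from userspace, assuming the uapi is as described: struct hdr_output_metadata (wrapping struct hdr_metadata_infoframe) goes into a blob attached to the connector's "HDR_OUTPUT_METADATA" property, and the CTA-861-G eotf codes are 0 = traditional gamma SDR, 1 = traditional gamma HDR, 2 = SMPTE ST 2084 (PQ), 3 = HLG. Property-id lookup and error handling are elided as in the max bpc sketch above; the helper name is hypothetical.

    #include <stdint.h>
    #include <string.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>
    #include <drm/drm_mode.h>

    /* Attach a minimal static-metadata-type-1 infoframe with the given
     * CTA-861-G EOTF code; prop_id is the connector's
     * "HDR_OUTPUT_METADATA" property, looked up by name beforehand. */
    static int set_eotf(int fd, uint32_t connector_id, uint32_t prop_id,
                        uint8_t eotf)
    {
        struct hdr_output_metadata meta;
        uint32_t blob_id;

        memset(&meta, 0, sizeof(meta));
        meta.metadata_type = 0;                 /* static metadata type 1 */
        meta.hdmi_metadata_type1.metadata_type = 0;
        meta.hdmi_metadata_type1.eotf = eotf;   /* e.g. 2 for PQ */

        if (drmModeCreatePropertyBlob(fd, &meta, sizeof(meta), &blob_id))
            return -1;
        return drmModeObjectSetProperty(fd, connector_id,
                                        DRM_MODE_OBJECT_CONNECTOR,
                                        prop_id, blob_id);
    }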
14:04 zmike: so I've got an update on the intel device lost thing from last week: I rebooted and now it's happening again consistently on a shader test that does a long calc loop
14:06 swick: glennk: if you get banding on SDR content in HDR mode then something in the chain is not allocating enough bits for a certain range, but I also do suspect that HDR mode is generally less accurate regarding colors (at least in the extremes)
14:07 swick: with current displays HDR certainly is a tradeoff
14:08 pq: swick, is it because of not enough bits on the cable (e.g. 10 bpc), or also inherent to panel tech as well?
14:09 swick: the 10bpc on the cable might not be enough for the EOTF in use but the panel is also always free to make lots of dumb decisions
14:10 pq: swick, the panel, or the panel controller?
14:12 swick: good question. if we compare HDR vs SDR mode then I would guess that the panel itself can't be the culprit for a worse image.
16:09 kusma: anholt: After MR 6054, I'm seeing some failures with the D3D12 driver (still out-of-tree)... The problem is that D3D12 requires all declared output system-values to have all elements written. So I'm trying to figure out how to solve this without breaking your optimization... Do you have any good idea?
16:11 kusma: I could of course just make an option that drops the undef-mask dance, but that feels a bit meh.
16:13 kusma: Alternatively, I could insert a pass that writes undef-values to all of these system-values and run that after nir_opt_undef, but I'm a bit worried about inefficient code in that case...
16:20 anholt: output system values? is that what you meant to say?
16:20 anholt: it sounds like your backend should be extending NIR's writes to the full length of the value with whatever padding you need to add.
16:28 kusma: anholt: it's a DX-ism. In this particular case, it's about clip-distances.
16:28 danvet: daniels, sanity check: for egl_protected_surface implementation, reasonable to report to userspace through arb_robustness reset notification when the hw tossed the keys and it's all gone?
16:29 kusma: anholt: but yeah, output system values. I believe that's what I actually said also ;)
16:29 danvet: egl extension doesn't spec what's illegal and what happens, so I think we can just do whatever we feel like anyway
16:29 anholt: kusma: I had no idea what "output system values" meant, "system values" means input shader payload bits to me.
16:30 kusma: anholt: Right. Well, I believe opengl (and nir) has really taken this term from DX10, so... yeah. It might be a confusing term to some.
16:31 kusma: Anyway, thanks for the input, I'll see if that's an easy fix.
16:32 anholt: really curious how a test that has valid behavior would have an undef write to a clip distance.
16:32 daniels: yeah, just abort or SIGILL is totally valid - but sure, reset works too
16:33 kusma: anholt: it's a few piglits that try to only write to a single component from the GLSL source
16:33 kusma: "spec@ext_transform_feedback@builtin-varyings gl_clipdistance[1]-no-subscript" et al
16:34 kusma: So transform-feedback
16:34 kusma: I guess that makes sense.
16:35 anholt: so I'm guessing that you've got that array translated to a vec4[2]?
16:35 kusma: anholt: kinda. These are two separate vec4 values in D3D12
16:35 anholt: ok, sure
16:36 anholt: and does d3d12 have separate controls for which clip distances are enabled?
16:36 kusma: So we don't have a problem with the second half, it simply doesn't get emitted, and the runtime is happy
16:36 kusma: anholt: no
16:36 kusma: Actually, perhaps. Let me double check.
16:36 anholt: sounds like you need nir_lower_clip_disable()
16:37 kusma: I might be confusing this with zink, where we need that
16:37 anholt: and the fix to make that write 0s instead of undef
16:38 kusma: anholt: Hmm, yeah. That actually sounds much better.
16:38 kusma: That being said, I'm afraid that this applies to more than just clipping... But perhaps that's just in theory.
16:39 kusma: I mean, that validation rule is generic, it's not specific to clip-distances. I'm not sure if this would apply to anything else, like secondary colors etc...
16:40 anholt: you don't get any guarantee from GL that all the components get written. So, I guess you might be getting lucky by lower_io_to_temporaries() causing undefs to get written out instead of the unused channels being skipped.
16:40 kusma: Right, but it seems like until that change we in practice got that guarantee, at least enough for our needs.
16:42 anholt: yeah, feels unintentional rather than something the API was guaranteeing. I'm fine with a flag that disables the behavior, though.
16:42 kusma: But in either case, I think I'll just give your suggestion a try. Perhaps it's enough for now, and I can look more into this later.
16:42 anholt: (but, not knowing your backend, I suspect there's an easy fix on that side)
16:42 kusma: anholt: I really like the behavior, that's kinda why I don't want to disable it ;)
16:42 kusma: it feels like the right thing to do.
16:43 anholt: @'ed you on the MR to fix the clip_disable pass, you're going to need that if you don't have API-level clip distance flags
16:44 kusma: anholt: thanks.
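The gist of the fix being discussed, as a standalone sketch with made-up types rather than the real NIR API: when a store covers only some components of a declared output, pad the unwritten channels with a defined 0.0 instead of undef, which is what D3D12's "all elements written" rule wants.

    /* Hypothetical backend-side padding of a partial vec4 output store. */
    struct vec4_store {
        float chan[4];
        unsigned write_mask;   /* bit i set = channel i written by the shader */
    };

    static void pad_store_to_full(struct vec4_store *s)
    {
        for (unsigned i = 0; i < 4; i++)
            if (!(s->write_mask & (1u << i)))
                s->chan[i] = 0.0f;   /* defined zero, not undef */
        s->write_mask = 0xf;         /* now a full write */
    }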
16:46 tavvva: Hello guys.
16:46 tavvva: I'd love to ask for help with DRI troubleshooting
16:48 tavvva: After getting a new company notebook with NVIDIA card I'm unable to start evolution via TightVNC
16:49 tavvva: I'm getting the following error: "i965_dri.so does not support the 0xffffffff PCI ID", and then it segfaults
16:50 tavvva: is there any way how to disable DRI for VNC displays in .drirc ?
16:51 tavvva: the application works correctly on a real display
16:52 ajax: tavvva: that's quite strange then, the Xvnc you're displaying to (i assume) should have DRI disabled automatically
16:52 ajax: tavvva: what happens if you try to run glxinfo instead of evolution?
16:52 tavvva: ajax: Hello
16:53 tavvva: glxinfo is kinda borked too .... I get X Error of failed request : BadValue
16:53 tavvva: and the output only has 6 lines
16:54 ajax: can you pastebin the whole output from glxinfo somewhere? also the result of xdpyinfo against the Xvnc server
16:54 tavvva: the new notebook apparently has 2 graphics adapters
16:58 jenatali: kusma, anholt: Just FYI, the D3D spec differentiates between "system generated values" (inputs) and "system interpreted values" (outputs), but the docs just call them both "system values"
16:59 tavvva: ajax: seems nopaste is broken on ubuntu now .... gimme few minutes
16:59 kusma: jenatali: thanks for that tidbit :)
17:04 tavvva: ajax: http://tavvva.net/files/dri/xdpyinfo.txt http://tavvva.net/files/dri/glxinfo.txt
17:05 ajax: wtf
17:05 tavvva: ajax: broken NVIDIA drivers? :]
17:06 ajax: i mean. maybe. are you using glvnd?
17:06 tavvva: ajax: unfortunately nouveau is much worse
17:07 tavvva: ajax: sorry, out of my knowledge .... how can I check?
17:07 ajax: oh, ubuntu. you should be, then. hm.
17:09 tavvva: at least a workaround with magical env var
17:10 tavvva: ...would help a lot
17:10 ajax: maybe 'LIBGL_ALWAYS_SOFTWARE=1 evolution' ?
17:10 tavvva: doesn't help, I've tried
17:11 ajax: to be clear, what do you mean by 'via TightVNC'? that xdpyinfo looks like it's from a server that isn't using nvidia's driver, but that _should_ mean it's using llvmpipe (the software mesa driver), which should work
17:11 tavvva: the above var was the first I tried ... it helped many times in the past with strange HW
17:12 tavvva: ajax: yeah, it's TightVNC with virtual display
17:12 tavvva: ajax: so ... not bound to physical displays at all
17:13 tavvva: ajax: starting with vncserver :61
17:13 danvet: daniels, well, because in hw the key loss happens a bit too often
17:13 danvet: so context reset seemed like the best option
17:13 danvet: (anytime an output changes hdcp state, any output even)
17:14 tavvva: ajax: my physical display is :0
17:15 ajax: (one moment, need to take the dog out)
17:16 tavvva: ajax: np, I'll wait
17:20 daniels: danvet: hmmm
17:22 danvet: daniels, the protected_surface extension doesn't really allow you to write super portable code anyway ...
17:22 danvet: more wondering whether I should ping authors to get this added as an official Q&A
17:36 ajax: tavvva: can you pastebin 'LIBGL_DEBUG=verbose glxinfo' against the virtual display?
17:43 tavvva: ajax: it's the same as without the env
17:43 ajax: ooookay
17:44 ajax: right, i think i get it now
17:45 ajax: vendor release number: 11804000
17:46 ajax: so the code to enable libGL on the client side to pick either mesa or nvidia only went in in 1.19, and your Xvnc is based on 1.18.4
17:46 ajax: so Xvnc is initializing GLX correctly, but (because you have nvidia's drivers installed) your libGL is nvidia's, which doesn't know how to load mesa's software renderer
17:46 ajax: so everything falls over
17:47 ajax: i imagine ubuntu had some workaround for this to pick up mesa's libGL through LD_LIBRARY_PATH or something, but i don't know what that would be, you'd need to ask an ubuntu support channel
17:47 ajax: your other option is to upgrade to (an ubuntu with) xserver 1.19 or later
17:51 tavvva: ajax: cool, thanks for the info
17:52 tjaalton: 1.19.6 is available on xenial/16.04 via the hwe stack
17:53 tjaalton: going EOL in six months anyway
18:29 tavvva: I tried the hwe packages and now it doesn't show the error with PCI IDs, but it still crashes with segv
18:33 tavvva: BUT .... I tried to set the LD_LIBRARY_PATH to mesa libGL and IT WORKS!!!
18:33 tavvva: so .... thank you!!!
18:35 ajax: np, glad we could figure it out
18:39 tavvva: putting export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/mesa to ~/.vnc/xstartup script seems to be a good workaround
20:07 ajax: ffs. how is piglit still this bad at utf8
20:08 ajax: piglit, python, whatever
20:16 zmike: mareko: heya, I think your multi_draw work has actually given me a significant boost to my fps stabilization in some cases on zink
20:16 zmike: really nice work 👍
21:47 cwabbott: kusma: I was looking at clip distances for turnip, and I think writing undef like that just isn't right
21:47 kusma: cwabbott: how so?
21:47 cwabbott: that is, if I'm thinking of the same pass
21:48 zmike: see !6563
21:48 kusma: Oh, this isn't a pass... It's a dxil requirement. And it's really about xfb, not clipping as such
21:49 cwabbott: ah, nvm then
22:15 anholt: cwabbott: you're probably thinking of the fix in !6563 that just landed