00:17 baryluk: dschuermann: I was hoping somebody would point me to a secret URL with a bunch of shaders extracted from games, so I can produce tables like the one you made in !4830
00:22 airlied: it would unfortunately be a copyright violation for anyone to do that :-)
00:45 baryluk: airlied: I heard there is some exception from Valve for the stuff available on Steam.
00:58 airlied: baryluk: you may have misheard, since valve don't own all the stuff on steam
00:59 airlied: Valve have given some of their developers and some mesa devs access to large amounts of games on steam
01:00 airlied: and those developers could in theory share between themselves non-publicly since they have legal rights to install a copy of the game
01:09 baryluk: airlied: thanks for clarification. I guess I will go over my library and extract all the stuff myself slowly. Will take time :/
06:57 tzimmermann: danvet, hi. what's your plan for the shmem untangle patches? my udl-shmem cleanup depends on patch 7
07:48 dschuermann: baryluk: as I said, you can point us to a branch and we'll report the results to you if you like.
09:12 airlied: j4ni, dolphin : am i expecting a fixes tree? will be sending fixes in 12 hrs or so
16:06 mareko: bnieuwenhuizen: I might have to add another hidden tiny "plane" into the modifier: implicit sync boolean fence - 4 bytes
16:07 mareko: because the kernel doesn't wait for idle within the same queue
16:14 mareko: explicit sync doesn't wait either, it only guarantees the correct IB order, but doesn't prevent read-after-write hazards
16:31 bnieuwenhuizen: mareko: what do you mean boolean fence?
16:32 bnieuwenhuizen: mareko: also, does the kernel not do a full sync between processes?
16:49 MrCooper: mareko: the kernel driver should just be fixed?
16:57 mareko: MrCooper: I just had a long conversation with Christian about it. He thinks that the kernel shouldn't wait for idle and cache flush
17:02 mareko: bnieuwenhuizen: dword at the end of the buffer - it's 1 when busy, 0 when idle; the consuming process should do wait_reg_mem on it before use
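[Aside: a rough sketch of the scheme mareko describes, using made-up helper names rather than the real radeonsi packet-emission API. The producer marks the trailing dword busy, clears it once its writes and cache flushes are done, and the consumer polls that dword (the wait_reg_mem mareko mentions) before touching the buffer.]

    #include <stdint.h>

    struct cmdbuf;  /* opaque command stream; stands in for the driver's CS */

    /* Hypothetical packet-emission helpers, not a real API. */
    void emit_write_data(struct cmdbuf *cs, uint64_t va, uint32_t value);
    void emit_flush_and_write_eop(struct cmdbuf *cs, uint64_t va, uint32_t value);
    void emit_wait_mem_equal(struct cmdbuf *cs, uint64_t va, uint32_t value);

    #define FENCE_BUSY 1u
    #define FENCE_IDLE 0u

    /* Producer: mark the trailing dword busy, render, then have the GPU clear
     * it once all writes and cache flushes have completed. */
    static void producer_emit(struct cmdbuf *cs, uint64_t fence_va)
    {
        emit_write_data(cs, fence_va, FENCE_BUSY);
        /* ... draws/dispatches that write the shared buffer ... */
        emit_flush_and_write_eop(cs, fence_va, FENCE_IDLE);
    }

    /* Consumer: wait until the dword reads idle before the first use. */
    static void consumer_emit(struct cmdbuf *cs, uint64_t fence_va)
    {
        emit_wait_mem_equal(cs, fence_va, FENCE_IDLE);
        /* ... commands that read the shared buffer ... */
    }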
17:09 bnieuwenhuizen: mareko: is the plan to do an event at the signaling side and then do the wait at the receiving side? If so, why not do both on the signaling side or both on the receiving side?
17:21 mareko: bnieuwenhuizen: not both on the signaling side, because unredirected rendering doesn't have a receiving side on the same queue
17:23 bnieuwenhuizen: and not both on the receiving side?
17:23 mareko: bnieuwenhuizen: not both on the receiving side, because if an independent job is scheduled between the producer and consumer, waiting for idle may be unnecessary; the independent job might already have done the same thing
17:23 bnieuwenhuizen: so basically this is an "are there any cache flushes in progress?" bit
17:24 mareko: bnieuwenhuizen: pixel shaders, color writes, and cache flushes
17:25 mareko: or compute shaders
17:25 bnieuwenhuizen: how likely is it though that independent work will have done the same thing? Most use cases will still have 1 producer & 1 consumer
17:26 mareko: bnieuwenhuizen: 2 unredirected consecutive frames are independent work
17:27 mareko: in that case, there is no sync on the gfx queue
17:27 mareko: even though the driver doesn't know that
17:28 mareko: *there doesn't have to be any sync on the gfx queue
17:30 bnieuwenhuizen: I guess so, though I'd argue that in that case you're already behind :) (latency wise the optimal thing would be: produce frame 1, consume frame 1, produce frame 2, consume frame 2)
17:31 mareko: bnieuwenhuizen: how am I behind? produce, present, produce, present (no sync on the gfx queue)
17:33 bnieuwenhuizen: well, if you do produce frame 1, (no independent work), consume frame 1 then you need the sync. So if you can skip the sync, you have something like produce frame 1, produce frame 2, consume frame 1 which is not ideal wrt latency
17:35 mareko: bnieuwenhuizen: if the consumer is a non-vsynced monitor, the last frame is displayed, so the latency decreases with frame time even if the monitor refresh rate is constant
17:35 bnieuwenhuizen: fair
17:36 buhman: I am monkeying around with xcb and egl. I'm able to run several EGL examples, including src/egl/opengles2/es2tri . In my own test code though, https://gist.github.com/buhman/37540dd0081c567c42815511640d6ec8 eglCreateWindowSurface returns EGL_NO_SURFACE and eglGetError returns EGL_BAD_ALLOC. How can I troubleshoot this?
17:38 bnieuwenhuizen: I guess the main question on feasibility is whether it is fair game to not be done with the memory when the explicit fence signals. IIRC some wayland remote desktop (waypipe?) just maps the buffer and copies it over the network
17:39 bnieuwenhuizen: which is incredibly stupid for AMD HW anyway (as we'll never allocate cacheable memory ...), but I'm not sure whether there is a stance yet on allowing/disallowing that
17:39 bnieuwenhuizen: danvet ^
17:40 emersion: i don't understand why some sync stuff should be part of the buffer data
17:41 emersion: yes, waypipe can be used in a mode that copies dmabufs over the network
17:43 emersion: i think userspace would expect to be able to read the buffer, store it in a file, and then re-create a buffer next boot or something
17:43 emersion: also buffers need to be transferable between two different GPUs (of the same vendor), but that should be fine in this case
17:43 mareko: waypipe is kinda unrelated to implicit syncing between dependent jobs within the gfx queue
17:44 emersion: well, if some sync state is in the buffer data, things will get copied as-is
17:44 bnieuwenhuizen: mareko: I think the interesting part though is why christian thinks explicit fences shouldn't wait for idle if they're on the same queue. in general if there is a signal->wait relation using an explicit fence, the receive side would depend on any changes in the signal side
17:46 bnieuwenhuizen: I'm not sure what usecases/optimizations we unlock by ever not waiting for idle
17:48 nikolaj_basher: Hi there :-) Im using kubuntu and have an i-tec usb3 dock station and I have the following problem
17:48 nikolaj_basher: Has anyone experienced KDE starting to lag when you plug in the dock station and disable the laptop screen? When all screens are active (the two external screens and the laptop), nothing lags
17:50 buhman: (oh I figured it out-- I needed a call to xcb_flush; my xcb window didn't exist yet when I called eglCreateWindowSurface)
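[Aside: a minimal sketch of the fix buhman found, not his actual code. The point from the log is that xcb_create_window is only buffered client-side until xcb_flush(), so eglCreateWindowSurface can fail with EGL_BAD_ALLOC if you create the surface before flushing; the EGL display/config setup is assumed to have happened elsewhere.]

    #include <stdint.h>
    #include <xcb/xcb.h>
    #include <EGL/egl.h>

    /* Assumes dpy/config were set up earlier (eglInitialize, eglChooseConfig). */
    static EGLSurface create_window_surface(EGLDisplay dpy, EGLConfig config,
                                            xcb_connection_t *conn,
                                            xcb_screen_t *screen)
    {
        xcb_window_t win = xcb_generate_id(conn);
        xcb_create_window(conn, XCB_COPY_FROM_PARENT, win, screen->root,
                          0, 0, 640, 480, 0, XCB_WINDOW_CLASS_INPUT_OUTPUT,
                          screen->root_visual, 0, NULL);
        xcb_map_window(conn, win);

        /* Without this, the create_window request may still sit in xcb's
         * client-side buffer, so the window doesn't exist on the server yet
         * and eglCreateWindowSurface returns EGL_NO_SURFACE / EGL_BAD_ALLOC. */
        xcb_flush(conn);

        return eglCreateWindowSurface(dpy, config,
                                      (EGLNativeWindowType)(uintptr_t)win, NULL);
    }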
17:52 nikolaj_basher: This is before i disable the screen https://paste.ubuntu.com/p/Qjnqj9ZzQs/
17:52 nikolaj_basher: this is after (when lagging) https://paste.ubuntu.com/p/SRZ6v7qtbv/
17:52 nikolaj_basher: https://paste.ubuntu.com/p/syWj4spRnP/ (Xorg.0.log)
17:53 bnieuwenhuizen: hmm, okay, I see how it can be more useful with implicit sync
17:56 bnieuwenhuizen: since unlike explicit sync, the existence of a fence is not a clear signal that there is a real dependency there
17:57 nikolaj_basher: bnieuwenhuizen, is that a comment on my post?
17:57 bnieuwenhuizen: no
17:57 nikolaj_basher: :-)
18:01 bnieuwenhuizen: mareko: this entire not waiting for idle story only happens if both signalling & receiving side are on the same queue and using the same drm fd right?
18:01 bnieuwenhuizen: would it make sense to have that wait flag in a side-buffer?
18:01 bnieuwenhuizen: in the driver and not be shared
18:02 bnieuwenhuizen: I guess that still needs some radv & radeonsi cooperation if they are using the same drm fd (e.g. in the same process)
18:08 nikolaj_basher: is there no one who is willing to take a look?
18:09 bnieuwenhuizen: nikolaj_basher: what do you mean by lagging?
18:11 nikolaj_basher: bnieuwenhuizen, everything really slows down, moving the mouse, terminal input etc, as if all the computer's power stops
18:12 nikolaj_basher: until I enable the laptop screen again.
18:12 bnieuwenhuizen: what GPU?
18:13 nikolaj_basher: I don't know, I just moved away from windows, so I don't know where to look for that
18:14 nikolaj_basher: It's just that I can't do anything; just opening the menu takes 5-7 sec. Normally it takes just the mouse click
18:16 nikolaj_basher: bnieuwenhuizen, but if you tell me the command for what you want to see I'll send a paste of it
18:17 bnieuwenhuizen: you can find your GPU with glxinfo | grep "OpenGL renderer"
18:18 bnieuwenhuizen: but honestly I don't have many ideas on what can be wrong here, 5-7 sec responses is a lot
18:18 bnieuwenhuizen: my first thought was laptop overheating due to extra load of the external monitor but I don't see anything like that in dmesg
18:20 nikolaj_basher: bnieuwenhuizen, i just need to install mesa-utils, two secs, then i'll make the dump
18:20 nikolaj_basher: bnieuwenhuizen, OpenGL renderer string: Mesa DRI Intel(R) HD Graphics 4600 (HSW GT2)
18:24 bnieuwenhuizen: so if you have ssh access while it is slow it might be worth logging in via ssh and checking via top (and intel_gpu_top for the GPU) what is being so slow
18:33 nikolaj_basher: bnieuwenhuizen, can I do it in a terminal without ssh? it just takes a while but it can be done
18:35 bnieuwenhuizen: nikolaj_basher: the trick is that you can check ssh on a different computer while GFX is unresponsive. That gets pretty messy if you try to do it on the same computer :)
18:35 nikolaj_basher: bnieuwenhuizen, ahh but I don't have another ;-(
18:35 bnieuwenhuizen: though you could just run top in a terminal and see if anything jumps out when it is slow
18:36 nikolaj_basher: bnieuwenhuizen, i'll try
18:39 nikolaj_basher: bnieuwenhuizen, none... CPU 1.7% average, GPU render 1.5%
18:43 LiquidAcid: nikolaj_basher, if you have an android device you can run something like connectbot
18:43 nikolaj_basher: LiquidAcid, thanks for the hint.
19:17 nikolaj_basher: bnieuwenhuizen, I have found the root cause but not the fix
19:18 nikolaj_basher: The problem is when the laptop screen turns off
19:18 bnieuwenhuizen: okay, no idea then
19:22 emersion: hm, drm-tip doesn't build anymore for me, amdgpu has a ref to drm_gem_object_put_unlocked, should be changed to drm_gem_object_put
19:43 airlied: nikolaj_basher: yeah it's a messy problem, when you turn off the screen, a fake crtc gets used with a refresh rate of 1 fps
19:47 mattst88: do I need to do anything more than CC="ccache gcc" make to get kernel builds to use ccache?
19:47 mattst88: some google results indicated that that was sufficient, but I'm not seeing anything reported in ccache -s
19:47 anholt_: mattst88: you don't have a cross compile set by chance, do you?
19:48 anholt_: only other thing I could think of
19:48 mattst88: anholt_: nope, I don't
19:50 anholt_: mattst88: something that may help you in general, if your distro does it: put /usr/lib/ccache in your path.
19:51 nikolaj_basher: airlied, is it the driver or do you know why it happens?
19:52 mattst88: anholt_: oh! that is the fix. thanks a bunch!
19:52 anholt_: mattst88: note that the path trick doesn't help you with cmake, because cmake is a dumpster fire.
19:52 mattst88: anholt_: I assume that /usr/bin/ccache needs to find its compiler symlinks in $PATH
19:53 anholt_: hmm. wonder if you'd end up doubly invoked then. (/usr/bin/ccache finds the compiler in path, which is /usr/lib/ccache/gcc, which then finds the real /usr/bin/gcc)
19:55 mattst88: that's a good question. looks like setting PATH is sufficient, so no need for CC="ccache gcc"
19:56 mattst88: wish I'd realized ccache wasn't working a little earlier into this bisect...
19:57 airlied: nikolaj_basher: it's part of the X server
20:01 nikolaj_basher: airlied, the problem is the code in the X server??
20:02 nikolaj_basher: The reason why I ask is that ubuntu-mint cinnamon doesn't have this problem
20:13 airlied: nikolaj_basher: oh the compositor is probably getting hit by it
20:13 airlied: so non-composited desktop should be fine
20:14 airlied: but in a composited desktop the compositor will get hit with 1fps updates
20:23 nikolaj_basher: airlied, so is it possible to disable it?
20:24 nikolaj_basher: airlied, or is it because of kde vs gnome
20:24 airlied: nikolaj_basher: other than not closing the lid, not really
20:24 airlied: nikolaj_basher: it's not kde vs gnome, it's composited vs non-composited desktop
20:25 nikolaj_basher: airlied, thanks for the hint I will google it and read about it :-)
20:26 nikolaj_basher: airlied, but from what i can read, linux mint cinnamon is also a composited desktop
20:28 airlied: oh maybe being a GL compositor also matters, not sure if there still are render based compositors
20:29 nikolaj_basher: airlied, thanks. I'll use my laptop as the 3rd screen
20:42 bnieuwenhuizen: airlied: xcompmgr :)
20:46 dinosomething: for a modern system, should vga arb be on? if its on, does it interfere with anything, or is it safe to simply have it on in case its needed?
20:47 airlied: safe to leave on and ignore
20:53 dinosomething: when i boot, i have /dev/dri/card0 as my igpu, and /dev/dri/card1 as my external gpu. is it possible, on boot, to have the radeon be card0?
20:53 karolherbst: dinosomething: why would you want to do something like this?
20:53 dinosomething: the reason why is because xorg is selecting card0 to be the primary. but i want the egpu to be the primary, so i figured maybe something like "dri.ignore=the-intel-gpu" ?
20:54 karolherbst: dinosomething: is there a problem with the external display or something?
20:54 karolherbst: normally all this should work just fine
20:54 dinosomething: karolherbst: not at all
20:54 karolherbst: is it a 4k display and it's laggy or something else?
20:55 dinosomething: karolherbst: its just a 1920x1080 display, displayportted into my radeon card, the radeon card in a egpu enclosure, then thats thunderboltted to my laptop
20:55 karolherbst: mhhh
20:55 karolherbst: but what are the symptoms?
20:55 dinosomething: karolherbst: when you say it should be automatic, what do you mean?that xorg should automatically select card1 as the primary?
20:55 dinosomething: karolherbst: symptoms are that glxgears runs at 300fps
20:56 karolherbst: ignore glxgears
20:56 dinosomething: ok but check this out
20:56 dinosomething: when i blacklist i915, and card0 is radeon, it runs very fast, glxgears, glmark2, dota2, all that
20:56 dinosomething: its on par with what i expect from the card, everything works great when i blacklist i915
20:56 airlied: just use an xorg.conf
20:57 dinosomething: airlied: yea, that would work, but im trying to avoid touching manual xorg.conf's as much as i can
20:57 karolherbst: yeah.. if it's a performance concern then probably that
20:57 karolherbst: but laptops...
20:57 dinosomething: karolherbst: yea seriously
20:57 karolherbst: airlied: the issue is just, that X refuses to start if you don't have the egpu
20:57 karolherbst: this just sucks on so many levels
20:57 dinosomething: yup!
20:57 dinosomething: karolherbst: you got it!
20:57 dinosomething: karolherbst: exactly
20:57 karolherbst: dinosomething: well... I know the issue, I just doubt it matters for gaming
20:57 dinosomething: i guess i could do multiseat stuff?
20:57 karolherbst: you will probably be capped by the refresh rate
20:57 karolherbst: but...
20:57 karolherbst: this shouldn't actually matter
20:58 dinosomething: karolherbst: exactly
20:58 karolherbst: those fps numbers don't really mean anything if you go above the refresh rate
20:58 dinosomething: karolherbst: so, are you saying that xorg should automatically pick the best gpu as the primary?
20:58 karolherbst: but our multi GPU architecture also sucks _a_lot_ in linux
20:58 dinosomething: karolherbst: yea lol
20:58 dinosomething: id love to contribute to it
20:58 karolherbst: dinosomething: no
20:58 airlied: nobody knows what the correct answer is in that case though
20:58 dinosomething: but first i need to figure this out
20:58 karolherbst: but you shouldn't see any actual problems
20:58 airlied: some people don't want the egpu
20:58 karolherbst: lower fps.. sure
20:58 airlied: some people do
20:58 karolherbst: but not lower than the refresh rates
20:58 dinosomething: karolherbst: oh, yea
20:59 airlied: you can't pick the "right" answer by default
20:59 dinosomething: karolherbst: no no, youre 100% correct
20:59 dinosomething: there are no "problems"
20:59 dinosomething: like
20:59 dinosomething: other than really bad performance
20:59 karolherbst: dinosomething: well.. as long as you stay above 60 it's fine
20:59 airlied: I suppose if no monitors are connected to the egpu you can make a slightly better decision
20:59 karolherbst: or do you get higher fps drops?
20:59 dinosomething: there is 1 problem actually, that you might be interested in
20:59 dinosomething: check this out:
21:00 karolherbst: airlied: well.. we really don't want to have to copy the frames around
21:00 karolherbst: we need to solve this issue :p
21:00 airlied: karolherbst: how do we know though
21:00 karolherbst: what do you mean?
21:00 airlied: karolherbst: if I want to use my egpu as just an offload but display on my panel
21:00 karolherbst: you use the gpu where the display is connected for compositing
21:00 airlied: how do you decide that
21:00 airlied: the laptop also has a display
21:00 airlied: or maybe a desktop has another gpu in it
21:00 karolherbst: and if you have 3 gpus, you have 3 compositor contexts for compositing
21:01 karolherbst: and you never copy a window's content to another GPU
21:01 dinosomething: so, intel gpu on, laptop screen on, amdgpu on, monitor plugged into radeon card. so, glxgears and whatnot is low fps. ok, now, this is super interesting: i close the laptop lid, then its just the monitor plugged into the radeon egpu. this is whats sooo weird: the mouse cursor moves totally fine and fluidly. but the actual updates of the rest of the desktop happen like 1 time every 3-5 seconds
21:01 karolherbst: except on the ones it's displayed on
21:01 karolherbst: dinosomething: uhhh... yeah.. that sounds like a bug
21:01 airlied: dinosomething: yeah that's what I described above to someone else
21:01 dinosomething: its so bizarre. because if i run glxgears, it still reports 300fps!
21:01 dinosomething: but the desktop like, wont show updates except at .3 fps!
21:02 airlied: when you have a secondary screen on a secondary GPU, the primary screen controls the compositor refresh
21:02 dinosomething: airlied: yup!
21:02 dinosomething: thats what i was thinking
21:02 airlied: when you turn off the panel, the X server drops the primary refresh to 1fps
21:02 dinosomething: ok, what can i do to help?
21:02 dinosomething: like
21:02 dinosomething: i want to contribute
21:02 airlied: to save power because hey the lid is closed
21:02 dinosomething: airlied: ohhhhh
21:02 airlied: but the secondary monitor isn't taken into account
21:02 dinosomething: whoa
21:02 karolherbst: yeah...
21:02 airlied: since it's not on the primary GPU
21:02 dinosomething: uhhhh
21:02 karolherbst: but because multi gpu sucks.. it sucks :p
21:02 dinosomething: yea! ok so
21:02 dinosomething: so that comes back to my original question, right?
21:02 dinosomething: i want the radeon as the primary
21:03 airlied: unfortunately the solutions are a) rewrite X server, b) rewrite wayland compositors
21:03 karolherbst: I really wish we would just completely rework how we do multi GPU stuff
21:03 airlied: dinosomething: xorg.conf
21:03 karolherbst: airlied: we need to do this though :p
21:03 airlied: and maybe write a script at boot that runs lspci and rm's it :-)
21:03 dinosomething: airlied: ok im happy with editing xorg.conf, but i just wanted to make sure im like
21:03 karolherbst: the current situation is just crappy
21:03 dinosomething: doing it the "right" way
21:03 airlied: karolherbst: I've been writing it for 10 years now
21:03 karolherbst: I know :/
21:03 dinosomething: check this out: i got a cool project idea
21:03 dinosomething: want to get your thoughts
21:03 karolherbst: dinosomething: the issue is just, if you have a custom Xorg.conf, you have to change it every time you disconnect the case
21:03 airlied: karolherbst: I even rewrote the X server to dynamically move stuff between screens
21:03 karolherbst: or is there config magic to prevent this?
21:04 dinosomething: some sort of daemon, right. it keeps a file at "/run/egpu/xorg.conf.d/blah.conf"
21:04 airlied: karolherbst: not if you write a system service to do it :-)
21:04 karolherbst: airlied: heh... yeah
21:04 dinosomething: then you symlink /etc/x11/xorg.conf.d -> /run/egpu/xorg.conf.d
21:04 karolherbst: some distributions already have something around though
21:04 karolherbst: but yeah
21:04 karolherbst: essentially
21:04 airlied: it might be possible to write a seat configuration file, but that stuff is messy
21:04 dinosomething: the daemon will keep /run/egpu/xorg.conf.d up to date with changes to your hotplugged gpus or whatever
21:04 karolherbst: airlied: you still have the login manager in between :/
21:05 dinosomething: but you still gotta restart X when it changes yea?
21:05 karolherbst: ohh wait.. the login manager can just use the default stuff
21:05 karolherbst: mhhhhh
21:05 karolherbst: dinosomething: yeah
21:05 karolherbst: eGPU is fundamentally broken right now :p
21:05 karolherbst: sadly
21:05 airlied: dinosomething: if you unplug the egpu I guarantee restarting X will be the least of your worries
21:05 karolherbst: maybe in 10 years it will be better
21:05 dinosomething: karolherbst: yea thats the sense i get, but i dont see why the fix isnt as simple as, for one, the "drop_to_1_fps_if_lid_closed()" can add a check for like
21:06 karolherbst: dinosomething: it's a bug though...
21:06 karolherbst: we should be smarter about this
21:06 dinosomething: airlied: heh yea, in my scenario the idea is that the cable will be left plugged in as long as its running
21:06 dinosomething: so the pseudocode or whatever is like "if running on integrated -> if lid closed -> drop to 1fps"?
21:06 karolherbst: airlied: can't we not do this 1 fps stuff if there are connected displays?
21:06 karolherbst: on any gpu?
21:06 karolherbst: or is that a fundamental X issue
21:07 airlied: karolherbst: it's not that simple
21:07 airlied: it's a 3D client problem with present
21:07 airlied: if you have a 3D compositor (mutter) and it is doing vblank synced drawing
21:07 airlied: and you close the lid, and turn off the crtc it was syncing to, what do you sync it to
21:08 karolherbst: one of the remaining crtcs
21:08 airlied: what remaining crtc?
21:08 airlied: the primary gpu is the only one you can use
21:08 dinosomething: quick question: is closing the lid the exact same thing as selecting "dont use this display" in gnome settings?
21:08 airlied: before you answer that with the secondary one :-P
21:08 airlied: dinosomething: pretty much
21:08 karolherbst: airlied: right.. but that's assuming our broken infra...
21:08 karolherbst: but yeah
21:08 dinosomething: ok
21:08 karolherbst: I get your point
21:08 karolherbst: it wouldn't be an issue with local compositing
21:09 airlied: karolherbst: it's part of how GL/DRI3/present works
21:09 karolherbst: still sounds like broken design to me :p
21:09 airlied: I suppose we could fake up a 60fps timer instead of a 1fps timer
21:09 karolherbst: or the old refresh rate?
21:09 airlied: karolherbst: we likely don't know that
21:09 karolherbst: well
21:09 karolherbst: we could save it
21:09 airlied: since the crtc got turned off
21:09 karolherbst: at some point we did know it
21:09 airlied:can see you need to work on the X server some more
21:10 karolherbst:hopes he never has to
21:10 airlied: if you keep talking like this :-P
21:10 karolherbst: would it be better with wayland though?
21:10 dinosomething: im sorry, im slightly lost... the issue yall are discussing is related to what scenario? is it "laptop lid was on, monitor was on through egpu. now laptop lid is closed" ?
21:10 airlied: karolherbst: once mutter is rewritten from scratch
21:10 karolherbst: ahh.. right
21:10 dinosomething: so the compositor is the issue, not xorg?
21:10 karolherbst: so with wayland you also see this 1 fps problem?
21:10 airlied: dinosomething: it's a bit of both
21:10 karolherbst: mhhh
21:11 karolherbst: annoying
21:11 airlied: karolherbst: not sure what mutter does in that situation
21:11 dinosomething: im not sure how to say "use wayland" in ubuntu20.04
21:11 airlied: probably fails before you get that far
21:11 karolherbst: airlied: heh.. worth a shot I guess :p
21:11 dinosomething: if you know how im happy to try and report back in here
21:11 karolherbst: dinosomething: in the login manager you usually can select your session
21:11 karolherbst: somewhere
21:11 karolherbst: dunno how that is with ubuntu now
21:11 karolherbst: but they moved to wayland by default now, didn't they?
21:12 airlied: at some point I might try and fix the X server again, though it's still a crappy solution
21:12 karolherbst: dinosomething: https://linuxconfig.org/how-to-enable-disable-wayland-on-ubuntu-20-04-desktop
21:12 dinosomething: karolherbst: i dunno, but my default is X because i look at the env var
21:12 airlied: since you really do want a compositor per GPU output and lots of magic buffer sharing
21:12 airlied: and device loss and context loss
21:12 airlied: and smart applications
21:12 karolherbst: yeah
21:12 airlied: and a nouveau that runs the dgpu fast
21:12 airlied: so that you can write one set of rules
21:12 karolherbst: :)
21:12 airlied: that aren't 'nouveau is slower on the dgpu than the igpu, so don't do any of this'
21:13 karolherbst: I have workloads where nouveau on the 1050 is faster than the intel gpu though :p
21:13 dinosomething: errr apparently i should be using wayland but im pretty sure im not because there was some WAYLAND_SESSION var or whatever i read about on stack overflow
21:13 airlied: karolherbst: a reclocked 1050?
21:13 karolherbst: airlied: nope
21:13 dinosomething: ill be back on here in a lil bit, gonna verify its using wayland and attempt to reproduce
21:13 karolherbst: dinosomething: good luck
21:13 dinosomething: ;)
21:13 karolherbst: maybe using the other thing will work better for you
21:14 dinosomething: which thing
21:14 karolherbst: xorg vs wayland
21:14 dinosomething: oh yeah
21:14 dinosomething: so the hypothesis is that wayland will continue to composite at > 1 fps, even when the lid is closed, right?
21:15 airlied: wayland isn't a thing
21:15 airlied: mutter's wayland implementation is
21:15 airlied: and I don't think it does 1fps fallbacks
21:15 karolherbst: airlied: gputests pixmark_piano runs worse on the intel GPU compared to a stock clocked 1050
21:15 airlied: but it might just stop drawing altogether
21:15 airlied: karolherbst: once that's a game on steam you can fix it :-P
21:15 karolherbst: not much of a difference though
21:15 karolherbst: :D
21:18 dinosomething: whoa
21:18 dinosomething: ok um
21:18 dinosomething: i edited /etc/gdm3/custom.conf, i set WaylandEnable=true (previously it was commented out), and now everything works....
21:18 dinosomething: laptop lid is "off", and my monitor is driven by the radeon just fine
21:19 dinosomething: well, the fps is still like 300 though
21:19 dinosomething: its not as good as if i blacklist i915
21:20 dinosomething: but i guess force enabling wayland does the trick to at least make it usable
21:20 dinosomething: but im still confused about why its 300fps
21:20 dinosomething: is the igpu still being used for compositing? shouldnt the compositing be done on the card, and not locally?
21:21 airlied: dinosomething: yes, and yes, but that's not how it works
21:21 airlied: saying it should be, and writing code for 5 years to make it happen are very different
21:21 karolherbst: dinosomething: pcie bandwidth limitation
21:21 dinosomething: most def, and i appreciate the work put into this
21:21 karolherbst: we need to copy every frame over
21:22 dinosomething: i just want to know so that i can contribute some how
21:22 karolherbst: and going over the x4 pcie link can reduce the overall throughput
21:22 karolherbst: amdgpu does change the pcie link speeds, right?
21:22 karolherbst: maybe not, if only the link is under full load.. dunno
21:22 karolherbst: not my department :p
21:23 karolherbst: but I was thinking of adding something like this for nouveau
21:23 karolherbst: to increase the link speed if there is a lot of stuff going on on the link
21:23 agd5f_: amdgpu can change the speed of its link to the bridge above it
21:23 karolherbst: ohhhh
21:23 karolherbst: I remember the discussion where there is a bug probably
21:23 karolherbst: right...
21:24 airlied: dinosomething: contributing is probably a large task, involving fixing the X server, mutter, mesa and kernel bits in order to make something
21:24 agd5f_: the problem with thunderbolt is that it claims one thing in the pcie spec, but actually does another in practice
21:24 karolherbst: agd5f_: I am still convinced that the code is wrong in practice and you really just want to max it out
21:24 airlied: I expect fixing mutter to do this correctly, involves rewriting mutter in nearly every direction
21:24 karolherbst: but...
21:24 karolherbst: maybe AMD GPUs are less resilient in this domain
21:25 karolherbst: with nvidia we just max it out and the devices do the right thing themselves
21:25 dinosomething: karolherbst: in windows this all works really fast, with dota 2 settings all the way up, so i think pcie bandwidth might not be an issue
21:25 agd5f_: karolherbst, wastes power to max it out if the next link up is narrow
21:25 karolherbst: agd5f_: doesn't matter
21:25 dinosomething: also, i have a hypothesis that where x does its 1fps stuff, wayland does 30fps, heres why i think that:
21:25 karolherbst: I checked the consumption and... well
21:25 karolherbst: it doesn't matter
21:25 karolherbst: especially if you have high load
21:26 dinosomething: when i run dota2 in linux, laptop lid off, monitors driven by radeon card, i have the settings all the way up, and the fps is consistently 30fps
21:26 dinosomething: but when i blacklist i915 its 140fps+ consistently
21:26 agd5f_: karolherbst, it does matter. otherwise we wouldn't spend lots of sweat and tears making pcie reclocking work
21:26 karolherbst: well, now it's broken
21:26 karolherbst: so..
21:26 dinosomething: so im thinking that the wayland thing thats working right now, is only actually working at 30fps, whereas (mutter?) is doing 1fps
21:26 karolherbst: your choice though
21:26 dinosomething: does that sound like it makes sense?
21:27 airlied: dinosomething: 30fps sounds like it just hits bw limits around there
21:27 dinosomething: i dont think this is pcie stuff though becuase blacklist i915 allows dota2 to run at top settings at 140fps just fine
21:27 airlied: dinosomething: it is pcie
21:27 karolherbst: dinosomething: but then it's not copied over the pcie bus
21:27 dinosomething: karolherbst: ohhhhhhh
21:27 karolherbst: yeah
21:27 dinosomething: karolherbst: damn, yall are amazing
21:27 dinosomething: xorg have a donation page? ill toss some money
21:28 dinosomething: so, in theory, having the compositing happen locally is like, whats currently "the way" its done?
21:28 dinosomething: because the idea is that, in a desktop, that bandwidth limitation isnt an issue
21:28 dinosomething: right?
21:28 dinosomething: in a desktop, the pcie card has a 16x or so connection, and is much closer to ram, so its faster?
21:28 karolherbst: yeah.. more or less. you don't want to move everything to the "main" GPU, because that wastes pcie bandwidth
21:28 karolherbst: and then
21:28 karolherbst: you copy it back to display it
21:28 karolherbst: it's really bad
21:29 dinosomething: ok so i need to make sure that im using radeon as the primary
21:29 agd5f_: you also want to render in vram. It's faster even with the transfer at the end
21:29 dinosomething: is that synonymous with setting Screen0 -> device to "amdgpu" ?
21:29 airlied: yeah you want to launch the apps on the gpu they are going to display on, in theory
21:29 airlied: in practice we can't do that
21:29 dinosomething: in the Screen section in an xorg config, is the "device" field used as the device where compositing happens?
21:30 airlied: and if they get dragged across then you want to either copy or issue context lost and let them restart
21:30 karolherbst: dinosomething: the issue with that is just, that if you unplug the egpu, you have to make intel the primary again.. but with a reboot it shouldn't matter
21:30 karolherbst: dinosomething: maybe have two grub entries per kernel?
21:30 karolherbst: or something?
21:30 dinosomething: karolherbst: yea in this scenario im like, assuming i dont unplug for now
21:30 karolherbst: one blacklists i915, the other doesn't
21:30 dinosomething: karolherbst: yea i use refind, i have 2 kernel entries, one that blacklists, and a default that doesnt
21:30 karolherbst: ahh yeah
21:30 dinosomething: karolherbst: but check this out
21:30 karolherbst: probably the less annoying thing then
21:30 dinosomething: itd be nicer if i didnt have to blacklist, why cant i just like
21:31 dinosomething: well
21:31 dinosomething: idk, just umm, provide a way to tell dri "hey this is the primary", prior to xorg load?
21:31 karolherbst: I think there are some people having a systemd service which detects whether the egpu is plugged in or not
21:31 airlied: just write a startup script that runs lspci and write an xorg.conf
21:31 dinosomething: i guess an xorg conf is the right way
21:31 karolherbst: and switches config files over
21:31 dinosomething: yea
21:31 dinosomething: karolherbst: yea there is
21:31 karolherbst: yeah
21:31 karolherbst: I guess this is probably good enough
21:31 karolherbst: hopefully
21:31 karolherbst: broken xorg config is annoying
21:31 dinosomething: but even better is using multiple "serverLayout"s though yea?
21:31 karolherbst: prevents starting X
21:31 dinosomething: isnt that what serverlayout is for?
21:32 dinosomething: or seats or whatever? or loginctl ?
21:32 karolherbst: dunno
21:32 karolherbst: never played around with that
21:32 dinosomething: karolherbst: i think you can do it if you use serverlayout
21:32 dinosomething: but when you start X you gotta specify which layout name to use
21:32 dinosomething: i thiiiiink that does it
21:33 dinosomething: ok, so, the gist of it is, either A) manual xorg.conf or B) blacklist i915
21:33 dinosomething: the manual xorg.conf would simply select a different "/dev/dri/card" as the primary i suppose
21:34 dinosomething: or, would select which /dev/dri/card device is the "device" for Screen0
21:34 dinosomething: i guess i could have a serverlayout with 2 screens defined, and run 2 instances of X
21:34 dinosomething: but itd be really fun to work on this part of x internally, but i guess it sounds like thats a super huge undertaking
21:35 dinosomething: but, as far as dri is concerned, dri isnt doing anything wrong, yea? its adding the cards just fine, so its all X?
21:46 dinosomething: ok, yea, using my own xorg.conf and specifying Screen0 device as amdgpu takes care of it
21:46 dinosomething: my laptop screen just continues to display the console bootup messages because i didnt define that monitor, so thats good. thanks a lot for your help
22:26 dinosomething: karolherbst: airlied: you can use MatchSeat on screen/device/serverlayout: check this out https://gitlab.freedesktop.org/xorg/xserver/commit/7070ebeebaca1b51f8a2801989120784a1c374ae
22:26 dinosomething: so that is perfect for a gpu that might not exist. so im just gonna have x start under a different seat
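[Aside: an illustrative xorg.conf fragment along the lines dinosomething describes, with the MatchSeat option from the commit linked above so the sections only apply when X is started for that seat. Identifiers and the seat name are made up; this is a sketch, not a tested config.]

    Section "Device"
        Identifier "egpu"
        Driver     "amdgpu"
        MatchSeat  "seat-egpu"
    EndSection

    Section "Screen"
        Identifier "Screen0"
        Device     "egpu"
        MatchSeat  "seat-egpu"
    EndSection

The idea being that an X server started for the eGPU seat picks up these sections, while a plain start (no eGPU present) ignores them and falls back to the default configuration.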
23:47 alyssa: krh: Looking into 16-bit booleans for us, I'm noticing even without bool_to_int32, bcsel gets missed going into 16-bit
23:48 alyssa: We'll get sequences like ('f2fmp', ('bcsel', a, '#b', '#c')) where the constants are 32b and the csel happens at 32b
23:48 alyssa: I mean, ok, I can add an algebraic rule to propagate inside
23:48 alyssa: But then I have the sequence in -bshading:shading=cel
23:48 alyssa: ('f2fmp', ('bcsel', a, ('bcsel', b, '#c', '#d'), '#e'))
23:48 alyssa: still should be done all at 16b, but not so easy to snarf out as an algebraic rule
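[Aside: the single-level rule alyssa mentions might look roughly like this in nir_opt_algebraic.py's pattern syntax; a sketch only, not an upstream rule, and the discussion below settles on fixing the constants' size upstream instead.]

    # Push the mediump conversion into a bcsel whose sources are constants, so
    # both the select and the constants can become 16-bit (illustrative only).
    (('f2fmp', ('bcsel', a, '#b', '#c')),
     ('bcsel', a, ('f2fmp', b), ('f2fmp', c))),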
23:51 anholt_: where is this coming from that the constants aren't 16-bit already?
23:51 alyssa: GLSL..?
23:52 alyssa: anholt_: shader-db/shaders/glmark/1-8.shader_test
23:53 alyssa: The constants used for the csel are coming in at 32b
23:54 anholt_: should probably look into why the constants were 32b in the first place, rather than trying to recover by pushing fmp down after the fact.
23:54 alyssa: ack
23:55 alyssa:is scared of GLSL passes :p
23:56 imirkin: it's more scared of you than you are of it
23:57 alyssa: Okay it's just GLSL except Lisp