08:23 markus_wanner: Hi, on wayland, is there a way to list attached displays from the CLI? Or where does Gnome Display get its information from?
08:46 pmoreau: markus_wanner: Hello, you could try `grep . /sys/class/drm/card*-*/status`.
09:13 markus_wanner: pmoreau: Thanks, that's what I'm already checking, but it gives me just a connected/disconnected status. No vendor info or anything.
09:13 markus_wanner: Is it i2c that's used to get possible resolution and other such vendor info?
09:15 pmoreau: You can try `grep . /sys/class/drm/card*-*/modes` to get the resolutions
09:17 pmoreau: And `grep . /sys/class/drm/card*-*/device/device/vendor` will give you the PCI ID of the vendor; 10de is NVIDIA, not sure about Intel and AMD.
09:18 pmoreau: There might be a better way to get all that information, but at least it is exposed through various files under /sys/class/drm/card*-*.
09:18 pmoreau: markus_wanner: -^
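pmoreau's PCI-ID hint can be rounded out: 0x8086 is Intel and 0x1002 is AMD/ATI (standard PCI-SIG vendor IDs). A guarded sketch that maps the sysfs value to a name; the loop simply skips if no DRM connectors exist on the machine:

```shell
# Map the PCI vendor ID exposed in sysfs to a vendor name.
# 0x10de = NVIDIA, 0x8086 = Intel, 0x1002 = AMD/ATI.
vendor_name() {
  case "$1" in
    0x10de) echo NVIDIA ;;
    0x8086) echo Intel ;;
    0x1002) echo AMD/ATI ;;
    *)      echo unknown ;;
  esac
}

for f in /sys/class/drm/card*-*/device/device/vendor; do
  [ -e "$f" ] || continue   # no DRM connectors on this machine, skip
  echo "$f: $(vendor_name "$(cat "$f")")"
done
```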
09:34 markus_wanner: pmoreau: ah, I overlooked `modes`, that seems to be about the display, indeed. card*-*/device links back to the GPU device, not the display.
11:18 pmoreau: markus_wanner: Oh, you wanted the vendor of the screen, not of the card, got it. No idea how to get the screen’s vendor.
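For the screen's vendor specifically, the kernel exposes each connector's EDID as `/sys/class/drm/card*-*/edid`; bytes 8-9 of the EDID pack a 3-letter PNP manufacturer ID (5 bits per letter, 'A' = 1). The `edid-decode(1)` tool parses the whole blob; below is a hand-rolled sketch of just the manufacturer decoding, fed with well-known sample values rather than data read from real hardware:

```shell
# Decode the EDID manufacturer word (bytes 8-9, big-endian) into a
# 3-letter PNP ID: bit 15 is reserved, then 3 groups of 5 bits, 'A' = 1.
decode_pnp_id() {
  w=$(( $1 ))
  c1=$(( (w >> 10) & 0x1f ))
  c2=$(( (w >> 5)  & 0x1f ))
  c3=$((  w        & 0x1f ))
  letters=ABCDEFGHIJKLMNOPQRSTUVWXYZ
  printf '%s%s%s\n' \
    "$(printf %s "$letters" | cut -c"$c1")" \
    "$(printf %s "$letters" | cut -c"$c2")" \
    "$(printf %s "$letters" | cut -c"$c3")"
}

decode_pnp_id 0x10AC   # prints DEL (Dell)
```

On a live system you would feed it the real bytes, e.g. `decode_pnp_id 0x$(dd if=/sys/class/drm/card0-DP-1/edid bs=1 skip=8 count=2 2>/dev/null | od -An -tx1 | tr -d ' ')`.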
11:31 mivanov: imirkin: karolherbst: Hello again, we've talked before about vertical sync with Nouveau on GK104 (Kepler, NVE0)
11:32 mivanov: I wanted to ask why you recommended the Nouveau DDX instead of the generic modesetting DDX?
11:32 mivanov: I've read some threads stating that most DDX will be deprecated in favor of the Modesetting DDX with Glamor
11:35 mivanov: By the way, with the Nouveau DDX (intel disabled, just the Nvidia card), if I disconnect my monitors and reconnect them again, I get a dead zone, meaning I can't put any window there. And mirroring becomes gray in the Display utilities.
11:36 karolherbst: mivanov: the nouveau ddx generally has lower overhead (but this is the case for most ddx compared to modesetting) and higher power efficiency, and doesn't use GL, which causes fewer issues overall
11:36 mivanov: Doesn't the modesetting DDX also use GL via Glamor, or am I mistaken?
11:37 karolherbst: right
11:37 mivanov: Modesetting DDX without Glamor is slow
11:37 karolherbst: sure, but you have a high API overhead
11:37 karolherbst: and a driver ddx can call into a driver directly to do stuff
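The DDX choice discussed above is made in the X server configuration; a minimal sketch (the `Identifier` is an example name, and the file would typically live under `/etc/X11/xorg.conf.d/`):

```
Section "Device"
    Identifier "nvidia-card"     # example name, anything unique works
    Driver     "nouveau"         # or "modesetting" for the generic DDX
EndSection
```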
11:37 mivanov: Actually I don't use any GL applications, so do I really need 3D acceleration at all
11:38 karolherbst: well, most GUI applications are using GL these days
11:38 karolherbst: even if it's only hidden inside Qt5 or gtk3
11:39 mivanov: I am actually forwarding framebuffers from Virtual machines, so it's not like I am rendering anything via the GPU
11:39 mivanov: So I don't get why videos feel slower without Glamor
11:39 mivanov: The Virtual machines doing the actual rendering don't have GPUs
11:40 karolherbst: mivanov: because modesetting's software paths are just super slow
11:40 mivanov: So imagine that the X server is just receiving frames via a shared memory. I just want those frames to be displayed in sync.
11:40 mivanov: I see
11:40 mivanov: So what about Intel, how come the modesetting DDX works so well for Intel
11:41 mivanov: Most distros even stopped delivering the i915 DDX
11:45 karolherbst: mivanov: mhh, for me the intel ddx works better than modesetting
11:46 karolherbst: but generally intel cares more about the modesetting ddx
11:46 karolherbst: there are just random issues you can't really solve because you deal with OpenGL
11:46 karolherbst: or how glamor handles buffers
11:46 karolherbst: scrolling in chromium at 4k res is super painful with glamor
11:48 mivanov: Don't know, I get a very snappy interface with 2x1920x1080 with Intel modesetting and Glamor. No tearing in videos either. I am using Cinnamon, which has a built-in compositor
11:48 karolherbst: but the biggest issue with glamor and nouveau is glamor's memory usage... especially on older cards with less memory you can get to the point where gl memory allocations are failing...
11:48 mivanov: Hm, I doubt that's an issue with K4100M, it has 4G
11:48 karolherbst: mivanov: it's not about tearing, but e.g. in chromium when I scroll, parts need to be rerendered
11:48 karolherbst: which doesn't happen with the intel ddx
11:48 mivanov: are you sure it's not the compositor's fault?
11:49 karolherbst: so the scrolled in content appears blank for a while
11:49 karolherbst: mivanov: well, it happens with modesetting, doesn't happen with the intel ddx
11:49 mivanov: but what about modesetting plus compositor
11:49 mivanov: I can't understand yet, but sometimes compositors fix the strangest of errors.
11:49 karolherbst: well, then it's a workaround a bug the ddx doesn't want to fix
11:49 karolherbst: which is equally bad
11:50 mivanov: Like nothing is working, you move windows and they leave trails, screen is not refreshing. Then you run a compositor and it works.
11:50 karolherbst: well, that's just X
11:50 karolherbst: or even modesetting indeed
11:50 mivanov: I experienced this so many times. Tried so many compositors with both nouveau and intel. In the end it feels like pure magic and I can't make any sense out of it.
11:50 karolherbst: glamor kind of requires a compositor
11:50 karolherbst: and for X you want a compositor anyway
11:51 karolherbst: anyway, with the intel ddx I have less issues than with the modesetting one
11:51 mivanov: So, how does tearing relate to compositors in general? Some compositors have a Vsync option, sometimes even enabling the compositor without Vsync fixes the tearing.
11:51 karolherbst: because you have multiple surfaces
11:51 mivanov: Also most of the time tearing increases with the higher resolution
11:51 karolherbst: and something has to sync up displaying/updating each
11:52 mivanov: Sometimes I manage to not have tearing in 1280x1024 but when it gets above 1920x1080 tearing gets very bad
11:52 mivanov: As for i915, even without a Compositor with TearFree "true" it's okay
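The TearFree option mivanov mentions is set per device in the intel DDX's configuration; a sketch (again, the `Identifier` is just an example name):

```
Section "Device"
    Identifier "intel-gpu"       # example name
    Driver     "intel"
    Option     "TearFree" "true"
EndSection
```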
11:53 mivanov: How come even with Prime without synchronization Intel manages to be Tear free?
11:53 karolherbst: yeah.. that's kind of the ddx working around how stupid X is
11:53 karolherbst: well
11:53 karolherbst: because they hard sync inside the kernel driver
11:53 karolherbst: which... causes other issues
11:53 mivanov: But what about Prime? I've read a paper stating that without enabling PRIME Synchronization via xrandr you will get tearing.
11:53 karolherbst: like bad displaying performance
11:54 karolherbst: mivanov: well, I don't know the details, but usually things get changed later on, and most of that isn't caused by the architecture or can be worked around in different places
11:54 karolherbst: anyway, X sucks in this regard from every point of view
11:55 karolherbst: can't have a superior solution to fix all the issues :/
11:55 karolherbst: because you just cause others
11:55 mivanov: Actually there's one other thing I do not understand about Prime, say you have 2 GPUs, Intel and Nvidia. Nvidia has 2 heads, Intel has 2 heads. Why do you need PRIME for this to work? Can't X just run with two GPUs. I thought Prime was for headless cards.
11:55 mivanov: I have such configuration and I don't get it why I need PRIME.
11:56 karolherbst: mivanov: because you want to display stuff on both GPUs at the same time
11:57 mivanov: I read many X docs about Randr and Xinerama, some Prime stuff too. But nowhere did it say that modern Xorg can't run on 2 GPUs.
11:57 karolherbst: eg if you run an OpenGL application, it has to run accelerated on one GPU and display on the other
11:57 karolherbst: so you need to pass buffers around
11:57 karolherbst: xinerama has no acceleration
11:57 karolherbst: and is a crappy solution overall
11:58 karolherbst: and reverse prime is the solution inside X to make it all work on 2 GPUs
11:58 mivanov: But I do not need to do acceleration on one GPU and show result on other. I just want to have one screen that is combined from all my monitors. And for Xorg to run with 2 GPUs.
11:58 mivanov: Isn't PRIME useful only when you want to offload all work to 1 GPU?
11:59 mivanov: Do you generally need prime to run X on many GPUs and have one big continuous screen?
12:00 mivanov: But now every time I write an Xorg conf, the first GPU becomes master and the second one uses PRIME to get buffers from it.
12:00 mivanov: I.e. one becomes the source and two becomes the sink
12:05 pmoreau: karolherbst: I can try out the patch, but I don’t think my laptop is actually using _DSM: it’s using a hardware multiplexer controlled by the apple_gmux driver.
12:07 karolherbst: pmoreau: ohh, true
12:08 karolherbst: mivanov: if you have displays on 2 GPUs you run into the issue that an application is rendered on a different GPU than the one it's displayed on
12:08 karolherbst: mivanov: you can't get around this
12:09 karolherbst: imagine a qt5 application, and qt5 uses some acceleration for displaying, or glamor does, doesn't matter; the point is one of the GPUs is doing the rendering
12:09 karolherbst: now you move one window of that application from one display to the other
12:10 karolherbst: and the displays are driven by different GPUs
12:10 karolherbst: so what do you do?
12:11 karolherbst: mivanov: anyway, especially for this we have reverse prime and you won't need any xorg.conf file to set this up
12:11 karolherbst: just let xrandr do its magic
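The "xrandr magic" for reverse PRIME boils down to the provider commands: `xrandr --setprovideroutputsource <provider> <source>` tells <provider> (the one that owns the connector) to scan out images rendered by <source>. A hedged sketch; the provider names "nouveau" and "modesetting" are examples, and the real names come from `xrandr --listproviders`. It is guarded so it does nothing without a running X session:

```shell
# Reverse-PRIME setup sketch: display on the dGPU's connectors while
# rendering on the iGPU. No-op when xrandr or an X session is missing.
setup_reverse_prime() {
  command -v xrandr >/dev/null || return 0   # xrandr not installed, skip
  [ -n "$DISPLAY" ] || return 0              # no X session, skip
  xrandr --listproviders
  # first arg: provider that owns the output; second: provider that renders
  xrandr --setprovideroutputsource nouveau modesetting
  xrandr --auto
}

setup_reverse_prime
```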
12:13 karolherbst: also, think about the situations where the window is still on both displays
12:16 mivanov: So when did PRIME appear? I remember having 2 GPUs with multimonitors and dragging a window on both displays and having it work many years ago
12:16 karolherbst: uhh, quite a long time ago
12:17 mivanov: I thought that RANDR took care of this and not Prime.
12:17 karolherbst: mivanov: also keep in mind that xinerama was quite old, but it had serious drawbacks
12:17 mivanov: And I thought that RANDR replaced Xinerama
12:17 karolherbst: randr can only do this with the help of prime
12:17 karolherbst: prime is a kernel feature
12:18 mivanov: but isn't Prime a relatively new feature?
12:18 mivanov: I remember in 2013, nvidia used bumblebee
12:18 karolherbst: nvidia still doesn't implement prime
12:18 karolherbst: but that's not primes fault, but nvidias
12:18 mivanov: How so, I read that it does actually
12:19 karolherbst: it does in a crappy way
12:19 karolherbst: either you render everything on nvidia or you can't use it
12:19 karolherbst: which makes it pointless on laptops
12:19 mivanov: Hm, so what about the Nouveau prime
12:19 mivanov: Doesn't it render everything on the Primary GPU
12:19 karolherbst: well, we only render what you want on the nvidia gpu and if there is no active consumer, the gpu can be turned off
12:20 mivanov: But how do I choose what do I want to render on it?
12:20 karolherbst: the DRI_PRIME env variable is the main thing, but there is also driconf and some desktop environnments have some context menu thing to select it
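A sketch of the `DRI_PRIME` knob karolherbst mentions: it is a per-process Mesa environment variable, so the same program can be launched on either GPU. `glxinfo` comes from mesa-utils, and the block is guarded so it is a no-op on a headless box:

```shell
# Show which GPU Mesa renders on, with and without PRIME offload.
show_renderers() {
  command -v glxinfo >/dev/null || return 0  # glxinfo not installed, skip
  [ -n "$DISPLAY" ] || return 0              # no display to query, skip
  glxinfo | grep "OpenGL renderer"               # default (primary) GPU
  DRI_PRIME=1 glxinfo | grep "OpenGL renderer"   # offload to secondary GPU
}

show_renderers
```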
12:21 mivanov: And what of the case with the multimonitors
12:21 mivanov: basically plugging a monitor both into the nvidia and the intel graphics
12:21 mivanov: who renders the window if you do that?
12:21 mivanov: when dragging the window for example
12:22 mivanov: I thought that PRIME used the Primary GPU for everything and pushed some buffers to screens connected to the secondary GPU
12:23 karolherbst: usually the main one
12:23 mivanov: So when PRIME Synchronization is disabled you get tearing
12:23 mivanov: how does DRI_PRIME relate to that?
12:24 mivanov: And to PRIME Synchronization?
12:24 mivanov: So to simplify: if you had a desktop with many GPUs, would you still use Prime?
12:24 mivanov: Say having for example 3 nvidia cards
12:26 karolherbst: there is no other reasonable way
12:27 mivanov: So for multi gpu it's either PRIME or Xinerama?
12:28 mivanov: I guess the Arch wiki made me confused: "PRIME is a technology used to manage hybrid graphics found on recent laptops"
12:33 cyberpear: mivanov: you'll likely find that any laptop w/ a dedicated GPU has to do either PRIME or reverse-PRIME, because any given output is generally only connected to either the dedicated GPU or the integrated one
12:35 mivanov: cyberpear: why do people use the terms PRIME and reverse PRIME? Is it about which GPU is primary? Reverse meaning integrated is primary and feeds buffers to discrete. And Prime meaning making the discrete gpu primary and feeding buffers to integrated?
12:36 mivanov: or are both of those cases Reverse PRIME?
12:36 cyberpear: mivanov: it's about rendering the image on one GPU then copying it to the other for output to a display
12:36 mivanov: cyberpear: If rendering on one GPU and passing to the other for output is Reverse PRIME, then what is just PRIME?
12:37 cyberpear: I think the distinction is about which GPU is considered "Primary"
12:38 cyberpear: PRIME came first, where the display is hooked to the integrated GPU, but you wanted to offload to the dedicated GPU
12:38 mivanov: So wouldn't that just make dedicated GPU primary?
12:38 cyberpear: then reverse PRIME came later, where the display is connected to the dedicated GPU, but you want to render on the integrated GPU but output to a display hooked to the dedicated GPU
12:39 cyberpear: you'd want reverse-prime for power-saving reasons, and PRIME for graphical performance reasons
12:39 mivanov: and what if I want to render on the dedicated GPU and output to it
12:39 cyberpear: mivanov: that's a case of neither prime nor reverse prime; it's the simple case of a single GPU
12:39 mivanov: Say for example I have intel and nvidia cards, both have outputs connected to different monitors.
12:40 mivanov: But what if I have 2 GPUs, both with their outputs connected to different monitors.
12:40 cyberpear: you'll likely find that the laptop display is hardwired to the integrated GPU, so you have to do PRIME to display images rendered by the dedicated GPU on the laptop screen
12:40 mivanov: But is it true that I need Prime if I don't want to offload?
12:40 cyberpear: I don't think prime plays in there at all, but I could be wrong as that's beyond my understanding
12:41 mivanov: Say a desktop with 2 nvidia cards
12:41 mivanov: Do I still need Prime? Each card has its own monitor, but I want to have one big screen where I can move windows between monitors
12:41 cyberpear: I don't think you'd use prime in that case, if you had one display hooked to each card
12:41 cyberpear: (anyone please jump in if I'm wrong)
12:42 mivanov: karolherbst mentioned that for any multimonitoring via 2 different GPUs, I need either Prime or Xinerama
12:43 cyberpear: it's beyond my understanding
12:43 mivanov: I am experimenting with a laptop with an Intel + Nvidia. Both have their own outputs. Without any configuration => Intel is primary. If I connect a monitor to one of the Nvidia outputs, the Nvidia is started and listed as secondary (it has prime attributes in xrandr)
12:44 mivanov: I can swap it around if I make a xorg.conf and list the Nvidia GPU as the first GPU and the Intel as the second one.
12:44 mivanov: Of course since the laptop display is connected to the Intel GPU, it can't get turned off.
12:45 mivanov: But Prime is always listed in xrandr for the inputs connected to the second GPU
12:46 mivanov: It's like Xorg always uses just one GPU and if you have extra monitors on another GPU, it will start another Xorg for the second GPU and forward windows to it via that.
12:46 mivanov: It's even more confusing because most of the time it works out of the box without a config.
12:47 mivanov: And that makes it that much harder to learn what is happening or if there are alternatives.
12:54 karolherbst: mivanov: ohh, that's easy, there are no alternatives
12:59 mivanov: by the way, about another issue I have. I've tried a docking station and the displayports don't want to go above 1 lane x 540 MB. Is that related to Nouveau?
13:00 mivanov: strange thing is if I don't go through the Dock and plug a monitor into the laptop's displayport I get 4 lanes x 540 MB (full bandwidth)
13:00 mivanov: Same K4100M(GK104) card
13:00 mivanov: No thunderbolt, just a standard Dock
13:09 mivanov: Link training just won't do more than 1 lane of 540 MB via Dock
13:09 karolherbst: mhh
13:09 karolherbst: maybe an updated kernel might help
13:09 karolherbst: Lyude was fixing quite a lot in this regard
13:09 imirkin: Kepler has DP 1.2
13:09 imirkin: which means you don't get more than 540*MHz* per lane
13:09 mivanov: Yep, and via Laptop directly it works fine. But if I use the Dock, no more than 1 lane
13:10 mivanov: But there should be 4 lanes
13:10 imirkin: chances are it's a DP dock
13:10 mivanov: and via Dock I only get 1 lane
13:10 mivanov: I did disassemble the Dock
13:10 imirkin: er, i mean 5.4Gbit/s i guess
13:10 mivanov: The board schematics says it has Displayport 1.2 connectors
13:10 imirkin: each lane is a separate physical group of pins
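imirkin's per-lane numbers can be sanity-checked with quick arithmetic (a sketch assuming 24 bpp RGB; DP 1.2's HBR2 rate is 5.4 Gbit/s per lane, and 8b/10b encoding leaves 80% of that as payload):

```shell
# Usable DP payload bandwidth in Mbit/s for a given lane count at HBR2:
# 5400 Mbit/s raw per lane * 8/10 for 8b/10b encoding overhead.
dp_payload_mbps() {   # $1 = lane count
  echo $(( $1 * 5400 * 8 / 10 ))
}

# Approximate bandwidth a mode needs, in Mbit/s, at 24 bpp:
mode_mbps() {         # $1 = pixel clock in MHz (~149 for 1920x1080@60)
  echo $(( $1 * 24 ))
}

echo "1 lane  payload: $(dp_payload_mbps 1) Mbit/s"               # 4320
echo "4 lanes payload: $(dp_payload_mbps 4) Mbit/s"               # 17280
echo "1080p60 (149 MHz clock) needs: $(mode_mbps 149) Mbit/s"     # 3576
```

By this math a 1080p60 mode (~3.6 Gbit/s) just fits in one HBR2 lane's ~4.3 Gbit/s payload, while 4K-class modes need most of the 4-lane budget; link-training policy may still negotiate more lanes than the bare minimum.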
13:11 imirkin: is it a DP-MST dock?
13:11 mivanov: But sometimes I could get a high res monitor to work by doing the following: plug it into the laptop
13:11 mivanov: and then really fast replug into dock
13:11 mivanov: I don't think it's using MST
13:12 mivanov: But the fact that I could trick it to work by replugging fast would mean that the problem is with the training phase?
13:13 imirkin: most likely
13:14 mivanov: Strange thing is that even in full hd this monitor wants 2x540mb. The dock has two displayports. The other port however has an option to run with intel or nvidia. When running with intel, the other dp dock port allows up to 1920x1080
13:14 mivanov: but not above
13:14 mivanov: if I set both dock ports to use the nvidia, I can't even get 1920x1080
13:14 mivanov: but either way I can't go above 1920x1080
13:15 mivanov: The Dock board clearly states it has Displayport 1.2
14:56 kreyren: is a separate project that introduces vulkan to nouveau, written in rust, mergeable into nouveau?
14:57 kreyren: there would probably have to be some changes done to nouveau to accept it tho
15:00 imirkin_: kreyren: "mergeable"? meaning what? where would it be merged?
15:00 kreyren: like, would it be accepted into the nouveau project so that it would be maintained
15:00 imirkin_: i don't think anyone here is interested in rust
15:01 imirkin_: but i could be wrong
15:01 kreyren: why not? it's faster than C afaik
15:01 imirkin_: it would present a lot of overhead for basically no reason
15:01 imirkin_: (development overhead)
15:01 imirkin_: it would also prevent the tried-and-true way of developing vulkan drivers
15:01 imirkin_: step 1: copy anv
15:01 imirkin_: step 2: replace some functions
15:01 kreyren: anv?
15:02 imirkin_: however this is all secondary to the primary issue -- first the kernel support needs to be figured out
15:02 imirkin_: anv is the intel vulkan driver
15:02 imirkin_: vk has an enormous amount of boilerplate
15:03 kreyren: well but we don't have anything better than vulkan atm
15:03 karolherbst: although we already abstracted the most annoying bits away
15:03 karolherbst: like the dispatcher and the wsi stuff
15:03 imirkin_: exactly. and rust would be a huge redo.
15:03 imirkin_: for ... no reason
15:03 imirkin_: kreyren: in C, you can reuse (a) the stuff everyone else is doing and (b) the khronos-supplied headers
15:04 karolherbst: isn't (b) actually generated by python scripts?
15:04 imirkin_: maybe, maybe not. good luck validating.
15:04 karolherbst: and for (a) you can keep the stuff you want to reuse in C and just mix and match... that should be possible with rust and C, no?
15:04 imirkin_: maybe. something else that Just Works (tm) -- C :)
15:05 imirkin_: look, the driver part isn't the hard part here
15:05 kreyren: relevant: https://benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/rust.html dunno, seems reasonable to me to use rust for vulkan support, assuming it would have a positive effect on performance
15:05 imirkin_: the hard part is changing the kernel to support the functionality that vk requires
15:05 imirkin_: you can make one in each language you know and 3 you don't in the time that the kernel changes will take.
15:06 imirkin_: the other hard part is the shader compiler. good luck reimplementing that.
15:06 kreyren: can't we grab that from AMDGPU?
15:06 imirkin_: sure, as long as we run it on AMD gpu's
15:06 imirkin_: (there's already a shader compiler ... just not in rust)
15:08 kreyren: why not use the original then, with compatibility to accept instructions from rust?
15:08 imirkin_: why not use C?
15:08 kreyren: slow, boring >.>
15:08 imirkin_: C is the definition of fast, esp for this stuff.
15:08 kreyren: how?
15:09 imirkin_: unpacking and repacking data.
15:09 kreyren: eh?
15:09 imirkin_: bit manipulations
15:09 kreyren: rust can do that afaik
15:09 imirkin_: it all compiles quite simply to CPU
15:09 imirkin_: of course rust can do that
15:09 imirkin_: but it can't do it better than C
15:09 kreyren: doubt
15:09 kreyren: check the benchmark made by debian provided
15:10 imirkin_: this appears to be benchmarks of cpu-heavy things
15:10 imirkin_: the vk driver is cpu-light
15:10 imirkin_: multi-cpu stuff, etc
15:10 imirkin_: this is not that use-case.
15:11 kreyren: afaik rust is also faster on cpu-light workloads, iiuc
15:12 kreyren: if so would it be accepted?
15:12 HdkR: Aside from arguments about performance I think the bigger issue is development overhead for everyone else that doesn't care about rust
15:13 HdkR: Unnecessary burden for bringing in the newest shiny
15:15 karolherbst: anyway, rust isn't faster than C, but that's not even the point of rust
15:15 karolherbst: and benchmarks which claim otherwise have crappy code
15:16 kreyren: can you prove otherwise or provide a scenario in which it's not fast?
15:16 karolherbst: I am not saying it's slow
15:17 karolherbst: rust and C are similarly fast, though C is generally a bit faster
15:17 kreyren: i'm saying that it's faster than C based on available info..
15:17 karolherbst: and a bit means <5%
15:17 karolherbst: it's all totally irrelevant
15:17 kreyren: yep it's around <5% but that's still a lot tho ;-;
15:17 karolherbst: kreyren: the benchmark you posted is biased
15:17 karolherbst: that's always failing
15:18 kreyren: they are made by debian.. doubt
15:20 kreyren: relevant also http://cantrip.org/rust-vs-c++.html
15:22 kreyren: and there are two active devs in nouveau anyway so it would probably encourage other ppl to contribute
15:23 HdkR: Maintaining a project with multiple languages in use is such a headache :/
15:23 kreyren: slowly recode it in rust then? :p
15:23 HdkR: Try and convince the entire mesa project with that
15:24 karolherbst: my internet connection today is also the best.... not
15:24 HdkR: You end up with having to define your internal APIs on both sides of the C++ and Rust boundary, which ends up being problematic and prone to breaking
15:25 karolherbst: kreyren: it's biased: "and there are 4 tests where rust is faster, 5 where gcc c is"
15:25 HdkR: additionally you get a hard dependency on LLVM as a compiler, which means if you try mixing GCC/LLVM object files then there is a high chance of something breaking.
15:25 karolherbst: but only rust is ever marked as being faster
15:25 karolherbst: that means it's biased
15:25 HdkR: (Oh hi LTO)
15:26 karolherbst: anyway, switching over to rust won't mean more developers anyway
15:26 karolherbst: coding is the smallest part of developing a driver anyway
15:26 kreyren: still weird that there aren't many devs here tho
15:27 HdkR: Nvidia is doing a very good job killing motivation
15:27 kreyren: elaborate?
15:27 kreyren: i'm just waiting for the dead fish to appear with reason
15:27 HdkR: I don't get the phrase
15:28 imirkin_: switching to $language-of-the-week could mean new developers interested in the project solely because of that reason
15:28 imirkin_: however i suspect that interest will fade
15:28 imirkin_: a handful of gpu driver developers are bullish on rust (iirc at least anholt liked it?)
15:28 imirkin_: however the majority of the community is fairly happy with C
15:29 imirkin_: a big part of the reason that there aren't more developers contributing to nouveau is that it's hard work, and those developers can get an actual paid job doing something similar but for amd/intel drivers
15:29 orbea: switching to rust would be a major pain in the ass to compile the drivers... compiling rust itself is a horrible experience
15:29 kreyren: is there even a possibility that nvidia will be faster than AMD on linux tho?
15:30 imirkin_: no
15:30 kreyren: why? because of the hardware limitation?
15:30 imirkin_: for anything starting with the GTX 9xx series, the fact that nvidia will never release signed firmware which enables us to reclock
15:31 imirkin_: no reclock = joke performance
15:31 kreyren: assuming nouveau being in its ideal condition?
15:31 imirkin_: it's like saying "will intel ever be faster than AMD if you lock the intel chip to its lowest perf setting"
15:31 kreyren: meaning everything working on sanitized code etc..
15:32 HdkR: You could have perfect codegen, if you're running at 1/20th maximum clocks then you're not going anywhere fast.
15:33 kreyren: assuming that we have fully functional nouveau with all required functions ?
15:33 kreyren: since RX Vega 64 is 114% of GTX1080ti on linux in native linux games
15:33 HdkR: With which driver stack?
15:33 imirkin_: for all the shit we dump on nvidia, the nvidia blob drivers are actually quite good
15:34 imirkin_: they have much better codegen than we do
15:34 kreyren: HdkR, don't remember i made the tests 4 months ago and said system is not at my current residence
15:34 imirkin_: they've probably sunk 10000x the manpower into it, so it seems like a reasonable result though
15:35 kreyren: and we can get even more performance by using DXVK/D9VK for DirectX apps on linux so.. If not then probably just open-source raytracing for the RTX 2060 is sane
15:36 HdkR: pfft
15:36 kreyren: o.o
15:36 HdkR: Don't even have basic Turing support and you're already thinking about making an RT stack?
15:37 kreyren: yep :p
15:37 kreyren: since graphics support is not needed, assuming that performance would be provided by AMDGPU (well, for my usecase)
15:37 HdkR: Make RT fast in compute first, then maybe in a few years once Turing support comes up you'll be able to accelerate it
15:39 HdkR: Or just use the Nvidia stack where they've spent a decade working on making RT fast on GPUs
15:39 kreyren: yes that is always an alternative, but nvidia drivers suck in general >.>
15:40 karolherbst: kreyren: well, they get hired by other companies like AMD/Intel/Valve, so a lot of nouveau developers are working on other stuff now
15:41 karolherbst: kreyren: also, you have a big misconception about rust. It just makes it easier for developers to not mess up, but at some level it doesn't really matter, as you can always cause security issues. With rust you just trade one group of issues for another in the end
15:41 karolherbst: and what you get with rust is even worse
15:41 imirkin_: the issues that cause slowness, btw, have nothing to do with CPU
15:41 imirkin_: in general they have to do with poor management of buffers between GPU and CPU, unnecessary synchronization points, etc
15:42 imirkin_: this is the stuff that causes 99% of the slow. maybe the 1% is cpu overhead. which you can use rust to reduce down to 0.95% in theory? woo hoo?
15:42 karolherbst: intel synchronizes a lot :/
15:42 kreyren: any proof of rust being less effective that I can verify on my end?
15:42 karolherbst: I even have cases where games run at around 50 fps but the window redraws at 20, because of something intel does
15:42 karolherbst: kreyren: any proof it's more effective?
15:43 imirkin_: kreyren: absolutely none. just not interested in the language. the fact that it takes down my system every time gentoo wants to build a new version of rust certainly doesn't enhance my perception of it.
15:43 karolherbst: imirkin_: you should "nice" and cgroup portage :p
15:43 imirkin_: and that it's like a 1GB download
15:43 imirkin_: karolherbst: if only i'd thought of that...
15:43 imirkin_: maybe cgroup is the way
15:43 karolherbst: yeah
15:43 imirkin_: i definitely nice it
15:43 imirkin_: but i have 6GB of ram at home
15:44 karolherbst: I can compile chromium/libreoffice while having 0 impact on games
15:44 kreyren: imirkin_, it never took my gentoo down o.o
15:44 kreyren: karolherbst, will do some research since my current research is not valid
15:44 imirkin_: [and i generally run without swap]
15:44 imirkin_: karolherbst: yeah, on my work computer, with 32GB of ram it's a lot more practical
15:44 karolherbst: imirkin_: mhh, I have 32GB of ram :/
15:44 kreyren: or use exherbo where paludis is not that vulnerable to overload on system resources :p
15:44 karolherbst: but I also compile inside zram
15:45 imirkin_: you sure have a lot of cpu to spare.
15:45 karolherbst: so no IO except reading system headers/libraries
15:45 kreyren: well it probably is tho ;-;
15:45 imirkin_: i have a piddly little i7-920
15:45 karolherbst: imirkin_: meh.. the overhead isn't all that big
15:45 karolherbst: imirkin_: it's faster than doing IO
15:45 imirkin_: probably true.
15:45 karolherbst: i7-6820HK here...
15:46 karolherbst: anyway, works out quite nice overall
15:47 karolherbst: mhh, my portage zram device is 16GB big... probably because of libreoffice or something
15:48 kreyren: o.o
15:48 karolherbst: yeah.. libreoffice requires like 12GB to build
15:53 kreyren: or you can also do it like me :p https://i.imgur.com/kjzmp7D.png
16:03 karolherbst: kreyren: anyway, the biggest problem I see with rust is this "cargo culting" of dependencies, where your application is responsible for updating all its deps.. and "serious business enterprise software" is already showing us how hard this fails
16:03 karolherbst: and I kind of don't see how dependency management is in better hands if software engineers have to take care of it instead of distributions
16:03 imirkin_: the non-hermetic aspect of all this new stuff frightens me immensely too
16:19 kreyren: karolherbst: serious business enterprise software?
22:29 imirkin_: skeggsb: need anything from me for the 1024-sized lut thing?