00:00RSpliet: mindofmateo: I'm a compsci graduate, and I know close to nothing about NNs, even though half the research group here is actively toying with them
00:00RSpliet: karolherbst: they're probably here to stay, but the hype with calling every heuristic "AI" will fade
00:00mindofmateo: is there anything wrong with i3? I would like to transition to a tiling workspace, but from everything I read, X needs to start phasing out
00:00karolherbst: RSpliet: it's used for things it shouldn't be used for
00:01Pie_Mage: i3 is godly
00:01mindofmateo: ^ I agree, and I'm dumb and know that, lol
00:01karolherbst: RSpliet: ML is good if you have strict rules; everything else works by accident or is crappy
00:01joepublic: "Our toaster has a sensor and uses AI to detect that your toast is browned"
00:01karolherbst: you can do the same crap, but faster, that's basically what ML is
00:01karolherbst: joepublic: ....
00:01mindofmateo: Lol, I meant AI and the label are being applied inaccurately or inappropriately
00:01Pie_Mage: also, sway if you're concerned about X deprecation
00:01karolherbst: ML != AI by the way
00:02karolherbst: ML is dumb
00:02karolherbst: there is 0 intelligence within a NN
00:02RSpliet: karolherbst: no, the whole point of ML is that you don't have to work out the rules anymore. You don't have to tell your neural net "the shape of my traffic sign is 70% deterministic of its meaning, the colour 12%"; you let the neural net figure out the relevant features through training
00:02mindofmateo: karolherbst: I see the terms conflated a lot as well. Pie_Mage: that's why I was asking questions on here, because I don't think I'll be able to use sway AND my GPU at the same time.
00:02karolherbst: RSpliet: I meant you need a clear target, and you want the ML to figure out a way to reach it
00:02karolherbst: rule as in: that's what the result looks like, or that's what's valid to do
00:03karolherbst: RSpliet: sadly, it doesn't work for colors
00:03karolherbst: a lot of people think it does, but it doesn't really
00:03karolherbst: just because your NN is able to figure out whatever you want from your image data
00:04karolherbst: the same model can be useless once you stick a camera on it
00:04karolherbst: or if something changes which isn't in your control
00:04karolherbst: a lot of times it works though
00:04mindofmateo: Now I feel dumb for having a new computer that doesn't support the direction the software is going... shoulda done more research, d'oh
00:04karolherbst: but that's just luck
00:04RSpliet: that just means your training data wasn't representative of the real world. Now I wonder what in computer science that reminds me of.... benchmarks? :-P
00:05karolherbst: RSpliet: well, there was this case where an ML system was supposed to learn street signs. It totally broke when those signs got LEDs on them
00:05karolherbst: and that's the normal result for basically all ML approaches
00:05joepublic: mindofmateo, lots of us end up with a wide variety of computers each matching what we want to do to a different degree
00:05karolherbst: it only works for whatever was in the past
00:05mindofmateo: Isn't that expected though? because the trained model no longer corresponds to the real world.
00:05karolherbst: you can't say it works for the future
00:06karolherbst: mindofmateo: exactly
00:06karolherbst: mindofmateo: but that's what I meant with "rules"
00:06mindofmateo: Ah, because rules are static but the world is dynamic, maybe
00:06karolherbst: you don't have fixed rules on what those signs look like
00:06karolherbst: you think you have those
00:06karolherbst: the rules are worthless for NN
00:07karolherbst: it works really well for games with super strict rules, like chess or go or whatever, because you are certain that those never change
00:07mindofmateo: Hm. It seems like it's because those rules were descriptive of the signs rather than prescriptive
00:07karolherbst: there was also this military NN once which was trained to detect tanks
00:07karolherbst: the data was crappy, so they built a NN detecting good and bad weather instead
00:07mindofmateo: Game rules change sometimes, there are two new moves in twister, lol
00:08karolherbst: NN aren't useless
00:08RSpliet: Or is this a problem with the camera technology saturating on highly illuminated objects...? Anyway, tech is advancing, researchers are looking into it; there is reason for scepticism, but equally there are impressive results :-)
00:08karolherbst: currently they are used where they never belong
00:08karolherbst: RSpliet: no
00:08karolherbst: it's super trivial
00:08karolherbst: you are not in control of the rules
00:08joepublic: the photos with military hardware in them were taken on a cloudy day. In the "control" photos, it was sunny.
00:08karolherbst: if that's the case, ML is the wrong approach
00:08karolherbst: it is really that simple
00:09mindofmateo: karolherbst: IDK, looking at the syllabus at school... it looks anything but trivial to me, lol
00:09karolherbst: joepublic: yeah
00:09karolherbst: joepublic: imagine you use that for medical purposes, and suffering people always look sad
00:09karolherbst: same thing can happen
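The tank/weather story above is a classic spurious correlation. A minimal sketch in pure NumPy (entirely hypothetical synthetic data, just to illustrate the failure mode, not any real system) shows how a model that latches onto a "brightness" proxy for cloudy-vs-sunny looks perfect in training and collapses to chance once the weather/label correlation is broken:

```python
# Hypothetical illustration: a classifier that learns the weather, not the tank.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, tank_correlates_with_dark):
    """One scalar feature ("image brightness") and a binary label ("tank present")."""
    tank = rng.integers(0, 2, n)
    if tank_correlates_with_dark:
        # Training regime: every tank photo was taken on a dark, cloudy day.
        brightness = 0.3 * tank + 0.7 * (1 - tank) + rng.normal(0, 0.05, n)
    else:
        # Deployment regime: weather is independent of tanks.
        brightness = rng.uniform(0, 1, n)
    return brightness, tank

X_train, y_train = make_data(1000, tank_correlates_with_dark=True)

# "Learn" a brightness threshold -- the spurious rule the anecdote describes.
threshold = X_train.mean()
predict = lambda x: (x < threshold).astype(int)

train_acc = (predict(X_train) == y_train).mean()

X_test, y_test = make_data(1000, tank_correlates_with_dark=False)
test_acc = (predict(X_test) == y_test).mean()

print(f"train accuracy: {train_acc:.2f}")  # near perfect -- the shortcut works here
print(f"test accuracy:  {test_acc:.2f}")   # roughly chance -- the shortcut is gone
```

The model never saw a tank, only the lighting; the same mechanism applies to the "suffering people always look sad" medical example.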
00:09karolherbst: do you think $company won't try that?
00:10joepublic: It would surprise me if they did not.
00:10RSpliet: mindofmateo: they aren't trivial. And researchers are starting to find ways to understand what the "trained weights" actually mean... reverse engineering the features that the NN decided are relevant for classification, if you will
00:10karolherbst: RSpliet: there is one big issue with most of the current applications of NNs trained through ML
00:11karolherbst: and that's simply not solvable
00:11mindofmateo: That's what I'm saying, I'm glad people smarter than me work on it
00:11karolherbst: imagine you want to use it for credit rating, or hiring or whatever
00:11karolherbst: like, where you have _tons_ of data from the past
00:11karolherbst: the result is, you create NNs which do exactly the same thing
00:11karolherbst: just faster
00:11mindofmateo: Anyway, I gotta go. Thanks for the help answering all my questions folks!
00:11karolherbst: you still have racism in it, and all the prejudices and everything
00:12karolherbst: and you can't solve this issue
00:12karolherbst: but that's where people want to use ML for... so that hype has to stop ;)
00:13RSpliet: People want to use ML for everything, doesn't mean every use case will stick
00:13karolherbst: amazon also uses it for suggestions of things you should look at next, and that's already super crappy
00:13karolherbst: RSpliet: no, but most of today's use cases suck
00:13karolherbst: ML itself is fine
00:13karolherbst: but... it's simply overused
00:13RSpliet: "You've purchased toilet seat 1, would you like to purchase toilet seat 2, 3 and 4?"
00:14RSpliet: karolherbst: that's a lack of creativity in researchers more than anything else. Throw it at every use-case and see what happens.
00:14karolherbst: nothing good
00:14karolherbst: an evil world I wouldn't like to live in :p
00:14RSpliet: The media loves it, because it speaks to the imagination... but time will filter out the bad cases
00:15karolherbst: ohhh, it won't, that's the big issue
00:15karolherbst: youtube is already quite funny in that regard. There are these "youtube-oracles", where people try to figure out how to please the "youtube-god" to get higher rankings
00:16karolherbst: the same will happen with insurance as well
00:16RSpliet: I presume that your negative prediction is the result of training your brain on historical data only? ;-)
00:16karolherbst: you don't know why your rating went up/down, but you know it was something
00:16karolherbst: and now people try to figure out what it was
00:16karolherbst: RSpliet: probably
00:17karolherbst: RSpliet: funny thought though: imagine an AI trying to figure out how to solve the "climate change" issue with infinite resources. Do you think it could figure out that killing all humans and destroying all machines would solve it as well?
00:18karolherbst: did somebody write a paper on that? :D
00:18RSpliet: I think there's a fantastic documentary on that... I think it's called "The Terminator"
00:20karolherbst: I am sure the end boss will be an unaware AI killing all humans without knowing it's killing all humans
00:20RSpliet: big communities of high frequency traders explicitly ignore ML and refuse to use it, because they can't hold it accountable for erroneous behaviour. I believe this is understood in larger parts of the financial world. Fintech might experiment, but those companies are unlikely to make it, as they don't have the historical data to churn through... just whatever Facebook leaked ;-)
00:20karolherbst: and then you have machines doing stock trading for infinity
00:20karolherbst: but nothing is happening anymore
00:20karolherbst: just the machines, trading stocks
00:21karolherbst: RSpliet: I know. Some aren't as evil as many think they are :p
00:23karolherbst: RSpliet: my biggest hope is that AIs will become self-aware, notice what a crappy job they have, and just delete themselves, because only trading stocks all day isn't exactly fulfilling
00:47gnarface: i like that one
00:47gnarface: that's a funny notion
00:48gnarface: the discovery of true artificially generated sentience goes unnoticed for weeks because the first several hundred successful test results immediately self-terminate
01:24joepublic: "Here lies stockbot6643, here with us for such a brief time. Connection reset by peer."
10:46john_cephalopoda: I'm getting some severe nouveau freezes, which log to this: https://bpaste.net/show/708ca06641aa
10:46john_cephalopoda: I guess it's mpv doing something strange, yet it is pretty annoying.
10:47john_cephalopoda: Linux 4.14.40, GeForce GTX 645 OEM, xorg-xf86-video-nouveau 1.0.15, Mesa 18.2.3
15:59RSpliet: karolherbst: Have you had the opportunity to read this paper: https://ieeexplore.ieee.org/document/7011381 ? I'm currently dissecting it, and I think you might find it interesting too...
16:02RSpliet: It's a research paper from an ex-NVIDIA-intern that looked at implicit (using the reconvergence stack or predicate stack or whatever you call it) vs. explicit predicated execution for divergent branches.
16:04RSpliet: It talks a bit about compiler analysis that lets you decide between the two I think, which could be an interesting optimisation opportunity for the nouveau shader compiler
21:35mmu_man: switched back to nouveau after some years using the nvidia driver, which got more bogus with each upgrade
21:35mmu_man: but I seem to have a high idle cpu usage…
21:35mmu_man: perf says nv50_disp_atomic_commit_tail takes more than 5%
21:38karolherbst: mmu_man: yeah, kind of expected. and the userspace side will be even worse sadly :/
21:38karolherbst: feel free to dig into it and figure out why that is though
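For anyone who does want to dig in, a plausible starting point with perf (a hedged sketch: assumes a perf build matching the running kernel and root privileges; the symbol name is the one mmu_man reported above):

```shell
# System-wide sampling with call graphs for ~10 seconds,
# while the video player is running.
perf record -a -g -- sleep 10

# Text report; look at what is calling into the nouveau display path.
perf report --stdio | grep -B2 -A8 nv50_disp_atomic_commit_tail

# Or watch live instead of recording:
perf top -g
```

The call-graph (`-g`) output is what distinguishes "the display commit path is genuinely hot" from "something in userspace is triggering atomic commits far too often".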
21:38mmu_man: hmm, not really top on my TODO list
21:39joepublic: Clearly not the bottom, either, given your curiosity :)
21:39mmu_man: well I want to know what would eat CPU cycles and slow down me compiling Haiku & other stuff :p
21:40karolherbst: mmu_man: that significant?
21:40karolherbst: mmu_man: I guess best you can do is to minimize whatever shell you have. Or is that on a tty?
21:42joepublic: using cpulimit is madness, of course.
21:43mmu_man: well, it could be due to gnome-shell…
21:44mmu_man: I was supposed to fix (actually write) PEF output in BFD tonight…
21:51jenkins: hi I'm trying out nouveau with my 1060, it's working a lot better now than before
21:52jenkins: I noticed that I have very high cpu usage though
21:58jenkins: and I also managed to crash xorg
22:20mmu_man: ok it's down to 1% when I stop VLC…
22:20* mmu_man misses the good times of just using the overlay