14:58 imirkin: RSpliet: i'm guessing ckoenig is talking about the fact that we don't have this patch: https://github.com/skeggsb/nouveau/commit/04bc1dd62db05b22265ea0febf199e5967b9eeb2

14:58 imirkin: cosurgi: btw, did you get around to testing it?

16:53 cosurgi: imirkin: uh. still not yet. These calculations should finish in roughly two days.

16:54 cosurgi: imirkin: I feel bad. Because you remembered me. And you worked hard pushing skeggsb (more than one email, at least) to fix this. I will test it, I need this!

16:54 imirkin_: if it takes *this* long, the answer better be 42...

16:55 cosurgi: heh. Fortunately it is saving progress along the way. Usually my calculations take months, sometimes more.

16:56 cosurgi: about 5 years ago, I was too lazy to convert part of the algorithm from Python to C++, so the calculations took 6 months instead of one week. But that stuff wasn't for me, it was for my former boss. I told him to be patient and he was patient :)

16:57 imirkin_: haha

16:57 imirkin_: should have asked for a nicer comp

16:57 imirkin_: so that the calc could go faster

16:57 cosurgi: I already did this. Three times :)

16:57 cosurgi: I still use these servers, even though I changed my faculty.

16:58 imirkin_: you might enjoy this - https://medium.com/swlh/how-is-computer-programming-different-today-than-20-years-ago-9d0154d1b6ce

16:58 imirkin_: "Since we have much faster CPUs now, numerical calculations are done in Python which is much slower than Fortran. So numerical calculations basically take the same amount of time as they did 20 years ago."

16:58 cosurgi: yeah :)

17:00 cosurgi: But for stuff which is important to me I am pushing the boundaries ;) Right now I am about to finish implementing the ability to do high precision calculations: long double, float128, MPFR. It's almost all in these merge requests here: https://gitlab.com/yade-dev/trunk/merge_requests

17:00 cosurgi: Now I need to give time to other people in my team to review this.

17:03 cosurgi: it's nice to see it passing all the tests in the CI pipeline doing the same stuff as usual, but with 150 decimal places :) Like here: https://gitlab.com/yade-dev/trunk/-/jobs/407216815#L232

17:03 cosurgi: or a screenshot from GUI tests with 150 decimal places: https://gitlab.com/yade-dev/trunk/-/jobs/407216815/artifacts/file/screenshots/scr_Simple_11.png

17:04 cosurgi: though the real target was of course float128. It is only two times slower. And in quantum mechanics 33 decimal places are really needed.

17:04 cosurgi: some experimental results currently have 20, sometimes 30 decimal places.
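A minimal sketch of the kind of extended-precision arithmetic being discussed, using Python's stdlib `decimal` module (the yade code itself targets long double, float128, and MPFR in C++; this is only an illustration of working at the 150 decimal places quoted above):

```python
from decimal import Decimal, getcontext

# Work at 150 significant decimal digits, matching the precision
# quoted in the CI logs above.
getcontext().prec = 150

root2 = Decimal(2).sqrt()
residual = root2 * root2 - Decimal(2)
print(root2)
print(residual)  # roughly on the order of 10**-149
```

Since `sqrt` is correctly rounded at the current precision, squaring the result leaves a residual near one unit in the 150th digit.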

17:11 imirkin_: just move to BCD :)

17:11 cosurgi: binary coded decimal? Too slow. Slower than MPFR ;)

17:12 imirkin_: but supported by 8087 iirc

17:12 imirkin_: just get a _lot_ of those

17:12 cosurgi: huh?? I need to check this out. Native FPU support?

17:12 imirkin_: i think just for loads / stores

17:13 cosurgi: ehh. so not worth it ;)

17:13 imirkin_: from some random intel doc: "IA-32 architecture defines operations on BCD integers located in one or more general-purpose registers or in one or more x87 FPU registers"

17:13 imirkin_: "When a decimal integer is loaded in an x87 FPU data register, it is automatically converted to the double-extended-precision floating-point format. All decimal integers are exactly representable in double extended-precision format."

17:14 cosurgi: uhh. So loss of precision. not for me.
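The exactness claim in the quoted Intel doc follows from the field sizes: packed BCD on the x87 holds at most 18 decimal digits, while the double-extended format carries a 64-bit significand, so every such integer fits exactly. A one-line sanity check of that arithmetic:

```python
# Packed BCD (x87 FBLD/FBSTP) holds at most 18 decimal digits.
# The 80-bit double-extended format has a 64-bit significand, so
# every 18-digit decimal integer is exactly representable:
max_bcd = 10**18 - 1
print(max_bcd < 2**63)  # True
```

The "loss of precision" complaint is about everything past those 18 digits, not about the integers themselves.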

17:14 cosurgi: I need mathematical functions like the cylindrical Bessel functions, or the logarithm, calculated with full precision.

17:15 imirkin_: i hear mathematica does math.

17:15 cosurgi: that's right. But it's not C++, which I happen to be very good at.

17:15 cosurgi: I use mathematica to verify some of my derivations.

17:16 cosurgi: it's too slow for regular calculations.

17:16 imirkin_: yeah, probably not the ideal target for actual calculations... really good at symbolic.

17:16 cosurgi: and their licensing is insane. I can use at most 4 CPUs and have only two mathematica instances open. How would I ever calculate anything useful with such horrendous restrictions?

17:17 imirkin_: i still don't understand how it does this: https://www.wolframalpha.com/input/?i=Integrate%5BLog%5BSin%5Bx%5D%5D%2C+%7Bx%2C+0%2C+Pi%7D%5D

17:18 imirkin_: some day i'll learn math.

17:20 cosurgi: when I needed to use mathematica a bit more for something I had to open 6 virtualbox instances with 2 mathematica instances in each of them. Then I could attack that problem from 12 different angles. And of course inside virtualbox it was slower.

17:21 cosurgi: nice integral :) It does it in a very simple way: there are losts of precalculated integrals, and a couple rules of integration like by parts, with derivative, etc. Try mixing a couple of them and it gets a result.

17:21 cosurgi: human is still better at this.

17:22 imirkin_: it comes up with -Pi Log(2)

17:22 imirkin_: how does it do this?

17:22 cosurgi: mathematica can usually just confirm or deny the result.

17:22 imirkin_: doubt it. there must be a symbolic way of arriving at this.

17:22 cosurgi: "losts of *symbolically* precalculated integrals"

17:22 cosurgi: *lots

17:22 imirkin_: fine

17:23 imirkin_: how do *i* calculate that integral symbolically

17:23 cosurgi: books with 1000 pages.

17:23 imirkin_: someone wrote them

17:23 cosurgi: ah okay.

17:23 imirkin_: presumably.

17:24 cosurgi: yeah. The simplest way of doing this symbolically is to use an infinite Taylor expansion, then you just manipulate polynomials in clever ways.
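For that particular integral there is a standard symbolic derivation that needs no Taylor expansion, just the half-angle symmetry (a sketch, not something either speaker spelled out in the chat):

```latex
\text{Let } I=\int_0^{\pi}\ln\sin x\,dx,\qquad
J=\int_0^{\pi/2}\ln\sin x\,dx,\qquad I=2J \text{ by symmetry about } x=\tfrac{\pi}{2}.

\text{Substituting } x\mapsto \tfrac{\pi}{2}-x \text{ gives } J=\int_0^{\pi/2}\ln\cos x\,dx,\text{ so}

2J=\int_0^{\pi/2}\ln(\sin x\cos x)\,dx
  =\int_0^{\pi/2}\ln\frac{\sin 2x}{2}\,dx
  =\int_0^{\pi/2}\ln\sin 2x\,dx-\frac{\pi}{2}\ln 2 .

\text{With } u=2x,\ \int_0^{\pi/2}\ln\sin 2x\,dx=\tfrac12\int_0^{\pi}\ln\sin u\,du=J,
\text{ hence } 2J=J-\frac{\pi}{2}\ln 2,\ J=-\frac{\pi}{2}\ln 2,\ I=-\pi\ln 2 .
```

which is exactly the $-\pi\ln 2$ that Wolfram|Alpha reports.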

17:24 imirkin_: i like understanding stuff :) i was fascinated by Sum[1/n^2, {n, 1, Infinity}] == Pi^2/6 when i was younger.

17:24 imirkin_: i think i tried that without success. perhaps my manipulation wasn't clever enough.
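The Basel sum $\sum 1/n^2 = \pi^2/6$ is easy to check numerically, if not to prove; truncating at $N$ terms leaves a tail of roughly $1/N$, so a million terms agree to about six digits:

```python
import math

# Numerical check of the Basel problem: sum_{n>=1} 1/n^2 = pi^2/6.
# The truncation error after N terms is about 1/N.
N = 1_000_000
partial = sum(1.0 / (n * n) for n in range(1, N + 1))
print(partial)
print(math.pi ** 2 / 6)
```

Euler's original symbolic route, for what it's worth, compares the Taylor coefficients of $\sin x / x$ with its infinite product over its roots, which is the kind of "clever polynomial manipulation" mentioned above.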

17:24 cosurgi: yeah, these things are cool

17:25 imirkin_: another fun fact -- i^i is real.

17:28 cosurgi: yeah. I loved how Feynman in his book explained complex number exponentiation.
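The $i^i$ fact follows directly from that complex exponentiation: $i^i = e^{i\ln i} = e^{i\cdot i\pi/2} = e^{-\pi/2}$ (on the principal branch), which a quick check confirms:

```python
import cmath

# i^i = exp(i * ln i) = exp(i * (i*pi/2)) = exp(-pi/2), a real number
# on the principal branch.
value = 1j ** 1j
print(value)  # ~0.20788
print(cmath.exp(-cmath.pi / 2))
```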

17:31 cosurgi: These "clever ways" are 'tricks of the trade' for mathematicians. It is cool. But for me it's still just a tool. Just like programming is a tool. The ultimate goal for me is physics.

17:32 imirkin_: sure

17:32 imirkin_: tools are fun to understand / use though

17:32 imirkin_: like a chain saw... tons of fun for the whole family

17:32 cosurgi: hahah

17:32 imirkin_: or, you know, taylor series expansion manipulation, depending on the family

17:33 cosurgi: :)

17:35 imirkin_: physics was fun, enough to get a degree, but not enough to get a job :)

17:36 cosurgi: that's why I stayed at the university

17:37 cosurgi: I wouldn't have the nerves to cope with the world outside of science.

17:37 cosurgi: (:

17:40 imirkin_: too many people sitting around doing nothing in academia for me

17:45 cosurgi: yeah. that's a real problem

17:46 cosurgi: but as long as they don't bother me I am not going to try to "reform" them and battle the entire administration.

17:46 imirkin_: they get good at e.g. writing grants, and make it seem like they're doing stuff, but in practice, they do nothing. very annoying to me.

17:46 cosurgi: And they are glad for my work, because they do nothing. And only benefit from my publications and other stuff. I don't mind, because I can do what I like doing.

17:47 cosurgi: If I wanted to fight this bureaucracy I would just become part of them. Because that's all they do: bureaucratic fights with each other over salary or other stuff.

17:50 cosurgi: there are funny situations sometimes, when one of them wants me on some commission for something or other. And I tell them that I have no time, because I prefer science.
