04:28graphitemaster: I take it back. AMD is so much more of a clusterfuck with naming
04:28graphitemaster: The mobile GPU sector is so scuffed
17:10karolherbst: seriously... this is so weird
17:10karolherbst: every day I hit issues on the G98 but every day it's something different
17:10karolherbst: now I get this on boot: https://gist.github.com/karolherbst/83fca0177d6e6427a20a8f9f44f6a3b2
17:10karolherbst: and gnome is green
17:11karolherbst: but not like everywhere green
17:11karolherbst: only the background
17:11karolherbst: and stuff just works regardless
17:15ccr: eco-friendly mode enabled?
17:18karolherbst: mhhh
17:18karolherbst: I am sure it's some bug in nouveau though :p
17:19karolherbst: just surprised I see the same issue now without having to suspend/resume
17:19karolherbst: imirkin: ^^
17:24karolherbst: btw.. seems like there is only m2mf available for buffer copies on that GPU anyway
19:51imanho: I found this doc: https://nvidia.github.io/open-gpu-doc/pascal/gp100-mmu-format.pdf with nice descriptions of GPU page tables
19:52imanho: but still, the error I get trying to mmap "/dev/nvidia0" into a host process's address space is a cryptic "NVRM: VM: invalid mmap" (this shows up in dmesg)
19:53imanho: int fd = open("/dev/nvidia0", O_RDWR | O_SYNC);
19:53imanho: int *ptr = mmap(NULL, N * 8, PROT_READ, MAP_PRIVATE, fd, offset_idx * 4096);
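(For context, a self-contained version of that attempt with error checking added. N and offset_idx are placeholders for whatever the real program used; the sample values below are back-filled from the strace output quoted later in the log, i.e. length 32 and offset 0x10b400000. This is a sketch of the failing call, not working code: as the rest of the discussion shows, the raw mmap is rejected with EINVAL.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t N = 4;                 /* 4 * 8 = 32 bytes, as in the strace output */
        off_t offset_idx = 0x10b400;  /* page index; 0x10b400 * 4096 = 0x10b400000 */

        int fd = open("/dev/nvidia0", O_RDWR | O_SYNC);
        if (fd < 0) {
            perror("open /dev/nvidia0");
            return EXIT_FAILURE;
        }

        void *ptr = mmap(NULL, N * 8, PROT_READ, MAP_PRIVATE, fd, offset_idx * 4096);
        if (ptr == MAP_FAILED) {
            perror("mmap");  /* fails with EINVAL; dmesg shows "NVRM: VM: invalid mmap" */
            close(fd);
            return EXIT_FAILURE;
        }

        munmap(ptr, N * 8);
        close(fd);
        return EXIT_SUCCESS;
    }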
19:58karolherbst: imanho: well.. it's the nvidia driver handling the ioctls
19:59karolherbst: it might react to open, but it might just do random stuff
19:59karolherbst: what reason would nvidia have to just expose memory like that?
19:59karolherbst: you also can't just map VRAM with nouveau like that
20:00imanho: is this NVRM message coming from their driver? I was hoping "invalid mmap" might imply there is _some_ valid mmap :)
20:00karolherbst: ehh.. mmap I mean
20:00karolherbst: imanho: well, yes... but the glue code is all open source
20:00karolherbst: you might be able to wrap around calls inside the source code.. maybe
20:01karolherbst: but.. you might be able to mmap just like that; you'd just have to figure out what nvidia allows and what it doesn't
20:01karolherbst: imanho: maybe check with strace?
20:04imanho: if you mean check the mmaper: $ strace ./mmaper spews "mmap(NULL, 32, PROT_READ, MAP_PRIVATE, 3, 0x10b400000) = -1 EINVAL (Invalid argument)", and before that I see "openat(AT_FDCWD, "/dev/nvidia0", O_RDWR|O_SYNC) = 3"
20:05karolherbst: imanho: no, I meant with using nvidia binaries
20:05karolherbst: like running glxgears or nvidia-settings or...
20:05karolherbst: some might mmap on that file
20:05imanho: ohhh.. I see. smart.
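(One concrete way to do that, assuming glxgears is running against the nvidia stack on this machine; -y makes strace print the path behind each fd, so the mmaps on /dev/nvidia0 are easy to grep out:)

    $ strace -f -y -e trace=openat,ioctl,mmap glxgears 2>&1 | grep nvidia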
20:06karolherbst: it could be something stupid like only being able to mmap with a size that's a multiple of the page size or something (see the sketch below)
20:07karolherbst: or one having to use nvidia-uvm for those things
20:08karolherbst: normally you have a VM per GPU context, not per process, so without handing in a handle to the context you shouldn't be able to mmap, because.. what's your target GPU VM in the first place?
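(If the page-size guess above is right, the caller-side fix would just be rounding the length up; a minimal sketch, reusing fd, N and offset_idx from the snippet earlier, and purely illustrative since it assumes the EINVAL really is a size check:)

    /* round the mapping length up to a multiple of the page size */
    long page = sysconf(_SC_PAGESIZE);
    size_t len = (N * 8 + page - 1) & ~((size_t)page - 1);
    void *ptr = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, offset_idx * 4096);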
20:10airlied: it's probably like drm mmap
20:10airlied: so you can't just mmap random offsets at all
20:10airlied: you call an ioctl to give you an mmap cookie and mmap that
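(For reference, a sketch of that pattern using DRM's dumb-buffer API; it assumes fd is an open DRM device node and handle is a GEM handle from an earlier DRM_IOCTL_MODE_CREATE_DUMB:)

    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <xf86drm.h>

    /* Ask the kernel for a fake offset ("cookie") for the buffer, then
     * mmap the device node at that cookie. A random offset is rejected. */
    void *map_dumb_buffer(int fd, uint32_t handle, size_t size)
    {
        struct drm_mode_map_dumb map;
        memset(&map, 0, sizeof(map));
        map.handle = handle;

        if (drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map))
            return MAP_FAILED;

        return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, map.offset);
    }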
20:11karolherbst: airlied: yeah, the only benefit we have here is, that the nvidia-uvm API is all open source :)
20:11karolherbst: and how to create VMs and everything
20:11karolherbst: although I don't think it contains the channel creation bits
20:11karolherbst: so you can't do anything without a channel anyway... I think
20:12karolherbst: ehhh.. no, you should be able to.. a VM can be created without a context
20:12karolherbst: at least from a hw pov
20:13karolherbst: airlied, imanho: https://gist.github.com/karolherbst/bc1bdb23c5b0d980b8a060182db8ebac
20:14imanho: So when I straced a cuda app I see some interesting things like "mmap(0x7f9b42e00000, 2097152, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_FIXED, 15, 0) = 0x7f9b42e00000" (where the fd, 15, is /dev/nvidia0)
20:14karolherbst: it's all a bit compute related, but they use it for compute shaders as well afaik
20:15karolherbst: imanho: yeah... the question is just, what kind of memory that is
20:15karolherbst: but.. again... it could be something useful
20:15imanho: and why is offset always 0? is this an indication of anything?
20:16karolherbst: imanho: probably you can't map random memory
20:16karolherbst: nvidia is doing some memory management stuff inside userspace
20:16karolherbst: but... I don't really have in depth knowledge here, just making educated guesses
20:17karolherbst: and the uvm API also seems to be a thing _besides_ their normal one, requiring more setup beforehand
20:17airlied: well for uvm it makes sense for fixed mmaps
20:17karolherbst: yes
20:17airlied: since you want to have the pages in the same address space as the gpu
20:17airlied: so you get the vm address
20:17karolherbst: airlied: it's not only for host memory
20:18karolherbst: they use uvm for compute shaders
20:18karolherbst: you can map host memory in, but you can also just map GPU memory into the host VM with uvm
20:19airlied: pages for an mmap could be in both in theory
20:20airlied: esp with hmm type functionality
20:20karolherbst: mhhh, true... it's been a while since I looked at the API
20:20karolherbst: might be that it's also used for general cases where you want to have the same virtual addresses on both sides...
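(That property is visible from the CUDA side: cudaMallocManaged hands back one pointer that is valid on both the CPU and the GPU. A minimal illustration using the public runtime API, not the uvm ioctls underneath it:)

    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void)
    {
        int *data;
        /* one allocation, one virtual address, usable from host and device */
        if (cudaMallocManaged((void **)&data, 4096, cudaMemAttachGlobal) != cudaSuccess)
            return 1;
        data[0] = 42;  /* touched from the CPU; a kernel could use the same pointer */
        printf("%p -> %d\n", (void *)data, data[0]);
        cudaFree(data);
        return 0;
    }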
20:21imanho: it also _does_ open "/dev/nvidia-uvm" but uses it much less in mmaps, only twice (both are MAP_FIXED): "mmap(0x205c00000, 2097152, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_FIXED, 4, 0x205c00000) = 0x205c00000"
20:21karolherbst: imanho: yeah... things are weird, but with cuda they use uvm for the actual thing
20:22karolherbst: where nvidia0 might just be for driver related stuff...
20:22karolherbst: dunno
20:22karolherbst: maybe mmt handles it somehow?
21:31imanho: based on "uvm_ioctl.h", UVM_MEM_MAP is 0x23, yet no such ioctl is issued. "UVM_PAGEABLE_MEM_ACCESS" happens 6 times. I'm really beginning to love this "uvm_ioctl.h". Is "UVM_TOOLS_READ_PROCESS_MEMORY" a way to read _GPU_ memory?
21:37imanho: karolherbst: so regarding the issue of 'if you want to mmap some gpu memory, you _probably_ need to know which channel/context you are talking about': in the header I see a UVM_MEM_MAP, _but_ the params give no way to specify the context: "regionBase, regionLength, rmStatus"
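(Paraphrasing that into C for reference; the field types here are guesses, the authoritative layout is nvidia's uvm_ioctl.h:)

    #include <stdint.h>

    /* Sketch of the UVM_MEM_MAP params as described above. Note that
     * there is no channel/context handle anywhere in the struct. */
    typedef struct {
        uint64_t regionBase;    /* base address of the region to map */
        uint64_t regionLength;  /* length of the region */
        uint32_t rmStatus;      /* out: resource-manager status code */
    } uvm_mem_map_params_sketch;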
21:49karolherbst: imanho: yeah.. the issue is that with UVM you kind of mirror the CPU's VA on the GPU, so you can reuse the same addresses if known.
22:57karolherbst: imanho: in the original way of doing things (graphics only) you have to pass the channel in; demmt/valgrind should show this while handling nvidia ioctls
23:13imanho: while building mesa, I get "need 'libdrm_intel' ['>=2.4.109'] found '2.4.105'" (2.4.105 is the latest available when I apt-get libdrm-intel1)
23:41imanho: oh, the drm main branch is 107, just had to checkout the .109 branch and recompile. [so mesa main is currently not cool with drm main]
23:44airlied: imanho: not sure where you are cloning drm from, but main is always compatible
23:45imanho: I just did "$ git clone git://anongit.freedesktop.org/mesa/drm"
23:45airlied: the version is in meson.build
23:46airlied: imanho: you are using master not main, not sure why it doesn't clone to main
23:46airlied: might need to change something in gitlab maybe
23:47imanho: (ouch! yup it had defaulted to master, I was just monkey-copying stuff from https://nouveau.freedesktop.org/InstallNouveau.html)
23:47airlied: imanho: ah also it's on gitlab now
23:47airlied: https://gitlab.freedesktop.org/mesa/drm is the proper repo