00:21 fdobridge: <a​irlied> I killed nearly all of dd.h
00:22 fdobridge: <a​irlied> like just after we dropped classic
00:22 fdobridge: <a​irlied> https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/14100
00:36 fdobridge: <k​arolherbst🐧🦀> mhh, and that didn't lead to lower driver overhead or something?
00:36 fdobridge: <k​arolherbst🐧🦀> benchmarking Civilization 5 might figure it out
00:36 fdobridge: <k​arolherbst🐧🦀> end game saves have like 500k gl calls per frame
00:37 fdobridge: <a​irlied> draw is still indirect, but I think the alternative is a branch, which may or may not be better
00:44 fdobridge: <k​arolherbst🐧🦀> well.. most of those calls aren't draws
00:44 fdobridge: <k​arolherbst🐧🦀> but yeah..
00:44 fdobridge: <k​arolherbst🐧🦀> I'd benchmark with heavily CPU bound games, but indirect vs direct calls might not matter all that much overall.. mhh
00:45 fdobridge: <k​arolherbst🐧🦀> maybe with LTO it makes more of a difference
00:45 fdobridge: <k​arolherbst🐧🦀> or even less so
00:49 Eighth_Doctor: what is fdobridge?
00:55 airlied: bridge to discord
01:06 Eighth_Doctor: you have a discord :o
03:07 karolherbst: apparently we do
04:25 airlied: dakr, jekstrand : okay I have to rethink the new uapi buffer allocs a bit, I think I've tied the non-sparse ones a bit too close to the kernel api
04:25 airlied: forcing the over-alignment problem
04:57 airlied: I expect we'll have to treat buffers rather differently to images (at least images with kind flags)
04:58 airlied: and burn some VM space mappings for non-sparse images
05:20 fdobridge: <a​irlied> @gfxstrand so I probably need someone to think this over with. I think device memory should allocate a VMA range that then gets used for buffers and 0-kind images? and images that have a kind, and sparse images, should allocate a private VMA space and bind the mem bo into it? will this affect aliasing anywhere?
05:22 fdobridge: <g​fxstrand> Sounds right. Should work.
05:22 fdobridge: <a​irlied> okay I'll go kick that around a bit and see where it ends up
05:23 fdobridge: <g​fxstrand> Cool
06:28 fdobridge: <a​irlied> okay I've updated the branch, I think it should work properly
14:50 fdobridge: <k​arolherbst🐧🦀> @airlied I might also look at the new UAPI from a CL perspective, because I need proper userptr support and kind of SVM as well
14:50 fdobridge: <k​arolherbst🐧🦀> did you look into those parts already?
15:52 fdobridge: <g​fxstrand> Allocation shouldn't care about alignment for the most part. It should allocate an integer number of pages but otherwise shouldn't care.
15:52 fdobridge: <g​fxstrand> @airlied ^^
15:53 fdobridge: <g​fxstrand> Also, the kernel needs to stop assigning addresses. That's our job. We can align the base address of the memory object to whatever we want.
16:06 fdobridge: <k​arolherbst🐧🦀> how are we going to support that without that being ioctl calling hell? provide an "initial" VM address on bo allocation? do two ioctls every time we allocate new bos, or have an ioctl where we can pass a list of bos+addresses? Not sure how much overhead even matters here.
16:06 fdobridge: <k​arolherbst🐧🦀>
16:06 fdobridge: <k​arolherbst🐧🦀> Also.. do we already have any driver doing that completely from userspace?
17:52 fdobridge: <g​fxstrand> ANV has been assigning its own addresses since forever.
17:53 fdobridge: <g​fxstrand> Iris has never let the kernel assign addresses.
17:53 fdobridge: <g​fxstrand> I think the radeon drivers do address assignment in libdrm.
17:53 fdobridge: <g​fxstrand> As for how, yeah, it means two ioctls to allocate but meh. Vulkan uses sub-allocation so we really shouldn't have bajillions of BOs.
17:54 fdobridge: <g​fxstrand> And two ioctls aren't bad if your kernel driver is well-written.
17:55 fdobridge: <p​ixelcluster> I think allocating memory with virtual addresses bound to it is two ioctls for amdgpu as well (edited)
18:00 anholt: note: for virtgpu-native-context it's nice to be able to allocate and bind in one ioctl, since ioctl round trip time is preposterous.
18:17 fdobridge: <g​fxstrand> I think we can batch all the binds, just not the allocations, so we can amortize it well enough if that's a problem.
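The amortization idea discussed above can be sketched roughly as follows (a Python model for brevity; the names `BindBatch`, `map`, `unmap`, and `flush` are hypothetical, not the actual uapi — the real interface is a VM_BIND-style ioctl documented in the kernel tree). The point is simply that many map/unmap operations can be queued and handed to the kernel in a single transition:

```python
class BindBatch:
    """Collects map/unmap operations and submits them all in one flush,
    modeling a batched VM_BIND-style ioctl (names are hypothetical)."""

    def __init__(self, submit):
        self.submit = submit  # stand-in for the actual ioctl entry point
        self.ops = []

    def map(self, bo, va, offset, size):
        self.ops.append(("map", bo, va, offset, size))

    def unmap(self, va, size):
        self.ops.append(("unmap", va, size))

    def flush(self):
        # One "kernel transition" regardless of how many ops were queued.
        if self.ops:
            self.submit(self.ops)
            self.ops = []

# Usage: count simulated kernel transitions for 8 queued maps.
calls = []
batch = BindBatch(lambda ops: calls.append(list(ops)))
for i in range(8):
    batch.map(bo=i, va=0x100000 * i, offset=0, size=0x10000)
batch.flush()
assert len(calls) == 1 and len(calls[0]) == 8
```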
18:32 fdobridge: <p​ixelcluster> I think allocating memory and binding is one ioctl each for amdgpu as well (edited)
18:42 fdobridge: <a​irlied> The new API is all userspace allocated vma
18:42 fdobridge: <a​irlied> Not sure if there was a question there
18:44 fdobridge: <a​irlied> @karolherbst🐧🦀 userptr is another can of worms, maybe dakr's next thing
18:58 fdobridge: <k​arolherbst🐧🦀> sure, but how do you tell the kernel about your allocations
19:00 fdobridge: <a​irlied> with a bind ioctl
19:00 fdobridge: <k​arolherbst🐧🦀> right, and I asked how that one is designed: do you have to call it for each bo, can you submit an initial placement when allocating, or can you submit them in batches
19:01 fdobridge: <k​arolherbst🐧🦀> or rather what would be the end plan here
19:01 fdobridge: <k​arolherbst🐧🦀> interesting.. because I tried to figure out where exactly that happens, maybe I'm just blind 🙂
19:02 fdobridge: <g​fxstrand> Search for `vma_heap`
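For reference, the idea behind a userspace VA allocator like Mesa's `util_vma_heap` can be modeled as a first-fit free-list over a GPU virtual-address range (a simplified Python sketch; the real C helper handles coalescing, top-down allocation, and more — this only illustrates the shape):

```python
class VmaHeap:
    """First-fit userspace VA allocator, loosely modeling the idea behind
    Mesa's util_vma_heap (simplified: frees are not coalesced)."""

    def __init__(self, start, size):
        self.free = [(start, size)]  # sorted list of (start, size) holes

    @staticmethod
    def _align_up(x, a):
        return (x + a - 1) & ~(a - 1)

    def alloc(self, size, alignment):
        for i, (hstart, hsize) in enumerate(self.free):
            addr = self._align_up(hstart, alignment)
            pad = addr - hstart
            if pad + size <= hsize:
                new_holes = []
                if pad:
                    new_holes.append((hstart, pad))
                rem = hsize - pad - size
                if rem:
                    new_holes.append((addr + size, rem))
                self.free[i:i + 1] = new_holes
                return addr
        return None  # out of address space

    def free_range(self, addr, size):
        # Simplified: return the hole without merging adjacent holes.
        self.free.append((addr, size))
        self.free.sort()

# Usage: userspace picks addresses with whatever alignment it wants.
heap = VmaHeap(0x100000, 0x100000)
a = heap.alloc(0x1000, 0x10000)
assert a == 0x100000 and a % 0x10000 == 0
b = heap.alloc(0x1000, 0x1000)
assert b is not None and b != a
```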
19:02 fdobridge: <k​arolherbst🐧🦀> yeah.. probably a good idea if one can batch it.. could even be smart about it and just collect the changes ... or make it part of the command submission one
19:03 fdobridge: <a​irlied> No we will not be making it part of command submits
19:03 fdobridge: <k​arolherbst🐧🦀> though with userspace command submission one might not want to mix those two anyway
19:03 fdobridge: <g​fxstrand> Iris tells the kernel addresses on every draw call because i915 is dumb.
19:03 fdobridge: <k​arolherbst🐧🦀> yeah.. that sounds a bit... heavy
19:03 fdobridge: <g​fxstrand> Yeah, that's not going in command submit.
19:04 fdobridge: <g​fxstrand> I will nak that so hard...
19:04 fdobridge: <a​irlied> The vm bind ioctl has alloc/free and map/unmap
19:04 fdobridge: <a​irlied> The kernel tree has more documentation
19:04 fdobridge: <k​arolherbst🐧🦀> sure, but that's per bo I assume?
19:04 fdobridge: <k​arolherbst🐧🦀> oh well.. maybe that's fine
19:05 fdobridge: <k​arolherbst🐧🦀> well.. just means we'll probably have to change that later if it becomes an overhead one might be able to optimize away
19:07 fdobridge: <g​fxstrand> If we need to do that, we can batch `VM_BIND` ioctls. Not batch them in with other things. Just batch them by themselves.
19:09 fdobridge: <a​irlied> it's never been a problem on radv, usually if those things are overhead you just cache in userspace if you can
19:09 fdobridge: <a​irlied> at least for GL Drivers
19:10 fdobridge: <a​irlied> we could maybe consider a bo alloc + vma bind combined thing, but I didn't want to disturb the current bo alloc ioctl more than necessary
19:17 fdobridge: <g​fxstrand> Keep it dumb for now
19:17 fdobridge: <a​irlied> yeah I've seen no reason to change it, if I thought it was important I'd have designed it that way 😛
19:18 fdobridge: <a​irlied> like even batching makes no real sense, since vulkan doesn't really work like that
19:19 fdobridge: <a​irlied> if someone is writing a new GL driver on top of this, then just use the pb bufmgr stuff
19:19 fdobridge: <g​fxstrand> Vulkan will once we do sparse.
19:19 fdobridge: <g​fxstrand> vkQueueBindSparse is a batch thing. Within a given sparse bind, we may be binding multiple discontiguous ranges.
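The batching shape of a sparse bind can be modeled loosely like this (Python sketch; the real entry point is `vkQueueBindSparse` taking `VkSparseMemoryBind` ranges — the tuple layout here just mirrors its `resourceOffset`/`size`/`memory`/`memoryOffset` fields, and the page-table dict stands in for what the driver would translate into VM binds):

```python
PAGE = 0x10000  # 64 KiB, a common sparse block size

def queue_bind_sparse(page_table, binds):
    """Apply a batch of sparse binds in one 'submission'.
    Each bind is (resourceOffset, size, memory, memoryOffset),
    mirroring VkSparseMemoryBind; memory=None unbinds the range."""
    for res_off, size, memory, mem_off in binds:
        for page in range(res_off // PAGE, (res_off + size) // PAGE):
            delta = page * PAGE - res_off
            if memory is None:
                page_table.pop(page, None)
            else:
                page_table[page] = (memory, mem_off + delta)

# Usage: one batch rebinds several discontiguous ranges at once.
pt = {}
queue_bind_sparse(pt, [
    (0 * PAGE, PAGE, "memA", 0),
    (7 * PAGE, 2 * PAGE, "memB", 0),  # discontiguous range, same batch
])
assert pt[0] == ("memA", 0)
assert pt[7] == ("memB", 0) and pt[8] == ("memB", PAGE)
assert 1 not in pt  # pages between the ranges stay unbound
```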
19:20 fdobridge: <g​fxstrand> *starts reading up on proc macros in Rust* Help! Somebody save me before I do something stupid!
19:20 fdobridge: <a​irlied> sparse is fully implemented
19:20 fdobridge: <a​irlied> in that branch
19:20 fdobridge: <a​irlied> it even passes all the cts tests
19:21 fdobridge: <a​irlied> except for a couple where the gpu is too slow due to lack of reclocking
19:23 fdobridge: <g​fxstrand> cool
19:47 fdobridge: <g​fxstrand> Well, by "fully", I assume you just mean sparse binding, not sparse residency.
19:48 fdobridge: <g​fxstrand> NIL doesn't have nearly enough helpers for sparse residency yet
19:50 fdobridge: <a​irlied> then clearly magic is fine, since it passes CTS with sparse residency enabled
19:50 fdobridge: <a​irlied> sparseBinding = true
19:50 fdobridge: <a​irlied> sparseResidencyBuffer = true
19:51 fdobridge: <a​irlied> sparseResidencyImage2D = true
19:51 fdobridge: <a​irlied> sparseResidencyImage3D = true
19:51 fdobridge: <a​irlied> sparseResidency2Samples = true
19:51 fdobridge: <a​irlied> sparseResidency4Samples = true
19:51 fdobridge: <a​irlied> sparseResidency8Samples = true
19:51 fdobridge: <a​irlied> sparseResidency16Samples = true
19:51 fdobridge: <a​irlied> sparseResidencyAliased = true
19:52 fdobridge: <a​irlied> maybe it's an accidental pass, would be good to get nil up to speed so we can validate the uapi
19:53 fdobridge: <a​irlied> ah yeah should fill in nvk_GetImageSparseMemoryRequirements2 a bit more 🙂
19:59 airlied: dakr, jekstrand : pushed a fix to new uapi, I didn't fully sync the headers
20:00 fdobridge: <g​fxstrand> I'm not super worried about that.
20:00 fdobridge: <g​fxstrand> I mean, yeah, it'd be great, but there's a pile of compiler work involved in sparse residency and I don't want to do that in codegen.
22:01 fdobridge: <k​arolherbst🐧🦀> yeah.. we'd have to rework how codegen deals with predicates :/