00:17 fdobridge: <a​irlied> It will be a while, since at least one fix is queued for Linus rc1 and won't be backported until next week
00:20 fdobridge: <z​mike.> Oof
00:20 fdobridge: <z​mike.> Revert it is then
00:36 fdobridge: <g​fxstrand> It's really quite nice. @zmike. is just using it in the most painful way possible. (Not his fault. Just that reconstructing SPIR-V is painful.)
00:49 fdobridge: <z​mike.> Yeah I suppose if I didn't have that restriction I might enjoy it more
00:51 fdobridge: <a​irlied> Not sure which crash you are seeing though, there are two different fixes for different BAR problems
00:54 fdobridge: <z​mike.> I get an unrecoverable hang that only triggers during full CTS runs
00:54 fdobridge: <z​mike.> Can't repro any other way I've tried
00:55 airlied: ah okay that sounds like the fix that isn't in, the other fix was for an oops
03:55 fdobridge: <g​fxstrand> I really wish I knew why the `dispatch_base` tests are crashing. They only crash in parallel runs and they're the only ones that crash and they segfault. Everything's fine when I run with valgrind
03:57 fdobridge: <g​fxstrand> I suppose I could turn coredumps back on
03:58 fdobridge: <a​irlied> do they segfault or device lost and kernel msg logs it?
04:03 fdobridge: <g​fxstrand> segfault
04:03 fdobridge: <g​fxstrand> dmesg logs the segfault
04:03 fdobridge: <g​fxstrand> no GPU errors
04:03 fdobridge: <g​fxstrand> If I turned on coredumps, I could probably gather some data
04:03 fdobridge: <g​fxstrand> but ugh... coredumps...
04:10 fdobridge: <a​irlied> doesn't coredumpctrl show them?
05:51 fdobridge: <a​irlied> @gfxstrand I think there is either a CTS bug or a runner bug around the device id
05:52 fdobridge: <a​irlied> or maybe both
05:52 fdobridge: <a​irlied> getVKDeviceId usage is inconsistent
05:52 fdobridge: <a​irlied> some tests call getVKDeviceId() - 1, which with the command line we pass in translates to -1
05:53 fdobridge: <a​irlied> so I think changing your run script to use 1 base instead of 0 will fix some crashes
06:14 fdobridge: <a​irlied> @gfxstrand fixing that seems to make things a lot less crashy here
08:44 fdobridge: <!​DodoNVK (she) 🇱🇹> There are 9 extensions with non-draft MRs for :triangle_nvk: (hopefully all of these can be merged to compete with Turnip)
09:13 fdobridge: <v​alentineburley> Turnip has a ton of trivial and easy to implement extensions left, I kind of wish I had some hardware to take a swing at them
09:14 fdobridge: <v​alentineburley> It's a close race tho 😄
09:20 fdobridge: <v​alentineburley> Has anyone tried Path of Exile with NVK? A couple of years ago it needed a trivial Google extension on RADV, I wonder if it's still the case?
09:20 fdobridge: <v​alentineburley> I have a MR for it: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/28155
09:21 fdobridge: <!​DodoNVK (she) 🇱🇹> I included your MRs in the list
13:13 fdobridge: <g​fxstrand> I've been running with 1 for a while
13:15 fdobridge: <g​fxstrand> Once I get my compiler MR posted, I'm going to dig through the backlog. There's some compiler improvements from @mhenning I need to review, too.
13:15 fdobridge: <g​fxstrand> IDK if I'll get to them this morning but hopefully this afternoon
13:48 fdobridge: <!​DodoNVK (she) 🇱🇹> What will that MR contain?
13:49 fdobridge: <g​fxstrand> That was it! This test doesn't -1 (edited)
13:55 fdobridge: <t​om3026> perhaps not exactly nouveau related but since you guys are pretty much gpu driver gurus: pci=pcie_bus_perf,
13:55 fdobridge: <t​om3026> ```
13:55 fdobridge: <t​om3026> Set device MPS to the largest allowable MPS
13:55 fdobridge: <t​om3026> based on its parent bus. Also set MRRS (Max Read Request Size)
13:55 fdobridge: <t​om3026> to the largest supported value (no larger than the MPS that the device or bus can support) for best performance.
13:55 fdobridge: <t​om3026> ```
13:56 fdobridge: <t​om3026> what is this? O_o just noticed it in the kernel parameters manual
14:02 fdobridge: <g​fxstrand> @airlied https://gitlab.khronos.org/Tracker/vk-gl-cts/-/issues/3232
14:02 fdobridge: <g​fxstrand> Okay, now I know I can safely ignore those fails
14:07 fdobridge: <!​DodoNVK (she) 🇱🇹> What does that issue say?
14:17 fdobridge: <S​id> pcie max payload size
14:17 fdobridge: <S​id> technically sets MPS and MRRS to the max value permitted by the hardware
14:18 fdobridge: <S​id> or, well, permitted/supported by a pcie device's parent bus
14:24 fdobridge: <S​id> basically should allow larger data transfers where possible
14:25 fdobridge: <g​fxstrand> https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/28300
14:26 fdobridge: <g​fxstrand> Okay, now all the uniform convergence tests pass
14:27 fdobridge: <t​om3026> so why isn't it always set to that? heh
14:27 fdobridge: <S​id> nooo idea
14:28 fdobridge: <t​om3026> oh well I'll turn it on, let's see if it blows up
14:28 fdobridge: <S​id> would be interesting to see perf benchmarks
14:28 fdobridge: <S​id> with and without it
14:30 fdobridge: <t​om3026> it's the holy grail to all fps dips
14:31 fdobridge: <g​fxstrand> It's an issue for a bunch of CTS tests where `--deqp-vk-device-id` and `--vk-deqp-device-group-id` interact badly.
14:32 fdobridge: <g​fxstrand> The problem is that the `*dispatch_base*` tests are technically device group tests and so they try to treat `--deqp-vk-device-id` as the id within the group but we only report individual groups with 1 instance each so it indexes OOB and blows up.
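A minimal host-side C++ sketch of that failure mode (names and structure are invented for illustration; this is not dEQP code): a 0-based `--deqp-vk-device-id` treated as an index within a single-device group lands out of bounds.
```
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> groupDevices = {0};  // driver reports device groups with one device each
    int vkDeviceId = 0;                   // a 0-based --deqp-vk-device-id from the run script
    int idxInGroup = vkDeviceId - 1;      // the dispatch_base path effectively does getVKDeviceId() - 1
    if (idxInGroup < 0 || idxInGroup >= (int)groupDevices.size()) {
        std::printf("index %d is outside the 1-device group -> OOB access / segfault\n", idxInGroup);
        return 1;
    }
    std::printf("device in group: %d\n", groupDevices[idxInGroup]);
    return 0;
}
```
Passing a 1-based device id puts the index back in range, which matches the workaround mentioned earlier.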
14:34 fdobridge: <t​om3026> oh nvm, found a mailing list. seems a bit misleading; it depends on a bunch of things and in general it can even reduce perf compared to the sane default heh
14:34 fdobridge: <t​om3026> i think
14:34 fdobridge: <t​om3026> oh well no fun without trying
14:38 fdobridge: <S​id> heh
14:46 fdobridge: <g​fxstrand> @airlied Not out of the woods yet. 😩
14:46 fdobridge: <g​fxstrand> https://cdn.discordapp.com/attachments/1034184951790305330/1220020593621991496/dmesg.txt?ex=660d6bb8&is=65faf6b8&hm=e1a9ed9e0ba90941dd650ddc3d50466c1512a918b70c9c909bf87ebb2b5e7fbf&
14:47 fdobridge: <g​fxstrand> I don't have an easy reproducer for that. AFAIK I saw it for the first time just now
15:01 fdobridge: <m​arysaka> Sad to see that we need to go unstructured (edited)
15:01 fdobridge: <g​fxstrand> The hardware is unstructured. 🤷🏻‍♀️
15:02 fdobridge: <g​fxstrand> On Maxwell, we just won't run the new pass and we'll use the structured merge intrinsics
15:04 fdobridge: <m​arysaka> makes sense yeah...
15:03 fdobridge: <g​fxstrand> I joked to Jeff (NVIDIA) one time that while AMD has spent the last decade trying to get LLVM to work better for their hardware, NVIDIA's solution seems to be to make their hardware better for LLVM. Jeff's response was something like "Yeah, well, it seems to be working out for us."
15:04 fdobridge: <m​arysaka> lol
15:06 fdobridge: <g​fxstrand> IDK if that's a joke about AMD SW architects, LLVM, or NVIDIA hardware. All three, maybe?
15:07 HdkR: Considering it took me like a week to get a working Ampere backend into LLVM. It's a pretty good fit :P
15:07 HdkR: Then a month to clean it up so it supported more than basic code blocks, but whatever
15:08 HdkR: Ampere? No, Volta
15:18 fdobridge: <g​fxstrand> Yeah, the fact that you can just throw totally unstructured control flow at it is pretty neat
15:18 fdobridge: <g​fxstrand> Of course, once your threads diverge, there's no getting them back so there is that...
15:20 fdobridge: <k​arolherbst🐧🦀> it's funky how all that CL work we were doing becomes useful for a lot of other things 😄
15:20 fdobridge: <g​fxstrand> Yeah
15:20 fdobridge: <g​fxstrand> And with the new validation rule for NIR that I added in my MR, I think we can just flip on unstructured for a LOT of passes.
15:21 fdobridge: <g​fxstrand> The other thing we need to do to make it work is to have a pass which auto-converts nir_if to unstructured.
15:21 fdobridge: <g​fxstrand> Which could be part of the block sorting pass, honestly.
15:21 fdobridge: <g​fxstrand> Possibly integrated with nir_builder somehow
15:25 fdobridge: <k​arolherbst🐧🦀> yeah... though maybe we need a different way of defining nir passes at some point, and have it be more declarative about what they require, what metadata invalidation they do, etc.. but most of it can also be done in the entry function, so not sure it's even all that helpful
15:26 fdobridge: <k​arolherbst🐧🦀> but I think we'll arrive at a place where a pass will have to state it only works on structured or unstructured CF
15:30 fdobridge: <r​edsheep> I'm not certain I'm clear on what this means, and it doesn't seem to be an easy Google search, but do you think this might have been motivated by raytracing performance? Seems like that required a lot of crazy changes around control flow
15:32 fdobridge: <k​arolherbst🐧🦀> nvidia hardware was always like that.. more or less
15:32 fdobridge: <k​arolherbst🐧🦀> I think they just wanted to get rid of the internal stack because it was a real pita
15:32 fdobridge: <k​arolherbst🐧🦀> and more complex programs required you to spill it to VRAM
15:36 fdobridge: <d​adschoorse> I've seen their compiler get control flow with subgroups wrong too, tbf not as often as amdvlk's llvm backend, but it happens
15:38 fdobridge: <k​arolherbst🐧🦀> sums up their mindsets pretty neatly tbh
15:40 fdobridge: <g​fxstrand> Unstructured control-flow is where you just have branch and conditional branch instructions. Structured is where the control-flow is represented as ifs, loops, and other high-level constructs
15:40 fdobridge: <g​fxstrand> What motivates it? CUDA.
15:41 fdobridge: <g​fxstrand> Hopefully NAK is now correct. I'm not sure how well tested it all is but I'm fairly convinced that my lowering pass re-converges at all the right places.
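To make the structured/unstructured distinction concrete, a toy sketch in CUDA-flavoured C++ (purely illustrative, not NIR or NAK code):
```
// Structured: the if has well-defined then/else regions and a single merge point,
// the way nir_if models it.
__device__ int structured(int x) {
    int r;
    if (x > 0) { r = x * 2; }
    else       { r = -x; }
    return r;                   // both sides reconverge here
}

// Unstructured: just basic blocks ending in conditional/unconditional branches.
__device__ int unstructured(int x) {
    int r;
    if (x <= 0) goto else_blk;  // conditional branch
    r = x * 2;
    goto merge;                 // unconditional branch
else_blk:
    r = -x;
merge:
    return r;
}
```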
15:45 fdobridge: <r​edsheep> Hmm interesting, I thought all those high level concepts were just implemented through branches, wasn't aware hardware having more awareness than that was an option. What little I have learned down at this low level was mostly about old CPUs though.
15:45 fdobridge: <k​arolherbst🐧🦀> depends on the hardware
15:46 fdobridge: <k​arolherbst🐧🦀> I think some hardware has like structured CF in the ISA
15:47 fdobridge: <d​adschoorse> yeah, it's either that, nvidia's solution or uniform branches + explicit exec mask like amd
15:49 fdobridge: <d​adschoorse> x86+avx512 is kind of like amd gpu hw, you have uniform branches and exec masks for simd ops
15:50 fdobridge: <r​edsheep> Oh interesting, didn't realize anything like that existed in cpu land
15:50 fdobridge: <r​edsheep> I guess it makes sense for axv512 though
15:51 HdkR: SVE on ARM also has predication masks
15:51 fdobridge: <r​edsheep> *avx
15:52 HdkR: Matches AVX512 behaviour relatively well. At least for AVX512F
15:52 fdobridge: <d​adschoorse> zen4's avx512 implementation makes actually using the exec masks like you would on gpus unattractive though, because they have worse performance if you want to preserve inactive lanes (edited)
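For context, the two masking flavours being compared look roughly like this with AVX-512 intrinsics (plain host-side C++, illustrative only; needs AVX-512F enabled at compile time):
```
#include <immintrin.h>

// Merge-masking: inactive lanes keep the old destination values -- the "GPU exec mask"
// style, and the case described above as slower on Zen 4.
__m512 masked_add_merge(__m512 dst, __m512 a, __m512 b, __mmask16 exec) {
    return _mm512_mask_add_ps(dst, exec, a, b);
}

// Zero-masking: inactive lanes are cleared, so there is no dependency on the old destination.
__m512 masked_add_zero(__m512 a, __m512 b, __mmask16 exec) {
    return _mm512_maskz_add_ps(exec, a, b);
}
```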
15:53 fdobridge: <r​edsheep> Yeah I mean it's not a GPU. As a whole their implementation seems to be working out for them though.
15:55 fdobridge: <r​edsheep> They don't need to make all cases performant if it saves power and such.
15:57 fdobridge: <r​edsheep> AMD has drifted towards nvidia's philosophy a bit with rdna, right? Wave32 was intended to help with this kind of thing from what I gathered
15:57 fdobridge: <r​edsheep> Trying to rely less on compiler engineering to get good performance
15:58 fdobridge: <d​adschoorse> if anything rdna made compiler engineering more important
15:58 fdobridge: <d​adschoorse> with gcn, you didn't have to care about alu latency at all
15:59 fdobridge: <r​edsheep> Just because it didn't vary, right?
16:02 fdobridge: <r​edsheep> Well I suppose it probably still doesn't, that whole thing was kind of confusing to be honest
16:02 fdobridge: <d​adschoorse> with gcn, the latency was always hidden because the SIMD16 unit runs each instruction 4 times. with wave32 on rdna, one instruction is issued in one cycle, but it takes another 4 cycles until you can use the result
16:04 fdobridge: <r​edsheep> Ah right, so you end up with interleaving where you wouldn't have it before?
16:06 fdobridge: <r​edsheep> Not sure if that's the right word here.
16:13 fdobridge: <r​edsheep> Ok I went and reread that part of the rdna whitepaper and I think I understand now.
16:13 fdobridge: <r​edsheep> I can see how that would actually lead to more compiler work
16:15 fdobridge: <m​ohamexiety> @gobrosse may find it interesting too. I know he has some cursed stuff that breaks some compilers
16:16 fdobridge: <g​obrosse> Gonna try to review it tonight, but should avoid making promises (publish or perish, all that ...)
16:17 fdobridge: <g​obrosse> Faith pinged me earlier today, I DM'd her following the masto post a few days ago 🙂 (edited)
17:18 fdobridge: <g​obrosse> had a quick look, so far seems sound, but I need to study how the nvidia sync stuff works more to be 100% positive
17:20 fdobridge: <g​obrosse> the design where you reconverge before breaking out of a loop is super heavy-handed but to not do that you'd need threads to talk to each other in different points of control-flow, somehow
17:22 fdobridge: <k​arolherbst🐧🦀> the reconvergence happens on the sync point though, no?
17:23 fdobridge: <g​obrosse> i mean, ideally you'd have threads that break out only reconverge once, where they're breaking/returning to
17:23 fdobridge: <g​obrosse> but here the threads that didn't break need to know of the other ones, and to know that they gotta ask the ones who are breaking out
17:24 fdobridge: <g​obrosse> and since this stuff lives in per-lane gprs ... you gotta talk to 'em (which means sync, which means you need to know who takes part ... cyclical dependency if that changed since you entered the current scope!)
17:25 fdobridge: <g​obrosse> i _think_ you can do it another way (ie have two barriers at two locations talk, possibly exchanging depth info?) but I actually don't know and I wouldn't want to stall anything
17:25 fdobridge: <g​obrosse> even if I figure this out I think this design is safer and there's also the question of what happens with pre-volta or whatever
17:28 fdobridge: <g​obrosse> wait does nak support that even (edited)
19:04 fdobridge: <a​irlied> Ughh I think we have one of those in a gitlab report already
19:07 fdobridge: <k​arolherbst🐧🦀> @airlied _any_ idea on this bug? It seems like the bar mapping is either screwed up or... dunno 🙂 `boot0` contains `0xbad0ac00`
19:09 fdobridge: <g​obrosse> @gfxstrand so, um, what are the semantics of `bar_sync_nv` / where can I read about them? I believe `set` and `break` are basically just modifying the barrier bitmask.
19:09 fdobridge: <g​obrosse>
19:09 fdobridge: <g​obrosse> I'd like to know what happens if there are two outstanding `bar_sync_nv` with one having a mask that's a subset of the other one, depending on what happens I might have a nifty idea
19:12 fdobridge: <a​irlied> @karolherbst either my thread is broken or I didn't get a link to a bug
19:20 fdobridge: <m​henning> @karolherbst do you have docs on BSYNC that explain this case?
19:23 fdobridge: <g​obrosse> full open-access NV ISA docs when 🐸
19:26 fdobridge: <k​arolherbst🐧🦀> https://gitlab.freedesktop.org/drm/nouveau/-/issues/342
19:26 fdobridge: <k​arolherbst🐧🦀> only for turing+
19:27 fdobridge: <g​obrosse> that's fine
19:27 fdobridge: <k​arolherbst🐧🦀> well.. what they store is implementation defined, so it's not documented
19:27 fdobridge: <k​arolherbst🐧🦀> just the semantics on how they are supposed to be used
19:30 fdobridge: <a​irlied> oh that bug is pretty special, esp as a regression, I can't think of anything nouveau related, I wonder if it's pci or runpm somehow
19:30 fdobridge: <k​arolherbst🐧🦀> yeah.. maybe...
19:31 fdobridge: <k​arolherbst🐧🦀> runpm would be kinda weird, because it looks like the GPU responds
19:31 fdobridge: <k​arolherbst🐧🦀> `0xbad.....` is such a common pattern in nvidia
19:31 fdobridge: <k​arolherbst🐧🦀> basically means there was an error accessing the mmio range
19:31 fdobridge: <k​arolherbst🐧🦀> either because it's not there, or because of other reasons
19:32 fdobridge: <k​arolherbst🐧🦀> those codes even mean something, but I have no idea if we ever got docs on that
19:32 fdobridge: <a​irlied> yeah it's just so early to get that, like we don't even touch fw or anything
19:32 fdobridge: <k​arolherbst🐧🦀> yeah...
19:32 fdobridge: <k​arolherbst🐧🦀> it's like the first mmio read we do
19:32 fdobridge: <k​arolherbst🐧🦀> maybe second
19:32 fdobridge: <a​irlied> it could be aspm or some other pcie thing also, but never seen that particular behaviour
19:33 fdobridge: <k​arolherbst🐧🦀> yeah.. dunno.. at least _something_ is up, and the GPU is responding...
19:33 fdobridge: <k​arolherbst🐧🦀> we should ask for docs on those codes tbh
19:50 fdobridge: <m​henning> @karolherbst Right. The question is "what happens if there are two outstanding BSYNCs with one synchronizing a group of threads that's a subset of the other one?"
19:51 fdobridge: <k​arolherbst🐧🦀> what does "outstanding" mean?
19:51 fdobridge: <k​arolherbst🐧🦀> there is no stack
19:51 fdobridge: <k​arolherbst🐧🦀> the only input is the barrier
19:53 fdobridge: <m​henning> I read it as: Some lanes are waiting on one BSYNC and other lanes are waiting on a different BSYNC
19:53 fdobridge: <k​arolherbst🐧🦀> sounds like a deadlocking situation
19:54 fdobridge: <k​arolherbst🐧🦀> `BSYNC` waits until all threads specified through the barrier arrive
19:54 fdobridge: <k​arolherbst🐧🦀> mhh well
19:54 fdobridge: <k​arolherbst🐧🦀> actually
19:55 fdobridge: <k​arolherbst🐧🦀> that's not true
19:55 fdobridge: <k​arolherbst🐧🦀> `BSYNC` just checks if all relevant threads are yielded, blocked or exited
19:56 fdobridge: <k​arolherbst🐧🦀> and all threads will be unblocked
19:57 fdobridge: <k​arolherbst🐧🦀> so I don't think it actually matters on which `BSYNC` those threads are blocked on
19:58 fdobridge: <k​arolherbst🐧🦀> `YIELD` indicates that as well
19:59 fdobridge: <k​arolherbst🐧🦀> `YIELD` is just there to guarantee forward progress
19:59 fdobridge: <g​obrosse> wait I am asking about the case where one _is_ the subset of the other...
19:59 fdobridge: <g​obrosse> i thought of bsync as a barrier that blocks until all the threads in the mask arrive at a barrier too but it's still a bit fuzzy in my head
20:00 fdobridge: <k​arolherbst🐧🦀> there is no relation between those barriers
20:00 fdobridge: <k​arolherbst🐧🦀> but anyway...
20:00 fdobridge: <k​arolherbst🐧🦀> `BSYNC` waits until all threads in that barrier are blocked (waiting via `BSYNC`, are yielded via `YIELD` or exited)
20:01 fdobridge: <k​arolherbst🐧🦀> and then releases _all_ threads in that barrier
20:03 fdobridge: <k​arolherbst🐧🦀> it apparently doesn't matter where those threads are waiting
20:03 fdobridge: <k​arolherbst🐧🦀> and on what
20:03 fdobridge: <k​arolherbst🐧🦀> @gfxstrand ^^ in case you didn't know
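A tiny C++ model of that release rule as described above (an assumption-laden sketch, not hardware documentation; `Barrier` and `WarpState` are invented names):
```
#include <cstdint>

struct Barrier   { uint32_t members; };                   // lanes participating (set up by BSSY)
struct WarpState { uint32_t blocked, yielded, exited; };  // lanes parked for any reason

// Per the description above: a BSYNC on barrier b can release once every member lane
// is blocked (on *any* BSYNC), yielded, or exited -- and then all member lanes resume.
bool bsync_can_release(const Barrier& b, const WarpState& w) {
    uint32_t parked = w.blocked | w.yielded | w.exited;
    return (b.members & ~parked) == 0;
}
```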
20:05 fdobridge: <m​henning> @karolherbst really? So then if we
20:05 fdobridge: <m​henning> ```
20:05 fdobridge: <m​henning> if (divergent cond) {
20:05 fdobridge: <m​henning> if (divergent cond) {
20:05 fdobridge: <m​henning> }
20:05 fdobridge: <m​henning> // BSYNC on threads A, B
20:05 fdobridge: <m​henning> }
20:05 fdobridge: <m​henning> // BSYNC on threads A, B, C
20:05 fdobridge: <m​henning> ```
20:05 fdobridge: <m​henning> If C reaches the last BSYNC first, then A, B reaching the inner sync will unblock C?
20:05 fdobridge: <k​arolherbst🐧🦀> no
20:05 fdobridge: <k​arolherbst🐧🦀> only the threads participating
20:06 fdobridge: <k​arolherbst🐧🦀> but like
20:06 fdobridge: <k​arolherbst🐧🦀> if threads A and B are waiting on the inner BSYNC, and C on the outer one
20:06 fdobridge: <k​arolherbst🐧🦀> all threads will unblock
20:06 fdobridge: <g​obrosse> which all threads? does C proceed ?
20:06 fdobridge: <k​arolherbst🐧🦀> yes
20:06 fdobridge: <k​arolherbst🐧🦀> all participating threads
20:07 fdobridge: <k​arolherbst🐧🦀> and on the outer one, all threads participate
20:07 fdobridge: <g​obrosse> hum what does participating mean exactly
20:07 fdobridge: <k​arolherbst🐧🦀> part of the barrier
20:07 fdobridge: <g​obrosse> so like, threads that reached _a_ bsync
20:08 fdobridge: <k​arolherbst🐧🦀> no
20:08 fdobridge: <k​arolherbst🐧🦀> the _barrier_ not the instruction
20:08 fdobridge: <k​arolherbst🐧🦀> the input to those instructions
20:08 fdobridge: <g​obrosse> ah by barrier you mean like the threadmask right?
20:08 fdobridge: <k​arolherbst🐧🦀> well.. it might be a threadmask, but that's considered opaque in the docs
20:09 fdobridge: <g​obrosse> pretty sure it is iirc, i have a couple CUDA/nv experts in my group with whom I discussed the topic a while ago, plus what the heck else could it be
20:09 fdobridge: <k​arolherbst🐧🦀> `BREAK` e.g. also modifies the barrier because it's cursed
20:10 fdobridge: <k​arolherbst🐧🦀> it is very likely to be a mask
20:10 fdobridge: <k​arolherbst🐧🦀> but again.. I wouldn't know
20:10 fdobridge: <k​arolherbst🐧🦀> it's also not relevant
20:11 fdobridge: <k​arolherbst🐧🦀> what is relevant is that instructions like `BSSY` mark the threads as participating in the barrier which gets returned
20:12 fdobridge: <k​arolherbst🐧🦀> apparently there is an aliasing bit on the barrier 🙂
20:12 fdobridge: <k​arolherbst🐧🦀> which is set if you use a barrier with existing threads on `BSSY`
20:13 fdobridge: <k​arolherbst🐧🦀> I have no idea if that bit can even be read out and if it matters for anything
20:13 fdobridge: <k​arolherbst🐧🦀> 😄
20:14 fdobridge: <m​henning> I'm struggling to understand how we can ever reconverge nested control flow if this is true
20:14 fdobridge: <k​arolherbst🐧🦀> good question
20:15 fdobridge: <k​arolherbst🐧🦀> it might be that in hardware it's different
20:15 fdobridge: <k​arolherbst🐧🦀> I can only tell what I know
20:16 fdobridge: <g​obrosse> The way the MR does it is by reconverging one construct at a time, never multiple levels at once, so there never is ambiguity with what threads you're waiting on, but this also means that you can't just jump arbitrarily far out in one go (edited)
20:16 fdobridge: <g​obrosse> pretty ironic when the HW's claim to fame is supporting unstructured CF 🙃
20:17 fdobridge: <g​obrosse> but I _think_ you can do better by basically spinning on `bsync` and checking that you have indeed reconverged enough to proceed
20:17 fdobridge: <g​obrosse> i have a draft writeup about it but I'm holding on posting it in a comment until I do some whiteboarding to convince myself it works at all
20:20 fdobridge: <m​henning> Unless I'm misunderstanding karol's description, one construct at a time isn't enough to handle my example above - the inner reconvergence will restart thread C
20:21 fdobridge: <g​obrosse> yes, so,
20:21 fdobridge: <g​obrosse> C would still need to check that A and B's depth is low enough; since it's not, it bsyncs again until A and B get done with the inner if and reach the outer sync
20:22 fdobridge: <k​arolherbst🐧🦀> you could use `YIELD` with a vote
20:22 fdobridge: <k​arolherbst🐧🦀> but yeah.. _maybe_ it makes sense to do what nvidia is doing
20:22 fdobridge: <k​arolherbst🐧🦀> but knowing nvidia, they just loop merge it into one construct
20:22 fdobridge: <k​arolherbst🐧🦀> done and done
20:24 fdobridge: <g​obrosse> wdym? what do they loop merge ? the example ?
20:24 fdobridge: <g​obrosse> i feel like they need to have a general solution, or did you mean it's similar to what I propose?
20:27 fdobridge: <k​arolherbst🐧🦀> they merge nested loops into one loop e.g.
20:27 fdobridge: <k​arolherbst🐧🦀> and ifs are just predicates
20:28 fdobridge: <k​arolherbst🐧🦀> so all threads run in lock step within that loop until they are all done
20:29 fdobridge: <k​arolherbst🐧🦀> and one thread can break out of it at any time, because there is just one barrier to sync on to begin with
20:33 fdobridge: <m​henning> This all sounds bizarre to me. I might spend some time figuring out what nvcc emits
20:34 fdobridge: <k​arolherbst🐧🦀> yeah.. check what nvidia is doing. But whenever I checked more complex control flow, they used a ton of predicates and loop merged the heck out of everything
20:35 fdobridge: <k​arolherbst🐧🦀> though I never checked what they'd do with optimizations disabled tbh
20:53 fdobridge: <g​fxstrand> That doesn't make any sense.
20:54 fdobridge: <k​arolherbst🐧🦀> well...
20:54 fdobridge: <k​arolherbst🐧🦀> in the end only the hardware is actual truth
20:54 fdobridge: <k​arolherbst🐧🦀> but
20:55 fdobridge: <k​arolherbst🐧🦀> yeah.. no idea really... because with `YIELD` in the mix it's kinda hard to require the same `BSYNC`
20:55 fdobridge: <k​arolherbst🐧🦀> unless it's the same `BSYNC` or any `YIELD`
21:08 fdobridge: <g​fxstrand> According to the NVIDIA blog (https://developer.nvidia.com/blog/using-cuda-warp-level-primitives/):
21:08 fdobridge: <g​fxstrand> > The `__syncwarp()` primitive causes the executing thread to wait until all threads specified in mask have executed a `__syncwarp()` (with the same mask) before resuming execution. It also provides a memory fence to allow threads to communicate via memory before and after calling the primitive.
21:10 fdobridge: <g​fxstrand> So it doesn't care about it being the same instruction, it cares about them all being on a sync with the same mask. As long as you don't screw up your masks, the mask should uniquely identify the sync instruction within the set of active syncs
21:13 fdobridge: <g​fxstrand> My understanding of `bssy` is that it's basically `ballot(true)` and `break` is basically `bar &= ~ballot(pred)`
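Expressed with CUDA warp intrinsics, that reading would look roughly like this (a hypothetical mapping only; BSSY/BREAK/BSYNC are SASS-level ops with no direct CUDA C++ equivalent, and real code wouldn't be emitted this way):
```
#include <cstdint>

// bssy: snapshot the currently active lanes into the barrier register.
__device__ uint32_t bssy_model() {
    return __activemask();                   // ~ ballot(true)
}

// break: drop the lanes whose predicate is true from the barrier.
__device__ uint32_t break_model(uint32_t bar, bool pred) {
    return bar & ~__ballot_sync(bar, pred);  // bar &= ~ballot(pred)
}
```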
21:14 fdobridge: <p​ac85> I'm curious about this unstructured cf in nir, how do we retain the reconvergence information?
21:21 fdobridge: <k​arolherbst🐧🦀> sounds about right
21:21 fdobridge: <k​arolherbst🐧🦀> I suspect there is no `YIELD` thing for cuda, because that's something the compiler inserts apparently
21:22 fdobridge: <k​arolherbst🐧🦀> who is saying anything about the same mask?
21:22 fdobridge: <k​arolherbst🐧🦀> ohh wait
21:22 fdobridge: <k​arolherbst🐧🦀> mhhh
21:22 fdobridge: <k​arolherbst🐧🦀> that blog does...
21:22 fdobridge: <k​arolherbst🐧🦀> well.. my docs don't :ferrisUpsideDown:
21:23 fdobridge: <k​arolherbst🐧🦀> soo
21:23 fdobridge: <k​arolherbst🐧🦀> the question is.. is it something CUDA guarantees by lowering it, or is it something the hardware actually checks
21:23 fdobridge: <k​arolherbst🐧🦀> but it would make sense if it works like that
21:24 fdobridge: <k​arolherbst🐧🦀> let's check ptx...
21:25 fdobridge: <k​arolherbst🐧🦀> `bar.warp.sync will cause executing thread to wait until all threads corresponding to membermask have executed a bar.warp.sync with the same membermask value before resuming execution.`
21:25 fdobridge: <k​arolherbst🐧🦀> `For .target sm_6x or below, all threads in membermask must execute the same bar.warp.sync instruction in convergence, and only threads belonging to some membermask can be active when the bar.warp.sync instruction is executed. Otherwise, the behavior is undefined.` 😄
21:25 fdobridge: <k​arolherbst🐧🦀> figures
21:26 fdobridge: <k​arolherbst🐧🦀> so yeah.. assume my docs are trashy, but it could also be something that nvidia deals internally with
21:27 fdobridge: <g​fxstrand> For these sorts of things, PTX tends to match the hardware
21:27 fdobridge: <g​fxstrand> I'm willing to make that assumption
21:27 fdobridge: <g​fxstrand> What I don't know is what this looks like pre-Volta
21:27 fdobridge: <g​fxstrand> Or does that text mean you already have to be re-converged
21:28 fdobridge: <k​arolherbst🐧🦀> I think you have to push/pop in order
21:28 fdobridge: <k​arolherbst🐧🦀> there is a hierarchy though
21:28 fdobridge: <g​fxstrand> Yes, that's fine
21:28 fdobridge: <k​arolherbst🐧🦀> like... I think a break also pops all precont entries
21:28 fdobridge: <k​arolherbst🐧🦀> ehh maybe not all
21:28 fdobridge: <k​arolherbst🐧🦀> but anyway.. something like that was going on
21:30 fdobridge: <k​arolherbst🐧🦀> mhhhh
21:30 fdobridge: <k​arolherbst🐧🦀> actually...
21:30 fdobridge: <k​arolherbst🐧🦀> what if it needs to be the same _barrier_
21:30 fdobridge: <k​arolherbst🐧🦀> because that's trivial to check in hardware
21:31 fdobridge: <k​arolherbst🐧🦀> or rather.. easier than checking if each thread passed the same mask
21:31 fdobridge: <k​arolherbst🐧🦀> because then it would make sense
21:32 fdobridge: <k​arolherbst🐧🦀> which also explains why break doesn't return a barrier
21:32 fdobridge: <k​arolherbst🐧🦀> @gfxstrand ^^ I'd verify this theory if I were you
21:33 fdobridge: <m​henning> Yes, pre-volta a break will pop the precont entries for you
21:34 fdobridge: <k​arolherbst🐧🦀> and that means if you have two waiting points, each needs to get the exact same barrier passed in as well (which I'd assume you'd have to do)
21:34 fdobridge: <m​henning> Note that for that blog post, I'd guess that `__syncwarp()` becomes WARPSYNC in hardware, which could plausibly have different semantics from BSYNC
21:35 fdobridge: <k​arolherbst🐧🦀> ohh yeah..
21:35 fdobridge: <k​arolherbst🐧🦀> `WARPSYNC` waits explicitly on the same instruction
21:36 fdobridge: <k​arolherbst🐧🦀> it also has way stronger wording than `BSYNC`
21:36 fdobridge: <k​arolherbst🐧🦀> like it explicitly guarantees that the active mask of threads executing the _next_ instruction is the same as the mask passed to `WARPSYNC`
21:37 fdobridge: <k​arolherbst🐧🦀> (- threads who have exited)
21:37 fdobridge: <k​arolherbst🐧🦀> yeah...
21:37 fdobridge: <k​arolherbst🐧🦀> I think `__syncwarp` == `WARPSYNC`
21:37 fdobridge: <k​arolherbst🐧🦀> it also has this memory barrier thing going on
21:39 fdobridge: <k​arolherbst🐧🦀> `BSYNC` also states that it doesn't wait on sleeping threads as they are considered to be yielded...
21:40 fdobridge: <k​arolherbst🐧🦀> and `NANOSLEEP` doesn't even have an input barrier
21:43 fdobridge: <g​fxstrand> Can you throw a __syncwarp at the cuda compiler and find out?
21:45 fdobridge: <k​arolherbst🐧🦀> never done that...
21:46 fdobridge: <k​arolherbst🐧🦀> let's see...
21:47 fdobridge: <k​arolherbst🐧🦀> `cuda_runtime.h: No such file or directory` 🥲
21:49 fdobridge: <k​arolherbst🐧🦀> `error: #error -- unsupported GNU version! gcc versions later than 12 are not supported! The nvcc flag '-allow-unsupported-compiler' can be used to override this version check; however, using an unsupported host compiler may cause compilation failure or incorrect run time execution. Use at your own risk.` 🥲
21:49 fdobridge: <k​arolherbst🐧🦀> let's see if 12.3 is any better
21:50 fdobridge: <k​arolherbst🐧🦀> ehh 12.4 actually
21:51 fdobridge: <k​arolherbst🐧🦀> ahh that worked
21:53 fdobridge: <k​arolherbst🐧🦀> `/usr/local/cuda-12.4/bin/nvcc -arch=sm_75 test.cu --cubin`
21:54 fdobridge: <k​arolherbst🐧🦀> @gfxstrand perfect... nvidia optimizes it to a `NOP ;` 🥲
21:54 fdobridge: <k​arolherbst🐧🦀> even with `-O0`
21:54 fdobridge: <g​fxstrand> Awesome!
21:55 fdobridge: <k​arolherbst🐧🦀> lemme grab some demo code 😄
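For reference, a hypothetical kernel of the sort being discussed here (not the actual demo code from the gist linked below; `__syncwarp` needs sm_70+):
```
__global__ void sync_in_divergent_if(int* out, int n) {
    int tid = threadIdx.x;
    unsigned active = __activemask();                  // lanes running this kernel
    unsigned inside = __ballot_sync(active, tid < n);  // lanes that will take the branch
    int v = tid;
    if (tid < n) {                                     // divergent within the warp
        v *= 2;
        __syncwarp(inside);                            // sync only the lanes that got here
    }
    out[tid] = v;
}
```
Compiling with an `nvcc --cubin` invocation like the one above and disassembling is where the WARPSYNC vs BSSY/BSYNC lowering would show up.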
21:55 fdobridge: <m​henning> It's a little tricky to get it to avoid DCEing the warpsync, but I get
21:55 fdobridge: <m​henning> ```
21:55 fdobridge: <m​henning> /*00c0*/ MOV R4, 0xffffffff ; /* 0xffffffff00047802 */
21:55 fdobridge: <m​henning> /* 0x000fe40000000f00 */
21:55 fdobridge: <m​henning> /*00f0*/ CALL.ABS.NOINC `(__cuda_sm70_warpsync) ; /* 0x0000000000007943 */
21:55 fdobridge: <m​henning> /* 0x000fea0003c00000 */
21:55 fdobridge: <m​henning> ```
21:55 fdobridge: <m​henning> where that function is:
21:55 fdobridge: <m​henning> ```
21:55 fdobridge: <m​henning> __cuda_sm70_warpsync:
21:55 fdobridge: <m​henning> /*0000*/ WARPSYNC R4 ; /* 0x0000000400007348 */
21:55 fdobridge: <m​henning> /* 0x000fe80003800000 */
21:55 fdobridge: <m​henning> /*0010*/ RET.ABS.NODEC R20 0x0 ; /* 0x0000000014007950 */
21:55 fdobridge: <m​henning> /* 0x000fea0003e00000 */
21:55 fdobridge: <m​henning> ```
21:56 fdobridge: <g​fxstrand> Right, so we could implement it with vote and warpsync
21:57 fdobridge: <k​arolherbst🐧🦀> okay.. sooo
21:57 fdobridge: <k​arolherbst🐧🦀> yeah...
21:58 fdobridge: <k​arolherbst🐧🦀> @mhenning yes.. just have it inside an if :ferrisUpsideDown:
21:58 fdobridge: <k​arolherbst🐧🦀> apparently
21:58 fdobridge: <k​arolherbst🐧🦀> ah!
21:58 fdobridge: <k​arolherbst🐧🦀> https://gist.github.com/karolherbst/67cda39755f35b86372786addd2e73dc
21:58 fdobridge: <k​arolherbst🐧🦀> it also uses BSYNC
21:58 fdobridge: <k​arolherbst🐧🦀> after the if/else
21:58 fdobridge: <g​fxstrand> So... crazy plan... We have warpsync on Maxwell, right? We have vote, too. We can do the same thing for both.
21:58 fdobridge: <k​arolherbst🐧🦀> we don't
21:58 fdobridge: <k​arolherbst🐧🦀> it's Volta+
21:59 fdobridge: <k​arolherbst🐧🦀> WARPSYNC is the wrong thing here anyway
21:59 fdobridge: <k​arolherbst🐧🦀> BSSY+BSYNC is the right thing to converge around CF
21:59 fdobridge: <k​arolherbst🐧🦀> it just has funky semantics which don't matter for nvidia, because their compiler just optimizes the hell out of it
22:01 fdobridge: <g​fxstrand> Sure but I'm still confused on the semantics.
22:02 fdobridge: <k​arolherbst🐧🦀> anyway
22:02 fdobridge: <k​arolherbst🐧🦀> nvidia loop merges
22:03 fdobridge: <k​arolherbst🐧🦀> uhh
22:03 fdobridge: <k​arolherbst🐧🦀> and unrolls
22:03 fdobridge: <k​arolherbst🐧🦀> funky
22:03 fdobridge: <k​arolherbst🐧🦀> nvidia optimizes from the C++ side :ferrisUpsideDown:
22:03 fdobridge: <k​arolherbst🐧🦀> like constant arguments to the kernel
22:03 fdobridge: <k​arolherbst🐧🦀> what a pain
22:05 fdobridge: <k​arolherbst🐧🦀> omg is this impressive
22:10 fdobridge: <k​arolherbst🐧🦀> yeah.. so the issue is that nvidia only inserts BSSY+BSYNC when they absolutely have to
22:10 fdobridge: <k​arolherbst🐧🦀> and they optimize loops and ifs in a way that they don't really break up threads
22:11 fdobridge: <k​arolherbst🐧🦀> like nested loops are just one loop with predication inside, and one bra to break out
22:11 fdobridge: <k​arolherbst🐧🦀> that's it
22:11 fdobridge: <k​arolherbst🐧🦀> well and if threads diverge inside it doesn't matter because there is nothing needing converged threads in the first place
22:11 fdobridge: <k​arolherbst🐧🦀> so they only sync once, even it's all nested and everything
22:12 fdobridge: <k​arolherbst🐧🦀> so yeah.. as I said: loop merging and predication, that's what nvidia is doing
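A rough C++-level illustration of that shape (a sketch under the description above, not NVIDIA's actual output): the nested form first, then a single merged loop whose inner/outer bookkeeping would become predicated updates, with one branch out.
```
// Nested form: two loops, two potential reconvergence points.
__device__ int nested(int n, int m) {
    int acc = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            acc += i * j;
    return acc;
}

// Merged form: one loop, one exit; "which loop am I in" becomes predicated updates
// instead of separate control flow.
__device__ int merged(int n, int m) {
    int acc = 0, i = 0, j = 0;
    while (i < n) {                       // the single branch out
        bool inner = (j < m);             // would be a predicate, not a branch
        if (inner) { acc += i * j; j++; }
        else       { j = 0; i++; }
    }
    return acc;
}
```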
22:15 fdobridge: <k​arolherbst🐧🦀> but anyway.. cuda is too much to actually RE anything here, because they are doing crazy shit
22:17 fdobridge: <k​arolherbst🐧🦀> but they also blow up my single line loop into ~500 instructions?
22:18 fdobridge: <k​arolherbst🐧🦀> maybe 200..
22:18 fdobridge: <k​arolherbst🐧🦀> anyway
22:18 fdobridge: <k​arolherbst🐧🦀> a lot of things are going on there and I think ptx is the easier target 😄
22:21 fdobridge: <k​arolherbst🐧🦀> mhhh.. maybe I shouldn't have used an idiv...