16:51 fdobridge: <g​fxstrand> @zmike. Should I try running the GL or ES CTS again? I'm a bit lost on where everything's at at the moment.
17:36 fdobridge: <z​mike.> @gfxstrand what's your motivation for asking?
17:37 fdobridge: <z​mike.> I don't think anything has significantly changed in terms of CLs being merged in CTS
17:37 fdobridge: <z​mike.> and I don't think I've merged anything that would affect nvk this week-ish?
18:16 fdobridge: <g​fxstrand> Okay, I saw a WSI thing go in
18:17 fdobridge: <z​mike.> oh
18:17 fdobridge: <z​mike.> yeah I guess there's that
18:17 fdobridge: <z​mike.> but EGL caselists aren't required for...some versions of cts
18:54 fdobridge: <g​fxstrand> Okay, I'll try again now that those are fixed and see where we're at.
19:25 fdobridge: <g​fxstrand> ES tests are dying in EGL. Some sort of GPU hang that doesn't give us any useful information. It's in the `multi_context` tests and looks like what happens when we have too many contexts active in the GSP at the same time.
19:26 fdobridge: <g​fxstrand> Anyway, we'll see how my desktop GL run goes
19:28 fdobridge: <g​fxstrand> Of course my sparse fixes aren't merged yet so I can't actually submit anything I run today
19:28 fdobridge: <g​fxstrand> But we'll see how it goes
20:08 fdobridge: <z​mike.> I run cts pretty regularly
20:08 fdobridge: <z​mike.> everything is passing for me, though that's with all cts changes applied and whatever the hell has gathered in my branches after 2 weeks of nir fugue
20:09 fdobridge: <z​mike.> I don't think I run the EGL list though since historically that one always had issues so I never added it to my scripts
20:13 fdobridge: <g​fxstrand> Yeah, we need to figure out EGL for ES conformance. We can do GL without.
20:13 fdobridge: <g​fxstrand> Which, honestly, is the one I care about
20:17 fdobridge: <z​mike.> if I ever finish nir hell I'll get back to it in the course of fixing some of the issues that have been piling up
21:12 fdobridge: <g​fxstrand> @karolherbst @mhenning I'm poking about with `bssy`. It appears that the result, when copied to a GPR via `bmov` is, indeed, simply `ballot(true)`.
21:12 fdobridge: <g​fxstrand> I'm not sure how to get at bsync, though.
21:12 fdobridge: <k​arolherbst🐧🦀> bsync doesn't write to the barrier afaik
21:12 fdobridge: <g​fxstrand> Incidentally, this means one could optimize `bssy+bmov` to `vote`
21:13 fdobridge: <k​arolherbst🐧🦀> I wouldn't bet on it, as there might be some internal magic they are doing
21:13 fdobridge: <g​fxstrand> Eh, if they're doing internal magic then my spilling strategy is hosed.
21:13 fdobridge: <g​fxstrand> But that's totally testable...
21:14 fdobridge: <k​arolherbst🐧🦀> if one could optimize that to vote, why doesn't nvidia do it then?
21:14 fdobridge: <k​arolherbst🐧🦀> mhh well.. vote can't write to a barrier anyway
21:14 fdobridge: <g​fxstrand> 🤷🏻‍♀️
21:14 fdobridge: <g​fxstrand> You wouldn't want to most of the time
21:14 fdobridge: <k​arolherbst🐧🦀> also, bssy has a constant thread mask as the input
21:15 fdobridge: <k​arolherbst🐧🦀> ehh wait
21:15 fdobridge: <k​arolherbst🐧🦀> is it the mask?
21:15 fdobridge: <k​arolherbst🐧🦀> ehh no, it's the jump target...
21:15 fdobridge: <k​arolherbst🐧🦀> it had an input predicate
21:15 fdobridge: <k​arolherbst🐧🦀> but vote has that as well.. but also an output predicate
21:16 fdobridge: <k​arolherbst🐧🦀> the one thing I'm still curious about is that apparently there is an aliasing bit _somewhere_, but I have no idea what it's even doing
21:16 fdobridge: <g​fxstrand> I suspect nvidia doesn't because those registers are magic and have special scheduling rules and it's easier to just use them for barriers and only bmov when you need to spill
21:16 fdobridge: <k​arolherbst🐧🦀> yeah.. probably
21:17 fdobridge: <g​fxstrand> But it's good to know that they are masks.
21:17 fdobridge: <k​arolherbst🐧🦀> I wonder if the order is funky
21:18 fdobridge: <k​arolherbst🐧🦀> like.. there is an aliasing bit, but it's not part of the barrier, that much I know
21:18 fdobridge: <k​arolherbst🐧🦀> and it gets cleared on `BMOV.CLEAR`
21:19 fdobridge: <k​arolherbst🐧🦀> any BMOV writing zero actually
22:33 fdobridge: <g​fxstrand> Actually, it looks like `~ballot(true)`
22:35 fdobridge: <g​fxstrand> No, just `ballot(true)`.
22:35 fdobridge: <g​fxstrand> Ugh. Writing tests is complicated
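For context, `ballot(true)` here is the 32-bit mask of lanes currently active in the warp. A minimal CUDA sketch of that value (kernel and buffer names are made up); the observation above is that BSSY appears to deposit exactly this mask in the barrier register, readable back via BMOV:
```
// "ballot(true)" is the mask of currently active lanes.  In CUDA that's
// __activemask() (equivalently __ballot_sync with a true predicate).
// The R/E result suggests BSSY+BMOV reads back the same value.
__global__ void dump_active_mask(unsigned int *out)
{
    if (threadIdx.x % 2 == 0) {
        // Only even lanes are active in this branch, so for a full warp
        // the mask read here is 0x55555555.
        out[threadIdx.x] = __activemask();
    }
}
```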
22:56 fdobridge: <g​fxstrand> Okay, so figuring out all this sync stuff is tricky. It looks like the hardware has some sort of deadlock detection where, if every thread is blocked, it kicks all the active `bsync`s.
22:57 fdobridge: <g​fxstrand> This is good for not getting the GPU stuck. Tricky for R/E
23:01 fdobridge: <k​arolherbst🐧🦀> mhhh
23:01 fdobridge: <k​arolherbst🐧🦀> but that basically means that after a `bsync` you are not guaranteed to have all threads converged?
23:02 fdobridge: <g​fxstrand> Not if you do it wrong and deadlock, you aren't.
23:03 fdobridge: <k​arolherbst🐧🦀> right, just means that bsync kicking threads waiting on other bsyncs is indeed how the hardware works
23:03 fdobridge: <g​fxstrand> Maybe?
23:03 fdobridge: <g​fxstrand> It just means the HW has deadlock detection
23:03 fdobridge: <k​arolherbst🐧🦀> I mean.. the docs state if all threads are blocked/sleeping/exited in the mask, it kicks them all
23:04 fdobridge: <g​fxstrand> Yes
23:04 fdobridge: <g​fxstrand> Which is fine
23:04 fdobridge: <k​arolherbst🐧🦀> regardless of where the threads currently wait
23:04 fdobridge: <g​fxstrand> Sure
23:04 fdobridge: <k​arolherbst🐧🦀> only `WARPSYNC` guarantees that after unblocking _all_ threads execute the next instruction
23:05 fdobridge: <k​arolherbst🐧🦀> next relative to `WARPSYNC`
23:05 fdobridge: <g​fxstrand> And I've determined that it only kicks the current set of hung threads. If you wait again, that wait works.
23:05 fdobridge: <g​fxstrand> So it's not like one hang disables bsync or something (not that I would expect it to).
23:06 fdobridge: <g​fxstrand> That sounds like a different but fairly important distinction.
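A rough model of the deadlock-release behaviour described above, as a hedged host-side sketch (plain C++ that also builds as CUDA); the states and the release rule are guesses from the observed behaviour, and all names are invented:
```
// Guessed model: if no thread in the warp can make progress any more,
// every thread parked on a bsync is released at once.  Only the
// currently hung set is affected, so a later bsync still waits normally.
enum class ThreadState { Active, Inactive, BlockedOnBsync, Exited };

static void maybe_release_hung_warp(ThreadState warp[32])
{
    for (int i = 0; i < 32; i++)
        if (warp[i] == ThreadState::Active || warp[i] == ThreadState::Inactive)
            return;                          // someone can still run; do nothing
    for (int i = 0; i < 32; i++)
        if (warp[i] == ThreadState::BlockedOnBsync)
            warp[i] = ThreadState::Active;   // kick every pending bsync together
}
```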
23:06 fdobridge: <k​arolherbst🐧🦀> I wonder what the hardware does if you have multiple masks fulfilled
23:06 fdobridge: <k​arolherbst🐧🦀> like.. different masks, or even subsets
23:06 fdobridge: <k​arolherbst🐧🦀> yeah.. `WARPSYNC` is like the cuda `__syncwarp` thing
23:07 fdobridge: <g​fxstrand> That's tricky to figure out because most of the cases where that would happen are also deadlock cases. 😭
23:07 fdobridge: <k​arolherbst🐧🦀> including the memory barrier
23:07 fdobridge: <k​arolherbst🐧🦀> well..
23:07 fdobridge: <k​arolherbst🐧🦀> worst case we trust the docs, which say that bsync doesn't care about anything besides the threads' status
23:08 fdobridge: <g​fxstrand> That's important because with bsync nothing actually guarantees that a subgroup op executes in lock step. Depending on `$details`, you may have threads converged but still executing separately.
23:08 fdobridge: <k​arolherbst🐧🦀> I wonder how `YIELD` plays into all of this
23:08 fdobridge: <k​arolherbst🐧🦀> like...
23:08 fdobridge: <k​arolherbst🐧🦀> nvidia puts that in front of a loop continue, e.g.
23:09 fdobridge: <k​arolherbst🐧🦀> well
23:09 fdobridge: <k​arolherbst🐧🦀> sometimes
23:09 fdobridge: <k​arolherbst🐧🦀> it's one example
23:09 fdobridge: <g​fxstrand> Which means my subgroup implementation may not be 100% correct. I may need to throw in some warpsync/bsync to ensure things don't get too far out of sync.
23:10 fdobridge: <k​arolherbst🐧🦀> yeah...
23:10 fdobridge: <k​arolherbst🐧🦀> I think that's what nvidia is doing for subgroup ops.. at least Ben threw some of those in front of them for codegen because I think that's what nvidia was doing
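The CUDA-level shape of that pattern, as a hedged sketch (kernel and buffer names are placeholders; the assumption is that the explicit sync in front of the warp-wide op is what WARPSYNC/BSYNC ends up covering in the SASS):
```
// Because Volta+ lanes of a warp can run independently, an explicit sync
// is placed in front of the warp-wide op so every participating lane has
// actually arrived before the shuffle/reduction runs.
__global__ void warp_sum(const int *in, int *out)
{
    int v = in[threadIdx.x];
    __syncwarp();                                      // reconverge the warp
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffffu, v, offset); // warp-wide subgroup op
    if (threadIdx.x == 0)
        *out = v;
}
```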
23:11 fdobridge: <k​arolherbst🐧🦀> but that brings us to the question: what's the actual purpose of bssy+bsync
23:12 fdobridge: <k​arolherbst🐧🦀> though as long as you have nothing nested going on it's good enough
23:15 fdobridge: <k​arolherbst🐧🦀> but yeah.. that kinda depends on what happens if you have subsets of masks going on
23:17 fdobridge: <k​arolherbst🐧🦀> @gfxstrand ohhh.. I have an idea
23:17 fdobridge: <k​arolherbst🐧🦀> what if what happens is determined at arrival?
23:18 fdobridge: <k​arolherbst🐧🦀> because that would entirely explain the semantics
23:18 fdobridge: <k​arolherbst🐧🦀> like.. the thread either blocks or it unblocks the group
23:18 fdobridge: <k​arolherbst🐧🦀> so if you arrive in an inner bsync, the outer one does nothing
23:18 fdobridge: <k​arolherbst🐧🦀> because there is no active thread arriving
23:18 fdobridge: <k​arolherbst🐧🦀> (in case some lonely thread has been waiting on the outer one forever)
23:20 fdobridge: <k​arolherbst🐧🦀> and you can't have multiple threads executing in different places within a subgroup afaik, so there is only one active group of threads per subgroup
23:20 fdobridge: <k​arolherbst🐧🦀> so if half the threads arrive at the outer bsync, they block and transfer execution to the other half
23:21 fdobridge: <k​arolherbst🐧🦀> and they loop until they arrive at the outer one as well, converging and unblocking everything
23:21 fdobridge: <g​fxstrand> Nope! Not on Turing+. Turing can totally have multiple groups of threads in different parts of the program going at the same time.
23:22 fdobridge: <g​fxstrand> Or maybe Volta?
23:22 fdobridge: <k​arolherbst🐧🦀> in a subgroup?
23:22 fdobridge: <g​fxstrand> Yup
23:22 fdobridge: <k​arolherbst🐧🦀> mhhh
23:22 fdobridge: <g​fxstrand> There might be an enable bit for it somewhere
23:22 fdobridge: <k​arolherbst🐧🦀> might explain why they have the barrier file now...
23:22 fdobridge: <g​fxstrand> And it might be compute-only
23:22 fdobridge: <k​arolherbst🐧🦀> but yeah.. that's kinda funky
23:23 fdobridge: <k​arolherbst🐧🦀> maybe that's why they added WARPSYNC?
23:23 fdobridge: <k​arolherbst🐧🦀> because it didn't exist before
23:23 fdobridge: <g​fxstrand> I think that's more because they want independent forward progress where things may not be nicely nested.
23:24 fdobridge: <k​arolherbst🐧🦀> they added YIELD for forward progress
23:24 fdobridge: <g​fxstrand> It's certainly why they added `__syncwarp()`
23:25 fdobridge: <k​arolherbst🐧🦀> maybe bssy+bsync is just good enough for 99% of all cases and they simply ignore you can get very unlucky timing, and for the cases where it really matters you use WARPSYNC just to be sure
23:26 fdobridge: <b​utterflies> Volta
23:28 fdobridge: <g​fxstrand> I kinda suspect that `warpsync` and `bsync` are the same under the hood, just targeting different registers and with `bsync` having the deadlock detection.
23:29 fdobridge: <k​arolherbst🐧🦀> mhh
23:29 fdobridge: <g​fxstrand> I've definitely seen `warpsync` hang the GPU
23:29 fdobridge: <k​arolherbst🐧🦀> I doubt it, because warpsync also acts as a memory barrier
23:29 fdobridge: <g​fxstrand> What kind of memory barrier?
23:30 fdobridge: <k​arolherbst🐧🦀> shared I think... let's see
23:30 fdobridge: <k​arolherbst🐧🦀> mhhh
23:31 fdobridge: <k​arolherbst🐧🦀> it doesn't say
23:31 fdobridge: <k​arolherbst🐧🦀> just that the memory ordering of participating threads is the same as if you'd executed MEMBAR
23:31 fdobridge: <g​fxstrand> I expect it's `membar.cta` then
23:31 fdobridge: <k​arolherbst🐧🦀> well
23:32 fdobridge: <k​arolherbst🐧🦀> only participating threads
23:32 fdobridge: <g​fxstrand> Sure
23:32 fdobridge: <k​arolherbst🐧🦀> so even weaker I'd say
23:32 fdobridge: <k​arolherbst🐧🦀> but maybe it's just membar.cta
23:32 fdobridge: <k​arolherbst🐧🦀> whatever `__syncwarp` says 😄
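That matches the documented CUDA behaviour: `__syncwarp()` also orders memory accesses among the participating threads. A small sketch of why that part matters (names invented, single warp per block assumed):
```
// Lane 0 writes shared memory and the rest of the warp reads it after
// __syncwarp(), which acts as both a reconvergence point and a memory
// barrier for the participating threads.  Assumes one warp per block.
__global__ void broadcast_lane0(const int *in, int *out)
{
    __shared__ int slot;
    if (threadIdx.x == 0)
        slot = in[0];
    __syncwarp();                 // execution + memory ordering for the warp
    out[threadIdx.x] = slot;
}
```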
23:33 fdobridge: <k​arolherbst🐧🦀> there is a funky `.EXCLUSIVE` flag on `WARPSYNC` though
23:33 fdobridge: <g​fxstrand> What's that do?
23:34 fdobridge: <k​arolherbst🐧🦀> only a single set of threads pass at once
23:34 fdobridge: <k​arolherbst🐧🦀> like..
23:34 fdobridge: <k​arolherbst🐧🦀> `EXCLUSIVE` operates on a gpr only
23:34 fdobridge: <k​arolherbst🐧🦀> so there can be different sets of threads
23:35 fdobridge: <k​arolherbst🐧🦀> apparently allows for funky access controls for `REDUX`, `SHFL`, `VOTE`, etc...
23:36 fdobridge: <k​arolherbst🐧🦀> so you can prevent different sets of threads from messing up each other's subgroup ops
23:50 fdobridge: <m​henning> yeah, I was thinking about this yesterday, and I think a reasonable guess for the semantics is something like:
23:50 fdobridge: <m​henning>
23:50 fdobridge: <m​henning> each thread is in one of three states:
23:50 fdobridge: <m​henning> - active (executing the current instruction),
23:50 fdobridge: <m​henning> - inactive (eligible to execute instructions, but not executing the current instruction), or
23:50 fdobridge: <m​henning> - blocked (waiting on a bsync)
23:50 fdobridge: <m​henning>
23:50 fdobridge: <m​henning> and a bsync is something like:
23:50 fdobridge: <m​henning> ```
23:50 fdobridge: <m​henning> bsync(int mask) {
23:50 fdobridge: <m​henning>     if all threads in mask are either blocked or active {
23:50 fdobridge: <m​henning>         unblock all threads in mask
23:50 fdobridge: <m​henning>     } else {
23:50 fdobridge: <m​henning>         block all active threads
23:50 fdobridge: <m​henning>     }
23:50 fdobridge: <m​henning> }
23:50 fdobridge: <m​henning> ```
23:50 fdobridge: <m​henning> which then wouldn't require any kind of checking for "is this the same barrier?" - that's all implicit from the masks
23:51 fdobridge: <k​arolherbst🐧🦀> yeah
23:51 fdobridge: <k​arolherbst🐧🦀> but also
23:51 fdobridge: <k​arolherbst🐧🦀> the active thread's mask decides which threads are relevant
23:51 fdobridge: <k​arolherbst🐧🦀> so you enter a bsync and that mask is the only one that matters
23:51 fdobridge: <k​arolherbst🐧🦀> ohhh wait
23:52 fdobridge: <k​arolherbst🐧🦀> what if it cascades?
23:52 fdobridge: <k​arolherbst🐧🦀> like..
23:52 fdobridge: <k​arolherbst🐧🦀> if a thread was blocked on a bsync
23:52 fdobridge: <k​arolherbst🐧🦀> and it gets woken up
23:52 fdobridge: <k​arolherbst🐧🦀> does it check again?
23:52 fdobridge: <k​arolherbst🐧🦀> and blocks if one of the threads in that mask is running?
23:53 fdobridge: <m​henning> I would guess not - that sounds like it's harder to implement and I'm not sure what it buys you
23:53 fdobridge: <m​henning> but also I don't know that we know those details
23:53 fdobridge: <k​arolherbst🐧🦀> nested bsyncs actually working
23:54 fdobridge: <k​arolherbst🐧🦀> maybe
23:54 fdobridge: <m​henning> You don't need to re-check for nested to work
23:54 fdobridge: <k​arolherbst🐧🦀> well
23:54 fdobridge: <k​arolherbst🐧🦀> if you have some threads on an outer, and some in an inner
23:54 fdobridge: <k​arolherbst🐧🦀> mhhh
23:54 fdobridge: <k​arolherbst🐧🦀> though they'd still all pass regardless if you are unlucky
23:55 fdobridge: <k​arolherbst🐧🦀> so yeah.. probably doesn't change anything
23:55 fdobridge: <m​henning> Only the inner will be able to arrive with all threads active or blocked
23:55 fdobridge: <k​arolherbst🐧🦀> well
23:55 fdobridge: <k​arolherbst🐧🦀> depends on how unlucky the timing is
23:55 fdobridge: <k​arolherbst🐧🦀> the inner one could arrive later
23:55 fdobridge: <k​arolherbst🐧🦀> ehh
23:55 fdobridge: <k​arolherbst🐧🦀> or earlier
23:55 fdobridge: <k​arolherbst🐧🦀> depends on the code really
23:56 fdobridge: <m​henning> If the inner arrives first, then those threads get woken up again and the outer doesn't pass. If the outer arrives first, the inner is still active and the outer doesn't pass
23:57 fdobridge: <k​arolherbst🐧🦀> mhhh
23:57 fdobridge: <k​arolherbst🐧🦀> yeah so they could only arrive all at the same time
23:57 fdobridge: <k​arolherbst🐧🦀> but yeah.. the outer wouldn't be able to, because the inner ones would already be unblocked at some point