00:32 imirkin: JayFoxRox: depends on the format - if you pick YUYV or UYVY, then UVPLANE isn't used
00:32 imirkin: JayFoxRox: if you pick NV12 or NV21, then UVPLANE is used
00:33 imirkin: JayFoxRox: have a look at how i drive the overlay in nouveau -- https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/gpu/drm/nouveau/dispnv04/overlay.c?h=v4.19-rc2
00:34 imirkin: JayFoxRox: should work with the drm driver as an overlay plane, not sure what your target platform is
00:35 imirkin: (and yes, confirmed that there are two sets of regs for double-buffering... afaik you can't even enable both at once)
00:35 imirkin: let me know if you still have questions after reading over the nouveau impl
00:36 JayFoxRox: imirkin: writing python code to poke registers on an xbox [our open-source compiler doesn't even have a C stdlib, so running nouveau won't be possible]
00:36 JayFoxRox: the existing code is very helpful tho! I will definitely play around with it some more
00:38 imirkin: ok cool
00:38 imirkin: there are funny stride requirements, so be careful
00:39 imirkin: that took a bit to work out
00:39 imirkin: also the brightness/saturation stuff had to be fit within the parameters of the kernel - i.e. no floats, but had to compute sin & cos...
00:48 imirkin: er, make that hue/saturation
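(For illustration of the "no floats, but need sin & cos" point above: one common way to do this in kernel code is a small fixed-point sine table. This is a rough sketch only, not the actual nouveau overlay.c implementation; the table granularity and the 4096 scale factor are made up for the example.)

#include <stdint.h>

/* sin(x) * 4096 for x = 0, 10, 20, ..., 90 degrees (first quadrant) */
static const int16_t sin_table[10] = {
	0, 711, 1401, 2048, 2633, 3138, 3547, 3849, 4034, 4096
};

static int fx_sin(int deg)	/* degrees, result scaled by 4096 */
{
	deg %= 360;
	if (deg < 0)
		deg += 360;
	if (deg < 90)
		return sin_table[deg / 10];		/* coarse 10-degree steps */
	if (deg < 180)
		return sin_table[(180 - deg) / 10];
	if (deg < 270)
		return -sin_table[(deg - 180) / 10];
	return -sin_table[(360 - deg) / 10];
}

static int fx_cos(int deg)
{
	return fx_sin(deg + 90);
}

/* hue/saturation terms as plain integers, e.g. for programming overlay regs */
static void hue_sat_terms(int hue_deg, int sat, int *sat_cos, int *sat_sin)
{
	*sat_cos = (sat * fx_cos(hue_deg)) >> 12;
	*sat_sin = (sat * fx_sin(hue_deg)) >> 12;
}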
03:52 rhyskidd: imirkin: that nvbios disp ver 0x22 fix for GP108 also works on Volta, so thanks for that
03:52 imirkin: rhyskidd: cool.
03:52 imirkin: i didn't check very carefully if anything new existed
03:53 imirkin: i just wanted to see the display scripts :)
03:54 rhyskidd: sure
06:22 pmoreau: RSpliet: If you are already over 32 and the coalescing does not bring you over 64, I would expect the driver to do the coalescing.
06:25 pmoreau: RSpliet: FWIW, here are a few primitives I used to force the driver to coalesce loads/stores: https://hastebin.com/eguwiwugov.cpp You’ll probably need to change a few things as it’s in CUDA, but it should still be doable in OpenCL.
12:10 RSpliet: pmoreau: Thanks man. I ended up performing some manual labour
12:10 RSpliet: pmoreau: https://hastebin.com/emukowelis.nginx
12:13 RSpliet: Didn't end up with a fantastic shader, uses 46GPRs... but at least I didn't fail at doing what I wanted to :-P
12:16 karolherbst: RSpliet: and is the shader performing better or worse or the same?
12:17 RSpliet: I'll test that in a bit. First trying to do an apples to apples comparison between my optimised shader and the so-called NVIDIA optimised one :-P
12:18 RSpliet: (ergo: doing the same thing with float2)
12:22 RSpliet: Inconclusive... the time it takes to execute the shader is too short :')
12:29 RSpliet: It is like 40% smaller though in number of insns
12:31 RSpliet: ~740->473 insns
12:34 imirkin: skeggsb: let me know if there's anything else those hdmi2 patches need for merging
12:37 karolherbst: RSpliet: that must be a shader with quite a lot of load/stores
12:38 karolherbst: or rather was one
12:44 RSpliet: It did 5*(vec length) loads, and 2*(vec length) stores. Additionally, there's a loop that contains four individual loads that could be coalesced into a single vec4 load. For register pressure reasons I went for float2 vectors, so I reduced 18 ldst ops to 8.
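(A sketch of the kind of manual coalescing being described, not RSpliet's actual kernel; the buffer layout and the scaling operation are made up. The idea is to replace several adjacent scalar loads/stores with wider vector accesses so fewer, larger memory transactions are issued.)

/* before: four separate 32-bit loads and stores per work-item */
__kernel void scale_scalar(__global const float *in, __global float *out, float k)
{
	size_t i = get_global_id(0) * 4;
	out[i + 0] = in[i + 0] * k;
	out[i + 1] = in[i + 1] * k;
	out[i + 2] = in[i + 2] * k;
	out[i + 3] = in[i + 3] * k;
}

/* after: two 64-bit (float2) loads and stores per work-item */
__kernel void scale_vec2(__global const float2 *in, __global float2 *out, float k)
{
	size_t i = get_global_id(0) * 2;
	out[i + 0] = in[i + 0] * k;
	out[i + 1] = in[i + 1] * k;
}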
12:46 RSpliet: I... oh. Yeah, they also unrolled a loop manually. Duh, that'll lead to a lot of extra code
12:49 karolherbst: extra code won't matter anyway ;)
12:49 karolherbst: just that less instructions are executed
12:49 RSpliet: Yeah, I can play that game too
12:49 RSpliet: with #pragma unroll
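(For reference, the pragma in question. A minimal OpenCL C sketch with an arbitrary loop body; the pragma is a vendor extension in OpenCL 1.2, but compilers that accept the CUDA-style form treat it as an unroll hint.)

__kernel void axpy(__global float *y, __global const float *x, float a)
{
	size_t base = get_global_id(0) * 8;
	#pragma unroll 8
	for (int j = 0; j < 8; j++)	/* unrolled: no branch per iteration */
		y[base + j] += a * x[base + j];
}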
12:54 RSpliet: With sufficient parallelism you're right btw, but for large GPR counts (and hence fewer resident warps), a high instruction count unfortunately has a negative effect on the icache. Sad, because high-GPR programs tend to have more instructions :-P
12:56 imirkin: simpler programs run faster than more complex ones...
12:58 karolherbst: RSpliet: well, but just unrolling loops doesn't mean you also have to increase the GPR count, it isn't like you need more space to store the state, you basically just copy some instructions around and skip the CFG ones ;)
12:59 karolherbst: of course further optimizations could lead to higher GPR counts or lower, but unrolling itself shouldn't have a significant impact here
13:00 RSpliet: Oh no, blind unrolling shouldn't affect GPR usage at all. But it's also pointless in most cases; branching overhead is easily masked by other threads.
13:01 karolherbst: you still execute more instructions in total ;)
13:03 karolherbst: no idea how much nv hw does branch prediction or if they do it at all, but with unrolling you also need less of that (and less headache with the issues around branch prediction in general anyway)
13:03 RSpliet: The way kepler is set up that doesn't have to make a difference though.
13:03 RSpliet: one warp scheduler dealing with control flow still leaves three dual-issuing warp schedulers to saturate your FPUs if you needed to
13:04 karolherbst: okay sure, but kepler is special here anyway.
13:09 RSpliet: There's limited bandwidth to the register file, and there's the probability of the instruction being an SFU instruction and blocking you. There are plenty of reasons why branching overhead can be negligible on more modern GPUs. Not saying it's never there, but I'd expect the impact to be visible in power consumption more than in perf for most moderately sized loops :-P
13:28 RSpliet: Heh, it doesn't seem to want to issue B128 reads from constmem.
13:29 RSpliet: nvdisasm implies this 128b read exists. Could it be wrong? :-D
13:29 karolherbst: RSpliet: ld vs mov I think
13:29 karolherbst: you can mov from 32 but not 128? something like that?
13:30 HdkR: Ooo, load coalescing? +1 :)
13:30 RSpliet: Nah, it's "LDC" according to nvdisasm
13:31 RSpliet: HdkR: it comes with alignment constraints that make it impractical in the general case
13:32 HdkR: aye
13:41 imirkin: and requires waiting on a barrier for it to complete on maxwell+
13:42 imirkin: RSpliet: and yeah, LDC with 128-bit stopped existing on kepler
13:42 imirkin: nvdisasm claims it's a thing, but it's not
14:11 RSpliet: imirkin: That confirms my observation, thanks
14:31 RSpliet: So much creativity in public benchmark kernels... "col = (ei+1) / d_Nr + 1 - 1;" is an absolute gem from rodinia
14:48 karolherbst: RSpliet: well somehow you have to benchmark compilers :p
14:53 RSpliet: I guess you can't complain about code quality for benchmarks. At least they present a nice case for floating point atomics.
14:53 RSpliet: They just don't know it...
14:54 karolherbst: well, benchmark devs don't do questionable "optimizations" because their goal is to produce crappy code in the first place :p
14:54 karolherbst: or at least so I hope
14:54 RSpliet: You're absolutely right. And they are absolutely wrong :-D
14:55 karolherbst: :D
14:55 RSpliet: They're supposed to create code representative of the real world.... with benchmarks like these they are insulting real-world devs
14:55 karolherbst: or telling them they should rather produce readable code, because compilers are good enough :p
14:55 karolherbst: there is a lot of crappy C code out there just because a dev tried to be "smart"
14:56 RSpliet: The kernels I'm looking at are crappy OpenCL C code because a dev tried to be "smart"
14:56 karolherbst: ;)
14:56 RSpliet: So... I guess this benchmark is representative of the real world ;-)
14:56 karolherbst: I guess it works both ways
14:56 karolherbst: :D
15:00 karolherbst: pendingchaos: there is one thing regarding xmad, which I didn't really check. If we have a mul(a & 0xffff, b & 0xffff) where a and b are ints, we could translate that into one XMAD instruction, right?
15:01 pendingchaos: right
15:01 karolherbst: I am wondering how common something like that is, as this could be a potential optimization you could do inside the shader
15:01 karolherbst: (adding & 0xffff to hint the compiler about valid values)
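(The source-level hint being discussed, as a sketch: masking both operands tells the compiler the product fits in 16x16 bits, so on Maxwell/Pascal it could in principle be lowered to a single XMAD instead of the usual multi-instruction 32-bit integer multiply. Illustration only; function name is made up.)

uint mul16(uint a, uint b)
{
	return (a & 0xffffu) * (b & 0xffffu);	/* 16x16 -> 32 bit multiply */
}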
15:02 RSpliet: or casting to half-float?
15:02 karolherbst: in GL?
15:02 RSpliet: ehh
15:02 RSpliet: half int :-P
15:02 karolherbst: also, integer
15:02 karolherbst: we don't have shorts in GL, do we?
15:03 karolherbst: I know vulkan has it, or spirv at least
15:03 karolherbst: or so I think
15:03 RSpliet: Mmm, apparently not indeed.
15:03 HdkR: AMD_gpu_shader_int16
15:04 karolherbst: HdkR: do you know if games are using it?
15:04 HdkR: NV_gpu_shader5 as well
15:04 karolherbst: ahh
15:04 karolherbst: HdkR: is this interesting for dolphin?
15:04 karolherbst: could improve integer muls on maxwell/pascal
15:04 HdkR: Nah, Dolphin needs 24bit integers
15:05 karolherbst: sure, but for everything?
15:05 pendingchaos: karolherbst: not promising: https://hastebin.com/kipesizose.txt
15:05 HdkR: The main bit that is heavy is the fragment TEV stages which all operate at 24bit
15:05 karolherbst: I see
15:05 karolherbst: HdkR: I was more thinking about shader inputs though
15:06 HdkR: Could be a neat idea in the future to use uint16_t directly on the vertex side to remove some CPU overhead on vertex processing I guess
15:06 karolherbst: pendingchaos: yeah.. sad. I guess it isn't really worth the effort
15:06 pendingchaos: disappears for a few minutes or so
15:07 karolherbst: HdkR: yeah, might be
15:07 RSpliet: That 16-bit extension is quite new it seems
15:08 RSpliet: August 2017
15:21 l2y: Is it okay that when I try to 'X -configure', X exits with error (drm failed to open device, first section of Troubleshooting), but still generates a config? X starts just fine through startx
15:22 l2y: And early kms is enabled as per archwiki, and it's not blacklisted of course
15:29 l2y: Never mind, with sudo the error doesn't happen, though the exit code is still 2
15:35 karolherbst: l2y: or don't use an X config at all if you don't configure it yourself anyway
15:35 karolherbst: in most cases you don't need one
15:36 l2y: Well, I am going to configure it myself. I want to use two separate X Screens. Nvidia blob attempts were mostly unsuccessful, so I want to see if nouveau can do better, karolherbst
15:37 karolherbst: ohh I see
15:37 l2y: Nouveau has a Zaphod option for it, currently merging configs from blob and free and going to try it out
15:38 l2y: With blob one can generate separate X Screens config using a GUI (nvidia-settings)
15:38 karolherbst: okay, so is it about dual-GPU setups, or do you really just want to have two separate things on two displays, without being able to move things between both displays?
15:39 pendingchaos: mwk: I think doing scheduling in a preprocessor is working out to be better than doing it in envyas
15:39 pendingchaos: it feels simpler overall
15:39 l2y: I have one GPU and two monitors, and I want these monitors to have different DPI, and since this is only possible by using a Screen per display...
15:41 karolherbst: l2y: mhhhhh, right
15:41 karolherbst: l2y: I thought that under wayland, or maybe in gnome, something like that could be handled? Or maybe it is still WIP?
15:42 karolherbst: just thinking... I thought there is something supporting this, but I don't know what it was for sure
15:44 karolherbst: pendingchaos: will you also add support for pushing envydis output with sched opcodes through it? I am thinking of envydis orig.bin | envysched | envyas, and then diffing both envydis outputs (the original and what envysched ended up inserting)
15:45 pendingchaos: that can be done
15:45 karolherbst: nice
15:45 pendingchaos: perhaps also a --schedule_all (or maybe just -a) argument to envydis?
15:46 pendingchaos: so you don't have to manually add .beginsched/.endsched
15:46 karolherbst: maybe we could then get tons of nvidia generated shaders and have it as a test to see if we aren't breaking things
15:46 karolherbst: pendingchaos: I ignore details until I see patches :p
15:46 karolherbst: or at least some mockup about how the input should look like
15:47 l2y: karolherbst: I read that nvidia cards' performance is very poor under Wayland due to refusal of EGL support from open-source projects, or whatever, so I haven't even tried it
15:47 karolherbst: but yeah, maybe having a parameter to control that might make sense
15:48 karolherbst: l2y: yeah, but I think compositors started to support it? Dunno, but at least with nouveau that might work out
15:48 karolherbst: never tried it as I never had this situation
15:48 l2y: Okay, thanks, will try
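(For reference, a rough sketch of the kind of two-screen "Zaphod" xorg.conf being discussed for a single nouveau GPU. All identifiers, output names, the BusID and the DisplaySize values are placeholders; adjust them to the actual hardware, e.g. via xrandr and lspci.)

Section "Device"
    Identifier  "nv-left"
    Driver      "nouveau"
    BusID       "PCI:1:0:0"
    Option      "ZaphodHeads" "DVI-I-1"
    Screen      0
EndSection

Section "Device"
    Identifier  "nv-right"
    Driver      "nouveau"
    BusID       "PCI:1:0:0"
    Option      "ZaphodHeads" "HDMI-A-1"
    Screen      1
EndSection

Section "Monitor"
    Identifier  "mon-left"
    DisplaySize 527 296      # in mm, picked to give the desired DPI
EndSection

Section "Monitor"
    Identifier  "mon-right"
    DisplaySize 344 193
EndSection

Section "Screen"
    Identifier  "screen-left"
    Device      "nv-left"
    Monitor     "mon-left"
EndSection

Section "Screen"
    Identifier  "screen-right"
    Device      "nv-right"
    Monitor     "mon-right"
EndSection

Section "ServerLayout"
    Identifier  "zaphod"
    Screen      0 "screen-left"
    Screen      1 "screen-right" RightOf "screen-left"
EndSection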
16:45 RSpliet: In OpenCL 1.2, is the literal 1.0 defined to be of type double? That seems to be how my version of the blob interprets them... but you'd expect the default to be float...
16:48 pmoreau: I would expect the rules to be similar to C/C++, which default to double.
16:49 RSpliet: It's undocumented in the 1.2 spec, but that's indeed what I'm observing
16:49 RSpliet: Which is another cock-up in Rodinia SRAD costing valuable cycles :')
16:51 pmoreau: I had that bite me once, with a PI constant defined without the 'f' suffix... :-/
16:52 pmoreau: Since then, I always turn on the compiler warnings about double usage in kernels (I don’t remember if it’s an option to nvcc or to ptxas).
16:52 RSpliet: I'm not sure, but there's another kernel whose code size I reduced by 20-25% just by sprinkling f's around
16:53 RSpliet: I can't believe how many f's I'm giving!
16:53 pmoreau: :-D
16:54 pmoreau: You’re paying respect too
16:57 pmoreau: But wow, 20-25% code reduction is not too shabby!
16:59 RSpliet: Well, it's because they're doing divisions with those literals. For some reason NVIDIA pulls in a whole lowering routine that is unnecessary with single-precision FP numbers (because really what the parboil bench is doing is a reciprocal)
16:59 RSpliet: pardon, Rodinia
16:59 RSpliet: Parboil has other issues :-P
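(A made-up OpenCL C kernel illustrating the literal-type pitfall above: with C99 rules, an unsuffixed 1.0 is a double, so the division is promoted and the compiler pulls in a double-precision divide routine, as observed above with the blob; the exact behaviour can also depend on whether the device exposes fp64. The 'f' suffix keeps everything in single precision.)

__kernel void do_recip(__global float *out, __global const float *in)
{
	size_t i = get_global_id(0);
	out[i] = 1.0 / in[i];		/* double-precision divide, then convert back */
	/* out[i] = 1.0f / in[i];	   single-precision, what was intended */
}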
17:00 pmoreau: From the OpenCL C 2.0 specification: “The OpenCL C programming language (also referred to as OpenCL C) is based on the ISO/IEC 9899:1999 C language specification (a.k.a. C99 specification) with specific extensions and restrictions. Please refer to the ISO/IEC 9899:1999 specification for a detailed description of the language grammar.”
17:01 RSpliet: Heh, yeah, well... the dialect of choice is still OpenCL 1.2. NVIDIA doesn't go beyond that I don't think... do they?
17:01 pmoreau: (And I think it is the same for earlier versions as well)
17:02 pmoreau: Nowadays it does support some functions from 2.0, but not that many I think; Phoronix had an article on that about a year ago IIRC, and I think Michael talked a bit about it in his latest OpenCL benchmarking article.
17:03 pmoreau: I just checked the OpenCL 1.2 specification (which includes the spec for OpenCL C as well), and it has the same sentence as the 2.0 one
17:03 pmoreau: Even the OpenCL C 1.0 specification had it
17:05 RSpliet: That clarifies. Thanks
17:05 pmoreau: (Looking at the examples in the spec, >99% of the time they define a float literal, they do put the 'f' at the end.)
17:06 RSpliet: That's probably because unlike application developers, the spec writers are competent :-P
17:06 RSpliet: Sorry, I'm ranting a bit.... people did put serious effort into this stuff and I should be more grateful O:-)
17:08 pmoreau: Ah ah ah :-D Well, you still find mistakes in specs too, like the specs for OpenCL 1.0, 1.1, 1.2, 2.0 and 2.2 say something about action "foo" in context "bar", but the OpenCL 2.1 spec says nothing.
17:17 RSpliet: This one I'll forgive them: don't open-code a clamp, you'll waste 7 instructions :-P
17:18 HdkR: clamp is a scary function to call. I'll just code my own </s>
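(The clamp case above as a sketch, in OpenCL C: the open-coded version can end up as a chain of compares and branches/selects, while the builtin maps to min/max, which the hardware handles cheaply. Function names are made up.)

float clamp_open_coded(float x, float lo, float hi)
{
	if (x < lo)
		return lo;
	if (x > hi)
		return hi;
	return x;
}

float clamp_builtin(float x, float lo, float hi)
{
	return clamp(x, lo, hi);	/* OpenCL C builtin: fmin(fmax(x, lo), hi) */
}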
17:21 RSpliet: Reminds me of the time mwk proudly claimed there's an NVIDIA ISA that lets you encode "clamsex porn"
17:21 RSpliet: (clamped, sign-extended, predicated or-not?)
17:22 HdkR: hm
17:27 mwk: that was clampsex
17:27 mwk: it was the motion vector processor for VP2/VP3/VP4
17:28 mwk: hm
17:28 mwk: or not, the instruction is called "clamps" in envydis
17:28 mwk: unfortunate
17:28 mwk: porn is definitely encodable though
17:29 mwk: though it's for *writing* to the predicate file, not for predicating an instruction
17:30 RSpliet: does the "n" encode "not"? In that case I guess the boolean operation is most often referred to as NOR... but don't want to be the one spoiling the fun :-D
17:31 mwk: nope
17:31 mwk: NOR is ~(a | b)
17:31 mwk: ORN is a | ~b
17:32 mwk: also known as ORC (or complement) in some ISAs... powerpc IIRC
17:33 RSpliet: Right!
17:34 mwk: porn basically does $pX |= ~(instruction predicate output)
17:57 karolherbst: RSpliet, pmoreau: Nvidia actually implements CL 2.0 afaik, although they don't advertise it
17:57 karolherbst: but the compiler supported the CL 2.0 language for quite a long time already
17:58 linkmauve: Some user has a gt710 and GNOME is apparently very sluggish, is this expected or could there be some known issue?
17:58 linkmauve: On Debian testing.
17:59 karolherbst: linkmauve: uhm, depends on how many effects are enabled and the resolution
17:59 karolherbst: the gt710 is quite slow
17:59 karolherbst: linkmauve: reclocking is worth a shot
17:59 karolherbst: as it should work on those
18:00 linkmauve: It’s a fully vanilla Debian apparently.
18:00 linkmauve: karolherbst, any easy instructions for that?
18:01 karolherbst: linkmauve: "echo 0xf | sudo tee /sys/kernel/debug/dri/0/pstate"
18:02 karolherbst: but maybe it isn't as stable on his GPU, but then we could look into why not
18:02 linkmauve: 0xf is auto reclocking?
18:03 karolherbst: no, just the highest one
18:03 karolherbst: most/all keplers come with 0x7 (lowest) and 0xf (highest)
18:03 linkmauve: Ok.
18:03 karolherbst: 0xa, 0xd and 0xe are also seen sometimes
18:03 karolherbst: the file can be read out for all available perf levels
18:04 karolherbst: last line being power supply: clocks...
18:04 karolherbst: kind of like a current state line
18:07 karolherbst: linkmauve: "nouveau.config=NvClkMode=15" can be set for setting it on boot automatically
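(For reference: reading the pstate debugfs file lists the available perf levels, with a final line showing the current power supply and clocks, roughly like the sketch below. The exact format and the clock values vary by board; these numbers are made up.)

$ sudo cat /sys/kernel/debug/dri/0/pstate
07: core 405 MHz memory 810 MHz
0f: core 954 MHz memory 1800 MHz
AC: core 405 MHz memory 810 MHz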
18:57 pendingchaos: karolherbst, imirkin: about Maxwell ISA: do write barriers always signal after read barriers?
18:57 pendingchaos: so: https://pastebin.com/raw/LiS8J7rY
18:58 karolherbst: imirkin: currently working on implementing GetGraphicsResetStatusARB and now I am thinking about how to implement it on the kernel side. Not sure if new nvif ioctl or just adding another NOUVEAU_GETPARAM_ variant
18:58 karolherbst: but the param stuff doesn't really fit as we can't set the reset status
18:59 karolherbst: pendingchaos: I think so, yes
18:59 karolherbst: pendingchaos: the other way around would be weird and wouldn't matter anyway
18:59 karolherbst: reading regs after writing the result?
19:01 karolherbst: pendingchaos: ohh, do you plan to implement reuse as well? currently I don't think we actually use it though
19:01 pendingchaos: codegen uses it
19:01 karolherbst: yeah
19:01 pendingchaos: I'm not sure if I'll implement it
19:01 karolherbst: but not in our hand written stuff, or do we?
19:01 karolherbst: yeah... sounds quite messy to implement it
19:01 pendingchaos: we don't in gm107.asm
19:02 karolherbst: anyway, something which we might want to keep in mind
19:02 pendingchaos: IIRC, it didn't seem to give much benefit when I was experimenting with replacing some imuls with xmads
19:02 pendingchaos: but maybe that's just a Pascal thing
19:04 karolherbst: well, the benefit isn't that big, but I think it reduces the latency of the instruction a bit
19:04 karolherbst: or maybe lower stall count needed?
19:06 karolherbst: "// Reuse a register from the second blocking registers" inside maxas
19:07 karolherbst: ohhhh
19:07 karolherbst: ohh that is quite smart
19:07 pendingchaos: I don't think it lowers the stall count needed
19:07 karolherbst: no, it caches the content of the register
19:07 pendingchaos: after looking at some random nvidia-generated code
19:08 karolherbst: sooo
19:08 karolherbst: if you have a write barrier on insn0, you can wait on that instead of having to setup a new read barrier on insn1
19:08 karolherbst: in cases you need both
19:09 pendingchaos: MaxAs assumes write barriers always signal after read barriers?
19:09 karolherbst: like: opcode $r0 $r1 $r2 (write barrier); opcode2 $r2 $r1.reuse $r2 (no barriers); opcode3 $r1 $r0 (wait on barrier from opcode)
19:10 karolherbst: so normally you would have to wait on the second instruction to signal the read barrier, but maybe with reuse you don't have to as the content is already fetched
19:11 karolherbst: maxas also writes something about bank conflicts: https://github.com/NervanaSystems/maxas/wiki/SGEMM#calculating-c-register-banks-and-reuse
19:12 pmoreau: Reducing register bank conflicts: that sounds like a topic for RSpliet! :-)
19:13 karolherbst: yeah
19:14 karolherbst: seems like .reuse is just a hint to cache the register
19:14 karolherbst: and the GPU sometimes does it automatically
19:15 karolherbst: pmoreau: I seriously don't want to know what those maxas guys are up to, if optimizing the built-in math library by 1% is a valid business concern for them
19:17 pmoreau: :-D
19:18 karolherbst: imagine how much hardware you have to be running to let people work on something with a questionable outcome in the first place, such that even if it fails the risk is still worth it compared to just buying more GPUs
19:23 karolherbst: interesting, those are actually intel guys
19:28 pmoreau: Interesting indeed. Were they getting some inspiration for their future discrete GPU? :-D
21:20 RSpliet: .reuse is probably for "register bypass" logic. Quite common in in-order CPUs... a bit more complex for superscalar
21:22 RSpliet: pmoreau, karolherbst: ^ I'm a little surprised it's implemented (but not too...), as it mainly reduces issue latency. Wouldn't help with sufficient warp-parallelism
21:22 RSpliet: Although... well, okay, there's register bank throughput to consider, and I guess power consumption too :-P
21:27 karolherbst: RSpliet: I assume it might be able to skip some barriers as well
21:27 karolherbst: *allow
21:37 l2y: There is nothing else I can do to prevent tearing except setting GLXVBlank, right?
21:39 l2y: Got the Zaphod working, finally. And setting DisplaySize more or less correctly calculates DPI. Tearing is the only thing that remains
21:42 l2y: Also, not sure what is going on, but without setting DisplaySize Xorg more or less correctly reported my monitor size in mm (through xdpyinfo), but when I set these numbers in config, it has reported bigger numbers in mm -- incorrect ones. I expected it to adjust the DPI from 96x96 to 101x101, instead it has adjusted the display size in mm to match 96x96 DPI :D What the...
21:46 l2y: I must say the above happens only for one display, while the other one gets the more or less correct DPI of 182x182, which is the expected behaviour
21:47 l2y: Should I open a bug somewhere, with all the additional info attached?
22:05 RSpliet: karolherbst: what kind of barriers are these?
22:05 RSpliet: I haven't looked into post-Kepler ISA much unfortunately...
22:05 karolherbst: RSpliet: read/write barriers on register read/writes
22:06 karolherbst: RSpliet: like you have to create a read barrier if you read from registers, same for writing if the instruction has a variable execution length
22:06 RSpliet: All reg writes? Or just ld/st/tex?
22:06 karolherbst: and instructions consuming those registers have to wait on those barriers
22:06 RSpliet: Ahh ok!
22:06 karolherbst: like imul has to do it
22:07 RSpliet: They really stripped down HW scheduling to the bone... wow
22:07 karolherbst: so if you have imul $r2 $r0 $r1; iadd $r0 $r3 $r4, imul has to set a read barrier on $r0 and iadd has to wait on it
22:08 karolherbst: because the iadd could actually execute faster and overwrite $r0 while imul is reading it ;)
22:08 karolherbst: I had to fix a carry bit issue caused by something like that
22:08 RSpliet: Makes sense. In case there was any doubt, that also completely rules out them doing register renaming :-P
22:08 RSpliet: (but I suspect there was no such doubt :-D)
22:09 karolherbst: RSpliet: https://cgit.freedesktop.org/mesa/mesa/commit/src/gallium/drivers/nouveau/codegen/lib?id=e4f675dc42887734b43b549784955e81d284b202
22:09 karolherbst: "With significant big work groups"
22:09 karolherbst: which was quite fun as smaller work groups didn't trigger that issue
22:10 karolherbst: "rd 0x1" read barrier 2 enabled
22:10 karolherbst: wt 0x2 wait on read barrier 2 (bitmask)
22:10 karolherbst: so you can wait on multiple barriers at once ;)
22:11 karolherbst: RSpliet: also, you don't want to track down bugs like those :D
22:11 RSpliet: karolherbst: heh, robclark must be able to help you with that ;-)
22:12 RSpliet: But that takes some serious codegen work...
22:13 RSpliet: You *could* wait for a read two/three instructions back... but equally the distance could guarantee you don't need a barrier. And in the end there's only a handful of them, so how do you handle running out? :-D
22:14 karolherbst: luck?
22:14 karolherbst: but we have quite a few of them actually
22:15 karolherbst: 8 or something?
22:16 karolherbst: RSpliet: or you can just wait on the oldest one and reuse the barrier slot
22:16 RSpliet: Luck... with a capital F :-P
22:17 karolherbst: anyway, with good scheduling you don't run into those problems anyway
22:17 RSpliet: Yeah, I'm sure there's good strategies, but... a lot of decision logic. And good analysis. No need to enable a barrier if there's no WAR hazard
22:17 karolherbst: and nvidia seems to use two nops + full stall to flush any barriers
22:18 karolherbst: RSpliet: enabling them is for free basically
22:18 karolherbst: minimum stall counts just goes up by 1
22:18 karolherbst: from 2 to 3 or something
22:18 karolherbst: or +1 generally
22:18 karolherbst: but again, only the minimum stall count
22:18 karolherbst: if you need to stall more anyway, there is no penalty
22:19 RSpliet: Near-free in HW, but not if your analysis somehow assumes that barrier cannot be re-used. Which is why solid analysis could help reduce the need for barrier flushes :-)
22:19 karolherbst: yeah dunno
22:19 karolherbst: the alternative is worse anyway :p
22:20 karolherbst: you kind of get a 50% perf penalty if you wait on all barriers, max stall count, set all barriers every instruction
22:20 RSpliet: For graphics... nobody'll notice, right? :-D
22:20 karolherbst: soo even if you need to reuse some barrier slots from time to time ;)
22:21 RSpliet: Oh no, it's not that you need to re-use them... but your algorithm needs to know which one it can re-use :-)
22:22 RSpliet: *the algorithm. Sorry, not trying to give you even more work :-D
22:22 karolherbst: sure
22:22 karolherbst: all
22:22 karolherbst: you can always wait on them and enable it
22:22 karolherbst: even if you always choose the worst one
22:22 karolherbst: doesn't matter
22:23 karolherbst: not setting them is dangerous, or not waiting on them ;)
22:26 RSpliet: I *guess* if you have guaranteed in-order issue, then you could also just *shift* the barrier set forward, and the barrier wait backwards
22:26 RSpliet: Insn 4 depends on Insn 1, Insn 3 depends on Insn 2. If 3 waits on 2, 4 has implicitly waited on 1...