00:57airlied[d]: skeggsb9778[d]: even submitting a push buf with just SET_OBJECT in it fails
01:14skeggsb9778[d]: that's really strange considering it works from the kernel
01:18airlied[d]: maybe busted IRQs?
01:19skeggsb9778[d]: *possibly*
01:20skeggsb9778[d]: guess you could test that by forcing priv->base.uevent to false in nv84_fence.c
01:20skeggsb9778[d]: that should fall back to the polling path
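A minimal sketch of that hack, assuming the usual `priv->base.uevent = true` assignment in `nv84_fence_create()`:
```c
/* drivers/gpu/drm/nouveau/nv84_fence.c, nv84_fence_create() - test hack:
 * disable the non-stall event path so fence waits fall back to polling
 * the sequence value instead of waiting for the interrupt. */
priv->base.uevent = false;    /* normally set to true */
```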
01:24airlied[d]: that won't help with syncobjs will it?
01:57airlied[d]: vfn seems to be firing for gsp at least, but not seeing nonstall I don't think
02:59airlied[d]: skeggsb9778[d]: any idea if we are handling the pushbuf extended base stuff?
03:08skeggsb9778[d]: no - but any pushbufs should be allocated low enough that it's not a problem
03:11skeggsb9778[d]: nvk puts the reserved area at (1<<39) .. (1<<40), which fits in the "normal" range
03:13mangodev[d]: orangecolors: what's with the addition on irc? i can't tell if this is a routine gpu dev thing or a copypasta
03:14orowith2os[d]: karolherbst[d]: ^
03:14mangodev[d]: damn, good to know
03:15mangodev[d]: i usually wake up to see walls of text in this chat, have been wondering for a while what was going on
03:25skeggsb9778[d]: skeggsb9778[d]: oh, i'm just realising this might not be true - it works for the kernel's own pushbuf (and the GL driver, because its vma is allocated by the kernel), but not necessarily true for nvk's
03:27skeggsb9778[d]: i'd planned on posting the patches, but i'll implement that first - thanks!
03:29skeggsb9778[d]: though, how does nvk avoid this on earlier GPUs? they're 49-bit addr space, but only 40-bit pushbuf addrs
03:30airlied[d]: I wonder by luck
03:31airlied[d]: oh we only use 40-bits VM
03:32skeggsb9778[d]: ah, that works too.
03:32skeggsb9778[d]: do you still want me to fix the kernel for >=HOPPER_CHANNEL_GPFIFO, or save that for nova?
03:36airlied[d]: probably not worry unless it's causing this strangeness 🙂
03:37skeggsb9778[d]: it only adds the extra 17 address bits to the pushbuf address
03:37skeggsb9778[d]: so, can't imagine it's causing it
04:00airlied[d]: looks like irq handling is acting strange, like the intr gets blocked by the storm code at the testing phase
04:15airlied[d]: oops screwed the kernel on the machine, no idea how to recover it 😛
04:19airlied[d]: oh I found a console
05:09mangodev[d]: i'm curious
05:09mangodev[d]: are there any performance or software compatibility implications behind https://gitlab.freedesktop.org/mesa/mesa/-/commit/2fc4c98aaff49d54187724f0452fce6df23c60bf ? from an outsider perspective, it sounds like a really good thing
05:46airlied[d]: it's just a correctness fix, might fix some misrendering, but who knows where
05:51airlied[d]: I'm going down the "maybe some aarch64 memory model bug in the event/irq handling" road, not sure
06:18airlied[d]: skeggsb9778[d]: appears fences on chan 80 work, but chan 88 fails
06:18airlied[d]: never get an irq
06:25skeggsb9778[d]: are those channels on different engines?
06:26skeggsb9778[d]: and - is it consistent? or is it just "chan X fail" randomly?
06:27airlied[d]: it's consistent, https://paste.centos.org/view/raw/4bafdf73 is some hacky debug
06:27airlied[d]: it emits fence 10 on channel 88, but only sees 9 on readback
06:31skeggsb9778[d]: hmm, can you post with debug=gsp=debug somewhere please?
06:31skeggsb9778[d]: i think i might know what's going on
06:32airlied[d]: https://paste.centos.org/view/raw/b423dedd ignore my debug 🙂
06:33airlied[d]: also GART: 0 seems wrong
06:33skeggsb9778[d]: yeah - i was pondering just deleting both those lines
06:33skeggsb9778[d]: nvkm already prints out a "fb: " line with vram size
06:33skeggsb9778[d]: and the gart one hasn't made sense since nv4x 😛
06:45airlied[d]: sticking fence context in GTT didn't help, random try 🙂
06:46skeggsb9778[d]: ok - so, gh100/gbxxx has 16 intr leaves (vs 8), and i thought that might be related
06:47skeggsb9778[d]: but. all the nonstall vectors reported by gsp are lower than that
06:47skeggsb9778[d]: so, probably *not* related
06:47skeggsb9778[d]: (you can hack the 8 to 16 in nvkm_vfn_new_() though if you want to try anyway)
07:10airlied[d]: yeah doesn't change it
07:15airlied[d]: I suspect the doorbell is busted, I've hacked userspace to not vm map the push buf, and I don't see a vm fault
07:15airlied[d]: which suggests maybe nothing is reading it
07:19skeggsb9778[d]: airlied[d]: haven't looked into it (and likely won't tonight) - but https://github.com/NVIDIA/open-gpu-kernel-modules/blob/main/src/nvidia/src/kernel/gpu/mem_mgr/channel_utils.c#L534 ?
07:21airlied[d]: oh nothing like a flush harder hack, I'll see if I can hack that now, but maybe tomorrow also
07:40airlied[d]: looks messy to implement, like needs new nvif interface messy
07:53airlied[d]: though hacking a vram read in didn't seem to help
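For reference, the linked channel_utils.c code reads something back to flush posted writes before ringing the doorbell; a minimal sketch of that pattern, where `usermode_map`, `DOORBELL_OFFSET` and `doorbell_token` are placeholders rather than existing nouveau identifiers:
```c
/* "Flush harder" idea from NVIDIA's channel_utils.c, sketched in kernel
 * style: make sure the GPFIFO entry and GP_PUT stores have landed before
 * the doorbell write. All three names below are hypothetical. */
wmb();                                   /* order the GPFIFO/GP_PUT stores  */
(void)readl(usermode_map);               /* read-back flushes posted writes */
writel(doorbell_token, usermode_map + DOORBELL_OFFSET);
```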
08:01airlied[d]: okay looks like tomorrow problem 🙂
13:46gfxstrand[d]: mangodev[d]: Not really. It's just a bug fix. It's possible there's an app it fixes but I don't think we know of one.
16:13gfxstrand[d]: https://www.collabora.com/news-and-blog/news-and-events/nvk-enabled-for-maxwell,-pascal,-and-volta-gpus.html
16:13gfxstrand[d]: The MR is next in Marge's queue.
16:19gfxstrand[d]: First I post the blog post. Then I get a dozen comments on various social media about all the typos. :blobcatnotlikethis:
16:19gfxstrand[d]: I is vary good at the Englarsh.
16:19tiredchiku[d]: Try Grammarly!
16:19tiredchiku[d]: -# this portion of the channel is not sponsored by Grammarly
16:20lru: are we all underwater? </youtube grammarly commercial>
16:21zmike[d]: > We have a community **memory** that's made good progress on the Kepler B compiler
16:21zmike[d]: nice
16:21mohamexiety[d]: now you can add discord to the list of social media
16:22mhenning[d]: gfxstrand[d]: As a nit pick, "Maxwell (GTX 700, 800, and 900 series)" feels a little misleading. "Maxwell (some GTX 700 and 800 series, most 900 series)" might be more accurate
16:22mohamexiety[d]: oh yeah I missed this one. you can also say GTX 750 series instead since it was really just the 750 and 750Ti
16:58mangodev[d]: gfxstrand[d]: so now that <=Maxwell support is mostly there, and Kepler support seems to be a long-term stretch goal
16:58mangodev[d]: …what next? blackwell support? better conformance (greater extension support)? better performance (more focus on PRs like the prepass optimizer, zcull, etc)? more bugfixes (making sure already implemented features work properly, such as usage of certain hardware components or WSI)? what's the next primary target goal, assuming NVK is soon ready to move on from support for older gpus?
17:01asdqueerfromeu[d]: tiredchiku[d]: "Writing's not that easy"
17:06mohamexiety[d]: blackwell is currently wip fwiw
17:08karolherbst[d]: marysaka[d]: the fails on the coop matrix MR are expected, right?
17:09marysaka[d]: karolherbst[d]: with NAK_DEBUG=serial there should only be stuffs around matrixmuladd_cross
17:09karolherbst[d]: though uhm.. I should post it on the MR before I force push the branch
17:09karolherbst[d]: ahh.. let me try that
17:10karolherbst[d]: rebasing the MR was fun.. you did the work before the metadata rework..
17:11karolherbst[d]: `Unsupported op: {%r677 %r678} = imma.m8n8k16.u8.u8 %r665 %r671 {%r597 %r620}` mhh
17:11karolherbst[d]: I get a bunch of those
17:12karolherbst[d]: Passed: 2002/31414 (6.4%)
17:12karolherbst[d]: Failed: 582/31414 (1.9%)
17:14gfxstrand[d]: mhenning[d]: Yeah, Maxwell is really hard to describe in text. :blobcatnotlikethis:
17:14karolherbst[d]: marysaka[d]: e.g. `dEQP-VK.compute.pipeline.cooperative_matrix.khr_r.subgroupscope.matrixmuladd.float16_float32.buffer.rowmajor.linear` fails
17:14asdqueerfromeu[d]: mangodev[d]: ~~Maybe increasing the Mesamatrix score could be nice?~~
17:14asdqueerfromeu[d]: Right now these seem to be the extensions that ANV supports but NVK doesn't: VK_KHR_acceleration_structure, VK_KHR_cooperative_matrix, VK_KHR_deferred_host_operations, VK_KHR_performance_query, VK_KHR_ray_query, VK_KHR_ray_tracing_maintenance1, VK_KHR_ray_tracing_pipeline, VK_KHR_ray_tracing_position_fetch, VK_EXT_attachment_feedback_loop_dynamic_state, VK_EXT_device_memory_report,
17:14asdqueerfromeu[d]: VK_EXT_external_memory_host, VK_EXT_fragment_shader_interlock, VK_EXT_global_priority, VK_EXT_global_priority_query, VK_EXT_mesh_shader, VK_EXT_shader_atomic_float, VK_EXT_shader_atomic_float2, VK_EXT_shader_stencil_export, VK_ANDROID_external_memory_android_hardware_buffer, VK_ANDROID_native_buffer, VK_AMD_buffer_marker, VK_AMD_texture_gather_bias_lod, VK_INTEL_shader_integer_functions2 and
17:14asdqueerfromeu[d]: VK_EXT_legacy_dithering
17:15karolherbst[d]: ohh
17:15karolherbst[d]: I haven't updated `as_sm70_op_match`
17:15marysaka[d]: karolherbst[d]: that's weird :blobcatnotlikethis:
17:17gfxstrand[d]: There's even some 700-series that are Fermi. :blobcatnotlikethis:
17:17karolherbst[d]: wild...
17:17karolherbst[d]: so 2000 of those tests don't really run any shaders?
17:17karolherbst[d]: or is it all lowered to non MMA stuff
17:18karolherbst[d]: gfxstrand[d]: yeah
17:18karolherbst[d]: it's wild
17:32karolherbst[d]: it passes now
17:38karolherbst[d]: marysaka[d]: btw.. it seems like you missed a few tests, because things like `dEQP-VK.compute.shader_object_spirv.cooperative_matrix.khr_r.subgroupscope.convert.input_uint32_t_output_float32_t.physical_buffer.rowmajor` also assert :blobcatnotlikethis:
17:39karolherbst[d]: `cmat_convert`
17:40karolherbst[d]: maybe I focus on the mentioned pattern for now and force push once that's without a regression..
17:40karolherbst[d]: but anybody having looked into cmat_convert already?
17:41marysaka[d]: huh that's new :nya_confused:
17:41karolherbst[d]: ohh looks like that's something new?
17:41karolherbst[d]: mhhh
17:41mohamexiety[d]: mhenning[d]: so this led me to try a lot of different values and I don't think it's an enum, but it's also a bit weird. here's what I found:
17:41mohamexiety[d]: - the value tracks the size: doubling the shared memory size doubles the amount added above the 0x800 base
17:41mohamexiety[d]: - the value starts at 0x802 (with 0x800 being reserved for no shared mem)
17:41mohamexiety[d]: - it goes up in increments of 2: I get 0x802 for everything up to 256B; anything above 256B and up to 512B (even 384B) gives 0x804; then 0x806 for >512B, etc.
17:41mohamexiety[d]: - the above leads me to conclude that shared memory comes in blocks of 128B, but the HW can only allocate them in pairs? hence the 256B granularity.
17:41mohamexiety[d]: - for the 48KiB maximum allowed in Vulkan, the value maxes out at 0x980, which fits the prior point, because 48KiB is 0x180 blocks of 128B.
17:41mohamexiety[d]: what do you think? (also cc: gfxstrand[d])
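A small sketch of the encoding those observations imply; the function name and the 256B round-up are guesses from the dumps above, nothing confirmed:
```c
#include <stdint.h>

/* Encoding implied by the dumps above: a 0x800 base, plus 2 per 256B of
 * shared memory (i.e. a count of 128B blocks that only ever comes out
 * even). Name and rounding behaviour are guesses. */
static uint32_t shared_mem_field(uint32_t shared_bytes)
{
    if (shared_bytes == 0)
        return 0x800;                                  /* observed "no shared mem" value */
    uint32_t blocks_256 = (shared_bytes + 255) / 256;  /* round up to 256B */
    return 0x800 + 2 * blocks_256;                     /* 256B -> 0x802, 48KiB -> 0x980 */
}
```
Worked check: 48KiB = 192 × 256B, so 0x800 + 2·192 = 0x800 + 0x180 = 0x980, matching the observed maximum.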
17:42karolherbst[d]: yeah... uhh...
17:42karolherbst[d]: guess there is work for me to do
17:42mohamexiety[d]: mohamexiety[d]: as an aside, are we also sure that on Ada and older our granularity is 1KiB? because when I did look through the Ada dumps, I saw smaller values like 256B
17:46gfxstrand[d]: mohamexiety[d]: That doesn't sound crazy
17:46mohamexiety[d]: I also have no clue at all what the initial '8' stands for, but it does get incremented when we go over 0x99
17:47mohamexiety[d]: gfxstrand[d]: my pet peeve here is why does it only get incremented in 2s. like, that's kind of a waste of encoding here if the point is to be efficient :thonk:
17:47gfxstrand[d]: 🤷🏻♀️
17:47mohamexiety[d]: I tried all the intermediate odd values I could think of and I couldn't get it to give me an odd number at all
18:13karolherbst[d]: marysaka[d]: familiar with any of the subgroup scope stuff?
18:14karolherbst[d]: or is that the normal stuff?
18:16karolherbst[d]: seeing fails with like `dEQP-VK.compute.pipeline.cooperative_matrix.khr_r.subgroupscope.matrixmuladd.float16_float16.physical_buffer.rowmajor.linear` and I'm wondering if that's me messing up the rebase or something else
18:24karolherbst[d]: ahh.. it's fixed with `NAK_DEBUG=serial` :blobcatnotlikethis:
18:24karolherbst[d]: maybe I should fix those first...
18:27marysaka[d]: karolherbst[d]: oooh wait so the CTS tests were renamed a bit then
18:28marysaka[d]: I remember the physical_buffer one being broken but that "subgroupscope" in the string wasn't ringing a bell
18:28karolherbst[d]: I see
18:30karolherbst[d]: `Failed: 114/31414 (0.4%)` it's getting better
18:31karolherbst[d]: yeah.. so now all fails are that cmat_convert stuff
18:33karolherbst[d]: pain..
19:05mangodev[d]: asdqueerfromeu[d]: i'm not super concerned with rt support because the driver can barely run most games that have optional support for rt
19:05mangodev[d]: but full ogl support would be nice
19:19airlied[d]: karolherbst[d]: I think convert is new tests, but also there were some changes to the mesa cmat spirv code from Intel recently, either landed or in an MR
19:20karolherbst[d]: right... I've already lowered cmat_convert
19:20karolherbst[d]: passes even
19:21karolherbst[d]: airlied[d]: though I've rebased Mary's branch on top of main + enabled the scheduling, and it seems there are still some issues with scheduling
19:21karolherbst[d]: though I suspect you might have fixes on one of your branches for that?
19:22karolherbst[d]: I'll probably fix the remaining fails + wire up the shorter bypass latency and try to get that in first
19:25karolherbst[d]: `Failed: 24/31414 (0.1%)` getting somewhere
19:43gfxstrand[d]: We need to merge cmat
19:44gfxstrand[d]: Which is to say that I need to sit down with it in a couple weeks and go over all the pieces. I want to give a good hard think about what it does to the compiler just to be sure
19:44gfxstrand[d]: And I need to think about cmat2
19:44gfxstrand[d]: I just have no brain right now
19:49karolherbst[d]: well it's not really doing much to the compiler tho
19:53karolherbst[d]: anyway.. gonna fix the remaining fails, shouldn't be too hard
20:11karolherbst[d]: marysaka[d]: ever looked into HMMA.884 for turing?
20:11karolherbst[d]: was it somehow broken or something?
20:23airlied[d]: karolherbst[d]: I was looking at the serial stuff last week when I got into blackwell, so it's not fixed, though my wip branch might work around it by optimising other compiler stuff
20:24airlied[d]: gfxstrand[d]: not all of cmat2 needs the function pointer stuff, we could do the basic stuff without it, but not sure it's all that useful without the bits that need func ptrs
20:24karolherbst[d]: airlied[d]: did you fix the `matrixmuladd_cross` tests?
20:24airlied[d]: but it mostly seems like NIR could do most of it once the NAK basic coopmat stuff lands
20:25karolherbst[d]: but yeah.. let's land the basic stuff first anyway, it's not that much anyway
20:25airlied[d]: no those are weird and I was going to trace the nvidia driver to work out what they are doing
20:25karolherbst[d]: okay
20:25airlied[d]: it seems like the two bits for unsigned/signed choosing of A/B are not as simple as we use them
20:25karolherbst[d]: mhh
20:25airlied[d]: the cross tests are the only ones that do different signs across the A/B/C/R matrices
20:26karolherbst[d]: yeah.. I suspect the lowering is mildly broken
20:26karolherbst[d]: ahh
20:26airlied[d]: and I was playing whack-a-mole with them, fix a bunch, break a different bunch
20:26karolherbst[d]: different signs in what way?
20:26airlied[d]: I was at the "write a truth table out and dump the shaders from the blob" stage
20:27airlied[d]: nvidia hmma can have signed/unsigned int8 inputs and says accum and output are always signed32
20:27airlied[d]: imma rather
20:27airlied[d]: however nvidia advertise wider support than that
20:27karolherbst[d]: mhhh
20:27airlied[d]: they offer unsigned 32 accum/outputs
20:27airlied[d]: at the API level
20:27airlied[d]: and the tests pass there
20:28karolherbst[d]: there is the `.SAT` flag, but not sure how that's related
20:28karolherbst[d]: I'm on Turing btw
20:28airlied[d]: there are also API saturate bits but they don't advertise those
20:28airlied[d]: I did nearly all the work on turing so far
20:29karolherbst[d]: it's a bit weird, because the instruction certainly doesn't have any flags on the output besides .SAT
20:29karolherbst[d]: well..
20:29mhenning[d]: mohamexiety[d]: That sounds plausible. The bottom bit being zero doesn't bother me too much, it's common for some of these fields to have requirements like that
20:29karolherbst[d]: C/D is always signed
20:30airlied[d]: yes but nvidia exposes both signed/unsigned at the API
20:31karolherbst[d]: ohhh
20:31airlied[d]: I'm also not sure if signs mean much on 2's complement addition
20:31karolherbst[d]: mhhh
20:31karolherbst[d]: maybe we should disable unsigned u32 output for now and fix s32
20:31karolherbst[d]: and then add all the more funky things on top
20:34airlied[d]: yeah that's another option, I think we just wanted to API match the binary
20:34mohamexiety[d]: mhenning[d]: yeah my main pet peeve is really just the increment being in 2s and also that 8 in the middle that gets incremented
20:34mohamexiety[d]: but I think this is fine for now
20:34mohamexiety[d]: thanks!
20:34karolherbst[d]: I'd rather do small iterative steps than landing all at once, especially if the other things need weirdo lowering
20:36karolherbst[d]: there are still the `dEQP-VK.compute.pipeline.cooperative_matrix.khr_a.subgroupscope.matrixmuladd_cross.sint8_sint32.buffer.rowmajor.linear` tests that fail
20:36karolherbst[d]: but at least that matches more what the hardware is capable off
20:36karolherbst[d]: *of
20:36airlied[d]: https://gitlab.freedesktop.org/airlied/mesa/-/commit/e97d315ac5ba610b5c7f181e483d243846d82506 did that in my wip branch
20:37karolherbst[d]: mhhhh
20:38karolherbst[d]: looking at the code...
20:38karolherbst[d]: we could enable u -> s
20:38karolherbst[d]: but let's fix s -> s first 😄
20:39karolherbst[d]: mhhhh
20:50karolherbst[d]: airlied[d]: one thing I'm a bit confused about is why are 16x16x32 and 16x8x32 int matrices advertised, even tho IMMA is only 8x8x16 in hardware? Shouldn't we rather start with supporting 8x8x16 first and then go from there?
21:01gfxstrand[d]: airlied[d]: They're pretty integral. But also, as long as we can lower quick enough, I'm not actually worried about the function pointers. They'll get inlined. I'm mostly worried about the potential mess it makes in NIR.
21:09karolherbst[d]: how can I make nvk print the final shader binary?
21:09airlied[d]: NAK_DEBUG=print
21:09airlied[d]: MESA_SHADER_CACHE_DISABLE=1
21:11karolherbst[d]: okay... mhh, so the cross test tests with a u8 and a s8 input
21:12karolherbst[d]: I think I'm not entirely happy how the `NAK_CMAT_TYPE_*` enum works, because there is a bit of implicit int/float stuff going on...
21:16karolherbst[d]: `nvdisasm error : Unrecognized operation for functional unit 'uC' at address 0x00000150` great 🙃
21:18karolherbst[d]: that's cctl...
21:20karolherbst[d]: the `CCtlOp`
21:22karolherbst[d]: soo `IVAll`, `IVAllP`, `WBAll` and `WBAllP` make nvdisasm unhappy
21:24karolherbst[d]: I see....
21:25karolherbst[d]: yeah.. they need the `.C` or `.I` flags
21:25karolherbst[d]: and then have a minimum wait of 8/11
21:26karolherbst[d]: well.. or `.D`
21:30karolherbst[d]: marysaka[d]: airlied[d] any tips on how to figure out to implement `NAK_CMAT_TYPE_M8N8K16` for `compute_matrix_offsets`?
21:31marysaka[d]: karolherbst[d]: I think PTX have docs around the layout
21:31karolherbst[d]: or rather.. how do I create the mental model of mapping all the elements to the threads
21:32karolherbst[d]: ahh
21:32karolherbst[d]: right
21:32marysaka[d]: https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#matrix-fragments-for-mma-m8n8k16
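For the 8-bit m8n8k16 shape, my reading of that PTX section gives roughly the per-lane mapping below; treat it as a sketch to double-check against the docs, not a confirmed layout:
```c
/* Per-lane element coordinates for mma.m8n8k16 (s8/u8 in, s32 accumulate),
 * as I read the PTX "Matrix Fragments for mma.m8n8k16" section: each lane
 * holds 4 bytes of A, 4 bytes of B and 2 dwords of C/D. Sketch only. */
void m8n8k16_lane_layout(unsigned lane_id,
                         unsigned a_row[4], unsigned a_col[4],
                         unsigned b_row[4], unsigned b_col[4],
                         unsigned c_row[2], unsigned c_col[2])
{
    unsigned group  = lane_id >> 2;   /* quad index, 0..7 */
    unsigned in_grp = lane_id & 3;    /* lane within quad, 0..3 */

    for (unsigned i = 0; i < 4; i++) {
        a_row[i] = group;             /* A: 8x16, one row per quad          */
        a_col[i] = in_grp * 4 + i;    /*    4 consecutive columns per lane  */
        b_row[i] = in_grp * 4 + i;    /* B: 16x8, mirrored: rows per lane   */
        b_col[i] = group;             /*    one column per quad             */
    }
    for (unsigned i = 0; i < 2; i++) {
        c_row[i] = group;             /* C/D: 8x8 s32, 2 elements per lane  */
        c_col[i] = in_grp * 2 + i;
    }
}
```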
21:32karolherbst[d]: anyway.. work for tomorrow 🙃
21:32marysaka[d]: *nods nods*
21:32karolherbst[d]: nice nice
21:32karolherbst[d]: yeah.. I decided to ditch the 16x8x32 int matrix thing and just do 8x8x16 for now 🙃
21:32marysaka[d]: I'm not a fan of `NAK_CMAT_TYPE` either, but wasn't too sure how to handle that mess in a clean way in general :blobcatnotlikethis:
21:33marysaka[d]: The other sizes that are lowered were mostly to match the blob, but yeah
21:33karolherbst[d]: and if 8x8x16 works, because it's like a single imma, then figure out how to properly implement the bigger ones
21:33karolherbst[d]: yeah...
21:33marysaka[d]: tho those ones were working last time I tried
21:33karolherbst[d]: well
21:34karolherbst[d]: you never added 8x8x16 int matrices
21:34karolherbst[d]: so all the code is missing for that one
21:34marysaka[d]: did I forget that, uurgh
21:34karolherbst[d]: yeah..
21:34karolherbst[d]: you added 16x8x8 float tho
22:30airlied[d]: I think the one tricky thing is the B matrix has to be transposed
22:30karolherbst[d]: in the cross test?
22:31karolherbst[d]: though we do have `MOVM.MT88` to move and transpose an 8x8 matrix
22:32karolherbst[d]: well.. 16 bit matrices that is
22:32airlied[d]: if the API gives you B as a row matrix you have to transpose it
22:32karolherbst[d]: ahh
22:32airlied[d]: same as if it gives you A as a column one
22:32karolherbst[d]: right
22:35gfxstrand[d]: mohamexiety[d]: That's fine. Code up what you have. Once we have NVK dispatching shaders, we'll have a lot more tools at our disposal to R/E it the rest of the way.
22:36karolherbst[d]: anyway.. something is odd with the CCTL code.. anybody already looking into it? Or knows why nvdisasm is unhappy?
22:38gfxstrand[d]: What gen?
22:39karolherbst[d]: turing
22:39karolherbst[d]: karolherbst[d]: and later messages
22:40karolherbst[d]: but I don't really know where those flags are, so I couldn't really figure out what encoding would make it happy
22:40karolherbst[d]: or well.. I was too lazy to try out all the bits
22:41karolherbst[d]: docs imply that the hw is happy to turn anything invalid into a nop
22:42karolherbst[d]: it even says that the hw won't complain
22:42karolherbst[d]: it just won't do anything
22:45airlied[d]: mohamexiety[d]: it might not be an even/odd, it might be a packed bitfield offset by a bit 🙂
22:47mohamexiety[d]: airlied[d]: hmm what do you mean? I am not following, sorry
22:49airlied[d]: you might think you are looking at 8 bits packed into 0..7, but it might be 7 bits packed into 1..7
22:49airlied[d]: bit 0 could be something else
22:49mohamexiety[d]: ahh.. hm
22:49airlied[d]: it would be pretty aggressive packing to do that, but not insane
22:50mohamexiety[d]: yeah
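To make the two readings concrete (illustrative arithmetic only, nothing here is confirmed):
```c
/* Two possible decodings of the low byte of an observed value like 0x804: */
unsigned raw = 0x804 & 0xff;     /* = 0x04 */

/* reading 1: bits 0..7 are a single count that only ever comes out even */
unsigned count_a = raw;          /* 4 -> "4 x 128B blocks"                 */

/* reading 2: bit 0 is a separate flag, bits 1..7 hold the real count */
unsigned flag    = raw & 1;      /* meaning unknown, 0 in all dumps so far */
unsigned count_b = raw >> 1;     /* 2 -> "2 x 256B blocks"                 */
```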
23:11gfxstrand[d]: karolherbst[d]: Yeah, CCTL is annoying. I seem to remember struggling with it but I never teased out quite the right thing.
23:11karolherbst[d]: I see
23:23gfxstrand[d]: CCTL and BAR