00:20 cmarcelo: anyone that knows / maintains margebot: wondering if I just put marge in an odd state by pushing on an MR that marge pushed to.
00:21 cmarcelo: (is there a "marge log" somewhere I can peek at in those cases?)
00:22 airlied: cmarcelo: was it processing that MR?
00:22 airlied: if so you should unassign it, and kill any pipelines it was running
00:25 cmarcelo: oh, I missed canceling the pipeline (it was already failing)
00:25 airlied: yeah, I think if it gets cancelled, marge might wake up before the 1hr expiry
00:25 cmarcelo: it did wake up
00:25 airlied: though it may not
00:25 cmarcelo: thanks
01:03 memleak: Hey I was debugging PREEMPT_RT latency spikes with amdgpu and radeon DRM drivers, I finally have a consistent stack trace now which is exceeding 30-50 microseconds (occasionally even spikes to above 200 microseconds)
01:04 memleak: I discovered a tool called timerlat and it's been a huge help. trace: https://dpaste.com/42WHYK5EQ
01:04 memleak: With `nomodeset` the spikes go away; radeon_ib_schedule, radeon_cs_ib_vm_chunk, and/or radeon_cs_ioctl must be the culprit
01:06 memleak: 6.5.2 is my kernel version
01:15 memleak: dpaste.com is acting up, new link: https://dpaste.org/G3y6y
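(For reference: timerlat is the timerlat/osnoise tracer that ships with the kernel and is driven from user space via the rtla tool, i.e. `rtla timerlat`; it is documented under Documentation/tools/rtla/ in the kernel tree.)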
01:22 airlied: memleak: might be worth trying the amdgpu driver instead of radeon, though it might end up being the same or worse
01:23 airlied: that radeon trace shows the latency when it interacts with the hw interface
01:23 airlied: which is kinda hard to avoid
01:23 airlied: cik_gfx_set_wptr
01:23 airlied: is mostly just a register write
01:30 memleak: I'll get a stack trace with amdgpu one sec
01:42 memleak: new trace: https://dpaste.com/FZRVR327P
01:42 memleak: https://dpaste.org/7eSaD
01:43 airlied: okay so yes you are hitting a hw register and hw takes time to react
01:43 memleak: ok :)
01:43 airlied: not sure there's much can be done about it
01:43 airlied: mmio register reads/writes can stall the cpu, don't think there's any nice way around it
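As an aside on what that stall looks like in practice: a ring "kick" like the one in the trace (cik_gfx_set_wptr) is ultimately an MMIO register access from a DRM driver. A minimal sketch, with made-up names and not the actual radeon/amdgpu code:

```c
/* Hedged sketch, not the actual radeon/amdgpu code: a typical "kick the
 * ring" path in a DRM driver is just an MMIO write to a WPTR/doorbell
 * register.  The write is an uncached bus access, and any readback used
 * to flush it is a synchronous round trip the CPU has to wait for, which
 * is where latency like the one in the trace tends to show up. */
#include <linux/io.h>

struct example_ring {
	void __iomem *mmio;	/* mapped register BAR */
	u32 wptr_reg;		/* offset of the ring write-pointer register */
	u32 wptr;		/* current software write pointer */
};

static void example_ring_set_wptr(struct example_ring *ring)
{
	/* Posted write: cheap to issue, but still hits the device. */
	writel(ring->wptr, ring->mmio + ring->wptr_reg);

	/* Optional readback to make sure the write reached the device;
	 * this is the part that can stall the CPU for a long time. */
	(void)readl(ring->mmio + ring->wptr_reg);
}
```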
01:44 memleak: well hey! that at least solves the mystery!
01:44 memleak: and it's not user error!
01:52 memleak: Thank you! :D
04:04 memleak: hey airlied I just wanted to come back and say I'm sorry for possibly annoying the shit out of you years ago, I was really hyper, I talked too much and I was a handful for everybody.
04:05 memleak: I was in junior high when I first started dabbling with X.org, anyways, thank you for everything.
08:14 karolherbst: itoral: any clues? https://gist.github.com/karolherbst/407dea07c0d8fd9ff04b28d81823614f
08:15 karolherbst: or rather ideas..
08:15 karolherbst: apparently setting "V3D_DEBUG=" starts to trigger gpu memory faults
08:44 itoral: karolherbst: doesn't make any sense to me... if you don't set V3D_DEBUG at all, don't you see any mem faults?
09:11 karolherbst: itoral: correct
09:11 karolherbst: maybe something something VM placement or something
09:12 karolherbst: I think the shader accesses OOB no matter what, but I'll debug more thoroughly today what's going on here... I was able to get rid of that error by doubling buffer sizes
09:12 karolherbst: it's just _very_ confusing that setting that env var makes a difference :D
09:13 karolherbst: the value of v3d_mesa_debug doesn't change, but I suspect something changes in the handling of that env var, which changes something else? dunno.. it's just very odd :D
09:15 itoral: yeah, I think what happens is that for some reason when the envvars are set some allocation patterns change, and that makes some OOB accesses land in valid memory addresses
09:21 itoral: karolherbst: do these tests use global address intrinsics to read memory from a buffer that is then used to compute global addresses for other global reads/writes?
09:23 itoral: I ask because if that is not happening, then you can do a simple trick to identify the bad access(es): you drop all global reads/writes from the kernel (for example by not emitting the global intrinsic from the compiler) and then start putting them back into the kernel one by one until you see the OOB error again
09:25 itoral: actually, you could also use this tactic even if you use the results from a read to compute the address for follow-up reads, since you are adding the later global intrinsics back progressively, one by one
09:25 itoral: once you know the first global intrinsic that causes the problem, we can just look at how the address is generated to figure out what is wrong
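A minimal sketch of the kind of throwaway debug pass itoral is describing, assuming the standard NIR iteration helpers; the function name and the max_kept bisection knob are made up for illustration, and this is not an existing Mesa pass:

```c
/* Hypothetical debug pass: keep only the first max_kept global stores
 * and drop the rest, then bisect on max_kept until the GPU fault
 * reappears to find the offending access. */
#include "nir.h"

static void
strip_trailing_global_stores(nir_shader *shader, unsigned max_kept)
{
   unsigned seen = 0;

   nir_foreach_function_impl(impl, shader) {
      nir_foreach_block(block, impl) {
         nir_foreach_instr_safe(instr, block) {
            if (instr->type != nir_instr_type_intrinsic)
               continue;

            nir_intrinsic_instr *intr = nir_instr_as_intrinsic(instr);
            if (intr->intrinsic != nir_intrinsic_store_global)
               continue;

            /* Keep the first max_kept global stores, drop the rest. */
            if (++seen > max_kept)
               nir_instr_remove(instr);
         }
      }
   }
}
```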
10:00 karolherbst: itoral: yeah, it's just one sized buffer bound and then read from / written to
10:01 karolherbst: the kernel is really trivial
10:01 karolherbst: it's literally this:
10:01 karolherbst: int tid = get_global_id(0);
10:01 karolherbst: dst[tid] = ((1<<16)+1);
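Put together, the kernel under discussion is essentially the following (reconstructed from the two lines above; the kernel name and the dst argument declaration are assumptions):

```c
/* Reconstructed from the snippet above; kernel name and argument
 * declaration are assumptions. */
__kernel void fill(__global int *dst)
{
    int tid = get_global_id(0);
    dst[tid] = (1 << 16) + 1;   /* 0x10001 */
}
```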
10:04 karolherbst: I wonder if the test is slightly buggy... maybe I pass the buffer size into it and see what I can do with that
10:09 karolherbst: mhhhhh
10:09 karolherbst: itoral: I think it has something to do with how the kernel is launched
10:10 karolherbst: there is an OOB read and if I cap the tid to the buffer size it doesn't cause those
10:10 karolherbst: _but_
10:10 karolherbst: the kernel also launches threads according to the buffer size so that should be impossible
10:10 karolherbst: however.. CL has a cursed feature.. :D printf
10:11 karolherbst: ahh I can't use it as it needs global atomics, which I haven't looked at yet
10:15 karolherbst: huh....
10:23 karolherbst: itoral: mhhh... maybe it's also something to do with me overclocking the rpi by +400 MHz...
10:23 karolherbst: let me try without it first
10:25 karolherbst: ahh no..
10:26 karolherbst: but higher CPU load does make it more likely at least.. yeah so something odd is going on
10:32 karolherbst: I think the test is doing silly things...
11:03 glehmann: how do the fdot_replicated opcodes work? can they have any number of output components or does e.g. fdot4_replicated always have a 4 component output?
11:06 itoral: overclocking shouldn't really have any impact
11:06 karolherbst: well.. I'm quite close to the point where increasing the clock a bit further causes the CPU to do wrong things :D
11:06 karolherbst: I've configured it in a way to not increase the voltage over the limit
11:07 karolherbst: but yeah.. the setting is fine it seems and never caused any problems
11:07 karolherbst: it clearly reads OOB but I have no idea why...
11:08 itoral: so capping the TID fixes the issue? mmm...
11:08 karolherbst: ehh.. no
11:08 karolherbst: I just got (un)lucky
11:08 itoral: ah :)
11:09 itoral: is that write to dst the only global address access in the kernel?
11:09 karolherbst: now I'm running "stress -c8" in the background and things are a bit more interesting
11:09 karolherbst: https://gist.githubusercontent.com/karolherbst/0258aba25982ebf84001d09cd8e3423e/raw/602f91c1d69de6a079083f65c76ab5a16c0475af/gistfile1.txt
11:09 karolherbst: yes
11:11 itoral: interesting, in that case the only way we can have an OOB is that tid is out of bounds.... have you tried making the dst buffer larger and write the tid into it?
11:11 itoral: then inspect the buffer when you trigger the mem faults and check if the tids are sane
11:11 itoral: I can't quite imagine why they wouldn't be, but something weird is happening so...
11:12 karolherbst: maybe something with the shader?
11:14 itoral: can you dump the kernel with V3D_DEBUG=cs?
11:14 karolherbst: yeah... something is odd
11:16 karolherbst: doing this instead makes the fault go away: if (&dst[tid] < 0x70000 || &dst[tid] >= 0x80000) dst[tid] = ((1<<16)+1);
11:16 karolherbst: at least it seems that way
11:16 karolherbst: itoral: the odd thing is, the test passes no matter what, so maybe it's just more threads running than expected? Anyway, will dump the plain shader
11:24 itoral: why would that if fix anything? isn't dst bound to different addresses in various iterations? At least it looks like that from the traces you pasted
11:25 karolherbst: yeah.. there are three pre allocated buffers in that test
11:25 itoral: karolherbst: does v3d_csd_choose_workgroups_per_supergroup return a number other than 1?
11:26 karolherbst: each 16384 elements big, once with int/int2/int4
11:28 karolherbst: itoral: nah, that's always 1 it seems
11:28 itoral: ok
11:29 karolherbst: tried to denoise the V3D_DEBUG=cs output as much as possible: https://gist.githubusercontent.com/karolherbst/2a2b981e458d59119debeeaf2f9d3e01/raw/cbab9a385ecec66c2362c614f9ff8bc16e4d7c09/gistfile1.txt
11:30 karolherbst: mhhh.. maybe I should add an option to disable that offset nonsense...
11:35 karolherbst: huh
11:40 itoral: karolherbst: what is the workgroup size and the dispatch size for that kernel?
11:41 karolherbst: 256x64
11:41 karolherbst: uhm.. 256 blocks and 64x1x1 block size
11:45 itoral: ok, that's within the limits
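(For the numbers above: 256 workgroups × 64 invocations per workgroup = 16,384 threads, i.e. exactly one per element of the 16,384-element buffers, so an in-range tid by itself can't explain the OOB access.)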
11:47 itoral: I don't see anything obviously wrong with the shader, but I'll give it a deeper look tomorrow
11:52 karolherbst: huh!
11:52 karolherbst: I think I know what's up....
11:52 karolherbst: uhhhhhhhhhhhhhhhhhhhhh
11:54 itoral: karolherbst: before I go, I noticed this:
11:54 itoral: con 32x2 %71 = load_const (0x00010001, 0x00010001) = (0.000000, 0.000000) = (65537, 65537)
11:54 itoral: @store_global (%71 (0x10001, 0x10001), %70) (wrmask=xy, access=none, align_mul=8, align_offset=0)
11:54 karolherbst: nah... it has nothing to do with that
11:54 karolherbst: I think I found it
11:54 itoral: ah, cool
11:54 karolherbst: dst contains the address of the wrong buffer
11:54 itoral: what is it?
11:54 karolherbst: not sure why
11:54 itoral: interesting
11:55 karolherbst: but it points to the int buffer for the int2 run sometimes
11:55 karolherbst: and then it accesses invalid memory
11:55 itoral: uh
11:55 karolherbst: the tests all pass as it just reuses the buffers from the first run
11:55 karolherbst: :D
11:55 karolherbst: yeah.. probably something like that
11:55 karolherbst: not quite sure, but maybe there is a sync issue with the ubo0 thing I'm using
11:56 itoral: ok, at least now we know what to look for
11:56 itoral: I gotta go now
11:56 karolherbst: yeah...
11:56 karolherbst: I hope I'll figure out what's wrong
11:56 karolherbst: ohhh....
11:56 karolherbst: duh
11:56 karolherbst: it's a user const buffer at 0 :)
11:56 karolherbst: and I suspect v3d doesn't copy it
11:56 karolherbst: or something something
11:57 karolherbst: I'll dig a bit deeper
11:57 itoral: ok, I'll be back tomorrow if you don't find it
11:57 itoral: good luck! :)
11:57 karolherbst: yeah...
11:57 karolherbst: thanks
11:57 itoral: np
12:16 karolherbst: yeah... it's definitely that
13:08 karolherbst: or maybe not? uhhhh
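For context on the "user const buffer at 0" theory: in Gallium, a constant buffer can be handed to the driver as a raw user_buffer CPU pointer, and a driver that keeps referring to that pointer instead of copying the data into a real buffer can later resolve the wrong address. A hedged sketch of the usual "copy it via u_upload" pattern; this is illustrative only and not v3d's actual code path, and whether v3d even needs this here is exactly what was being debugged:

```c
/* Sketch of the common Gallium pattern for turning a user_buffer
 * constant buffer into a GPU-backed buffer via u_upload_mgr.
 * Purely illustrative; not v3d's actual code. */
#include "util/u_upload_mgr.h"
#include "pipe/p_state.h"

static void
upload_user_cb(struct u_upload_mgr *uploader, struct pipe_constant_buffer *cb)
{
   if (!cb->user_buffer)
      return;   /* already backed by a pipe_resource */

   struct pipe_resource *buf = NULL;
   unsigned offset = 0;

   /* Copy the CPU-side data into an upload buffer the GPU can read. */
   u_upload_data(uploader, 0, cb->buffer_size, 16,
                 cb->user_buffer, &offset, &buf);

   cb->buffer = buf;
   cb->buffer_offset = offset;
   cb->user_buffer = NULL;
}
```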
13:28 zmike: mareko: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/25180
13:37 MrCooper: daniels: FWIW, sending plain-text e-mails with Thunderbird works mostly fine for me (still on 102 though, since one extension I use doesn't support 115 yet); I disabled mailnews.send_plaintext_flowed and set mailnews.wraplength to 0
13:39 austriancoder: what is the official definition of a nir system value?
13:51 mareko: austriancoder: a value that doesn't come from the user
14:00 austriancoder: mareko: thanks
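(Concrete examples: the vertex ID, fragment coordinate, or compute workgroup ID, which the driver/hardware supplies and which show up in NIR as load_* system-value intrinsics such as nir_intrinsic_load_vertex_id or nir_intrinsic_load_workgroup_id, as opposed to user-supplied inputs like vertex attributes or uniforms.)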
19:21 lina: DemiMarie & others: You might be interested in this ^^ https://www.youtube.com/shorts/ToulgVAofw8
19:23 ccr: \o\
19:30 DemiMarie: lina: I will, though it will be 10PM my time then!
19:34 lina: Not sure how long it will be yet, but hopefully not 5 hours ^^
19:35 airlied: if a stream is less than 5 hours is it really a stream :-P
20:15 Lynne: the amd dc code somehow got worse in 6.6-rc1
21:04 anholt__: eric_engestrom: looks like pipelines in forks are busted? https://gitlab.freedesktop.org/anholt/mesa/-/pipelines/984424
21:38 zf: hi! I'm encountering an assertion failure in Mesa, with radv, while trying to use a geometry shader that uses clip distances:
21:38 zf: d3d10core\tests\x86_64-windows\d3d10core_test.exe: ../mesa/src/compiler/nir/nir_validate.c:1390: validate_var_decl: Assertion `glsl_type_is_array(type)' failed.
21:38 zf: I hesitate to immediately file a bug, since this could be our bug, although the validation layers don't complain... but I'd appreciate if someone could give me some pointers where to look in the source?
21:39 zf: since I am wholly unfamiliar with nir
22:14 DemiMarie: zf: if the validation layers don’t complain and your program isn’t corrupting memory (use Address Sanitizer to check that), it’s a Mesa bug
22:15 DemiMarie: That error message means that Mesa is generating invalid IR; it’s the equivalent of an internal compiler error in GCC, Clang, or MSVC.
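(A pointer on where to look: that assertion sits in validate_var_decl() in src/compiler/nir/nir_validate.c and, if memory serves, fires for variables marked "compact" — clip/cull distances are compact and are expected to be declared as an array of scalars, i.e. gl_ClipDistance as a float[] — so the likely culprit is a clip-distance variable reaching NIR with a non-array type.)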
22:15 zf: well, it could always be a bug in the validation layers, i.e. a missing validation
22:16 DemiMarie: is your program open source?
22:16 zf: but I can certainly file a mesa bug on the assumption that it's safe
22:16 DemiMarie: yeah
22:17 zf: yes, this is actually something we're running into in the Wine self test suite
22:17 DemiMarie: Ah, so that is why you have Windows-style pathnames :)
22:18 DemiMarie: If you have a small reproducer that should help the Mesa developers fix the issue.
22:18 zf: yeah, that's... the hard part
22:18 zf: it's Vulkan, so "small reproducer" isn't really a thing
22:19 zf: and Wine is not exactly a lightweight piece of software
22:19 zf: if this was GL, I could record an apitrace, but I guess no such thing exists for Vulkan
22:19 DemiMarie: It probably should
22:19 Sachiel: gfxreconstruct exists
22:20 Sachiel: you can go back a while and see if the same issue exists and if not, try bisecting
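(For the capture step: gfxreconstruct works by loading its Vulkan layer, VK_LAYER_LUNARG_gfxreconstruct, while the application runs, and the resulting .gfxr file can then be replayed with gfxrecon-replay; the exact environment variables and options are in the project's documentation.)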
22:20 DemiMarie: BTW geometry shaders are generally not very efficient, so if this shader comes from Wine then it is probably best to use something else
22:21 zf: we're a translation layer, so we do need the geometry shaders :-)
22:21 zf: thanks, I'll try gfxreconstruct
22:22 zf: I don't see the issue with stock distribution Mesa, but I wouldn't be surprised if that's because it's built with NDEBUG
22:24 DemiMarie: To quote the Arm Mali docs (or possibly an old version of them): “Most use-cases for Geometry shading are better handled by compute shaders.” and “Find a better solution to your problem. Geometry shaders are not your solution.”
22:24 zf: trust me, I'm well aware of the problems with geometry shaders, but we don't really have a choice in the matter
22:25 DemiMarie: It is actually possible to emulate geometry shaders using nothing but compute shaders.
22:25 DemiMarie: AGX will need to do that because Apple hardware doesn’t support geometry shaders at all.
22:25 Sachiel: not being in control of either the source of the shaders or the driver, it'd be a ton of work to probably get a ton of weird failure cases
22:26 zf: and worse performance
22:26 DemiMarie: That said, I understand you not wanting to take that route.
22:28 DemiMarie: Probably