00:09 anholt:wonders what the chances are of hitting this intermittent CTS failure while dumping CLIF files, before the remaining 50g of disk fills up.
00:18 robclark: anholt, didn't look at how clif dumping works, but I dump cmdstream from kernel (cat $debugfs/dri/0/rd > foo.rd).. and a while back added a 'hangrd' that only dumped submits/batches that triggered gpu hangs.. which was supremely useful..
00:19 robclark: (for exactly this problem)
00:19 robclark: (or well maybe s/CTS/piglit/ but really the same problem)
00:25 anholt: no hang, just something broken with varyings.
00:26 anholt: I'm generating my CLIFs up in userspace -- the kernel only has to feed OOM buffers and otherwise doesn't really do anything.
00:30 robclark: ok.. if no hang that is harder.. but hangrd made me a fan of dumping from the kernel..
03:27 imirkin: anholt: make it hang when the bad condition happens? :)
03:27 imirkin: i.e. infinite loop in shader, etc
06:12 curan: MrCooper_: is there any chance for a new amdgpu (DDX) release? The last one was almost five months ago (18.0.1 on 2018-03-15)
07:19 MrCooper: curan: yeah, the release schedule is about every 6 months
07:21 curan: MrCooper: ah, ok; thanks!
07:45 MrCooper: jekstrand: "an expression that takes in an internet (sic) and sometimes returns a different one with a no-op seems rather unexpected" indeed :)
12:48 jekstrand: MrCooper: That's what I get for responding to patches on a phone. :/
12:49 jekstrand: MrCooper: I mean, we do sometimes run the internet through our shader compiler (WebGL)
13:24 maelcum: anholt: continuing from yesterday - there is no /dev/dri/card0/hvs_regs with raspbian latest, only /dev/dri/card0. i guess i need to build a kernel with more debug support... so i'll do that.
13:26 pq: maelcum, umm, should you have been looking in sysfs or debugfs instead of /dev? In /dev card0 is never a directory.
13:28 maelcum: that's true, but who knows what is going to change. i also searched "hvs" in sysfs with no relevant looking result.
13:30 maelcum: specifically, there seems to be no hvs_regs anywhere under /sys/class/drm/card0
13:31 maelcum: (but i've seen weird results with find in sysfs before)
13:32 MrCooper: jekstrand: hehe, yeah, but having it replace the internet with a different one in the process would definitely be rather unexpected :)
13:32 pq: maelcum, did you also check debugfs? It sounds a bit debug'y to me.
13:35 maelcum: pq: thanks, that works! /sys/kernel/debug/dri/0/hvs_regs
13:49 maelcum: anholt: results http://ix.io/1j5V
17:09 Putti: Hey, I'm making some progress with the Samsung Galaxy S3 mainline graphics project on AOSP master that I started earlier. So far I have gotten the SwiftShader + gralloc.default + hwcomposer.default combination working, i.e. it does OpenGL ES rendering in software without the GPU and then outputs it to the /dev/graphics/fb0 framebuffer. Today I tried using hwcomposer.drm (the drm_hwcomposer freedesktop project) but ran into some problems: http://paste.openstack.org/raw/727260/
17:09 Putti: So yeah, the combination is now SwiftShader + gralloc.default + hwcomposer.drm. You can find the source code for this build at https://redmine.replicant.us/issues/1882, or if you just want to see the device tree source code, clone this repo: git://git.putti.eu/aosp/device_i9305.git
17:10 Putti: If somebody knows: a) do I need to use something other than gralloc.default? (I really still have no idea what gralloc is) b) what do you think the problem is, if you read the log I linked?
17:12 Putti: Some general pointers to what I should focus my research on would be appreciated
17:28 CosmicPenguin: Putti: gralloc is the Android memory allocator - it manages the allocation of dma-buf able memory. On Android they usually use Ion but I think other folks have gralloc frontends for DRM too
17:40 Putti: CosmicPenguin, so if I understand right: SwiftShader needs a DMA buffer to write its renderings to, and then the DRM driver (or drm_hwcomposer?) reads from that DMA buffer?
17:40 Putti: CosmicPenguin, and gralloc is the one creating that buffer between SwiftShader and drm_hwcomposer
17:45 CosmicPenguin: correct - gralloc does the nitty gritty of creating the buffer which gets shared between the various processes via binder
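For illustration, a minimal sketch of the allocation path CosmicPenguin describes, using the legacy gralloc0 C API from <hardware/gralloc.h>. The function name and usage flags here are only an example, not anything from the project discussed above:

    /* Minimal sketch: ask the installed gralloc HAL module for a buffer.
     * Error handling is trimmed for brevity. */
    #include <hardware/hardware.h>
    #include <hardware/gralloc.h>
    #include <stdio.h>

    int allocate_scanout_buffer(int width, int height)
    {
        const hw_module_t *module = NULL;
        alloc_device_t *alloc_dev = NULL;
        buffer_handle_t handle;
        int stride = 0;

        /* Load whatever gralloc module is installed
         * (gralloc.default, gbm_gralloc, drm_gralloc, ...). */
        if (hw_get_module(GRALLOC_HARDWARE_MODULE_ID, &module))
            return -1;
        if (gralloc_open(module, &alloc_dev))
            return -1;

        /* Ask for a buffer the display can scan out and the CPU can write to,
         * the sort of request a software renderer like SwiftShader would make. */
        if (alloc_dev->alloc(alloc_dev, width, height,
                             HAL_PIXEL_FORMAT_RGBA_8888,
                             GRALLOC_USAGE_HW_FB | GRALLOC_USAGE_SW_WRITE_OFTEN,
                             &handle, &stride))
            return -1;

        printf("allocated %dx%d buffer, stride %d\n", width, height, stride);

        /* The opaque handle can now be shared with other processes (via binder)
         * and mapped with gralloc_module_t::lock()/unlock(). */
        alloc_dev->free(alloc_dev, handle);
        gralloc_close(alloc_dev);
        return 0;
    }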
17:46 Putti: aha, and with drm_hwcomposer I think there is more than one buffer. Now the gralloc.default just opens /dev/graphics/fb0 and only SwiftShader writes there (correct me if i'm wrong)
17:48 Putti: so I better find another gralloc module
17:48 Putti: Anybody know which one might work in my case?
17:50 Putti: https://android.googlesource.com/device/linaro/hikey/+/master/gralloc/ – those hikey boards seem mainline-ish too, and they have a mention of the ION thing
17:51 CosmicPenguin: I know somebody has at least dabbled in a GEM version of gralloc but I have no idea where you would find such a thing
17:53 Putti: I would have to allocate the memory region in this case from the RAM because there is no GPU
17:53 daniels: CosmicPenguin: i think robertfoss could tell you, but not until monday
17:53 Putti: daniels, let's hope so :)
17:54 Putti: oh that was for CosmicPenguin
17:54 Putti: I know robertfoss knows about this topic too so maybe he can help me too
17:55 daniels: Putti: it's currently 8pm on friday night for him, so he probably won't be around
18:03 robclark: CosmicPenguin, Putti, not sure if this is the "upstream", but https://github.com/robherring/gbm_gralloc
18:04 robclark: (or at least that is what we use with mesa)
18:05 Putti: robclark, is /dev/dri/renderD128 some RAM buffer or actual memory region on a physical device?
18:05 Putti: I mean on a GPU
18:05 Putti: the gbm gralloc uses that device
18:05 ajax: it's... not that
18:06 robclark: it is a device file
18:06 ajax: you can open it and allocate memory on the GPU, send it commands, etc. by issuing ioctl() calls on that device
18:06 Putti: but if there is no GPU then what should I use?
18:06 ajax: but it's not some dedicated subset of the device, it's multiplexed among multiple processes
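A minimal sketch of what ajax describes: open the render node and talk to the driver through ioctls, here via libdrm. drmGetVersion() stands in for the driver-specific allocation and submission ioctls, which vary per GPU; the node path is assumed from the discussion and may differ on a given device:

    /* Minimal sketch: open a DRM render node and query the driver behind it.
     * Build with: gcc example.c $(pkg-config --cflags --libs libdrm) */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>

    int main(void)
    {
        /* Render nodes skip the DRM master/auth checks, so any process may
         * open them; multiple clients are multiplexed onto the same device. */
        int fd = open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        drmVersionPtr ver = drmGetVersion(fd); /* wraps DRM_IOCTL_VERSION */
        if (ver) {
            printf("driver: %s (%d.%d.%d)\n", ver->name, ver->version_major,
                   ver->version_minor, ver->version_patchlevel);
            drmFreeVersion(ver);
        }

        /* Buffer allocation and command submission happen through further
         * driver-specific ioctls on this same fd. */
        close(fd);
        return 0;
    }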
18:10 Putti: well I think the S3 actually has that node there, so even if I don't use the GPU for doing any rendering I might still be able to use that buffer
18:11 Putti: let's try!
18:11 robclark: gbm_gralloc is really only going to work with mesa drivers.. although I suppose it would work with llvmpipe, possibly
18:21 ajax: xexaxo1: ooh, EGLDevice patches! thanks for picking that up.
18:26 seanpaul: Putti: https://android.googlesource.com/platform/external/drm_gralloc/+/master might be closer to what you want if you're not using mesa
18:29 seanpaul: Putti: i don't think anyone has used it in years, so it's probably rotted a bit, but it might be salvageable
18:30 Putti: thanks for the link
18:31 Putti: I also looked into https://android.googlesource.com/platform/external/minigbm/+/master as it has some Exynos related code, which could be useful as S3 phone's SoC is Exynos 4412
18:33 Putti: So the SwiftShader software renderer, as far as I understand, is currently only used in Android Things (or IoT something), and so I guess they have some sort of framebuffer-only setup and no DRM
18:34 Putti: I tried to look up on the internet "swiftshader + gralloc" but nothing :/
18:36 chrisf: Putti: you're going to have to do some work
18:45 seanpaul: Putti: yeah, not sure how swiftshader works in this setup, unfortunately
18:47 Putti: seanpaul, I don't know if you read my earlier messages but I got it already working with the default gralloc from AOSP, i.e. swiftshader is writing directly to the fb0 memory. So that is at least one way to use it :)
18:48 Putti: but I think it might be the reason why the performance was so bad
18:48 seanpaul: Putti: ehh, fbdev probably isn't a perf bottleneck
18:48 seanpaul: if you use drm + drm_hwc, it's the same principle
18:49 seanpaul: swiftshader is going to be the bottleneck
18:49 Putti: with DRM I think swiftshader only has to composite maybe two layers instead of all ~5
18:49 seanpaul: yeah, true, you will save a bit
18:51 seanpaul: so i'd suggest starting with drm_gralloc. swiftshader likely doesn't need anything more complex than map/unmap from gralloc, so diving into gbm is probably a waste of time
18:52 Putti: gbm also depends on mesa and mesa then needs to have a driver written for it in order to use libgbm, so drm_gralloc sounds good.
18:53 seanpaul: right
18:54 seanpaul: Putti: you _might_ need to add a platformdrmgralloc.cpp to drm_hwcomposer to interface with drm_gralloc, but that should be mostly copy/paste from platformdrmgeneric.cpp
18:54 seanpaul: i'm not sure if gbm_gralloc and drm_gralloc share the same bo definitions
18:55 seanpaul: but other than that and fixing the compilation errors, etc, it shouldn't be too much work
18:55 Putti: ok
19:04 jstultz: Putti: you might also consider writing a drm_hwc importer for the default gralloc code? Though I'm not sure what the default gralloc's handle looks like.
19:12 Putti: jstultz, what's a drm_hwc importer? Anyways the default gralloc's code is here: https://android.googlesource.com/platform/hardware/libhardware/+/master/modules/gralloc/gralloc.cpp (I just did like two dummy functions for it to work on my setup, you can clone it from git://git.putti.eu/aosp/libhardware.git).
19:14 seanpaul: jstultz: just looking at your z-map change. what i had envisioned was to move the provisioning step up to validate, and then keep that around for the commit. so the plan pipeline would only run once, and only at the validation step
19:14 seanpaul: as it stands currently it seems like we do z-order generation multiple times and then a plan pass?
19:15 jstultz: seanpaul: ok, i had taken an earlier shot at something like that, but it didn't work, so i went back to a smaller approach
19:15 jstultz: Putti: https://gitlab.freedesktop.org/drm-hwcomposer/drm-hwcomposer/blob/master/platformdrmgeneric.cpp and https://gitlab.freedesktop.org/drm-hwcomposer/drm-hwcomposer/blob/master/platformhisi.cpp are importer examples
19:16 jstultz: Putti: basically it takes an (implementation-specific) gralloc handle and imports it into the hwc_drm_bo_t
19:17 seanpaul: i'm just worried about piling on more incremental changes. i understand if you don't want to spend too much time on this, but things are getting pretty messy. i have an intern starting in a few weeks that i was hoping to set loose on refactoring this stuff, not sure what your timeline is like
19:17 jstultz: Putti: if you look at the hisi one, and look at the private_handle.h from the hikey gralloc code in AOSP, you can see how we map one to the drm_hwc case.
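A hedged sketch of the import step jstultz describes, not the actual drm_hwcomposer code: the struct and function names below are made up for illustration, and only the libdrm calls (drmPrimeFDToHandle, drmModeAddFB2) are real. The idea is to pull the dma-buf fd and buffer layout out of the gralloc handle, turn the fd into a GEM handle on the hwcomposer's DRM fd, and wrap it in a framebuffer the display controller can scan out:

    /* Hedged sketch: gralloc handle -> prime (dma-buf) fd -> GEM handle ->
     * DRM framebuffer id. The names here are illustrative only. */
    #include <stdint.h>
    #include <string.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    struct fake_bo {                 /* stand-in for hwc_drm_bo_t */
        uint32_t width, height;
        uint32_t format;             /* DRM_FORMAT_* fourcc */
        uint32_t gem_handles[4];
        uint32_t pitches[4];
        uint32_t offsets[4];
        uint32_t fb_id;
    };

    /* prime_fd, width, height, pitch and fourcc would normally be pulled out
     * of the gralloc-specific native handle (e.g. gralloc_handle_t for
     * gbm/drm_gralloc, private_handle_t for the hikey gralloc). */
    int import_buffer(int drm_fd, int prime_fd, uint32_t width, uint32_t height,
                      uint32_t pitch, uint32_t fourcc, struct fake_bo *bo)
    {
        memset(bo, 0, sizeof(*bo));
        bo->width = width;
        bo->height = height;
        bo->format = fourcc;
        bo->pitches[0] = pitch;

        /* Turn the shared dma-buf into a GEM handle on our DRM device. */
        int ret = drmPrimeFDToHandle(drm_fd, prime_fd, &bo->gem_handles[0]);
        if (ret)
            return ret;

        /* Wrap it in a framebuffer object for scanout. On a display that can
         * only scan out contiguous (CMA) memory, importing a regular system
         * buffer fails somewhere along this path. */
        return drmModeAddFB2(drm_fd, width, height, fourcc,
                             bo->gem_handles, bo->pitches, bo->offsets,
                             &bo->fb_id, 0);
    }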
19:18 seanpaul: jstultz: but i think we need at least a flowchart or some design to understand how we can tackle this in a platform-generic way
19:18 jstultz: seanpaul: well, I'm trying to make sure it's iterative.. i'm fine if the end result has to be larger..
19:18 jstultz: seanpaul: i just want to make sure i'm not breaking things for others
19:18 seanpaul: jstultz: for sure. i think the north star in this case should be to move as much as possible from the backend of the frame commit into the validate step
19:19 Putti: jstultz, aha, I have no idea what this all means but I will store your messages and try to decrypt them with some time :)
19:19 Putti: Thanks everybody for the help, I will continue with this tomorrow
19:19 jstultz: seanpaul: the difficulty is that if we don't do some early planning we end up doing silly work trying to import buffers we can't..
19:21 seanpaul: yes, understood, that needs to be part of the decision process
19:21 jstultz: seanpaul: i agree that the validate should be where the work is done..
19:22 jstultz: seanpaul: but other than the repetition of work done in validate/present, does the change I'm making to how we validate look better to you? or am i still off in the weeds?
19:22 jstultz: my big thing is i want to get the hikey boards converted over to using drm_hwc by default
19:22 jstultz: and right now that's blocked on the error flood in the logs
19:23 seanpaul: not at all, i think everything you're doing is righteous. i am just feeling anxious about adding more hooks into the planner when i feel like we could probably use the hooks we already have to do the same work in a different place
19:24 jstultz: yea, the difficulty is there is some stuff we have to have an imported buffer to plan on.. but there are other bits where we can plan before spending work importing everything
19:26 seanpaul: jstultz: right, so we need something like pre-import planstages to vet the layers
19:26 jstultz: and there's still your thought about doing the iterative layer-by-layer build and check in the validate, which i've only spent a little bit of time thinking on.
19:26 seanpaul: jstultz: have you considered telling SF not to allocate buffers you can't display?
19:26 jstultz: seanpaul: that's basically what my patch is trying to add, a plan for the HWCLayers before we get to the drmhwclayers
19:27 jstultz: seanpaul: ?
19:27 jstultz: seanpaul: not sure i'm following..
19:27 seanpaul: the import issue is that your DC can't display certain formats, right?
19:27 jstultz: seanpaul: on the hikey(s) it's basically a cma framebuffer.. so everything has to be gpu composited
19:28 jstultz: seanpaul: there's only one hardware layer..
19:28 jstultz: seanpaul: so the problem is when we get normal buffers that are system buffers, they are going to have to be gpu composited down anyway, but we import them, and since they aren't cma the imports fail
19:30 jstultz: so doing the pre-planning on the hwclayers helps us cut down to just one layer, which we use the client buffer for and we skip all the importing
19:32 jstultz: even if we did allocate everything out of cma, so we could import everything, we'd still be burning time with the current code since we'd be compositing it all down anyway.
19:32 seanpaul: jstultz: hmm, so could you just basically stub your importer for all system buffers and add a planstage that punts everything to the client layer?
19:33 jstultz: and the current code does the layer->plane mapping, but just super late in the validate code.
19:33 jstultz: seanpaul: hrm.. let me think on that a bit..
19:34 jstultz: seanpaul: about to head to lunch, but i'll take a shot at that and see how it goes..
19:34 jstultz: seanpaul: thanks for the feedback! really appreciate it!
19:34 pendingchaos: where should I post patches for shader-db? mesa-dev@lists.freedesktop.org?
19:35 seanpaul: jstultz: sorry to keep pushing back on you, i don't mean to be wishy-washy. and this solution kind of sucks, but it _might_ unblock you
19:35 seanpaul: jstultz: enjoy lunch, and jfyi i'll be afk mon/tue
19:35 seanpaul:is going to canada before it gets too cold
19:35 dcbaker: pendingchaos: yeah, just set the subject prefix to "shader-db patch" or similar
19:36 pendingchaos: dcbaker: thanks
19:36 dcbaker: np
20:31 jstultz: seanpaul: no worries, have a good trip!
20:51 jstultz: seanpaul: on your idea... any sense of how we can tell which is the client layer in the plan stage (if it's even there)? we create the zmap which may or may not include the client_layer_, then import all the layers (which with the stub we just pretend works for everything).. and then we call SetLayers to copy them to the composition, and call plan.. but I'm not sure how we do anything at that stage that will back-propagate to the HWCLayer type (forcing them to client)...
20:56 jstultz: seanpaul: part of what i was trying to rework w/ my changes is moving away from CreateComposition just taking what SF requested, towards it doing something other than just the binary "that worked" or "that failed".. in the current validate code, in the failed case we try to recover assuming we had more device layers than planes, but i'm sure there are other failure modes that would still be busted
20:59 jstultz: seanpaul: i guess at the current plan stage i'm not sure what we can really "plan"...