00:55 damo22: ajax: Hello, I have a small suggestion for libpciaccess and I want to understand why this check is present https://gitlab.freedesktop.org/xorg/lib/libpciaccess/-/issues/12
01:04 mattst88: I know nothing about this, but, you can't just return the existing mapping?
01:07 mattst88: might need to check that region/map_flags are the same
01:08 damo22: i think you cant return the same mapping because if one of them unmaps it, the other will silently lose access
01:09 mattst88: since this is HURD, shouldn't you have a pciaccess process that arbitrates access? :)
01:09 damo22: the only thing i could think of was dropping the check entirely
01:09 mattst88: sorry, Hurd. (I hate when people write MESA, so I shouldn't mistakenly write HURD)
01:10 damo22: im not too fussed what you call it
01:10 mattst88: more my neurosis than anything :)
01:11 imirkin: i get pretty annoyed by MESA too for some reason
01:11 damo22: i guess i could ensure unmapping is done correctly so only one mapping is present at any given time
01:11 MrBIOS: hey folks
01:11 damo22: but is there really a problem with having duplicate mappings?
01:12 mattst88: I don't know, myself
01:12 mattst88: do you know why there are multiple (attempted) mappings?
01:12 mattst88: MrBIOS: hi!
01:12 MrBIOS: hey, how goes?
01:12 damo22: yes, one user maps it for probing and then doesn't unmap, then it tries to use the pci device and remaps it
01:13 damo22: i kept getting 0x0
01:13 damo22: on the second mapping
01:13 mattst88: damo22: so there's a driver or something that maps, probes something, and fails to unmap?
01:13 damo22: yes
01:14 damo22: its quite convoluted on hurd
01:14 mattst88: okay, yeah, I'd guess figuring out why it's not unmapping would be my first step
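For context on the alternatives being weighed above: one way to safely "return the existing mapping" is to reference-count mappings keyed on the region/map_flags pair, so a second map() call shares the first mapping and an unmap by one user can't pull it out from under the other. A minimal sketch with hypothetical names — not libpciaccess's actual code:

```c
#include <stddef.h>

/* Hypothetical bookkeeping, not libpciaccess's real structures. */
struct mapping {
    void     *addr;       /* CPU address returned by mmap() */
    unsigned  region;     /* BAR index */
    unsigned  map_flags;  /* flags the mapping was created with */
    int       refcount;   /* outstanding map() calls */
};

static struct mapping maps[16];
static size_t num_maps;

void *map_region(unsigned region, unsigned map_flags)
{
    /* reuse a live mapping with the same region/flags pair */
    for (size_t i = 0; i < num_maps; i++) {
        struct mapping *m = &maps[i];
        if (m->refcount > 0 && m->region == region &&
            m->map_flags == map_flags) {
            m->refcount++;
            return m->addr;
        }
    }
    /* ... otherwise actually mmap() the BAR, record it with
     * refcount = 1, and return the new address ... */
    return NULL;
}

void unmap_region(void *addr)
{
    for (size_t i = 0; i < num_maps; i++) {
        if (maps[i].refcount > 0 && maps[i].addr == addr) {
            /* only the last user actually munmap()s */
            if (--maps[i].refcount == 0) {
                /* munmap() goes here */
            }
            return;
        }
    }
}
```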
01:14 MrBIOS: Might anybody here be interested in some COVID-19 contract work? I’m looking for someone who can investigate the feasibility of writing an AMD Geode driver (yes, _that_ Geode)
01:14 MrBIOS: DRM driver, that is.
01:15 damo22: MrBIOS: check coreboot src
01:15 imirkin: not a lot of those geodes running around...
01:15 damo22: i think they threw out the geodes from the tree
01:15 mattst88: yeah, I want to know what geode hardware is still in use that warrants spending money on it :)
01:15 MrBIOS: imirkin: there are actually tens of thousands of them running around, at a bare minimum.
01:16 mattst88: XO 1 :)
01:16 MrBIOS: mattst88: OLPC. They sold two million of them.
01:16 damo22: but there is probably support for the gfx in the tree
01:16 imirkin: oh, OLPC used geode? didn't know that
01:16 mattst88: MrBIOS: I've got a stack of them in my garage...
01:16 MrBIOS: imirkin: yes, the original model, the XO-1, was Geode based
01:16 MrBIOS: mattst88: so do I. I probably have you beat. I have about 200
01:16 mattst88: though I think I don't have the one with Geode; unclear if I misremembered receiving one or what
01:17 imirkin: MrBIOS: you have a big garage.
01:17 mattst88: MrBIOS: and I'm glad you do!
01:17 MrBIOS: imirkin: office, but yes :)
01:17 mattst88: so the contract work is for XO laptops?
01:17 imirkin: MrBIOS: how did your thing go for getting power-be working?
01:17 imirkin: i remember we talked about that like ... 3-5 years ago?
01:18 MrBIOS: imirkin: it went okay, for a while
01:18 MrBIOS: more like six, yes
01:18 imirkin: time flies
01:18 mattst88: when you're stuck indoors
01:18 MrBIOS: the hardware I sent to the developer resulted in functional r600g and Mesa patches
01:19 mattst88: guy that worked for RH and did some pixman stuff for a while?
01:19 MrBIOS: mattst88: so, these days, there’s mainline support for the XO-1 and XO-1.5
01:19 mattst88: MrBIOS: oh, that's good to hear
01:19 MrBIOS: mattst88: yep, that’s him
01:20 MrBIOS: anyways, there’s an old directfb geode driver
01:20 MrBIOS: of course, directfb won’t even compile with modern gcc
01:20 imirkin: if i didn't have 100 other things i was working on, sounds like it could be easy and fun to do
01:20 mattst88: yeah, I bet it would be simple
01:21 MrBIOS: BTW, the OLPCs use OpenFirmware, not a legacy PC BIOS.
01:21 mattst88: I hope someone is able to take you up on your offer
01:21 MrBIOS: https://www.coreboot.org/OpenVSA
01:22 mattst88: MrBIOS: does the OLPC Association or Foundation still exist?
01:22 MrBIOS: mattst88: yes, barely.
01:23 MrBIOS: OLPC Foundation does exist, they effectively service legacy customers. After ~2012, they transitioned to selling re-badged Chinese ODM netbooks.
01:23 MrBIOS: They shut down their office in Miami not too long ago, maybe a year, and one of their oldest employees left
01:24 MrBIOS: OLPC Foundation has been owned/financed by a Nicaraguan banker, bizarrely enough.
01:24 imirkin: i was at the media lab when all that stuff was getting kicked off
01:24 MrBIOS: imirkin: you probably know Walter Bender, then
01:25 imirkin: nope, didn't know too many of the people involved
01:25 imirkin: negroponte left to head it, that's the main thing i remember
01:25 mattst88:contracted for OLPC 2011~2012
01:26 imirkin: that explains why you have a garage-full of them then :)
01:27 MrBIOS: mattst88: I have a good ~5-10 prototypes I have been given over the years. I picked some up from ‘dilinger’ in Seattle a number of years ago
01:27 MrBIOS: 3 or 4
01:27 mattst88: imirkin: yep
01:27 mattst88: (https://mattst88.com/blog/2012/07/06/My_time_optimizing_graphics_performance_on_the_OLPC_XO_1,75_laptop/)
01:27 MrBIOS: https://www.media.mit.edu/people/walter/publications/ is Walter. He was a founding member of MIT Media Lab, and ran it for a while.
01:27 MrBIOS: http://news.mit.edu/2000/medialab-0927
01:27 MrBIOS: mattst88: nice, are most of your OLPCs 1.75s?
01:28 mattst88: MrBIOS: yeah
01:28 MrBIOS: mattst88: some guy came out of the woodwork a year or two ago and did some kernel dev work for 1.75
01:28 mattst88: I thought I had an XO1 and a 1.5, but IIRC when I looked last I couldn't find one or the other, so now I'm not sure I ever had both
01:28 MrBIOS: mattst88: https://www.phoronix.com/scan.php?page=news_item&px=OLPC-XO-1.75-Linux-5.4
01:29 mattst88: very cool
01:29 mattst88: I was able to get some gcc patches from Marvell unstuck (and ultimately upstreamed) by saying "OLPC needs this" -- that was pretty fun
01:30 mattst88: ugh, that's the XO 1.75 that I gave to Michael and then he wrote a bad benchmarking article about
01:31 MrBIOS: Anyways, these days, the XO-1 is still usable, albeit grossly underpowered. My goal is to graft low-end devices, such as the Pi Zero/Pi3/Pi4/etc., onto the original XO-1 hardware, via usbcdc/eth, basically using it as a glorified KVM/dumb terminal, for I/O
01:31 MrBIOS: mattst88: yeah dumb, benchmarkers gotta benchmark
01:31 damo22: he usually benchmarks power usage of mobos with hugeass gfx cards attached
01:31 imirkin: MrBIOS: hm. i was there like 2002-2005ish
01:31 mattst88: damo22: lol
01:31 MrBIOS: yeah, I’m familiar with his schtick
01:32 imirkin: but i don't recognize him.
01:37 mattst88: MrBIOS: FWIW, it might be worth emailing the dri-devel@ mailing list as well
01:38 MrBIOS: it might
02:27 mareko: krh: my plan isn't to remove classic from Mesa. My plan is to move classic to src/mesa_classic, so that nothing changes from the build, install, and packaging perspective
04:05 krh: mareko: yeah I know... I think that's not ideal either
04:08 jekstrand:sends mail
05:11 mareko: krh: dude, a controversial idea can never have an ideal solution :)
07:27 tango_: is this the place to ask questions about clover? I'm seeing some very strange behavior with an RX580, amdgpu driver, on debian sid (kernel 5.4.0, mesa 20.0.2) using the `reduction` test from this repository https://github.com/Oblomov/cldpp
07:28 tango_: basically, the initial data upload (host -> device) is very fast (as if it was zero-copy, and simply mapping the host memory into the gpu memory space), and the throughput of the actual reduction is abysmal (12 to 14GB/sec)
07:29 tango_: _moreover_, when the host data pointer is freed at the end, the program segfaults, which doesn't happen with any other opencl platform (FLOSS and not)
07:38 tango_: (the question being: as the author of the test, am I doing something wrong that only hits on amdgpu, or is amdgpu doing something funky based on the way the buffers are set up?)
08:09 HdkR: tango_: Mapping as a host visible pointer or device pointer?
08:10 tango_: HdkR: well, I wouldn't know what's actually happening behind the scenes, I can only see what's happening “superficially”
08:11 HdkR: When you clCreateBuffer are you using the *_HOST_PTR bits?
08:11 tango_: I allocate and initialize on host, then I create an opencl buffer with CL_MEM_READ_ONLY, no HOST_PTR bits
08:11 tango_: and then I'm calling clEnqueueWriteBuffer
08:11 tango_: this is extremely fast
08:11 tango_: wait, I'll push the slightly more verbose version
08:12 HdkR: Fun, sounds like it is still mapping on the host. Since 12-14GB/s is pretty good for a PCIe device streaming memory over the PCIe bus :P
08:12 tango_: (ok, pushed)
08:12 tango_: exactly
08:13 tango_: I'm wondering if I could force a migration with clEnqueueMigrateMemObjects, but also why a Write would be turned into a mapping
08:13 tango_: and why it would cause a segfault further down the line
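For reference, the host-side pattern tango_ describes looks roughly like the sketch below — not the actual cldpp code; ctx and queue stand in for a valid context and queue, and error handling is abbreviated. clEnqueueMigrateMemObjects with flags = 0 requests migration to the device that owns the queue:

```c
#include <stdlib.h>
#include <CL/cl.h>

cl_mem upload(cl_context ctx, cl_command_queue queue,
              const float *host_data, size_t n)
{
    cl_int err;

    /* device buffer, no *_HOST_PTR flags */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_ONLY,
                                n * sizeof(float), NULL, &err);
    if (err != CL_SUCCESS)
        return NULL;

    /* blocking write: should be a real host->device copy, not a map */
    err = clEnqueueWriteBuffer(queue, buf, CL_TRUE, 0,
                               n * sizeof(float), host_data,
                               0, NULL, NULL);
    if (err != CL_SUCCESS) {
        clReleaseMemObject(buf);
        return NULL;
    }

    /* optional: explicitly migrate to the device owning 'queue' */
    clEnqueueMigrateMemObjects(queue, 1, &buf, 0, 0, NULL, NULL);

    return buf;
}
```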
08:17 tango_: HdkR: BTW, I was looking at the mesa issue tracker and https://gitlab.freedesktop.org/mesa/mesa/-/issues/2702 sounds suspiciously like the same issue
08:21 HdkR: Sadly I'm not versed in Clover to know if there is an explicit problem with memory management there
09:22 tango_: hm how do I tell which gallium driver is being used for a specific piece of hardware?
09:23 tango_: (by clover)
09:23 bnieuwenhuizen: does clinfo not give you a name?
09:26 tango_: bnieuwenhuizen: apparently not: Device Name Radeon RX 580 Series (POLARIS10, DRM 3.35.0, 5.4.0-4-amd64, LLVM 9.0.1)
09:26 tango_: I was actually a bit surprised by this, now I was looking for where the string is set, to see if I could add it
09:27 tango_: this is starting to feel a bit like the yak-shaving scene in Malcolm in the Middle
09:27 tango_: (this one, for reference: https://www.youtube.com/watch?v=AbSehcT19u0)
09:32 tango_: ah, AMD_DEBUG=info,compute spews a lot of info
09:34 bnieuwenhuizen: tango_: with that name you can be pretty sure it is "radeonsi"
09:34 tango_: bnieuwenhuizen: thanks, I was starting to suspect as much
09:35 tango_: still would be nice if it was retrievable in a relatively easy way
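For the record, what clinfo prints comes from the standard queries, which only expose the marketing name and version strings — the gallium driver name ("radeonsi") has to be inferred from the chip codename. A sketch, assuming dev is a valid cl_device_id (return codes ignored for brevity):

```c
#include <stdio.h>
#include <CL/cl.h>

void print_device_strings(cl_device_id dev)
{
    char name[256], version[256];
    clGetDeviceInfo(dev, CL_DEVICE_NAME, sizeof(name), name, NULL);
    clGetDeviceInfo(dev, CL_DEVICE_VERSION, sizeof(version), version, NULL);
    /* e.g. "Radeon RX 580 Series (POLARIS10, ...)" -- no field
     * names the gallium driver directly */
    printf("%s / %s\n", name, version);
}
```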
09:48 tango_: ah this is even more interesting: if I run up to 32*1024*1024 elements, I get up to 120GB/sec, even though the upload is way too fast to be the actual transfer timing
09:48 tango_: and there's no segfault
09:49 tango_: however if I do more (e.g. 64*1024*1024), the performance drops again to “pci-express” bandwidths
09:49 tango_: but I don't get a segfault on free
10:52 tango_: (eh, “detected potential spam”. maybe because I linked the cldpp repo?) anyway, submitted as #2703
13:50 arora: jekstrand: Hey, I know you are generally unavailable on weekends, but it's urgent so I am leaving a message here. Reply whenever you can. I received feedback from Trevor Woerner on the draft, and he wants me to add a week-by-week breakdown of the project, from start to finish, and also deliverables for each week. The deadline for submission is 31st March. I understand that it's highly implementation
13:51 arora: based, but it would be helpful if you could provide some vague idea for like a week-by-week breakdown.
15:22 ldiamond: I opened this issue yesterday: https://gitlab.freedesktop.org/mesa/mesa/-/issues/2702
15:23 ldiamond: I was hoping I could provide more information and maybe do some testing of potential patches. Is there a guide on setting up a dev environment for mesa and have vaapi working?
15:27 pepp: ldiamond: basically you need to build mesa (and drm if the version from your distro isn't recent enough). Then make sure LIBVA_DRIVERS_PATH variable points to your install dir
15:28 pepp: (ie: I'm building mesa and installing it using the /opt/mesa prefix. And I have LIBVA_DRIVERS_PATH=/opt/mesa/lib/x86_64-linux-gnu/dri)
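A quick way to sanity-check that the override is picked up could look like this — a sketch, with the device node and install prefix as examples only:

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <va/va.h>
#include <va/va_drm.h>

int main(void)
{
    /* must be set before libva loads a driver */
    setenv("LIBVA_DRIVERS_PATH", "/opt/mesa/lib/x86_64-linux-gnu/dri", 1);

    int fd = open("/dev/dri/renderD128", O_RDWR);
    if (fd < 0)
        return 1;

    VADisplay dpy = vaGetDisplayDRM(fd);
    int major, minor;
    if (vaInitialize(dpy, &major, &minor) == VA_STATUS_SUCCESS) {
        /* vendor string should identify the freshly built driver */
        printf("VA-API %d.%d: %s\n", major, minor,
               vaQueryVendorString(dpy));
        vaTerminate(dpy);
    }
    close(fd);
    return 0;
}
```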
15:28 ldiamond: Oh, you actually responded to that specific issue.
15:28 ldiamond: I'm assuming you don't have a RX 580 handy
15:28 pepp: yes
15:29 pepp: I have an RX590, I'll test tomorrow. But testing on a laptop (raven1) I get similar performance (speed = 0.5x)
15:32 ldiamond: I unfortunately can't test it on Windows since I have not had a windows pc in years.
15:37 ldiamond: unless I can usb boot it, we'll see
15:41 ldiamond: I will test clearlinux and try on windows too if I can.
18:51 jekstrand: arora: Ugh... Yeah, that's a tight deadline...
18:52 jekstrand: arora: I don't think I'm the one who should be making the week-by-week breakdown. I think the reason why they want a week-by-week is because they want to ensure that you've thought through the problem and know what you're getting into so I think you should be the one to write it.
18:53 jekstrand: arora: Note that you don't actually have to follow it exactly. The whole week-by-week thing will get blown away by week 2, I'm sure.
18:54 jekstrand: arora: However, it provides an indication that you know roughly what you're getting into and that we've planned about the right amount of work for a summer.
18:54 jekstrand: I'm happy to provide feedback and help you think through things but I think this one needs to be yours.
18:57 arora: jekstrand: Hey, glad that you replied :). Oh yea, about that, I, in a bit of an adrenaline rush, went ahead and made a week-by-week breakdown and a lot of other changes as well lol.
18:57 arora: It's here: https://drive.google.com/file/d/1t5YjO7m01dFlwOed9inVKTvnvb4cLaEW/view?usp=sharing
19:00 arora: jekstrand: Except the class-taking bit, I have scheduled everything into the draft
19:09 jekstrand: arora: I think it probably needs to be more specific. Right now it's just 2 weeks of "do the thing", one week of "buffer", and a blog week.
19:10 jekstrand: arora: So, for instance, for the third milestone, we'd like to get the "final" layer config file and all merged by the end.
19:11 jekstrand: So we need to think about when the final MR needs to be posted in order to allow plenty of time for review.
19:11 arora: Oh ok, right.
19:12 jekstrand: Also, it will almost certainly take more time for review than you think. :-)
19:13 jekstrand: So plan lots
19:13 jekstrand: And then work backwards from there.
19:13 jekstrand: If you think it'll take a month for review, for instance, then that means you have to have stuff working by that point.
19:13 jekstrand: You'll still do engineering work during that month because the code will be constantly changing to adjust to the review feedback.
19:14 jekstrand: What all do you need to have working by then?
19:15 jekstrand: One thing that may be helpful is to set aside the schedule for a bit and just come up with a list of every feature you think there will have to be.
19:15 jekstrand: Then you can try to put it all in order to get a sense of how things will have to flow.
19:16 jekstrand: Again all of this will get thrown out by week 2 but it's good to have sat down and thought through the problem as best as you can.
19:17 arora: jekstrand: How long does it usually take for reviews?
19:17 jekstrand: I would expect the config file format review to take at least a month, probably more.
19:19 jekstrand: There are going to be lots of questions such as "How do you specify which GPU to select?" and "How do we specify detection criteria?"
19:20 jekstrand: I expect that there will be far more discussion about those types of issues than about the code itself.
19:32 arora: Ok, I will try to work backwards from the review month.
19:33 arora: How do I break down the understanding and improving part?
19:33 arora: Should I like mention the faults in other projects and possibly the function names?
19:35 jekstrand: Venemo: Trying to correct the internet? :-P
19:35 jekstrand: arora: First off, I really don't think the understanding and improving part should take a month
19:36 arora: okay
19:37 jekstrand: I'm not sure how airlied and bnieuwenhuizen would want to handle things. One of the first things that needs to happen is landing that MR.
19:37 jekstrand: I went through and gave a bunch of comments on the first patch
19:38 jekstrand: I don't know if bnieuwenhuizen wants to make the adjustments or if he'd want you to take over it.
19:38 jekstrand: I've not looked at airlied's second patch yet though.
19:38 bnieuwenhuizen: what adjustments am I supposed to have an opinion on?
19:40 jekstrand: bnieuwenhuizen: The device selection layer MR
19:46 bnieuwenhuizen: oh .. totally missed that the first patch is actually mine ...
19:50 airlied: jekstrand: I might squash them
19:50 airlied: I fixed lots of stuff in my patch :)
19:50 airlied:is saying that like finding time to do it isn't near impossible
19:52 airlied: bnieuwenhuizen: any objections to squashing and just leaving your name in the commit msg? :)
19:54 bnieuwenhuizen: airlied: nope
19:59 airlied:will try and reconcile it a bit today, keep on juggling, keep on juggling :-P
20:02 arora: jekstrand: Umm, what does this MR mean for me and the project?
20:02 jekstrand: arora: Merge request
20:03 arora: oh no, not in that way, I mean does landing that MR affect the gsoc project?
20:03 arora:feels that timezones are hard
20:05 jekstrand: arora: I think we should try to land it. I was hoping that the first step could be you taking ownership of the MR and addressing any review feedback.
20:05 MrBIOS: do you guys participate in GSoC by chance?
20:05 jekstrand: And then everything else will build on it
20:05 jekstrand: MrBIOS: We have, from time to time
20:05 MrBIOS: this year?
20:06 jekstrand: X.org also has their own EVoC thing
20:06 jekstrand: MrBIOS: There are a few project proposals this year
20:10 arora: jekstrand: Alright, I will work for a bit. Any other changes in the draft?
20:13 jekstrand: arora: Not really. I don't have a lot of experience GSoC mentoring so I don't really know what's expected
20:13 airlied:has enough experience to know I'm not good at it :)
20:22 arora: jekstrand: Does that MR allow the user to provide GPU name?
20:25 jekstrand: arora: Not yet
20:57 arora: jekstrand: It's almost 3am here, here's some changes, https://github.com/ashok-arora/gsoc-vulcan-gpu/blob/master/xorg-final.pdf
21:10 arora: Provide your feedback, I will see them in the logs, I am gonna go to sleep now.
21:30 Venemo: jekstrand: sometimes I try :O
22:32 airlied: bnieuwenhuizen: one q, the instance unordered_map locking, I'd likely get it wrong (i.e. not use C++ correctly)
22:32 airlied: any suggestions welcome :)
22:34 HdkR: std::scoped_lock? :)
22:34 bnieuwenhuizen: I was about to suggest std::lock_guard: https://en.cppreference.com/w/cpp/thread/lock_guard
22:35 bnieuwenhuizen: not that it matters much for a single mutex :)
22:36 jekstrand: And here I was going to suggest you ditch C++ and use c11_thread.h for the mutex and util/hash_table.h
22:36 jekstrand: :-P
22:36 airlied: jekstrand: I could take that suggestion, not sure C++ wins much here :)
22:36 bnieuwenhuizen: that is the other option :P
22:36 jekstrand: If we're going to be doing any serious string manipulation, std::string might buy us something
22:37 jekstrand: But, yeah, the only real use of C++ is a single unordered_map. Doesn't seem all that necessary. :-)
22:37 bnieuwenhuizen: yeah
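A plain-C version of the "mutex + map" in question might look like the sketch below: standard C11 <threads.h> for the lock (Mesa wraps this in its own c11 threads header) and a toy fixed-size table standing in for util/hash_table.h. All names here are made up:

```c
#include <threads.h>
#include <stddef.h>

struct instance_entry {
    const void *key;    /* the VkInstance handle in a real layer */
    void       *data;   /* per-instance state */
};

static mtx_t instance_lock;
static struct instance_entry instances[64];
static size_t num_instances;

static void instances_init(void)
{
    mtx_init(&instance_lock, mtx_plain);
}

static void instance_insert(const void *key, void *data)
{
    mtx_lock(&instance_lock);
    /* bounds check omitted for brevity */
    instances[num_instances++] = (struct instance_entry){ key, data };
    mtx_unlock(&instance_lock);
}

static void *instance_lookup(const void *key)
{
    void *data = NULL;
    mtx_lock(&instance_lock);
    for (size_t i = 0; i < num_instances; i++) {
        if (instances[i].key == key) {
            data = instances[i].data;
            break;
        }
    }
    mtx_unlock(&instance_lock);
    return data;
}
```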
22:37 jekstrand: Better yet, we should re-write it in Rust. :-P
22:37 airlied: jekstrand: once you write anv in it first :-P
22:38 jekstrand: airlied: Don't tempt me!
22:38 airlied: it would be all fun and games until you had to link the compiler :-p
22:38 jekstrand: airlied: That said, I have been contemplating how we can start using Rust in mesa. Something nice and self-contained like a layer is a very good place to start.
22:39 airlied: the other ssw vulkan project for risc-v uses rust I believe
22:39 jekstrand: If Rust had more competent link-with-C support, I would totally start rewriting bits in Rust
22:40 bnieuwenhuizen: as someone who doesn't really know rust that much, what is missing in C-linking support?
22:41 jekstrand: bnieuwenhuizen: It can link with c, technically. However, you have to re-write your headers to produce Rust things; it can't just consume them.
22:41 jekstrand: Also, the Rust build system, cargo, assumes that it's the only build system in play and isn't really designed to integrate into anything
22:42 jekstrand: So you either have to invoke cargo from meson (you can't just call rustc) or meson from cargo.
22:42 jekstrand: So, yeah, we could probably start using rust, but we would have to keep things very tightly contained because the Rust/C boundary is way more painful than the C/C++ boundary
22:42 ascent12: Meson has its own Rust "support", but due to the nature of Rust and how everything is heavily tied to cargo, it really is a second class citizen.
22:43 bnieuwenhuizen: heh, that is likely going to get fun with generated source files?
22:43 ascent12: i.e. you basically have no way of using Rust dependencies.
22:43 jekstrand: bnieuwenhuizen: Yeah....
22:44 jekstrand: bnieuwenhuizen: So we could, for instance, rewrite ISL in Rust. It's got a fairly small interface (still dozens and dozens of functions, structs, and enums) and is very self-contained.
22:45 jekstrand: However, there's no way we could bind all of NIR to rust and start writing NIR passes in Rust.
22:45 bnieuwenhuizen: WSI another candidate?
22:45 bnieuwenhuizen: or is that annoying because external deps?
22:45 jekstrand: WSI could be done, probably.
22:45 jekstrand: I should clarify what ascent12 said about external deps
22:46 bnieuwenhuizen: (I meant here external deps == C headers for the windowing system)
22:46 jekstrand: Cargo uses the NPM/Maven model of dependencies where you specify them in your manifest file and cargo automatically downloads the exact version you requested, builds it, and statically links it into your project.
22:47 jekstrand: Because meson and, more specifically, linux development doesn't work that way (meson does have wraps which are similar), it can't really pull in crates (cargo's packages) and use them easily.
22:47 jekstrand: bnieuwenhuizen: Yeah, binding Rust to X11 or Wayland could get "fun" though I suspect someone somewhere has written bindings.
22:50 ascent12: Rust's strict ownership model really doesn't play nicely with the way libwayland works. Bindings exist, but they essentially reimplemented it all, although I still think it can optionally wrap over libwayland.
22:50 ascent12: No idea about X11.
22:51 jekstrand: Looks like there are at least a couple x11 binding projects
22:51 jekstrand: Yeah, that's the other problem. Virtually any Rust binding to C code is going to have to use "unsafe" all over the place and may have a lot of trouble mapping Rust's ownership model to C.
22:52 bnieuwenhuizen: jekstrand: the ownership model would also be the question I'd have around the vulkan API
22:52 jekstrand: ascent12: I would think Wayland would be mostly ok. You'd just end up more-or-less building a smart pointer wrapper for everything.
22:53 jekstrand: bnieuwenhuizen: Yes, that gets tricky. Fortunately, however, virtually everything in Vulkan is effectively immutable which helps a great deal.
22:53 jekstrand: Though internally it may not be immutable so that's bad
22:53 bnieuwenhuizen: (+ how to deal with allocators)
22:53 ascent12: I know libwayland-server was where a lot of the real issues were, with all of the container_of type things. libwayland-client probably isn't quite as bad.
22:53 jekstrand: You'd have to throw the allocators away most likely
22:53 jekstrand: ascent12: Oh, yeah, wrapping libwayland-server would be a disaster
22:53 jekstrand: But libwayland-client should be mostly OK
22:54 bnieuwenhuizen: jekstrand: are allocators actually in common use anywhere?
22:54 jekstrand: bnieuwenhuizen: I don't know
22:54 jekstrand: bnieuwenhuizen: No one uses them in their compilers, I can tell you that.
22:54 jekstrand: Good luck trying to make LLVM use the allocator. :-P
22:55 bnieuwenhuizen: yeah
22:55 bnieuwenhuizen: I think CTS has some tests to check allocation failure
22:55 jekstrand: It does, but they only warn if you never call the allocator
22:55 bnieuwenhuizen: and I think allocators were also explicitly the mechanism to control pipeline cache sizes?
22:55 bnieuwenhuizen: (not that I've ever seen anyone use it for that)
22:57 bnieuwenhuizen: I guess it can't be much worse than what virvulkan will do internally anyway
22:58 jekstrand: I think in some engines the allocators could actually matter.
22:59 jekstrand: In some of the more crazy threading systems that are used in game engines, calling malloc() from one of their light-weight threads is a no-no
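To illustrate what a VkAllocationCallbacks hookup looks like: below is a toy allocator that just counts the driver's host allocations, the kind of routing an engine's own memory system would do for real. It ignores alignment requests beyond malloc's default, which is acceptable for a sketch but not for a real engine:

```c
#include <stdio.h>
#include <stdlib.h>
#include <vulkan/vulkan.h>

static void *count_alloc(void *user, size_t size, size_t align,
                         VkSystemAllocationScope scope)
{
    (void)align; (void)scope;          /* toy: alignment ignored */
    (*(size_t *)user)++;               /* count every allocation */
    return malloc(size);
}

static void *count_realloc(void *user, void *orig, size_t size,
                           size_t align, VkSystemAllocationScope scope)
{
    (void)user; (void)align; (void)scope;
    return realloc(orig, size);
}

static void count_free(void *user, void *mem)
{
    (void)user;
    free(mem);
}

int main(void)
{
    size_t n_allocs = 0;
    VkAllocationCallbacks cb = {
        .pUserData = &n_allocs,
        .pfnAllocation = count_alloc,
        .pfnReallocation = count_realloc,
        .pfnFree = count_free,
    };
    VkInstanceCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
    };
    VkInstance inst;
    if (vkCreateInstance(&info, &cb, &inst) == VK_SUCCESS) {
        vkDestroyInstance(inst, &cb);
        printf("driver made %zu host allocations\n", n_allocs);
    }
    return 0;
}
```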
23:19 krh: Librsvg might be a good place to look. Federico, who maintains it now, pulled in Rust and rewrote tiny bits at a time
23:19 krh: He has a good blog series about it
23:21 airlied: jekstrand, bnieuwenhuizen: pushed a cleaned up version using C only
23:23 krh: https://people.gnome.org/~federico/blog/librsvg-is-almost-rustified.html
23:30 MrBIOS: airlied: where?
23:34 airlied: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/1766