07:14bentiss: I'm tempted to just switch on fastly for gitlab.fd.o now. I've been running it locally for the past week and saw very few glitches (once I had an issue where things didn't refresh correctly). I'll set a short TTL so if I get a storm of negative feedback we can switch back off
07:25bentiss: and it's done, all traffic is going through Fastly ATM
07:42bentiss: daniels: I wonder if I should bypass fastly on the runners we host
07:43mupuf: bentiss: let's hope this will also reduce the load thanks to the bot protection
07:43bentiss: well, AFAIU, bot protection is not enabled, yet
07:44mupuf: I see
07:44bentiss: but for starters, the cache hit ratio is 50/60%, so that removes some load
07:45bentiss: origin offload over the past 15 minutes, a whopping 4.36% on average :)
07:46mupuf: origin offload?
07:46bentiss: the load absorbed by the CDN which in theory is not forwarded to your server
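(Aside on the two metrics above, assuming Fastly's origin offload is byte-based rather than request-based:

    origin offload ≈ bytes served from the edge / total bytes served

which would explain how a 50-60% request hit ratio can coexist with only 4.36% offload: the misses -- archives, clones, large artifacts -- carry most of the bytes. That reading is an assumption, not something stated here.)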
07:46mupuf: the difference in gitlab performance between authenticated and unauthenticated access is staggering :D
07:48bentiss: really?
08:19slomo: is ssh on gitlab.freedesktop.org broken?
08:24bentiss: slomo: you have to use ssh.gitlab.freedesktop.org from now on
08:25slomo: bentiss: by manually changing the origins, or is that supposed to happen automatically?
08:25bentiss: (the TL;DR of https://gitlab.freedesktop.org/freedesktop/freedesktop/-/issues/2076#note_2831847)
08:25bentiss: slomo: either you do it by hand, or you add a .ssh/config snippet so it's transparent for all of your repos
08:26slomo: just saw that, all answers already prepared in the issue :) thanks and sorry for the noise
08:26bentiss: no worries
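(For reference, the .ssh/config snippet being discussed -- the same one jani pastes further down -- keeps existing remotes working unchanged by rewriting the host transparently:

    Host gitlab.freedesktop.org
        Hostname ssh.gitlab.freedesktop.org
)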
08:57jasuarez: Is there any connection problem with gitlab? can't push
08:58jasuarez: connect to host gitlab.freedesktop.org port 22: Network is unreachable
08:58jasuarez: oh, I'm reading we need to use the ssh. prefix
08:59bentiss: jasuarez: yep, exactly
09:20bentiss: I've updated the runner config and am restarting the runners, hopefully the currently running jobs will be preserved
09:21alatiera: I was just about to ask about exactly that
09:21bentiss: eric_engestrom: ^^, it would be nice if we could have an inotify on the config file to reload it dynamically
09:21bentiss: alatiera: runners or ssh?
09:21alatiera: one of my jobs quit with no output and no errors in the logs and I was wondering if I had broken it or the runner aborted
09:21bentiss: sorry :(
09:21alatiera: no worries
09:21alatiera: it's not even an issue, retries are free
09:21alatiera: kinda
09:22bentiss: the runners in theory are gracefully shutting down the running jobs, but sometimes they are not :(
09:22alatiera: is almost done with nuking root out of docker build jobs
09:22bentiss: nice
09:22alatiera: bentiss fyi https://gitlab.freedesktop.org/freedesktop/ci-templates/-/merge_requests/223
09:23alatiera: the first commit
09:24bentiss: right, I saw this one float by
09:33alatiera: I've been thinking how we could go about making a user by default in the templates
09:34alatiera: and I've thought of something like `FDO_EXEC_SCRIPT_AS_USER: true` and also making it the default, and then we first create a user, probably in sudoers, and then run the EXEC
09:34alatiera: it would break the build, but the old behavior could be restored by setting it to false, and everyone porting and all new setups would hopefully default to non-root users
09:36bentiss: technically breaking the build is not a problem for CI-templates, you're not supposed to use 'main', only a pinned sha
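(A rough sketch of how the proposed knob could look from a project's .gitlab-ci.yml. FDO_EXEC_SCRIPT_AS_USER is only the name floated above, not an existing ci-templates variable, and the job name, distribution and script path are invented for illustration:

    build-container:
      extends: .fdo.container-build@debian
      variables:
        FDO_DISTRIBUTION_VERSION: 'bookworm'
        FDO_DISTRIBUTION_TAG: '2025-01-15-example'
        FDO_DISTRIBUTION_EXEC: 'ci/install-deps.sh'
        # proposed: default true -> create an unprivileged (sudo-capable)
        # user and run FDO_DISTRIBUTION_EXEC as that user instead of root
        FDO_EXEC_SCRIPT_AS_USER: 'false'   # opt back into the old root behaviour
)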
09:37eric_engestrom: zmike, robclark: I think these s3 downloads (or uploads) being cut off are because of https://status.hetzner.com/incident/da6b6285-b8a3-450f-b54b-19849ee9a09e
09:38eric_engestrom: bentiss: re- inotify, agreed: https://gitlab.freedesktop.org/eric/gitlab-runner-priority/-/blob/main/TODO ^^
09:39bentiss: you don't need to restart actually, just re-read the config before creating a new job
09:39alatiera: yea that's why I was thinking we could change the default behavior as long as the old one still works
09:39bentiss: so currently running jobs are still on the old config, but new ones use the new one
09:40eric_engestrom: depends which part of the config actually
09:40eric_engestrom: changing the queues needs a re-creation of the runners
09:41bentiss: it just needs to be documented
09:41bentiss: like the fleeting plugin: if you change the schedule, that's reloaded, anything else is not
09:41eric_engestrom: "documented" that it doesn't auto-reload yet?
09:41eric_engestrom: ack
09:42bentiss: so we just need to say "changes in runners, dockers are autoreloaded"
09:44eric_engestrom: I don't follow?
09:45bentiss: there are multiple sections in the config.toml. Some changes are just a reload and a new parameter to `gitlab-runner run-single`, like the host override or the cpu-set, but some (like changing the priority, which means the runners need to be re-registered) are harder to implement
09:46bentiss: so basically, anything that is a parameter to `gitlab-runner run-single` is documented as hot-reloadable, while the rest is not
09:46eric_engestrom: ok, I see what you mean
09:46eric_engestrom: so yeah, gitlab-runner's config.toml is read on each run, but the priority wrapper's config.toml is read only at the start
09:47bentiss: something like that
09:48eric_engestrom: is that true actually? I just re-read the code (remember I wrote it almost 2 years ago 😅) and I think everything is read at the start, including gitlab-runner's config.toml
09:49eric_engestrom: but that should be easy enough to change
09:49bentiss: correct, only read once
09:49bentiss: I thought you meant "gitlab-runner's config.toml is read on each run, but the priority wrapper's config.toml is read only at the start" as the planned feature
09:51eric_engestrom: I was rephrasing what I understood you to be saying... I guess I wasn't phrasing it well myself xD
09:51bentiss: heh, no worries
09:52eric_engestrom: but yeah, as a first step, that should become true, and as a second step the priority wrapper's own config.toml should also be hot-reloaded
09:52eric_engestrom: I tried doing this just now and it's not as trivial as I thought it would be
09:53eric_engestrom: need to do other things, but I'll get back to that after
09:53bentiss: yeah, it's not super important
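(A minimal sketch of the "first step" discussed above -- re-reading gitlab-runner's config.toml just before each new job rather than watching it with inotify. The wrapper structure, config keys and loop are invented for illustration and are not the actual gitlab-runner-priority code; only `gitlab-runner run-single` and its --url/--token/--executor flags are real:

    import subprocess
    import tomllib  # Python 3.11+

    def spawn_next_job(config_path="config.toml"):
        # Re-read the config right before creating a new job: jobs already
        # running keep the old settings, the next one picks up any edits.
        with open(config_path, "rb") as f:
            runner = tomllib.load(f)["runners"][0]
        # Anything that ends up as a run-single parameter is effectively
        # hot-reloaded this way; changes that require re-registering the
        # runner (priority/queues) still need a restart.
        return subprocess.Popen([
            "gitlab-runner", "run-single",
            "--url", runner["url"],
            "--token", runner["token"],
            "--executor", runner["executor"],
        ])
)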
10:48jani: so I can't fetch gitlab repos, known issue?
10:52svuorela: jani: did you see the changed ssh address ?
10:54jani: svuorela: uh no?
10:57svuorela: jani: ssh.gitlab.freedesktop.org
10:59jani: svuorela: right, saw it now, adding this in .ssh/config did the trick without having to change umpteen repos
10:59jani: Host gitlab.freedesktop.org
10:59jani: Hostname ssh.gitlab.freedesktop.org
11:00jani: svuorela: thanks
11:02svuorela: you're welcome
12:07bentiss: Looks like all the archives are broken, they return a 503 -> https://docs.fastly.com/en/guides/segmented-caching
12:32bentiss: sigh, I guess I need to disable fastly :(
12:41dwfreed: oof
12:47bentiss: not just the archives, anything served that's bigger than 20MB... And I can't seem to enable this
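(Per the segmented-caching guide linked above, objects over Fastly's regular size limit only become cacheable once segmented caching is turned on in custom VCL, along the lines of:

    sub vcl_recv {
      # split large responses into separately cached chunks instead of
      # failing on anything over the object size limit
      set req.enable_segmented_caching = true;
    }

whether that is exposed for this service's configuration is the open question here.)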
12:48dwfreed: I know someone who works at fastly if you'd like me to ask them to point someone your way to provide configuration assistance
12:51bentiss: dwfreed: we have a contact there as well, but thanks :)
12:51dwfreed: good, didn't know what your support situation looked like, figured I'd offer
12:52bentiss: we basically have a dedicated contact who has already helped us a lot and whom we cc on support requests when needed ;)
12:52bentiss: FWIW, fastly is disabled; the DNS TTL is 5 min, so it shouldn't take long to resolve
13:07bentiss: email sent, we'll see what's missing before the next attempt
14:35valentine: eric_engestrom, mupuf: No kernel+rootfs jobs involved :P https://gitlab.freedesktop.org/Valentine/mesa/-/jobs/74016715
14:35valentine: The rootfs was exported and uploaded at the end of the debian/arm64_test-vk job: https://gitlab.freedesktop.org/Valentine/mesa/-/jobs/74016502#L6020
14:36demarchi: bentiss: is the hostname to ssh permanently changing to ssh.gitlab.freedesktop.org or not? If so we may need to adjust the manifest for drm-tip rather than instructing everybody to change their ssh config
14:39bentiss: demarchi: it will be permanent (it's temporarily back up, but it's way safer to have it on a dedicated DNS entry)
14:45eric_engestrom: valentine: nice! are you close to an MR, or is this a hacky proof of concept and you'll need time to turn it into a mergeable change?
14:46demarchi: bentiss: thanks... let me check what's needed on the drm-tip manifest besides changing the url (just changing it wouldn't re-configure a previously set-up env)
14:50valentine: eric_engestrom: for LAVA, pretty much the only thing left is to wire up the other jobs (and fix the DISTRIBUTION_TAG for each container) as far as I can see
14:50valentine: baremetal is a different story
14:50eric_engestrom: oh right, I forgot baremetal would also need to be fixed
14:51eric_engestrom: good luck :S
14:52eric_engestrom: I mean, maybe you can just keep the rootfs jobs around and have baremetal continue to use them, and lava moves on to your new solution?
14:52eric_engestrom: and we just delete the rootfs jobs when we delete the last baremetal farm
14:53eric_engestrom: or maybe someone will do the baremetal work before then, but at least you're not blocked by baremetal anymore
15:10mupuf: valentine: 🥳🥳🥳
15:11mupuf: Niiiiice,
15:11mupuf: 2025 is shaping up to be a good year for Mesa CI!
15:55daniels: eric_engestrom: yeah I’d be in favour of combining the rootfs & bm jobs and just leaving them there until we have no more bm
16:28valentine: I'm leaning that way too. Plus, we can drop the x86_64 rootfs job, which was the slowest anyway
17:56DemiMarie: alanc: I guess I assumed that the companies who make billions of dollars off of fd.o would be willing to put more effort into it.
18:06jenatali: Who makes billions off of fd.o?
18:08dwfreed: Google, for one
18:09dwfreed: RedHat, SUSE, Oracle
18:09alanc: I can't think of a single company making that much off fd.o
18:09dwfreed: not directly, no
18:09dwfreed: but without fd.o, how many Linux systems would have functional graphics and audio
18:09alanc: yes, those companies all make billions off their products, but for most of them the desktop is not what's driving that business
18:10dwfreed: Well, Google has ChromeOS, that probably makes a decent chunk
18:11dwfreed: I believe SUSE still sells SLED, though I'm sure it's less popular than SLES
18:14alanc: and corporations not financially supporting open source projects is a very well known problem no one has found a solution for - https://x.com/FFmpeg/status/1775178805704888726 was a public example last year
18:16alanc: but also, to DemiMarie's original point, even if all those companies contributed cash, it's unlikely they'd demand fd.o be run with the very high level of security she would like
18:18alanc: (and while I know someone like me, the fd.o admins, or other maintainers of widely used packages could effectively backdoor the world, moving from hosted services to colocated machines won't stop that)
18:41jenatali: I wish I could convince Microsoft to contribute more, but despite having huge profits, getting funding for things like that is challenging
23:34DemiMarie: alanc: it would protect against the previous user of that machine being able to do so, which is what I was thinking of