12:17alatiera[m]: jenatali: awesome to hear! nothing special, you can take a look at the setup here https://gitlab.freedesktop.org/gstreamer/gstreamer-project/-/merge_requests/4
12:17alatiera[m]: I'd say you'd mostly want to override the dns key, as the Windows DNS proxy thingy in docker has caused a couple of issues in the past
12:18alatiera[m]: oh and the other special thing is that we still use the kde gitlab-runner helper image, not sure if gitlab upstream fixed server2022 support for it yet
12:19alatiera[m]: Set-NetTCPSetting -SettingName Datacenter on the host also helps
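A minimal sketch of what those overrides might look like in a gitlab-runner config.toml for a docker-windows executor, assuming the usual [runners.docker] keys; the DNS servers, images, and helper image path below are placeholders, not the actual values from the gstreamer setup:

    [[runners]]
      name = "windows-docker-runner"            # placeholder name
      url = "https://gitlab.freedesktop.org"
      executor = "docker-windows"
      [runners.docker]
        # example base image, not necessarily what the jobs actually use
        image = "mcr.microsoft.com/windows/servercore:ltsc2022"
        # override DNS to sidestep the Windows DNS proxy issues mentioned above (placeholder servers)
        dns = ["8.8.8.8", "8.8.4.4"]
        # point at the KDE-maintained helper image rather than the upstream one (placeholder path)
        helper_image = "registry.example.org/kde/gitlab-runner-helper:windows-server2022"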
13:00daniels: jenatali: nice! I think the best thing to do would be to register it first on your fork of mesa/mesa and use that to run jobs, then when it's all good and working, just let me know which tags I should add for it and I'll send you a token
13:00daniels: (the flow's changed so an admin has to register type/tags/etc, then provide a token which individual runners can use to register, rather than just providing a global token which could be used to register anything)
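Under that new flow, the runner side should only need the per-runner authentication token the admin hands out; roughly something like this (the token and image are stand-ins, and tags/type are already fixed server-side when the admin creates the runner):

    # register against gitlab.freedesktop.org with the runner-specific token
    gitlab-runner register \
      --non-interactive \
      --url "https://gitlab.freedesktop.org" \
      --token "glrt-XXXXXXXXXXXXXXXXXXXX" \
      --executor "docker-windows" \
      --docker-image "mcr.microsoft.com/windows/servercore:ltsc2022"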
13:39jenatali: daniels: Sounds good, I'll give that a try
14:14emersion: git@gitlab.freedesktop.org: Permission denied (publickey).
14:14emersion: hm
14:15emersion: seems like it went through after a few tries
14:15emersion: also hit this:
14:15emersion: ! [remote rejected] explicit-sync-v2 -> explicit-sync-v2 (pre-receive hook declined)
14:27MTCoster: Hmm I just got a different weird error when fetching
14:27MTCoster: https://www.irccloud.com/pastebin/q6BCMuMB/
14:32kxkamil: same here for igt-dev just now: ! [remote rejected] master -> master (pre-receive hook declined)
14:33kxkamil: key seems ok
14:33kxkamil: remote: calhttp post to gitlab api /pre_receive endpoint: Internal API unreachableling pre_receive endpoint:
14:34kxkamil: should be: remote: calling pre_receive endpoint: http post to gitlab api /pre_receive endpoint: Internal API unreachable
14:39kxkamil: it worked now
15:45jenatali: daniels: I think we're going to add a unique tag for the new runner we're adding, and we should add a different unique tag to the existing runners. Then if one of them is having problems we can update the YML to require one tag or the other instead of having to take the whole platform offline. What do you think?
15:46jenatali: That'll also let us target our runner for bringup/testing without having to disable instance-level runners, 'cause we still need the Linux runners for some of the jobs IIRC
15:46daniels: hmmm
15:46daniels: yeah, could be an idea
15:50__tim: in what case would you not want to just pause whatever runner has 'problems'?
15:50jenatali: Maybe I'm just not familiar with how it works, but on our side the folks who can administer the runner might not be able to jump on something immediately
15:50jenatali: If it can be paused by an admin from the GitLab side then yeah there's no worries and we can just do that
15:51__tim: no harm in adding individual tags of course, was just wondering :)
15:52daniels: it can indeed be paused, yeah
15:54jenatali: Ok cool then that's probably the easiest thing to do instead of updating YMLs :) Still worth a unique tag for bringup at least though
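For the tag-based fallback idea, a rough sketch of the YML side (the tag names and variable are made up; GitLab also allows CI/CD variables inside tags:, so flipping between runners can be a one-variable change instead of editing every job):

    # hypothetical job pinned to a specific Windows runner via a per-runner tag
    windows-build:
      tags:
        - windows
        - $WINDOWS_RUNNER_TAG   # e.g. set to the new runner's unique tag, or the existing one
      script:
        - echo "running on the selected Windows runner"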
16:17jenatali: alatiera: That MR is super helpful, thanks! Were you going to actually merge that? Or is it just going to sit as an MR forever? :)
22:07jenatali: Can I request user nadaouf be added to mesa/ci-ok? She's registered the runner as a project runner, but needs access to the instance runners to be able to progress the pipeline to run actual tests
22:39jenatali: https://gitlab.freedesktop.org/nadaouf/mesa/-/jobs/59083019 looks good though :)
23:19jenatali: Nvm on ci-ok, we just worked around it. Pipeline's all green so I think we're okay to proceed with registering at the instance level: https://gitlab.freedesktop.org/nadaouf/mesa/-/pipelines/1184680
23:42DavidHeidelberg: bentiss: daniels could you wipe the python:3.12 image to get 3.12.3 instead of the cached 3.12.0 we have now, please?
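If the stale-tag problem keeps coming back, one possible workaround on the YML side (rather than asking for a wipe each time) is to pin a patch tag or a sha256 digest so the floating python:3.12 tag can't serve an old cached image; the job name here is made up:

    # made-up job; pinning the patch release avoids picking up the cached 3.12.0
    lint:
      image: python:3.12.3   # or python:3.12@sha256:<digest> for an exact pin
      script:
        - python --version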