00:26 leigh_2valid[m]: hey guys
06:27 DavidHeidelberg[m]: mupuf: (reminder) how do you generate the initramfs for the Valve machines?
06:29 mupuf: DavidHeidelberg[m]: check out gfx-ci/boot2container
06:29 DavidHeidelberg[m]: thx!
06:30 mupuf: We just use the releases, but you can build it yourself too
06:34 DavidHeidelberg[m]: mupuf: https://gitlab.freedesktop.org/gfx-ci/boot2container/-/blob/main/.gitlab-ci/u-root-container-build.sh?ref_type=heads#L22 you can remove the line :D
06:35 mupuf: Typo, I meant 3.19
06:36 DavidHeidelberg[m]: then no.
06:36 mupuf: Hehe
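A minimal sketch of the "just use the releases" route mupuf describes: fetch a prebuilt boot2container initramfs from a GitLab release instead of running the u-root build script yourself. The release tag and artifact filename below are hypothetical placeholders; check the gfx-ci/boot2container releases page for the real names.

```python
#!/usr/bin/env python3
"""Sketch: download a prebuilt boot2container initramfs release artifact.
RELEASE and ARTIFACT are hypothetical; look up the real ones on the
gfx-ci/boot2container releases page."""
import urllib.request

BASE = "https://gitlab.freedesktop.org/gfx-ci/boot2container"  # project from the chat
RELEASE = "v0.9.8"                           # hypothetical release tag
ARTIFACT = "initramfs.linux_amd64.cpio.xz"   # hypothetical artifact name

url = f"{BASE}/-/releases/{RELEASE}/downloads/{ARTIFACT}"
print(f"fetching {url}")
urllib.request.urlretrieve(url, ARTIFACT)
```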
11:03 eric_engestrom: https://indico.freedesktop.org/ has been returning a 503 for a while now
11:04 eric_engestrom: I figured maybe a lot of people were excited about the XDC schedule being published, but it's not getting better
11:05 daniels: ha
11:05 daniels: yeah, I updated the sponsorbox content, which requires restarting indico, then discovered that the container is missing from the registry since the migration
11:05 daniels: https://gitlab.freedesktop.org/mupuf/indico-k8s/-/jobs/49226186
11:06 daniels: (took me a while to notice that it was going badly since the average restart is ~5min)
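A hedged sketch of how one might confirm from the outside that an image really went missing: list what the GitLab container registry still holds for the project via its API. The project path is taken from the job link above; anonymous access is assumed to work because the project is public.

```python
#!/usr/bin/env python3
"""Sketch: list container-registry repositories and tags for a project,
to check whether an image is actually gone after a migration."""
import json
import urllib.request

GITLAB = "https://gitlab.freedesktop.org/api/v4"
PROJECT = "mupuf%2Findico-k8s"  # URL-encoded project path from the chat

with urllib.request.urlopen(f"{GITLAB}/projects/{PROJECT}/registry/repositories") as resp:
    repos = json.load(resp)

for repo in repos:
    print(repo["path"])
    tags_url = f"{GITLAB}/projects/{PROJECT}/registry/repositories/{repo['id']}/tags"
    with urllib.request.urlopen(tags_url) as resp:
        for tag in json.load(resp):
            print("  ", tag["name"])
```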
11:49 eric_engestrom: daniels: fyi, right now it's returning a 500
11:49 eric_engestrom: different error == progress?
11:50 daniels: yeah
11:50 daniels: the container, obviously, no longer rebuilds cleanly
11:55 daniels: finally ...
11:55 daniels: mupuf: note also that the pipelines (which are specified as master-only) don't even run automatically on master anymore, so I had to trigger them manually
11:56 mupuf: daniels: :o
12:02 daniels: sorry about all the MR spam :P
12:04 mupuf: daniels: lol, np
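For reference, a small sketch of the programmatic equivalent of triggering those master pipelines by hand, via GitLab's create-pipeline API. The token environment variable is an assumption; the project path comes from the links above.

```python
#!/usr/bin/env python3
"""Sketch: start a pipeline on master via the GitLab API instead of
clicking "Run pipeline" in the web UI."""
import json
import os
import urllib.request

GITLAB = "https://gitlab.freedesktop.org/api/v4"
PROJECT = "mupuf%2Findico-k8s"       # URL-encoded project path from the chat
TOKEN = os.environ["GITLAB_TOKEN"]   # assumption: personal access token with `api` scope

req = urllib.request.Request(
    f"{GITLAB}/projects/{PROJECT}/pipeline?ref=master",
    method="POST",
    headers={"PRIVATE-TOKEN": TOKEN},
)
with urllib.request.urlopen(req) as resp:
    pipeline = json.load(resp)
print(pipeline["web_url"], pipeline["status"])
```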
14:20 zmike: I dunno what the hell's going on with ci lately
14:20 zmike: but it's working great
14:20 zmike: keep it up
14:22 daniels: I know koike was wondering if the stats were broken since there's not been a single false-fail pipeline in the last couple/few days
14:23 zmike: I've successfully merged full-tree pipelines on the first try more than once
14:23 zmike: I had to git fetch and check that the patches actually landed
14:24 koike: nice to hear that, I was indeed thinking it was a problem with the stats https://ci-stats-grafana.freedesktop.org/d/Ae_TLIwVk/mesa-ci-quality-false-positives?orgId=1&viewPanel=11
14:25 koike: the blue line is lower than usual
14:27 zmike: did all the stoney jobs get disabled or ?
14:55 Wallbraker: Have the Windows nodes fallen over? Trying to launch a job with the tags: windows, shell, 2022, but it just spins.
15:08 koike: windows jobs were running 1h ago https://gitlab.freedesktop.org/mesa/mesa/-/pipelines/990396
15:09 koike: zmike: I see stoney jobs here https://gitlab.freedesktop.org/mesa/mesa/-/pipelines/990443
15:09 zmike: it was a joke
15:09 koike: oops, hehe
15:10 zmike: :)
15:22 __tim: windows shell stuff seems to be under pressure at the moment
15:22 Wallbraker: Hmm strange, must have fallen off or... that :p
15:22 Wallbraker: Takes a queue number and waits.
15:24 __tim: hrm, the second windows runner seems to have fallen off the grid, but I didn't get an alert about that
15:35 __tim: ok, second windows runner should be back
15:39 Wallbraker: Thanks
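A hedged sketch of one way to check whether runners matching those tags are still online, rather than guessing from a stuck job: query the project's runners through the GitLab API. The tags come from Wallbraker's message; the token variable is an assumption and the endpoint needs sufficient project permissions.

```python
#!/usr/bin/env python3
"""Sketch: list the project's runners that carry the job's tags and show
whether they are online, to tell "runner fell over" apart from "queue is
just busy"."""
import json
import os
import urllib.request

GITLAB = "https://gitlab.freedesktop.org/api/v4"
PROJECT = "mesa%2Fmesa"
TOKEN = os.environ["GITLAB_TOKEN"]  # assumption: token with read access to project runners

url = f"{GITLAB}/projects/{PROJECT}/runners?tag_list=windows,shell,2022"
req = urllib.request.Request(url, headers={"PRIVATE-TOKEN": TOKEN})
with urllib.request.urlopen(req) as resp:
    for runner in json.load(resp):
        print(runner["description"], runner["status"], runner["online"])
```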
17:34 mupuf: zmike: you jinxed it
17:34 mupuf: https://gitlab.freedesktop.org/mesa/mesa/-/jobs/49240148
17:35 mupuf: daniels, koike: ^
17:35 mupuf: Seems like hw failure
17:39 anholt: mupuf: there's an MR for that. I'll turn off 14 until that lands, though.
17:39 anholt: (it's always 14)
17:41 eric_engestrom: mupuf: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/25285 fixes that
17:41 eric_engestrom: ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate (_ssl.c:992)
17:41 eric_engestrom: when uploading artifacts to s3.freedesktop.org
17:42 eric_engestrom: weird failure mode where it ends up serving a self-signed certificate 🤨
17:42 eric_engestrom: https://gitlab.freedesktop.org/mesa/mesa/-/jobs/49242148 for an example of that failure
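The error in that job can be reproduced outside CI with a few lines of Python: do a verified TLS handshake against the S3 endpoint and see whether the certificate chain checks out. This is only a diagnostic sketch, not part of MR 25285.

```python
#!/usr/bin/env python3
"""Sketch: attempt a verified TLS handshake against s3.freedesktop.org and
report whether certificate verification succeeds, mirroring the
CERTIFICATE_VERIFY_FAILED error seen in the job logs."""
import socket
import ssl

HOST = "s3.freedesktop.org"  # endpoint from the failing jobs above

ctx = ssl.create_default_context()  # verifies against the system CA bundle
try:
    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("verified chain OK, peer subject:", tls.getpeercert()["subject"])
except ssl.SSLCertVerificationError as exc:
    # Matches the failure mode in the logs:
    # CERTIFICATE_VERIFY_FAILED ... self-signed certificate
    print("verification failed:", exc.verify_message)
```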
17:45 eric_engestrom: ok, something's wrong with s3, all the jobs are now getting a 503
17:57 eric_engestrom: s3 seems back, retrying all the jobs
18:24 eric_engestrom: (it's not back btw, look through the container & build jobs in https://gitlab.freedesktop.org/mesa/mesa/-/pipelines/990545 to see how many retries it will have taken to get it merged)
18:45 bentiss: eric_engestrom: yeah, sorry, trying to move out s3 from coreos, and it's not in a good shape
18:48 bentiss: alright. I think I'm going to revert s3 to the old server, we'll probably have some data loss, but nothing should be that important on s3.fd.o
19:01 bentiss: it'll take a little bit of time to propagate, but it's pushed now. Worst ETA: 6 hours to propagate
19:37 eric_engestrom: thanks bentiss!