03:44 mupuf: bentiss: took me a little longer to get to it than expected, but here is my first contribution in Rust: https://gitlab.freedesktop.org/freedesktop/terraform-gitlab-fastly/-/merge_requests/3
03:44 mupuf: I hope you don't mind I pushed directly to a branch on the repo rather than forking it
03:45 mupuf: It goes without saying, this is completely untested O:-)
06:35 bentiss: mupuf: thanks. I've deployed it, it should be good now
06:35 mupuf: bentiss: thanks! Giving it a try
06:36 bentiss: __tim: yeah, it looks like the same issues we were having when I first enabled fastly. However, this time I see the same timeouts. I'm afraid it's because it's using compute, and they have restrictions on how long a connection can stay open
06:38 mupuf: bentiss: looks good all around, thanks!
06:38 bentiss: \o/
06:38 bentiss: thank you!
06:38 mupuf: my pleasure, new skill unlocked... ish :D
06:40 bentiss: yeah, congrats!
06:41 bentiss: FWIW, once we get https://github.com/TecharoHQ/anubis/issues/468 fixed, deployment of that compute will be much easier, as for now it relies on a very dirty hack on my laptop for rust-jwt-simple to accept the anubis cookie
06:41 bentiss: so others will be able to deploy it
06:42 DemiMarie: Where do I report the missing SPF/DKIM/DMARC on members.x.org?
06:42 DemiMarie: I’m pretty sure that is either a DNS configuration problem or a mail server misconfiguration.
06:42 DemiMarie: Also, what is the right place for someone to announce they are hiring a graphics developer?
06:43 bentiss: re dns/spf/dmarc: that would be daniels or emersion ^^ (maybe they'll want an issue on gitlab.fd.o, don't know)
06:57 emersion: DemiMarie: i've noticed but had no time yet to fix it
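For reference, the missing records are easy to demonstrate: SPF is published as a TXT record on the domain itself, and the DMARC policy as a TXT record on `_dmarc.<domain>` (e.g. `dig +short TXT _dmarc.members.x.org`). A minimal sketch of classifying the TXT strings such a lookup returns; the sample records below are invented for illustration:

```python
def find_record(txt_records, prefix):
    """Return the first TXT record starting with `prefix`, or None if absent."""
    return next((r for r in txt_records if r.startswith(prefix)), None)

# Hypothetical lookup answers for the domain and its _dmarc subdomain
domain_txt = ["google-site-verification=abc123"]  # invented record
dmarc_txt = []                                    # nothing published

print(find_record(domain_txt, "v=spf1"))   # None: no SPF record found
print(find_record(dmarc_txt, "v=DMARC1"))  # None: no DMARC policy found
```

DKIM is the one record that cannot be checked blindly: it lives at `<selector>._domainkey.<domain>`, and the selector name is only known to the sending mail server's configuration.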
07:33 MrCooper: karolherbst: oddly, I don't remember ever seeing the anubis portal for any significant amount of time with Firefox on this machine; I wonder if it might be because I'm using the official Firefox flatpak. I noticed before that it can be significantly faster than distro builds
07:34 karolherbst: possibly
07:34 karolherbst: I should check that out
07:35 karolherbst: _huh_
07:35 karolherbst: that is indeed faster
07:35 karolherbst: ehh it's also faster on non flatpak firefox
07:36 karolherbst: guess the difficulty is now lower, or an update got deployed, or something
07:36 karolherbst: or random other reasons.. mhhh
07:36 karolherbst: I wonder...
07:37 karolherbst: mhh no idea
07:43 mupuf: karolherbst: was your laptop's battery close to being dead when you first experienced the slow behaviour?
07:43 karolherbst: no
07:44 karolherbst: it's not like the calc speed was shown as being any higher, I think the problem to solve was just easier
07:46 MrCooper: maybe I just didn't hit the slow case by luck then
08:48 __tim: bentiss, so I'm pretty sure it's either fastly or anubis that's causing the issues with the large artifact uploads, because if I bypass fastly on the runner it works just fine
08:50 __tim: bentiss, re. what you said, I'm not sure where that leaves us. We currently can't get any MRs in, and we have a GStreamer hackfest next week. Perhaps we can disable Anubis again until there's a solution that works for us as well? Or can you add some runner IPs to an exemption list?
08:55 MrCooper: if bypassing fastly helps, doesn't that indicate a fastly issue rather than anubis though?
08:58 __tim: maybe, I'm not sure where/how things are deployed
08:58 __tim: but it started failing right after anubis was enabled afaik
09:27 jasuarez: I see some of the [grafana dashboards](https://grafana.freedesktop.org/dashboards) are empty, like the [Mesa driver performance](https://grafana.freedesktop.org/d/aH__CPd7z/mesa-driver-performance?orgId=1)
09:27 jasuarez: is this due to the infra move?
10:02 __tim: and just to be clear, we don't have a workaround, since it still affects the macos and windows runners
10:52 xe: MrCooper: that firefox issue is a known bug and is a strange interaction between anubis' multithreading and how firefox implements Worker
10:52 xe: it has been really annoying to debug lol
12:05 __tim: bentiss, so there's no quick-fix that can be done like whitelisting IPs for certain runners?
12:06 __tim: (or is someone else actually in charge of this? :))
12:22 karolherbst: btw https://members.x.org/ seems down
12:22 karolherbst: can't vote 😢
12:22 karolherbst: ohh wait
12:23 karolherbst: was just temporary ...
12:36 mfilion: all good now? loads well here
13:01 karolherbst: yeah, it works
14:16 bentiss: __tim: sorry I'm off today as well. Bypassing anubis and fastly on the runners seems like the best option
14:17 bentiss: (that's what I'm doing on the fdo htz runner as well, for different reasons)
14:20 mupuf: bentiss: no need to answer before monday, but if we wanted to do the same, we would use ssh.gitlab.freedesktop.org instead of gitlab.freedesktop.org? Or is there another DNS entry?
14:20 bentiss: MrCooper_: the problem is indeed in fastly, but because I'm using a compute deployment instead of a VCL like previously. Not sure why, but the compute seems to stick around while the body hasn't finished transferring, and there is a hard limit of 2 minutes
14:21 bentiss: mupuf: ssh.gitlab.fd.o is fine, but you need to add an overwrite in the gitlab-runner config (or a /etc/hosts). I haven't done any enforcing on that endpoint, so it should be working
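The runner-side overwrite bentiss describes can be sketched like this; assuming the docker executor, its `extra_hosts` option in `config.toml` pins the hostname inside job containers (the IP and runner name below are placeholders; use whatever `ssh.gitlab.freedesktop.org` actually resolves to):

```toml
# gitlab-runner config.toml sketch: make job containers resolve
# gitlab.freedesktop.org to the direct (non-fastly/anubis) address.
# 203.0.113.10 is a documentation placeholder, not the real IP.
[[runners]]
  name = "gst-runner"          # placeholder name
  executor = "docker"
  [runners.docker]
    image = "debian:stable"
    extra_hosts = ["gitlab.freedesktop.org:203.0.113.10"]
```

For shell executors (or the macOS VM case discussed below), the equivalent is a line in `/etc/hosts`: `203.0.113.10 gitlab.freedesktop.org`.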
14:21 __tim: bentiss, I'm not sure if I can bypass on the macos/windows runners (windows more likely, macos unsure how to do that right now)
14:22 bentiss: the only problem is that it's a manual step that needs to be done on all runners :(
14:22 bentiss: __tim: the macos runner is running docker?
14:22 __tim: no, it's a VM thing using tart or something
14:23 bentiss: first google link: there seems to be a /etc/hosts on macOS as well: https://kinsta.com/knowledgebase/edit-mac-hosts-file/
14:25 __tim: sure, something to try (though I didn't have much success doing that inside docker on linux, but maybe that's because I didn't find how to clear the cache properly)
14:25 bentiss: for docker on linux, you need to add an exception in the gitlab-runner config
14:26 __tim: yes, I've done that, and that works
14:27 bentiss: a solution could be that we (fdo) host a dedicated dns server which overwrites the IP of gitlab.fd.o so it's configured once in case we need to change IP, but not today :)
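That dedicated-DNS idea could be as small as a dnsmasq override that all runners point at; a sketch, again with a placeholder address:

```ini
# dnsmasq.conf sketch: answer gitlab.freedesktop.org with a fixed
# address (203.0.113.10 is a placeholder) and forward every other
# query to an upstream resolver.
address=/gitlab.freedesktop.org/203.0.113.10
server=1.1.1.1
```

Changing the IP then becomes a one-line edit on the DNS host instead of a manual step on every runner.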
14:28 __tim: I'll poke at it some more, thanks
14:28 bentiss: sorry
14:29 bentiss: but anubis had a nice impact on bots (I don't see any in the logs now). I'm not sure how I can trust the numbers reported by fastly, but they seem much better as well
14:29 __tim: not sure I understand why it can't just be switched off again until we have a solution that works for everyone :)
14:29 __tim: it's not like gitlab was unusable three days ago
14:30 bentiss: TL;DR: I need to fix the VCL config because there were too many caching issues. And the VCL config is just a mess to work with
14:31 bentiss: so switching back means me spending a couple of hours doing that
14:31 __tim: ouch, alright :)
14:31 bentiss: and if you add those overwrites to the gst runners, then we can do whatever we want on fastly, and you are not impacted
14:32 bentiss: meaning we can change some of the timeouts to be better
14:32 __tim: ack
14:32 bentiss: anyway, going AFK again
14:32 __tim: Thanks for your help
14:53 xe: __tim: would it be possible for you to get a gitlab runner to hit an arbitrary URL for me so I can improve a bloom filter?
14:55 __tim: you mean just run curl or wget on an url you give me?
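If all that's needed is a one-off fetch from the runner's network, a throwaway CI job would do; a sketch, where the job name, tag, and URL are placeholders:

```yaml
hit-url-for-xe:
  tags:
    - some-runner-tag          # placeholder: the runner xe wants traffic from
  script:
    # print only the HTTP status code, discard the body
    - curl -sS -o /dev/null -w "%{http_code}\n" "https://example.invalid/path-from-xe"
```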
15:56 gallo[m]: jasuarez: those dashboards are deprecated, please refer to https://ci-stats-grafana.freedesktop.org/
15:57 gallo[m]: jasuarez: Mesa performance tracking: https://ci-stats-grafana.freedesktop.org/goto/cLxoBjbNR?orgId=1
16:03 gallo[m]: fwiw, updates are slower than usual since the mesa-performance-tracking scheduled pipelines stopped working after a recent GitLab upgrade. We're tracking the issue here: https://gitlab.freedesktop.org/mesa/mesa/-/issues/13057. For now, updates are being done manually
16:09 karolherbst: bentiss: account sign-ups seem to be down a bit as well
16:09 karolherbst: maybe we could even get rid of spam detection with that? would be nice
17:15 jasuarez: gallo: thanks! Maybe we should add that to the Mesa documentation?