14:50 Venemo: good afternoon
14:50 Venemo: mesa gitlab no longer works with the android version of firefox
14:51 Venemo: it just shows an anime girl picture and "oh noes"
15:00 psykose: works on android firefox beta
15:09 karolherbst: Venemo: xe might be able to help with issues like that
15:10 xe: Venemo: are you blocking cookies?
15:10 xe: if so, try not blocking them
15:22 xe: it works fine on my grapheneos phone
18:51 Consolatis: hm.. although I personally don't have much of an issue with a weekly 40 second check for fdo, anubis really starts to get annoying: quick search for something, result points to arch wiki, 40 second wait. i really hope this doesn't start to spread even more.
18:53 Consolatis: also for gitlab.fdo in particular, couldn't the anubis check be prevented completely for users with a valid verified account cookie? i don't see any point to let them run through anubis in the first place
18:57 Consolatis: I really stand by my point: if you offer something via public HTTP GET request you have to deal with people (and bots) using it. if that makes everything slow then the software needs to be improved rather than annoying the users more and more. every increase in annoyance will be combated on the bot front sooner or later, so the only thing this approach achieves in the long term is making the whole internet more annoying for people to use /rant
19:14 karolherbst: Consolatis: bots also create accounts
19:14 karolherbst: Consolatis: also.. those scrapers cause like 90% of the load
19:14 karolherbst: and somebody has to pay for the servers
19:15 karolherbst: I'm sure nobody has anything against dropping it, if somebody pays the bills
19:15 Consolatis: re bot accounts: right, but I assume they are not verified (e.g. fork rights)
19:16 karolherbst: maybe we could hook something up like that
19:16 karolherbst: just needs somebody to do it
19:16 karolherbst: but the token is valid for a couple of days
19:17 karolherbst: but those AI scrapers could also just stop and then nobody would have to deal with this nonsense issue
19:17 Consolatis: throwing more resources at things indeed sounds like the wrong approach. moving CPU / DB intensive paths behind a login (or maybe even just turning them into a POST request) might help already
19:17 karolherbst: they force everybody to pay more for their servers
19:18 karolherbst: yeah.... maybe it's good enough to only enable it for expensive APIs, but original plan wasn't to use anubis, but fastly for bot protection
19:18 karolherbst: so might even just be something temporary
19:19 karolherbst: but the point is, people have enough of those bots causing problems for everybody
19:20 karolherbst: and I'm not the kind of person who complains at projects trying to defend themselves. if you want to be angry, be angry at those bots and scrapers
19:20 Consolatis: from the potential options which combat the crawling (for the moment) anubis is definitely one of the better ones. IMHO there is nothing worse than captchas
19:21 karolherbst: yeah.. it just sucks that it's slow on firefox for whatever reason
19:21 karolherbst: it's like a lot faster with chromium
19:22 karolherbst: takes like a second on my smartphone with chrome
19:22 psykose: fwiw i use firefox and i've never had any anubis page take >1s ever
19:23 karolherbst: but yeah.. also fast on firefox with my phone... but sometimes firefox was super slow
19:23 karolherbst: no idea what's going on there
19:23 karolherbst: of course users of grapheneos are in misery here, but if you disable your JS JIT then yeah....
19:24 psykose: i use grapheneos and it's also fine there
19:24 karolherbst: ahh, nice
19:24 karolherbst: there have been reports from users there that it's broken
19:24 psykose: in firefox anyway, maybe vanadium has a disabled jit and sucks
19:24 karolherbst: yeah.. try vanadium 🙃
19:24 psykose: let's see
19:25 psykose: 722ms and passed
19:25 psykose: and it's a pixel 4xl from many moons ago
19:25 karolherbst: nice
19:25 psykose: maybe i made a deal with satan some time ago
19:25 karolherbst: sounds like it
19:25 Consolatis: re "not complaining at projects trying to defend themselves": I see your point but I think its important to mention these things because otherwise everything gets worse and worse by stacking workarounds (from my POV) on top of workarounds (e.g. manually verified accounts, cleartext MITM via fastly, anubis, ..)
19:26 karolherbst: there is some magic going on, and some get harder challenges than others
19:26 karolherbst: so it's also a bit of luck
19:26 karolherbst: I mean.. that sort of scraping should be outlawed, but....
19:27 karolherbst: there is only so much you can do if the other player has billions of dollars and their life goal seems to be making everybody else's life miserable
19:29 karolherbst: it's not a technical problem anyway.. the industry just decided to enable the most vile and toxic community to do their nonsense, so here we are and there is nothing we can do about it. If anybody wants to make gitlab faster, sure, go ahead, but not everybody has that luxury. Maybe we should move to something else, but the entire CI stuff we are doing is huge and it's gotta be a lot of work.
19:29 karolherbst: or well.. somebody comes around, gives us a million dollars a month and we just get more hardware
19:30 karolherbst: but hey.. even github decided to throttle random bots, because it's just too much, so I don't think there is anything we can do. If our webpage is cheaper, they'll just scrape even more
19:32 karolherbst: anyway.. nobody likes it, there is just no good solution
19:32 Consolatis: i wonder what is actually causing that load.. i mean a static page like for anubis (+ js sources) could even be bigger than a static page from a redis/nginx/varnish cache
19:34 karolherbst: git blame pages for instance
19:34 karolherbst: there is a lot of stuff that isn't cached
19:34 Consolatis: well, those could easily be put behind a account requirement
19:34 karolherbst: and they open all the links, every commit, every file, every git blame page
19:34 karolherbst: but anyway...
19:34 karolherbst: it's like car traffic and streets. you build more streets, you have more traffic, no problem solved
20:22 bentiss: Consolatis: FWIW, I like your idea of bypassing anubis for authenticated users with certain privileges. However, unless I messed up, I don't see any interesting value in the cookies stored in the browser. So mapping cookie_session/username and privileges is going to be tough
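[The bypass bentiss describes could, in principle, work around the opaque cookie by asking GitLab itself who the session belongs to. A minimal sketch, with loud assumptions: the `_gitlab_session` cookie name, the `/api/v4/user` endpoint accepting a session cookie at all, and the exact response fields (`state`, `bot`, `confirmed_at`) would all need verifying against a real instance.]

```python
import json
from urllib import request

GITLAB = "https://gitlab.freedesktop.org"  # assumption: instance to query

def fetch_user_for_session(session_cookie: str) -> tuple[int, dict]:
    """Forward the opaque session cookie to GitLab and let it resolve
    the session to a user. Endpoint and cookie name are assumptions."""
    req = request.Request(
        f"{GITLAB}/api/v4/user",
        headers={"Cookie": f"_gitlab_session={session_cookie}"},
    )
    try:
        with request.urlopen(req, timeout=5) as resp:
            return resp.status, json.load(resp)
    except Exception:
        return 0, {}

def should_bypass_anubis(status: int, user: dict) -> bool:
    """Only skip the challenge for sessions that resolve to an active,
    confirmed, non-bot account -- since bots create accounts too."""
    return (
        status == 200
        and user.get("state") == "active"
        and user.get("bot") is not True
        and user.get("confirmed_at") is not None
    )
```

[A proxy doing this would want to cache the verdict per session for a few minutes; otherwise every request gains an extra round trip to GitLab, adding load instead of removing it.]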
21:32 Venemo: xe: no, I'm not blocking them
21:32 Venemo: xe: I retried a few times and it eventually worked
21:34 xe: karolherbst: proof of work with random inputs is kinda inherently luck-based
22:17 karolherbst: fair enough
23:52 DragoonAethis: bentiss: and then you need to pass Anubis to authenticate in the first place, so it's a bit of a chicken-and-egg problem anyways
23:52 DragoonAethis: Even when you're authenticated, GitLab logs you out every now and then (at least it does that for me between work and personal computers)