
My favorite story to tell in this area: many years ago in the early days of Chrome, there was some logic that followed HTTP redirects with a cap on how many to follow before giving up. (The rationale being that if you hit too many in a row, you had probably encountered a broken site, like one where /foo?q=1 redirects to /foo?q=2, which redirects to q=3, and so on.)

As I recall, the cap on redirects was 10, and then Darin (who had previously worked on Firefox) was like "no, you have to allow up to 30 or you break the New York Times". And so it was upped to 30. Intuitively I'd think a site that redirected you 10 times was broken, but I guess someone built one that needed more.
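The cap logic itself is simple to sketch. Here's a minimal illustration in plain Python, with a hypothetical {url: next_url} map standing in for the Location headers a real client would see (the names `follow` and `MAX_REDIRECTS` are made up for this sketch):

```python
MAX_REDIRECTS = 20  # roughly where Chrome and Firefox reportedly ended up

def follow(url, redirects):
    """Follow a chain of redirects, bailing out once the cap is hit.

    `redirects` is a hypothetical {url: next_url} map standing in for
    real HTTP 3xx responses; a URL absent from the map is a final page.
    """
    hops = 0
    while url in redirects:
        hops += 1
        if hops > MAX_REDIRECTS:
            raise RuntimeError("too many redirects (likely a loop)")
        url = redirects[url]
    return url

# A broken site that bumps a query parameter forever:
loop = {f"/foo?q={i}": f"/foo?q={i+1}" for i in range(100)}
```

A short chain resolves normally; the query-parameter loop above trips the cap long before it runs out of entries.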

PS: Looking online now to confirm my facts, I see one claim that both Chrome and FF limit redirects to 20. So either my memory exaggerated it as 30 or they both managed to lower the limit at some point.



I had an edge case with an authentication server (think Okta) wrapped around a school login system. In occasional cases, with certain clients of the server doing a couple redirects of their own, we'd hit that 20 cap. It's not any individual system being irresponsible, just systems reusing other systems as they're meant to be used. It's kind of like saying a call stack depth of 50 in software is never acceptable.


I kind of wish the cap had remained at 10, and the redirection on the NYT was left as a problem for them to fix...


Upstart browsers are not in a position to tell websites what to do; the sites will just throw up a "your browser is not supported" banner and forget about you. That was where Chrome was in its early days (you were supposed to be using IE 6), and that's where Firefox is now.

We never let economics solve this problem, because nobody pays for web browsers with money. If there were a $10 "sorry, The New York Times violates standards so we don't support it" browser and a $100 "we'll work around any bug large or small to make it work for our customer" browser, sites would be spending money to let the $10-browser owners see their ads. But every browser is run as a charity, so nobody gets to pay the actual costs of the workarounds (except maybe in terms of more security vulnerabilities, higher RAM usage, etc.)

I'm not saying this is good or bad, it's just how it is. Browsers are free and will be blacklisted by sites that don't like how they behave. So the incentive for browsers is to do what the sites want, rather than to strictly conform to standards. Shrug.


Ha, that must be the case for academic articles as well. They all (well, almost all) have a DOI that you can look up at doi.org, which is supposedly kept up to date and links to the article on the publisher's website, wherever that is. But then in practice, the publisher's website adds another bunch of redirects, because why not.


What could they be doing with all those redirects?


The first two are often:

    nytimes.com > https://nytimes.com
    https://nytimes.com > https://www.nytimes.com
And from there, who knows. Some websites append a trailing slash; others remove it. Some websites send you to /index.php or /home.aspx rather than sitting in the root.

Subpages may use a redirect to update a slug if there's been a title change. Or maybe the website just moved to a new format altogether and is updating old links.

I have seen examples of 4 or 5 legitimate redirects to send users to the correct place. I'm having trouble coming up with 10 possible reasons, though.


Run you through all the spying services' cookies & other fingerprinters. Just like Twitter's t.co.


IIRC you can mitigate this in Firefox by lowering network.http.redirection-limit in about:config
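If you'd rather pin it than toggle it in about:config, the pref can also go in a user.js file in your profile directory (the default is reportedly 20, per the claim upthread; pick your own lower value):

```
// user.js — lower Firefox's redirect cap (default is reportedly 20)
user_pref("network.http.redirection-limit", 10);
```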


Wouldn't that prevent reaching the page?


It would prevent reaching the page, and still expose you to the trackers earlier in the chain - you'd only stop one of the last ones from running.



