IMHO your rant does boil down to the fact that making the browser "end-user-hackable" through extensions was an untenable situation. I mean, you're saying both
"This was a big problem because if you recall, every time a new Firefox release dropped, it would cause massive breakage because APIs would be obsoleted and removed within the same release cycle."
and
"it was stupid not to recognize that extensions were highly valued by both the vocal minority and silent majority that were responsible for the number of Firefox holdouts, and b) making an end-user-hackable browser was, like, the natural consequence to be working on if you were serious about all rhetoric the people from your camp were spewing about making something that really was aiming to be a "user agent" meant to "put end users in control" so they could "experience the web on their own terms"."
I don't buy the argument that a searchable source index is what made or broke this. For starters, the approach you describe obviously doesn't scale. It's fixing a leaking roof with buckets. Resources were in fact better spent elsewhere, so yes, it's exactly "run-of-the-mill lamentations of someone's pet use case not being prioritized and coddled".
The idea that Firefox's add-on ecosystem was an asset rather than a liability did much to set the browser and the users' experience with it back for years, and it'll continue to do so as users lash out when their unrealistic expectations of XUL add-ons continuing to work are broken.
> making the browser "end-user-hackable" through extensions was an untenable situation
This is in response to a blog post where extension support is being added. And those extensions exist in great numbers. Targeting the most popular browser in the world. And there's no sign that they are being deprecated. So how untenable are extensions? Is Chrome untenable?
> I don't buy the argument that a searchable source index is what made or broke this.
I'm very glad, then, that I didn't say that.
> it's exactly "run-of-the-mill lamentations of someone's pet use case not being prioritized and coddled"
I'm flummoxed at this comment. There's no way this is an instance of that, nor would it even be possible for it to be, because it's not even my use case. I think I made that clear enough in my original comment.
An extension API that has a well-delineated API border and limits the internal surface that is exposed is obviously tenable.
A full "end-user-hackable" browser where add-ons can access all browser internals is not.
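The difference can be sketched in a few lines of JavaScript. (All the names below are invented for illustration; they are not real Firefox or WebExtensions APIs.)

```javascript
// Version 1 of the browser's internals.
let internals = { openTabImpl: (url) => `tab:${url}` };

// A stable extension API: a narrow facade over the internals.
const extensionAPI = {
  openTab: (url) => internals.openTabImpl(url),
};

// A "WebExtensions-style" add-on codes against the facade only.
const safeAddon = () => extensionAPI.openTab("https://example.com");

// A "XUL-style" add-on reaches straight into the internals.
const hackyAddon = () => internals.openTabImpl("https://example.com");

console.log(safeAddon());  // works
console.log(hackyAddon()); // works, for now

// A refactor renames the internal function; only the facade is updated.
internals = { createTab: (url) => `tab:${url}` };
extensionAPI.openTab = (url) => internals.createTab(url);

console.log(safeAddon()); // still works: the API border absorbed the change
try {
  hackyAddon(); // breaks: the internal it depended on no longer exists
} catch (e) {
  console.log("hacky add-on broke:", e.constructor.name);
}
```

The facade-based add-on survives the refactor because the maintainers only have to keep one small, documented surface stable; the hacky add-on breaks on every internal rename, which is exactly the churn described above.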
> There's no way this is an instance of that, nor would it even be possible for it to be, because it's not even my use case. I think I made that clear enough in my original comment.
Your use case was to try to limit the breakage caused by changing the exposed internals. You explicitly said so. This use case entirely hinges on maintaining a fundamentally broken design.
Firefox's XUL add-on system is a perfect example of sunk cost fallacy.
It sounds like you're trying to make arguments on my behalf, and then tear them down. I'm not interested in participating in a conversation like that, especially since they're either directly opposite of the ones I'd make for myself, or just weird and orthogonal to the points I was making.
> A full "end-user-hackable" browser where add-ons can access all browser internals is not [tenable].
Why not? Please explain.
You seem to imply that Mozilla's continuously-breaking-extensions problem is proof that this is untenable, but I don't see any obvious connection between the two.
> You seem to imply that Mozilla's continuously-breaking-extensions problem is proof that this is untenable, but I don't see any obvious connection between the two.
Because the breakage is inevitable as the internals invariably change. That's why Firefox extensions break: the internals change, and add-ons depend on them. This causes huge add-on churn each time the browser changes significantly, leading to abandoned add-ons and to user and ecosystem frustration. It creates pushback against the developers making far-reaching changes (e10s being a clear example). It adds overhead as others try to mitigate the compatibility impact of those changes (which is what the original poster was attempting, and failing, to do, as was Mozilla's AMO team). The unrestricted power of the add-ons also leads to huge review queues, causing more ecosystem frustration.
Mozilla has given up on it. I would say the burden of proof is now on someone else to show it can work, because there are no examples left: everyone who came after Firefox saw the problem and enforced a stable add-ons API that limits the internals' exposure.