
Firefox is great on Linux. It kinda whoops Chrome's ass for what it provides (though Chrome on Linux is a nice experience), and Mozilla has consistently been one of the most charitable organizations towards the Linux Foundation. Here's to another decade of Firefox and an open web!


> Firefox is great on Linux.

If you exclude the lack of most hardware acceleration on the majority of Linux PCs that currently run FF, and if you exclude its failure to follow standards that good, normal software does[1].

Both issues have existed for nearly two decades now. "Great"? Absolutely not.

[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=259356


Chrome's sandbox is far superior to Firefox's from a security standpoint. I think the best bet is to run Ungoogled Chromium on Linux; it's got even less telemetry than Firefox and is more secure.


You're focusing on technicalities, but the real issue is political/social in my opinion. If Firefox did not exist, what other powers exist to prevent Google from shaping the Web for its own benefit more than it already has?

I personally don't even care whether or how much feature parity Firefox has or whether its GPU rendering is 20% slower. Not supporting Firefox comes at our own loss.


Firefox does nothing to prevent Google from shaping the web; the users of Firefox are largely irrelevant to the web.


Mozilla is a member of the W3C, and as long as it has a substantial share (and good leadership), it can influence things. Our job is to be that share.

Also, there is no such thing as an "ungoogled chromium". Google controls chrome/ium, and as long as people use it, those people remain subject to Google's control.


From what I've understood, it's not recommended to run Ungoogled Chromium: https://qua3k.github.io/ungoogled-chromium


An interesting article. Personally I use Ungoogled Chromium on my Mac occasionally, for things Firefox doesn't do well, but mostly I use Firefox and Safari.

In that article there's something I disagree with, having tried it myself:

> Most of the functionality of the patches are either in the best case minimally beneficial or can be reproduced with either a setting, a flag, or a switch,

Some years ago, on Linux, I tried to find all the command-line flags necessary to use Chrome without having it talk back to Google. Unfortunately, despite hunting for every option I could find (and there is a surprisingly large number of undocumented options), including by running "strings" on the binary, nothing I tried completely suppressed traffic to Google while using Chrome on sites not connected with Google.

My motivation at the time was to run my own local applications using Chrome as the UI, the way Electron is used now. It didn't work out because I failed to find a way to confidently suppress traffic to Google.
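For illustration, a sketch of the kind of invocation I was attempting. These are real Chromium switches, but they're largely undocumented, the set changes between versions, and, as noted, even a pile of them did not fully silence traffic to Google in my testing:

```shell
# Attempted "quiet" Chromium launch for a local app UI.
# These switches exist in Chromium, but they are unstable across versions
# and did NOT fully suppress background traffic to Google in my testing.
chromium \
  --disable-background-networking \
  --disable-component-update \
  --disable-sync \
  --disable-domain-reliability \
  --disable-client-side-phishing-detection \
  --disable-default-apps \
  --app=http://localhost:8000/
```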

That experience is why I run Ungoogled Chromium now when I need Chrome functionality.


"third parties contribute prebuilt binaries, so there is no central party to trust."

For me, Google _is_ a third party. And taking its behavior into account, it receives zero trust from me. What I really don't understand is this blind trust in everything Google does.


As for the Chromium-relevant configuration changes that Ungoogled Chromium applies automatically: I would never have discovered them all by myself, and even now that I know about this article, I'm not convinced I could stay on top of every new switch and configuration change in future versions of Chromium.

That said, the "binaries built by anyone" thing is pretty suspect.


Even as someone who doesn't use Ungoogled Chromium, those are pretty dumb reasons not to use the browser. The biggest draw (to me) is having the telemetry stripped out of the source before it's compiled, so that Google can't remotely activate any tracking, fingerprinting, or identification.


What do you think makes Chrome's sandbox "far superior" on Linux? AFAIK they're very similar.


This is typically what I cite when people ask me about FF vs Chrome security: https://madaidans-insecurities.github.io/firefox-chromium.ht...

And here's Theo de Raadt's opinion of Firefox from back in 2018: https://marc.info/?l=openbsd-misc&m=152872551609819&w=2


Your first link is a good resource. Thanks! I'm not a security researcher, but perhaps I can add a few comments on their discussion for other users who might not be familiar with the specifics:

> Firefox's sandboxing lacks any site isolation. Site isolation runs every website inside its own sandbox so that an exploit in one website cannot access the data from another.

It does seem that Firefox's site isolation is maturing. From a Mozilla blog post two days ago, with instructions on how to manually enable it on Firefox stable, beta, or nightly: https://blog.mozilla.org/security/2021/05/18/introducing-sit...
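For convenience, if I recall that post correctly, the manual toggle boils down to a single about:config pref (flip it and restart the browser; the pref name comes from Mozilla's Fission project):

```
fission.autostart = true
```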

Also, from the same link, they mention X11:

> One example of such sandbox escape flaws is X11 — X11 doesn't implement any GUI isolation which makes it very easy to escape sandboxes with it.

Definitely true that X11 sucks (sorry, NVIDIA users). Fortunately, Wayland is becoming more mainstream now. I've been using it for several years already, on GNOME and Sway. Working great... even Electron is native now (Signal, VS Code, etc.).
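As a quick aside for anyone wanting to try it: assuming you're already in a Wayland session, Firefox's native Wayland backend (rather than running through XWayland) can be opted into with an environment variable; MOZ_ENABLE_WAYLAND is the real opt-in switch at the time of writing, though it may become the default eventually:

```shell
# Check which display protocol the current session is using
echo $XDG_SESSION_TYPE    # "wayland" on a Wayland session, "x11" otherwise

# Launch Firefox with its native Wayland backend instead of XWayland
MOZ_ENABLE_WAYLAND=1 firefox
```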

And lastly, they rightly mention PulseAudio:

> PulseAudio is a common sound server on Linux however, it was not written with isolation in mind, making it possible to escape sandboxes with it.

Over the last few months, PipeWire has become a mature drop-in substitute for PulseAudio in my experience. It was designed with isolation in mind, among a whole bunch of other things.
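A quick way to check the drop-in claim on your own machine, assuming pipewire-pulse is providing the PulseAudio-compatible socket:

```shell
# Existing PulseAudio clients keep working unchanged; the server just
# identifies itself as PipeWire emulating PulseAudio.
pactl info | grep "Server Name"
# e.g.: Server Name: PulseAudio (on PipeWire 0.3.x)
```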

Hoping Firefox can bridge the gap in security with Chrome so we can wholeheartedly recommend it to people without caveats! We deserve a fast, full-featured, secure, and open source alternative to proprietary web browsers.


Your first link is a good resource, thanks; it points out many valid issues where Firefox needs to catch up. It is definitely slanted, though; for example, Firefox's actual Rust usage is compared to Chromium merely talking about maybe using Rust someday, and the conclusion is "so that's a wash". Also, "the parts that are memory safe do not include important attack surfaces" isn't correct; all C++ code that manages dynamic memory is attack surface.

A slightly tangential issue: the mitigations section is not super compelling to me, because I think many mitigations are low-value. Evaluations of mitigations typically don't ask the right questions: How much work is it for an attacker to bypass the mitigation, assuming they're aware of it? Can that work be packaged and reused in multiple exploits? And how many bugs become completely non-exploitable thanks to the mitigation?


The rate at which escapes are found (and publicly disclosed) suggests to me that the Chrome sandbox is substantially more advanced than any other browser's.

Perhaps there are just as many escapes being found each quarter in Chrome as in the other browsers and they are just being hoarded privately, but I don't find that super plausible.


Where are you getting that data? I'm genuinely interested.

FWIW I think it is probable that the IPC APIs into and out of the sandbox have been more thoroughly fuzzed and otherwise tested in Chrome than in Firefox. I don't know how that translates into actual security though.



