Closed Bug 1456308 Opened 6 years ago Closed 6 years ago

Make WebAssembly optional

Categories

(Core :: JavaScript Engine: JIT, defect)

59 Branch
defect
Not set
normal

Tracking


RESOLVED WONTFIX

People

(Reporter: u474838, Unassigned)

References

Details

User Agent: Mozilla/5.0 (X11; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0
Build ID: 20180323154952

Steps to reproduce:

Before WebAssembly is enabled by default, can we please introduce a switch like we have for WebGL or the collection of WebRTC switches? This would be a tremendous help in avoiding the situation we have with JavaScript where we need to fight ill-behaving or malicious JS by using extensions to filter and control page behavior.

WebRTC, WebGL, and WebAssembly can all be useful, but unconditionally giving pages another angle to inject questionable payloads does not align with 2018 security standards. We have enough of those already, including pathological cases of CSS that can DoS a browser session.

Let's have pages request WebAssembly like they have to for persistent storage or notifications and integrate it with the planned overhaul of the permission popup/setting dialogs.

To be clear, I'm not trying to offend or troll, just making a suggestion for how the feature can be introduced in a way that its power can be tamed by users without the need for another extension.

Another consideration, in addition to the permission, is, of course, introducing space and time bounds for WebAssembly, maybe similar to the JS hang check.
I mean, observing the various projects that have begun releasing WebAssembly targets, it's clear that if we're not careful, we will soon be in a renaissance of ActiveX controls, Java applets, and Flash applets. The various device-access standards make this more evident. I don't want to have to run Firefox in a virtual machine with fake (virtual) devices just to keep it under control when fed questionable payloads.
Blocks: wasm
Component: Untriaged → JavaScript Engine: JIT
OS: Unspecified → All
Product: Firefox → Core
Hardware: Unspecified → All
Thanks for the report.

It is already possible to entirely disable WebAssembly with the about:config preference called javascript.options.wasm (set it to false instead of the "true" default). I am pretty sure that the Tor browser does this already, in highly secured modes.
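For what it's worth, when the engine does not expose WebAssembly (as when that pref is set to false), pages can feature-detect and fall back to plain JS. A minimal sketch; the function name and the fallback branches are illustrative, not a real page's code:

```javascript
// Feature-detect WebAssembly so a page degrades gracefully when the
// engine does not expose it (e.g. javascript.options.wasm set to false).
function wasmAvailable() {
  return typeof WebAssembly === "object" &&
         typeof WebAssembly.instantiate === "function";
}

if (wasmAvailable()) {
  // load the .wasm build of the app
} else {
  // fall back to the plain JS implementation
}
```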

> WebRTC, WebGL, and WebAssembly can all be useful, but unconditionally giving pages another angle to inject questionable payloads does not align with 2018 security standards.

WebAssembly's control flow is safe (no arbitrary goto), all memory accesses are sandboxed, etc. The only plausible way to attack it is through an exploit in the browser itself. In that respect, WebAssembly is no more or less safe than any other new API we add, and I don't think the introduced attack surface should prevent us from implementing new APIs.
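To illustrate the validation step with the standard JS API: every module is checked for well-formedness before any of its code runs. A small sketch, where the first eight bytes are just the wasm magic number and version, i.e. an empty module:

```javascript
// An empty but well-formed module: magic "\0asm" followed by version 1.
const empty = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);
console.log(WebAssembly.validate(empty));   // true

// Arbitrary bytes are rejected up front and never reach the compiler.
const garbage = new Uint8Array([0xde, 0xad, 0xbe, 0xef]);
console.log(WebAssembly.validate(garbage)); // false
```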

> Let's have pages request WebAssembly like they have to for persistent storage or notifications and integrate it with the planned overhaul of the permission popup/setting dialogs.

This doesn't sound workable, as WebAssembly is likely to be used in frameworks soon (references: [1] [2]), so asking the user's permission to use WebAssembly would be a major source of annoyance in day-to-day usage on popular web sites.

> Another consideration, in addition to the permission, is, of course, introducing space and time bounds for WebAssembly, maybe similar to the JS hang check.

Such time checks already happen, as with JS: after more than 10 seconds (by default) of execution, the slow-script doorhanger should show up. If it does not, that is an issue, and I invite you to open a new bug about it. We are also considering throttling background tabs which use WebAssembly with a high CPU load (bug 1401628).
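The watchdog idea can be sketched as a cooperative time budget; this is only an illustration of the principle, with invented names, not how the slow-script check is actually implemented:

```javascript
// Run a unit of work repeatedly until it reports completion or the time
// budget is exhausted. `step` returns true while work remains.
function runWithBudget(step, budgetMs) {
  const start = Date.now();
  let iterations = 0;
  while (Date.now() - start < budgetMs) {
    if (!step()) return { done: true, iterations }; // no work left
    iterations++;
  }
  return { done: false, iterations }; // budget exhausted, bail out
}
```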

[1] React: https://www.youtube.com/watch?v=3GHJ4cbxsVQ
[2] Ember: https://www.youtube.com/watch?v=qfnkDyHVJzs&feature=youtu.be&t=5880
Thanks for the quick reply, Benjamin. I realize my request is in conflict with
general goals, though please allow me to at least try to make a point and
plead for a change of policy when introducing advanced features. I'm still
not trolling.

(In reply to Benjamin Bouvier [:bbouvier] from comment #2)
> Thanks for the report.
> 
> It is already possible to entirely disable WebAssembly with the about:config
> preference called javascript.options.wasm (set it to false instead of the
> "true" default). I am pretty sure that the Tor browser does this already, in
> highly secured modes.

Thanks. I didn't think of searching for 'wasm', just tried 'assembly'.

> WebAssembly's control flow is safe (no arbitrary goto), all memory
> accesses are sandboxed, etc. The only plausible way to attack it is
> through an exploit in the browser itself. In that respect, WebAssembly
> is no more or less safe than any other new API we add, and I don't
> think the introduced attack surface should prevent us from implementing
> new APIs.

WASM's performance profile has inspired many use cases, previously deemed
impractical, to target WASM. This means we are on the verge of an
ActiveX/applet/Flash revival.

> This doesn't sound workable, as WebAssembly is likely to be used in
> frameworks soon (references: [1] [2]), so asking the user's permission
> to use WebAssembly would be a major source of annoyance in day-to-day
> usage on popular web sites.

The following is more of a stream of thought and possibly incoherent
in some places. If so, please bear with me.

Given the new power unleashed, the arguably less transparent format
(bytecode vs. JS source), plus the plan for having large code caches
for WASM, it has to be an annoyance. Nobody wants random sites
activating cameras or initiating WebRTC traffic, maybe participating
in an illegal WebTorrent seed, but that's what's possible today,
because the only permissions we are asked for are for microphone and
camera, not peer connections.

I absolutely understand the argument that by enabling WASM by default we
are not really worsening the security profile of a default browser
session. However, the fact that JS and other features were enabled
unconditionally a long time ago doesn't justify repeating the mistake
with WASM. All of these advanced and risky technologies should be opt-in.

It's already risky to run a browser in the same user profile as the one
with all your sensitive files, which might be extracted by malicious JS,
and we're trusting that the browser doesn't have bugs that allow a page
to overreach and abuse the file APIs. Once we give access to WASM, the
door is open wider than before.

I opened this bug knowing the decision has probably been made already
and features are prioritized higher than security, and I'm also aware
that my arguments are not strong enough to sway the team's opinion,
though I will still officially object and advise against opt-out.
We already have too many features that should be opt-in, and often
on-demand opt-in: for instance, WebRTC networking, file access APIs,
advanced CSS and DOM, remote fonts, and especially the ability to
run JavaScript in the first place.

If you're thinking WASM and all the browser features are a nicer way
to deploy applications than native applications were in the past, then
consider that you have no way of telling what payload is running on a
second visit to a page you have previously approved as trustworthy to
get access to all features. Maybe we can get signatures for pages and
have clear separation, in order to notice any time something changes in
a web page/app, and then reject it when it suddenly decides to do more
than initially promised.

> Such time checks already happen, as with JS: after more than 10
> seconds (by default) of execution, the slow-script doorhanger should
> show up. If it does not, that is an issue, and I invite you to open a
> new bug about it. We are also considering throttling background tabs
> which use WebAssembly with a high CPU load (bug 1401628).

Great, but that wouldn't hinder a malicious payload, since it could
strategically insert yield points to avoid hangs.
I would certainly like to argue further for a change in defaults, but I can
imagine it will be an uphill battle. So since javascript.options.wasm solves
part of the problem, I won't object if you decide to close this as WONTFIX
on grounds of different security-vs-features priorities on your part. No offense
taken or intended, just a little frustration that we have to fight for a
safe, unobtrusive, power-efficient browser experience.
> Nobody wants random sites activating cameras or initiating WebRTC traffic, maybe participating in an illegal WebTorrent seed, but that's what's possible today, because the only permissions we are asked for are for microphone and camera, not peer connections.

Note that activating the camera should always cause a permission pop-up to show up, unless the user has permanently authorized camera use on that website. WebRTC indeed leaks personal info (IP addresses), but it also provides many very useful features, like instant video chat between random people without any central server, peer-to-peer multiplayer games, and, as you said, WebTorrent, allowing decentralized alternatives to e.g. YouTube to flourish [1]. It's not only black or white here.

The same is true of wasm: there are wasm crypto miners on the one hand, and AAA video games running in the web browser (without plugins!) thanks to wasm on the other hand.

> Maybe we can get signatures for pages and have clear separation, in order to notice any time something changes in a web page/app, and then reject it when it suddenly decides to do more than initially promised.

Good news, everyone! Such a feature already exists and is called Subresource Integrity [2]. More generally, note that every security feature implemented by the browser can help for WebAssembly, since it is Just The Web: SRI as I just said, but also CSP (which can forbid dynamic evaluation of wasm modules on web pages), etc.
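For illustration, an SRI-protected script tag looks like the sketch below; the URL and hash are placeholders, and a real digest would be computed from the actual file (e.g. with openssl):

```html
<!-- The browser fetches app.js, hashes it, and refuses to execute it
     if the digest does not match the integrity attribute.
     "sha384-PLACEHOLDER" stands in for a real base64-encoded digest. -->
<script src="https://example.com/app.js"
        integrity="sha384-PLACEHOLDER"
        crossorigin="anonymous"></script>
```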

> Great, but that wouldn't hinder a malicious payload, since it could strategically insert yield points to avoid hangs.

Indeed; it might be a story of heuristics (and a game of cat and mouse), but this could eventually be addressed.

> No offense taken or intended, just a little frustration that we have to fight for a safe, unobtrusive, power-efficient browser experience.

If that's any consolation, wasm only has access to the same features as JavaScript; it just executes faster and is more compact as a transport format. Disabling wasm would create a precedent, and the same arguments could be used to request disabling JavaScript as well, which would be quite a usability disaster, considering how prevalent JS is on modern web sites.
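To make that point concrete: a wasm module only gets the capabilities its JS embedder hands it through the import object. A hand-assembled sketch of a module exporting a single `add` function, instantiated with an empty import object, so it can compute but cannot touch the network, DOM, or file system on its own:

```javascript
// Bytes for: (module
//   (func (export "add") (param i32 i32) (result i32)
//     local.get 0 local.get 1 i32.add))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body
]);

// Empty import object: the module receives no capabilities at all.
const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes), {});
console.log(exports.add(2, 3)); // 5
```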

Bear with me: I hear and understand your concerns about safety. However, security has always been a tradeoff between what we want to protect and what features we want. If you're concerned about wasm, then you should be concerned about JS too, and then you can use the NoScript addon or entirely disable JS/wasm (as Tor does in its highest security mode, if I remember correctly).

I think there are too many good things coming with wasm, and requesting opt-in for every single use would be a UX nightmare: I can imagine a non-technical person would be frightened by such a question and just imagine the worst can happen if they accept, so they would decline most of the time. This could result in poor usage, thus preventing wasm from accomplishing its first goal, which is to make web apps much faster to start and to run. Of course we could explain what wasm is and what the security risk is, but people might not want to take the time to read such notices.

I am also afraid this ship sailed a long time ago: wasm was approved at many levels months ago. It is also implemented in every other major browser, so disabling it would be a huge competitive disadvantage for Firefox, unless all agreed to change the way it's used. I hope my comments slightly convinced you that the situation is not as bad as initially expected. I don't know if there's anything else we can do on the browser side (that is, anything that wouldn't require disabling wasm entirely or requesting explicit permissions with a hanger). Considering this, I propose that we close this as WONTFIX. Thanks for taking the time to give us feedback, though.

[1] https://joinpeertube.org/en/
[2] https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity
Status: UNCONFIRMED → RESOLVED
Closed: 6 years ago
Resolution: --- → WONTFIX
I'm glad we make the same assessment, albeit with different perspectives
and priorities. While I can see the usefulness of unconditional peer
connections and their enabling torrenting in the browser, this is exactly
the kind of feature that might result in a lawsuit aimed at browser
makers. Many users of Firefox or Chrome live in places where you can
easily get a cease and desist, or worse, for having participated in
certain torrents. This means that if the user's browser can, unbeknownst
to them, use WebRTC to seed something, or otherwise use one of the many
features to upload and share copyrighted material, they will be in
trouble and rightfully wonder how this could happen. It's like pages
mining Monero, except that your browser is implicating you in a crime.

Now, in an ideal world, scripting and the ability to have the browser
do random things on the Internet would be used to modernize laws such
that the mere lookup of a DNS entry, connection to a website, or
involuntary torrenting would not be a basis for holding the user liable.
The argument would be that browsers are autonomous, due to pages
possessing the ability to control their behavior. It would be as if
browsing a website in your Tesla gave it access to the driving system
and made it navigate your car into an accident, into a prohibited area,
or simply ignore traffic rules.

In a browser, we can make this less of an issue by making opting in
a better user experience and implementing heuristics and better
filtering mechanisms that don't need to rely on third-party extensions.
Once we have better mechanisms and UX for this, it will be like the
mobile space: users wondering why foobar.com needs to access their
contacts, join a torrent, connect to www.highly-questionable.com, and
access the camera while doing so (maybe taking pictures of the person
who supposedly browsed the site with illegal content).

I know this is radical, but with the current laws it's dangerous to
use a browser, since a web page can attempt to access all kinds of
remote URLs, some of which are enough to get people in jail. In some
nations, people have been jailed for the mere existence of certain
messaging apps on their devices, or for loading the web page of said
app. I mean, if your browser started to access the 2018 replacement
of backpages.com, judge and jury will be against you, and you will
have a hard time arguing it wasn't you but the browser. It takes
people who understand how the web works to get you acquitted, and I
fear we will need one prominent case before laws are introduced that
reflect the reality of browsers, autonomous vehicles, or IoT devices
that do things on their own without the owner's consent.



Since you asked how we could improve the situation, I can make a proposal
for what might help tremendously. Everyone who is aware of the risks
involved in running a 2018 browser without filters has considered, or is
using, some extension to selectively control website behavior. What this
means is that browsers haven't yet adopted the full set of security
controls we have for things like iOS apps.

My proposal or wish list would be this:

- improve permission popups and management of saving permissions to be
  less obtrusive and safer

- get more inspiration from the Tor Browser Bundle and promote things
  that uMatrix or NoScript allow you to do as real features, not just
  knobs and extension-based filter injections. This means we would have
  to improve the way such filters can be implemented, making them first
  class, similar to Safari's or Opera's ad filter systems. This is again
  going in the direction of adopting safety features from the mobile
  app systems. We can learn a lot from the Tor Browser Bundle and mobile
  apps here.

- you know, if I navigate to bugzilla.mozilla.org and the new permission
  system is in place, the page will work but have parts disabled pending
  opt-in. With the right conventions, this could be presented in a way
  that is obvious while preserving safety and privacy.


Or better yet, move more things into encrypted P2P, making P2P normal
and making it impossible to tell what kind of traffic your browser
participated in, restoring privacy, if done right and the P2P
app/protocol doesn't expose crucial details. This is the naive
technical solution that will be fought, and that some will try to
outlaw, as the growing popularity of E2E messaging has shown us.
So while, as you say, the ship may have sailed on changing defaults,
once we have the better opt-in and filtering mechanism, it's as easy as
providing something like Developer Edition, in that it flips a
security-level switch you could also flip in the regular channel. From
an uneducated/unaware user's perspective, it's like installing a
different browser, with the aim of getting a safer experience, all the
while the default Firefox session accepts and executes any kind of
random payload.

The existence of browser variants and the pilot program give me confidence
that we can still fix the situation.