Bug 1414675 (UNCONFIRMED, enhancement, P3) - Introduce customizable javascript execution limits
Opened 7 years ago; updated 2 years ago

Component: Core :: JavaScript Engine
Tracking: firefox58 --- affected
Reporter: wavexx; Assignee: Unassigned
Keywords: ux-control

User Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:50.0) Gecko/20100101 Firefox/50.0
Build ID: 20100101

Steps to reproduce:

We are increasingly leaning towards applications as webpages, and as such FF itself has essentially become a process manager. However, FF offers very little control over the execution resources of the code it's running.

Unwillingly letting JavaScript mine bitcoin on the client, with unbounded memory and CPU resources, is something that shouldn't be possible unless the user has agreed to it, sandboxed or not. The inability to limit resources has created an overabundance of increasingly complex pages that have little or no regard for the resources they use.

I'd love to have the ability to set a hard limit for the following:

- heap size limit of the JS runtime
- instruction count limit
- execution time limit

The heap, instruction and time limits should sum across all scripts, workers and service workers currently registered for the current page and its nested iframes.

Reaching any of those limits should immediately *suspend* execution of the runtime and prompt the user for confirmation to either continue or stop execution. The confirmation should include the ability to remember the preference, in order to both block repeat offenders and allow legitimate scripts.

Note that this is *distinctly* different from the current behavior of "dom.max_script_run_time". "max_script_run_time" seems to be geared at bogus or badly coded scripts, but it doesn't prevent a worker from exhausting resources, nor does it actually *stop* the page while the prompt is displayed. A background tab can continue to consume resources indefinitely *despite* max_script_run_time.
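
For illustration, a minimal sketch (the file name here is made up) of the kind of worker that slips past the slow-script machinery entirely: spawning it returns immediately, so the page never looks "slow", while the worker thread burns a full core indefinitely, even with the tab in the background.

  // busy-worker.ts (hypothetical file, compiled to busy-worker.js):
  // a tight loop like this is never interrupted by the slow-script prompt,
  // which only watches scripts running on the page's main thread.
  let x = 0;
  for (;;) {
    x = (x + 1) | 0; // pointless work, standing in for mining or number crunching
  }

  // Main page script: the constructor returns immediately, so the page
  // itself never appears unresponsive.
  const w = new Worker("busy-worker.js");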

On the resource limits: the heap size is straightforward to quantify, but it's harder to say what constitutes an instruction for JS code [statements?], or to set a time limit that corresponds to a comparable amount of work across different architectures. The exact details shouldn't matter, as long as the limits are enforced.

Both the instruction and the time limit are needed in order to fully prevent the following scenarios:

- a page that takes advantage of a fast system to compute more than it should
- a page that works for too long on a busy system

We are at a stage where disabling JS is no longer an option, and we also agree that JS is meant to improve interaction with the page we're visiting. We don't want to block pages that use JS as intended, but we do want to heavily penalize rogue scripts that attempt to take advantage of the system they're running on to perform general, unrelated computation.

Thanks
Severity: normal → enhancement
Component: Untriaged → JavaScript Engine
Keywords: ux-control
Product: Firefox → Core
This is very well written and I agree with pretty much everything here. I've talked with the team about this briefly and have a few ideas... keep in mind that we haven't talked about how big a priority it is yet, or if we're actually going to work on it. Just the technical aspects.



Once upon a time, JS was single-threaded, and we had a "slow script dialog" that would interrupt a long-running script and let the user cancel execution. However, this has aged badly:

- The limit is per event, so it's far too easy to dodge using async code (timeouts, promises, async/await); see the sketch after this list.

- As wavexx mentions, it doesn't pause workers.

- I'm not sure how it works with WebAssembly; and there are probably other kinds of CPU/GPU load it doesn't measure or ration, like CSS animations and WebGL shaders.

- Cancelling doesn't stop the web site.

- It doesn't ration memory usage at all.
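
To make the first point concrete, here is a minimal sketch of how a page dodges a per-event limit today: each turn of the event loop does only a small slice of work, well under any slow-script threshold, yet the chain of timeouts never ends.

  // Each iteration does ~50 ms of work and then yields, so no single event
  // ever looks slow, but the page consumes CPU forever.
  async function churnForever(): Promise<never> {
    for (;;) {
      const end = performance.now() + 50;
      while (performance.now() < end) {
        Math.sqrt(Math.random()); // stand-in for real computation
      }
      await new Promise(resolve => setTimeout(resolve, 0)); // yield to the event loop
    }
  }
  churnForever();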

But each of these things can be fixed individually, in exactly the ways wavexx suggests (a rough sketch of the shared accounting follows this list):

- Make CPU limits sum across events.

- Make CPU limits sum across threads.

- Make CPU limits sum across types of code.

- Make sure the "slow script" UI blocks all execution; and if the user kills a web site, make sure it stays dead.

- Limit memory usage as well as CPU.
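
Purely as illustration (all names below are invented; nothing like this exists in Gecko), "summing across events, threads, and types of code" amounts to charging every slice of CPU time attributed to a site, wherever it ran, against one shared budget; exhausting that budget is what would trigger the suspend-and-prompt flow from comment 0.

  // Hypothetical per-site CPU accounting.
  type CodeKind = "event" | "worker" | "serviceworker" | "wasm" | "animation";

  class SiteCpuBudget {
    private usedMs = 0;
    private byKind = new Map<CodeKind, number>();
    constructor(readonly origin: string, readonly limitMs: number) {}

    // Called whenever a slice of work attributed to this origin finishes,
    // regardless of which thread or kind of code performed it.
    charge(kind: CodeKind, elapsedMs: number): boolean {
      this.byKind.set(kind, (this.byKind.get(kind) ?? 0) + elapsedMs);
      this.usedMs += elapsedMs;
      return this.usedMs <= this.limitMs; // false = suspend and ask the user
    }
  }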
Priority: -- → P3
I didn't mention it at the beginning because it's tangential, but resource usage is not the only concern at play here.

There have been several talks about mounting Rowhammer attacks via JS, which rely on unbounded memory and CPU resources and essentially allow bypassing even system sandboxing. Breaking ASLR, for instance, is essentially carried out via memory page exhaustion.

Introducing even a generous but *hard* limit on resources would deeply hamper the ability to carry out these kinds of attacks, which from the user's perspective are almost invisible.
(continuing from comment 1)

Then there's the user interface to these limits. This depends on the work in comment 1 (measuring and enforcing limits correctly).

UI is delicate. We're talking about limiting web site use of system resources -- that is, limiting web site performance -- which has obvious user experience tradeoffs. Users don't want to start playing a game and get a pop-up, "This site is using a lot of CPU; is that OK? [Allow] [Always Allow For xyzzy.com] [Set Limits]".

Nonetheless, a few ideas:

- Currently about:performance is basically useless to anyone but Firefox engineers. We could provide more useful information there, and users could "limit" web sites in a satisfying, low-tech way: by killing badly behaved tabs. I'm told Tarek is working on this already. (?)

- Web sites in background tabs -- stuff the user isn't actually looking at right now -- should be throttled by default. We already throttle event delivery for background tabs, but not CPU or memory usage.

- We could offer user-defined limits for all untrusted web pages, as comment 0 proposes, with user-defined exceptions for trusted sites. This is less attractive because it's unlikely any limits could be enabled for all users by default. It would annoy too many users, and they would blame Firefox. And if the feature is off by default, it can't protect many users.

In a sense, the open web is in heated competition with walled-garden platforms that expose the full power of the user's hardware, with no real limits. But we can definitely do better than we're doing now.
I agree the UI interaction is tricky to solve if you consider all possible scenarios.

I'm not sure the blame would fall directly on Firefox, though. Consider the following: a page churning through memory, up to the point where Firefox starts to become sluggish. Who's going to be blamed? Likely Firefox. Now imagine a default limit of 1GB of heap. This is *very* generous. Could you imagine a legitimate page currently sucking up that amount of memory? I've only seen this happen either intentionally but to ill effect (d3 websites loading huge interactive graphs that end up becoming unbearably slow), or unintentionally (JS timers leaking in background tabs over the course of days). In the first case, I would be happy to have a warning. In the second, I'd be happy to notice and just kill the page.

But setting aside the UI for a moment, we could take another approach.
What if we considered having just a WebExtensions API for this in the beginning?

This would offload the UI to an extension, which allows much more liberal experimentation. Just to draw a parallel, this is how ad-blocking extensions got started. I have absolutely no problem managing something like uMatrix, but others would probably be better off with the much more conservative built-in alternative.
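
Purely hypothetically (every name below is invented; nothing like it exists in the WebExtensions API today), such an interface could look roughly like this, leaving the prompting policy entirely to the extension:

  // Sketch of the shape a resource-limits WebExtensions API could take.
  type LimitKind = "heap" | "cpu";
  interface Limits {
    heapBytes?: number;       // hard cap on the JS heap for an origin
    cpuMsPerMinute?: number;  // rolling CPU budget across all of its threads
  }
  interface ResourceLimitsApi {
    set(origin: string, limits: Limits): Promise<void>;
    onLimitReached: {
      addListener(cb: (origin: string, what: LimitKind) => void): void;
    };
  }

  // An extension such as uMatrix could then pause the offender and handle
  // the "continue / stop / remember" prompt in whatever UI it prefers:
  function installPolicy(api: ResourceLimitsApi): void {
    api.onLimitReached.addListener((origin, what) => {
      console.log(`${origin} hit its ${what} limit; ask the user what to do`);
    });
    api.set("https://example.com", { cpuMsPerMinute: 5_000 });
  }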
(In reply to wavexx from comment #4)

> Now imagine a
> default limit of 1GB of heap. This is *very* generous. Could you imagine a
> legitimate page currently sucking up that amount of memory?

Bug 1392234 points out that our 2GB per-allocation (not per-heap) limit is a problem in some cases: "There are some applications in image processing, video codecs, OpenCV and games that would benefit from being able to allocate WebAssembly heaps of size 4GB."  1GB is plenty for plain JS, but large C++ code bases recompiled for the web will use that easily.
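
For a sense of scale (a sketch only; whether such a request succeeds depends on the engine's current limits), wasm memory is requested in 64 KiB pages, so the applications that bug talks about ask for far more than a 1GB default cap would permit:

  // 4 GiB = 65536 wasm pages of 64 KiB each, the heap size the use cases in
  // bug 1392234 want. Under a 1GB default limit, a request like this would
  // always need an "always allow this website" exception.
  const PAGE_SIZE = 64 * 1024;
  const FOUR_GIB_PAGES = 65536;
  const mem = new WebAssembly.Memory({ initial: 1, maximum: FOUR_GIB_PAGES });
  console.log(mem.buffer.byteLength / PAGE_SIZE); // 1 page committed up front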

There's increasing awareness that we have to do something about the CPU resources a page can consume; coin miners have made this a visible problem. It's not clear yet what form that fix should take -- a cap on CPU time, a cap on concurrent cores, or something else.
Your example from bug 1392234 is definitely not something a common webpage should be doing, though. IMHO hitting the threshold in such a case is perfectly acceptable, and combined with "always allow this website" it fixes the issue entirely.
Flags: needinfo?(sdetar)
Severity: normal → S3