Closed
Bug 1507765
Opened 6 years ago
Closed 5 years ago
On 64-bit systems, if VM limits are low, do not use 6GB allocation trick
Categories
(Core :: JavaScript: WebAssembly, enhancement, P3)
Tracking
RESOLVED
FIXED
mozilla71
Tracking | Status
---|---
firefox71 | fixed
People
(Reporter: lth, Assigned: rhunt)
References
Details
Attachments
(1 file)
This is a more sophisticated response to the same problem as bug 1507759: if the process has stringent VM limits (say, not enough to accommodate a few dozen wasm memories), we should not use the reserve-6-gigs trick, but instead select traditional, slower bounds checking, as we do on 32-bit. We might print a warning about this in the console.
I expect this requires a fair amount of engineering, since we currently equate a 64-bit build with a huge heap at compile time; we won't be able to keep doing that.
(There are also other ways to mitigate the problem, including possible flags when the memory is created or the module is instantiated; and other techniques coming down the pike that will reduce the problem, like site isolation. I see those as complementary.)
Computing the virtual memory limits may be (a) system-specific and (b) probabilistic, and there's always the risk that we will regress some application that only has a small number of memories, but it's probably worth trying this anyway.
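To illustrate the trade-off the description is about, here is a minimal, hypothetical sketch (not SpiderMonkey code) of the explicit-bounds-check load that wasm would fall back to when the 6GB reservation is unavailable. With huge memory, a 32-bit wasm index can never escape the reservation plus guard pages, so no branch is needed; without it, every access pays a check like this one:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical stand-in for a wasm linear memory. The names here are
// illustrative, not SpiderMonkey's.
struct LinearMemory {
    std::vector<uint8_t> bytes;  // the wasm heap contents

    // Explicit-bounds-check load: the slower path this bug proposes to
    // select when virtual address space is scarce. Returns false where
    // real wasm would trap with an out-of-bounds access.
    bool loadU32(uint32_t offset, uint32_t* out) const {
        // Widen to 64 bits so offset + 4 cannot wrap around.
        if (uint64_t(offset) + sizeof(uint32_t) > bytes.size())
            return false;
        std::memcpy(out, bytes.data() + offset, sizeof(uint32_t));
        return true;
    }
};
```

The 64-bit widening before the comparison is the detail that makes the check correct for offsets near UINT32_MAX; the huge-memory trick exists precisely to avoid paying this branch on every access.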
Comment 1•6 years ago
Yes, in the limit we must definitely do this. Do you have a sense of whether this is increasing in prevalence?
Comment 2•6 years ago
Oooh, also: if we can make a once-up-front (per content process) decision of "this is a large vmem system", we can simply go "all bounds-checked" / "all guard pages" within that process, avoiding the instantiation-time-mismatch situation JSC has.
We would have to key cached code on this to handle extreme cases, handling them like a build-id mismatch, but these should not be common.
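The once-per-process decision described in this comment can be sketched as follows. This is a hypothetical illustration (the class and method names are invented, not SpiderMonkey's): the huge-memory choice is made exactly once at startup, so every module compiled in the process agrees on the strategy and there is no instantiation-time mismatch.

```cpp
// Hypothetical sketch of a process-wide, decide-once wasm configuration.
class WasmProcessConfig {
    bool hugeMemory_ = false;
    bool initialized_ = false;

  public:
    // Called once at content-process startup, after probing whether the
    // system has a large virtual address space.
    void init(bool largeVmemSystem) {
        if (!initialized_) {
            hugeMemory_ = largeVmemSystem;
            initialized_ = true;
        }
        // Later calls are ignored: the decision must not flip mid-process,
        // or cached code compiled under one strategy would mismatch.
    }

    bool hugeMemoryEnabled() const { return hugeMemory_; }
};
```

Keying cached code on this flag (as the comment suggests, like a build-id mismatch) then only matters when a cache produced under one decision is loaded by a process that made the other.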
Comment 3•6 years ago
Lars, just a heads up that bug 1502733 added js::gc::SystemAddressBits() [1] which should be able to tell you if the address range is small enough that the allocation trick needs to be disabled. So far it seems that this only applies to AArch64 Linux as far as automation is concerned.
[1] https://hg.mozilla.org/mozilla-central/file/e4ac2508e8ed/js/src/gc/Memory.h#l24
Flags: needinfo?(lhansen)
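A minimal sketch of the gating decision suggested here, assuming a query like js::gc::SystemAddressBits() that reports the usable virtual address width. The 38-bit threshold below is an illustrative guess, not the value Firefox ships; the point is only that the 6GB-per-memory reservation is affordable when the address space is large and must be disabled when it is not (e.g. on AArch64 Linux configurations with a small VA range).

```cpp
#include <cstdint>

// Illustrative threshold, not Firefox's actual tuning: below this many
// address bits, a few dozen 6GB reservations would exhaust the VA space,
// so fall back to explicit bounds checks.
constexpr uint32_t kMinAddressBitsForHugeMemory = 38;

// systemAddressBits would come from something like
// js::gc::SystemAddressBits() in a real embedding.
bool CanUseHugeMemory(uint32_t systemAddressBits) {
    return systemAddressBits >= kMinAddressBitsForHugeMemory;
}
```

On a typical x86-64 Linux system reporting 47-48 usable bits this returns true; on a 39-bit AArch64 kernel configuration it would still pass, while anything tighter falls back to bounds checking.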
Assignee
Comment 5•5 years ago
This commit uses gc::SystemAddressBits() to disable huge memory in low address
space situations on process startup. I've tried to pick a conservative number
here, but it may require some tuning.
Additionally, this change requires a change to the initialization order of JS
systems (Wasm <-> GC).
Depends on D42217
Assignee
Updated•5 years ago
Assignee: nobody → rhunt
Assignee
Comment 6•5 years ago
Pushed by rhunt@eqrion.net:
https://hg.mozilla.org/integration/autoland/rev/8830edfe87cd
Wasm: Use gc::SystemAddressBits() for determing whether we use huge memory. r=lth
Comment 8•5 years ago
bugherder
Status: NEW → RESOLVED
Closed: 5 years ago
status-firefox71: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla71