Open Bug 1864296 Opened 1 year ago Updated 9 months ago

Reloading dynamic webassembly module several times results in "failed to allocate executable memory for module" and WebAssembly stops working

Categories

(DevTools :: General, defect, P3)

Firefox 119
defect

Tracking

(Not tracked)

People

(Reporter: schierlm, Unassigned)

References

(Blocks 1 open bug)

Details

Attachments

(2 files)

Attached file wasm-reload.html

User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/119.0

Steps to reproduce:

First a bit of background: I am working on a WebAssembly port of an emulator that emulates a certain old instruction set architecture inside the browser. The emulator consists of a WebAssembly module for the emulator itself (exporting the memory) and a dynamic WebAssembly module used for just-in-time compilation (built by the emulator at runtime whenever code memory changes and the changed code is about to be executed; it contains a translation of the current instructions in code memory to WebAssembly). Whenever code memory gets invalidated, this dynamic module gets reloaded (a new one is instantiated, keeping no references to the old one). After a few hundred to a few thousand reloads (equal to a few minutes of emulator usage if the emulated software uses self-modifying code), the emulator stops working. Chrome/Edge/Safari are unaffected. It also happens on 105-esr, and on both Windows and Linux.

I made a very small, artificial reproducing example, which constantly reloads a 43-byte WebAssembly module that imports an external WebAssembly.Memory.
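
The attachment itself is not inlined here, but the core of the test page is roughly this kind of loop (the module bytes and the "env"/"mem" import names below are illustrative, not the attached 43-byte module):

  // Minimal wasm module that only imports a Memory under "env.mem".
  const bytes = new Uint8Array([
    0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // "\0asm", version 1
    0x02, 0x0c,                                     // import section, 12 bytes
    0x01,                                           // 1 import
    0x03, 0x65, 0x6e, 0x76,                         // module name "env"
    0x03, 0x6d, 0x65, 0x6d,                         // field name "mem"
    0x02, 0x00, 0x01                                // memory import, min 1 page, no max
  ]);
  const memory = new WebAssembly.Memory({ initial: 1 });
  let reloads = 0;

  async function batch() {
    for (let i = 0; i < 500; i++) {
      // No reference to the previous instance is kept.
      await WebAssembly.instantiate(bytes, { env: { mem: memory } });
      reloads++;
    }
    document.title = "reloads: " + reloads;
    setTimeout(batch, 200); // 500 reloads per 200 ms, as described below
  }
  batch();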

Steps to reproduce:

  1. Visit file:///C:/Users/Michi/Desktop/wasm-reload.html (also attached to this issue)
  2. Watch the reload counter increase (500 reloads per 200 ms). After somewhere between 10000 and 50000 reloads, the reload counter stops increasing and instead these two messages appear in the browser console:
  • WebAssembly module validated with warning: failed to allocate executable memory for module
  • out of memory
  3. Try the same in other browsers and see that they pass several million reloads without issues.

When using larger WebAssembly modules (a few megabytes), the page fails after fewer than 200 reloads.

Actual results:

The browser fails to reload the module, and WebAssembly.compile or WebAssembly.instantiate throw errors. There are no obvious references left that point to the unloaded WebAssembly modules, and there is no documented API call I could use to explicitly free the old modules. This effectively renders the emulator website useless on Firefox.

Expected results:

The browser should detect that the old WebAssembly modules are unused and free the resources they occupy, making room for the reloaded version.

The Bugbug bot thinks this bug should belong to the 'Core::JavaScript: WebAssembly' component, and is moving the bug to that component. Please correct in case you think the bot is wrong.

Component: Untriaged → JavaScript: WebAssembly
Product: Firefox → Core

I have to add a comment: if you start Firefox completely fresh and never open Developer Tools, the bug does not happen. Once Developer Tools are opened, the bug happens rather quickly.

So the workaround for now: restart Firefox and do not touch Developer Tools :)

The severity field is not set for this bug.
:rhunt, could you have a look please?

For more information, please visit BugBot documentation.

Flags: needinfo?(rhunt)
Flags: needinfo?(rhunt)
Attached file wasm-reload-sync.html

Here's an alternate version of the test case that is more synchronous and stops trying to create modules after the first OOM.

Status: UNCONFIRMED → NEW
Ever confirmed: true

Here's what I've been able to find:

  1. The OOM is caused by reaching the JIT executable memory limit of 2 GiB
  2. The page is creating a bunch of dynamic WebAssembly Instance objects, but dropping references to them
  3. Without DevTools open, we can GC these away without an issue
  4. With DevTools open, it looks like the Instances are being kept alive by a Debugger.Source object. That is being kept alive by some objects created by DevTools (I think the SourceActor, https://searchfox.org/mozilla-central/source/devtools/server/actors/source.js#112).
    - I was able to see this by using about:memory to dump GC logs and grepping around for what's keeping our instances alive.

It sounds, then, like the leak is being caused by DevTools, so I'm moving this over there. We also might be doing something wrong on the WebAssembly debugger integration end, but I'm not seeing it.

Component: JavaScript: WebAssembly → General
Product: Core → DevTools

(In reply to Michael Schierl from comment #2)

I have to add a comment: if you start Firefox completely fresh and never open Developer Tools, the bug does not happen. Once Developer Tools are opened, the bug happens rather quickly.

So the workaround for now: restart Firefox and do not touch Developer Tools :)

If you open Devtools and close them afterwards, do you still have the issue? Or does it only happen if you keep DevTools open?

Flags: needinfo?(schierlm)

(In reply to Ryan Hunt [:rhunt] from comment #5)

  1. With DevTools open, it looks like the Instances are being kept alive by a Debugger.Source object. That is being kept alive by some objects created by DevTools (I think the SourceActor, https://searchfox.org/mozilla-central/source/devtools/server/actors/source.js#112).
    - I was able to see this by using about:memory to dump GC logs and grepping around for what's keeping our instances alive.

You are very right. The Debugger has the historical design pattern of holding all the scripts alive so that they can be debugged at any time.
If we weren't doing that, you would have trouble debugging scripts which are GC'd eagerly: not being able to display their source content, set breakpoints, ...
I imagine you could reproduce similar memory trouble using regular JavaScript modules.

We might be able to revise this historical design, but that would be a sizable project.
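
Illustratively, the retention pattern looks roughly like the sketch below. This is not the actual DevTools code (see the SourceActor linked in comment 5); the cache shape and all names here are hypothetical:

  // Hypothetical debugger-side cache that keeps every Debugger.Source it has
  // ever seen strongly reachable.
  const sourceActors = new Map(); // source -> actor, strong references

  function getOrCreateSourceActor(source) {
    let actor = sourceActors.get(source);
    if (!actor) {
      actor = { source }; // stands in for the real SourceActor
      sourceActors.set(source, actor);
    }
    return actor;
  }

  // Because entries are never removed, each Debugger.Source (and the wasm
  // Instance behind it) stays reachable while the toolbox is open, even after
  // the page has dropped all of its own references.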

I can confirm that when I close DevTools "early enough", the bug does not happen. Also, when I close and re-open DevTools and manually trigger the crashed code again, it works for a while. I can probably implement a workaround that catches the issue at the right spot and asks the user to close DevTools and click continue (with no data loss).
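
Roughly, such a workaround could look like this (the error-message matching is an assumption based on the console messages quoted above, not a documented contract):

  async function instantiateWithRecovery(bytes, imports) {
    try {
      return await WebAssembly.instantiate(bytes, imports);
    } catch (e) {
      // Heuristic: match the messages observed in this bug; exact wording
      // is not guaranteed to be stable.
      if (/executable memory|out of memory/i.test(String(e))) {
        alert("Please close the developer tools, then press OK to continue.");
        return await WebAssembly.instantiate(bytes, imports); // retry once
      }
      throw e;
    }
  }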

In the Debugger view, only very few wasm instances (e.g. 3 or 7) show up. And even when I ignore all files in the Debugger (because all I want from DevTools is the console), it still happens. Is there another way to disable just the debugger but keep the console alive?

Do you know how other browsers solve this issue?

I assume there is no way for the script on the site to close and re-open DevTools when this happens? It would disrupt debugging sessions, but having a non-working website is probably more disruptive to users anyway.

About plain JavaScript: I've done lots of "eval"ing of dynamically created JavaScript (basically a predecessor of that emulator, which was written in JavaScript and compiled instructions into "eval"ed functions) but never ran into that issue there. Doing the same in wasm increased the performance considerably (no wonder, as all the string concatenation and parsing is no longer needed), so I don't really want to go back.

But I'm very old-fashioned and do not use any JS "modules" (<script type="module">) yet.

Flags: needinfo?(schierlm)

(In reply to Alexandre Poirot [:ochameau] from comment #7)

(In reply to Ryan Hunt [:rhunt] from comment #5)

  1. With DevTools open, it looks like the Instances are being kept alive by a Debugger.Source object. That is being kept alive by some objects created by DevTools (I think the SourceActor, https://searchfox.org/mozilla-central/source/devtools/server/actors/source.js#112).
    - I was able to see this by using about:memory to dump GC logs and grepping around for what's keeping our instances alive.

You are very right. The Debugger has the historical design pattern of holding all the scripts alive so that they can be debugged at any time.
If we weren't doing that, you would have trouble debugging scripts which are GC'd eagerly: not being able to display their source content, set breakpoints, ...
I imagine you could reproduce similar memory trouble using regular JavaScript modules.

We might be able to revise this historical design, but that would be a sizable project.

Okay, so if I understand you correctly, every JS script (or, in this case, wasm module) created in a page is kept alive by DevTools indefinitely? Even if the script is 'dead' to the web page and cannot be accessed by it anymore?

(In reply to Ryan Hunt [:rhunt] from comment #9)

Okay, so if I understand you correctly, every JS script (or, in this case, wasm module) created in a page is kept alive by DevTools indefinitely? Even if the script is 'dead' to the web page and cannot be accessed by it anymore?

Unfortunately, yes.
This design predates my involvement in the project.
AFAIK, the reasoning is to:

  • Retrieve the source content
  • Retrieve breakable positions (lines and columns where it is valid to set a breakpoint)

So that you can open any script involved in the page and set a breakpoint, even if the script is gone.
That allows the breakpoint to be set and hit if the script runs again, or on the next reload.

We fetch this on demand, only when the user opens the source.
I imagine we could somehow cache this information, but it might bring performance issues.

Having said all that, I haven't looked at the test page scenario.
Maybe there is something we can do specific to wasm sources.
I imagine they share the same URL, and they are all aggregated behind a single source in the debugger source tree.
We might be able to release/forget the duplicates.
But I'm having a hard time finding a way to detect when a script is released by the page without revising Debugger.Script to hold a weak reference instead of a strong one. This would also require adding an API to detect released scripts.
SpiderMonkey also assumes that the Debugger API only cares about new scripts, given that the API itself holds a script alive as soon as the related Debugger.Script JS object is held alive.
If we drop the SourceActor/Debugger.Script references as soon as we have duplicates for the same URL, we may break breakpoint support for scripts which aren't released.

(In reply to Alexandre Poirot [:ochameau] from comment #10)

Having said all that, I haven't looked at the test page scenario.
Maybe there is something we can do specific to wasm sources.
I imagine they share the same URL, and they are all aggregated behind a single source in the debugger source tree.
We might be able to release/forget the duplicates.
But I'm having a hard time finding a way to detect when a script is released by the page without revising Debugger.Script to hold a weak reference instead of a strong one. This would also require adding an API to detect released scripts.
SpiderMonkey also assumes that the Debugger API only cares about new scripts, given that the API itself holds a script alive as soon as the related Debugger.Script JS object is held alive.
If we drop the SourceActor/Debugger.Script references as soon as we have duplicates for the same URL, we may break breakpoint support for scripts which aren't released.

Two things that might be useful here:

  1. We only support breakpoints and debugging in wasm modules if they are created while the Debugger tab is open. If DevTools is open but on a different tab, like the Console or Inspector, then we shouldn't compile them to support debugging.
  2. This test case dynamically creates a bunch of wasm modules by using the JS API and building custom wasm bytecode. So there should be no URL, and future reloads of the page will never see this exact module.

With at least (2), maybe you could ignore wasm modules that the Debugger API reports without a URL? Or hold them in weak references so that they can be freed?
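
As a rough illustration of the weak-reference idea (plain JS, not actual DevTools code; the cache shape and names are hypothetical):

  // Hypothetical: track URL-less wasm sources weakly, so that once the page
  // drops its last reference the GC can reclaim the source and its instance.
  const weakSources = new Set();
  const registry = new FinalizationRegistry(ref => weakSources.delete(ref));

  function trackWasmSource(source) {
    const ref = new WeakRef(source);
    weakSources.add(ref);
    registry.register(source, ref);
  }

  function liveWasmSources() {
    // deref() returns undefined once the source has been collected
    return [...weakSources].map(ref => ref.deref()).filter(Boolean);
  }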

Blocks: wasm-tools
Severity: -- → S3
Priority: -- → P3
Duplicate of this bug: 1895309
