Closed Bug 1223782 Opened 10 years ago Closed 7 years ago

Erratic IndexedDB large file store operations cause IndexedDB to consume a large amount of memory that is not freed.

Categories: Core :: Storage: IndexedDB, defect, P3
Status: RESOLVED WORKSFORME
Reporter: jujjyl (Unassigned)
We are tracking a bug with a Mozilla partner in an Emscripten application that uses IndexedDB to persist files to the local system for caching.
A bug in Emscripten ( https://github.com/kripken/emscripten/issues/3908 ) caused the Emscripten application to perform a lot of parallel IDB store operations. While this is being fixed separately, the bug uncovered another bug in Firefox:
When the application spams a lot of IDB stores, Firefox consumes about 40GB of RAM (that's fine, the application did indeed misbehave). However, the issue is that after the large spam of stores is finished, this temporary memory is never freed, but Firefox retains the large memory allocation, which shows up in about:memory as
Main Process
Explicit Allocations
41,853.10 MB (100.0%) -- explicit
├──41,691.10 MB (99.61%) ── heap-unclassified
Note that the allocation is in the main firefox.exe process and not in the content process. Clicking the about:memory buttons for GC/CC/Minimize memory usage does not free up this memory. The consumed memory is not a traditional dangling-pointer type of memory leak, since when one closes the tab, the consumed 40GB of memory is immediately freed.
Debugging indicates that the 40GB of memory is consumed in the IndexedDB backend in Firefox, since skipping the IDB store.put() operations avoids the large memory allocation altogether.
The expected behavior is that once Firefox finishes the large number of IndexedDB operations, the memory consumption should go down.
I have been trying to create a small test case, but that has proven quite difficult, since the memory consumption seems to be timing-specific on parallel IndexedDB operations. An always-reproducing test case does exist in the full app, but it is subject to a Mozilla partner NDA, so I can't link it directly here.
Comment 1•10 years ago
Can you run DMD on it?
Comment 2•10 years ago
Is it possible it was leaking transactions?
Updated•7 years ago
Priority: -- → P3
Comment 3•7 years ago
There have been extensive cleanups in Blob handling in general and in IDB in particular since this bug was filed, which is likely what this problem was related to. I'm going to resolve this WFM. Obviously, if it's still happening, please let us know.
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → WORKSFORME
Comment 4•7 years ago
When I attempt to write a large object (built entirely on the client), composed of three levels of nested objects, to IndexedDB, it grabs a large amount of memory (as seen in the Windows Task Manager for firefox.exe) and crashes the computer unless the transaction is stopped.
I can get the object written to IndexedDB if I delete each component object property immediately after it is written to an object store. Even so, memory usage continues to grow throughout the process, jumping up by some amount and then dropping back by a lesser amount, and reaches about 1.0-1.5GB before the transaction completes. After about 10 to 30 seconds, the memory is eventually released in most cases.
In the cases where the memory is not released, a refresh won't release it; only closing the page will.
The object size is a small fraction of the amount of memory used to write it to the database.
If I write a large object of similar size that does not contain the nested levels of smaller objects, the memory usage issue does not occur.
Would you please let me know whether this could be related to what was earlier thought to be a bug, or whether this is expected to occur during serialization of any large object, similar to what happens when one attempts to use JSON.stringify on a large object?
Looking through related questions on Stack Overflow, it appears that many people have struggled with memory usage during serialization of an object, without finding a good solution.
I can certainly provide an example. However, because much of what is discussed on this site far exceeds my knowledge level in this area, I figured I'd wait to see if you feel the issue I've experienced could be at all related to this earlier bug. I don't want to waste anyone's time.
Thank you.
Comment 5•7 years ago
Gary, your problem sounds different from this bug, which I surmised to involve Blob/File instances. Please file a new IndexedDB bug and cc me on it. Please be sure to mention whether you are trying to store objects that use getters or special object types like TypedArrays/ArrayBuffers, Blob/File instances, WebAssembly/etc. Example code, like a glitch.com example that reproduces the problem, would be invaluable.
In general, we expect the dynamic memory usage of a put() to be a small multiple of the size of the data on disk. The one caveat is that we do try to compress the data we write to disk, so if you do something like store an ArrayBuffer that is 10 megabytes of zeroes, it will of course compress very well, but will take up an immense amount of memory in the process of being written to disk. IDB applies the structured clone algorithm on a per-value basis, so complex object graphs should not be a problem as long as they exist inside a single put().