Open
Bug 681479
Opened 14 years ago
Updated 1 year ago
Need to account for DOM objects that hold significant amounts of non-GCd memory alive in GC heuristics
Categories: Core :: DOM: Core & HTML (defect, P5)
Status: NEW
People: (Reporter: khuey, Unassigned)
Whiteboard: [MemShrink:P2]
Attachments: 1 file
In bug 680847, the reporter presents a testcase that churns through a lot of FileReader objects, each loading a few MB of data into memory. This doesn't trigger GC, however, because those megabytes are allocated on the DOM side of things, which leads to explosive memory growth.
We ran into a similar problem with Images a while back, but I couldn't find the bug.
CCing some of the usual suspects for DOM/js gc interactions.
Comment 1•14 years ago
Something along these lines?
Comment 2•14 years ago
Sorry, I wrote a patch too quickly. This would probably help for the bug in question, but I guess this bug would be about creating some sort of general system by which GCed objects could report some amount of non-GCed memory usage to encourage sensible GCing?
Reporter
Comment 3•14 years ago
Something like that (you want to report the delta, not the total amount)... is that the canonical way to do this?
Reporter
Comment 4•14 years ago
(In reply to Josh Matthews [:jdm] from comment #2)
> Sorry, I wrote a patch too quickly. This would probably help for the bug in
> question, but I guess this bug would be about creating some sort of general
> system by which GCed objects could report some amount of non-GCed memory
> usage to encourage sensible GCing?
Right. If the solution is to one-off everything like that, we can do that, but we seem to be running into this a fair amount.
Comment 5•14 years ago
We have a manual hack for this for canvas right now, FWIW... It does feel really nasty. :(
Reporter
Comment 6•14 years ago
(In reply to Boris Zbarsky (:bz) from comment #5)
> We have a manual hack for this for canvas right now, fwiw..... It does feel
> really nasty. :(
Yeah, it's pretty ugly. :-/
Updated•14 years ago
Whiteboard: [MemShrink] → [MemShrink:P2]
It seems hard to come up with a solution to this problem that's actually good. Here's why I'm worried.
Our basic malloc heuristic works as follows. We use a counter, M, to count how many bytes have been malloced since the last GC. If it reaches 128 MB, we GC. A general solution to this problem would be to replace calls to malloc and free with JS_malloc and JS_free, since these increment the counter. (You could use JS_updateMallocCounter, but that seems pretty hacky to me.)
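The counter scheme described here can be sketched roughly like this; the names (counted_malloc, run_gc, MAX_MALLOC_BYTES) are illustrative, not SpiderMonkey's real identifiers:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical sketch of the malloc-bytes trigger described above. */
#define MAX_MALLOC_BYTES ((size_t)128 * 1024 * 1024)  /* 128 MB threshold */

static size_t malloc_bytes = 0;  /* M: bytes malloced since the last GC */
static unsigned gcs_run = 0;

/* Stand-in GC: resets the counter, as the real heuristic does. */
static void run_gc(void) {
    gcs_run++;
    malloc_bytes = 0;
}

/* JS_malloc-style wrapper: charge the counter, GC at the threshold. */
static void *counted_malloc(size_t nbytes) {
    malloc_bytes += nbytes;
    if (malloc_bytes >= MAX_MALLOC_BYTES)
        run_gc();
    return malloc(nbytes);
}
```

Note that the counter only ever goes up between GCs; the follow-on comments discuss the consequences of that.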
However, I think it's potentially bad if memory that we allocate via JS_malloc can be freed outside of a GC finalizer. The problem is that it's pretty easy to churn through 128MB and cause spurious GCs. I'm worried about code that looks like this:
for (...) {
    ...
    void *x = malloc(...);
    ...
    free(x);
    ...
}
(Not that we would write such a loop directly, but it's easy to create such patterns when lots of abstraction is involved.) The problem is that the malloc will increment M, but the free will not decrement it.
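A hypothetical simulation of that pattern shows the problem: a loop that mallocs and immediately frees 1 MB keeps live memory near zero, yet still drives the since-last-GC counter past the 128 MB threshold.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Illustrative only: counts how many "spurious" GCs a malloc/free
 * churn loop would trigger under a counter that mallocs increment
 * but frees never decrement. */
#define THRESHOLD ((size_t)128 * 1024 * 1024)  /* 128 MB */

static size_t churn_and_count_gcs(int iterations) {
    size_t counter = 0, spurious_gcs = 0;
    for (int i = 0; i < iterations; i++) {
        size_t n = (size_t)1024 * 1024;  /* 1 MB per iteration */
        void *x = malloc(n);
        counter += n;                    /* malloc charges the counter... */
        free(x);                         /* ...but free does not credit it */
        if (counter >= THRESHOLD) {
            spurious_gcs++;              /* GC despite ~0 live bytes */
            counter = 0;
        }
    }
    return spurious_gcs;
}
```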
Looking at the code in the bug, it seems like we would only want to use JS_malloc if the only reference to the FileReader is from JS. Otherwise there is no way that a GC could free the memory. However, this seems impractical.
So I guess I would propose the following instead. We need a way to decrement the counter when a realloc or free happens. To do this, we need to be able to figure out how big the buffer is. We could require that this value be passed in explicitly to JS_realloc/JS_free. Or we could get the info out of jemalloc somehow.
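One way to read the sized-free proposal, as a hypothetical sketch with the size passed in explicitly (a real version might instead ask jemalloc for the buffer size):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical sized malloc/free pair: the caller reports the buffer
 * size to free, so the counter can be decremented again. */
static size_t counter = 0;  /* net outstanding charged bytes */

static void *sized_malloc(size_t n) {
    counter += n;
    return malloc(n);
}

static void sized_free(void *p, size_t n) {
    assert(counter >= n);  /* caller must pass the size it charged */
    counter -= n;          /* undo the charge when the buffer dies */
    free(p);
}
```

With frees credited back, the counter tracks net outstanding bytes rather than cumulative allocation, so the churn loop above no longer inflates it.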
If we did this, then we could start converting lots of malloc calls in the DOM over to JS_malloc. This seems like a somewhat general solution. It's a lot of work, though, and I'm not sure it's worth it. Patching these when they break doesn't seem like it's been too bad so far.
Reporter
Comment 8•14 years ago
I thought about this some more on the plane back from the all hands. The issue with decrementing the counter when the file reader is destroyed is that you run into the following scenario:
- JS creates a FileReader which results in 20 MB of memory being allocated.
- The FileReader tells the JS engine to advance the malloc counter by 20 MB.
- Let's say the malloc counter is now at 50 MB.
- Sometime later (after multiple GCs) the FileReader is destroyed.
- The malloc counter is at 10 MB.
- The FileReader tells the JS engine to advance the malloc counter by -20 MB.
- The malloc counter is now at -10 MB.
It seems like what we really want to do is decrement the counter on free if there has been no GC since the allocation. That sounds pretty hard.
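The arithmetic in that scenario, traced with a hypothetical signed counter (the numbers are the MB figures from the steps above):

```c
#include <assert.h>

/* Signed so the negative-value bug in the scenario is visible. */
static long counter_mb = 0;

static void charge(long mb) { counter_mb += mb; }  /* delta report, can be negative */
static void gc(void)        { counter_mb = 0; }    /* GC resets the counter */
```

Walking through: charge(20) for the FileReader, other allocations bring the counter to 50 MB, a GC resets it, later allocations bring it to 10 MB, then the FileReader's destructor charges -20 MB and the counter lands at -10 MB.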
Also, FWIW, the number of call sites on the DOM side that we're talking about here is pretty small.
If we track frees, then we can make the malloc bytes heuristic work the same way as the normal GC scheduling heuristic. It works as follows:
We track the total number of bytes allocated by the GC in gcBytes. At the end of a GC, this holds the number of bytes of live data. We schedule another GC when gcBytes reaches three times this number. This bounds the memory overhead of garbage collection to a factor of 3.
If we track frees, we can do the same thing with malloc bytes. We would not reset malloc bytes during GC, so it could never hit a negative value.
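That gcBytes-style scheme can be sketched like so (names are hypothetical, not the engine's real fields):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch: schedule a GC when heap bytes reach three
 * times the live bytes measured at the end of the last GC. */
static size_t gc_bytes = 0;    /* bytes currently allocated */
static size_t gc_trigger = 0;  /* 3 * live bytes after the last GC */

/* Called at the end of a GC with the surviving byte count. */
static void end_of_gc(size_t live_bytes) {
    gc_bytes = live_bytes;
    gc_trigger = 3 * live_bytes;
}

/* Charge an allocation; returns true when a GC should be scheduled. */
static bool allocate(size_t n) {
    gc_bytes += n;
    return gc_trigger != 0 && gc_bytes >= gc_trigger;
}
```

Because frees would decrement gc_bytes rather than being forgotten, the counter can never go negative the way the delta-reporting scheme in comment 8 can.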
I checked in bug 730447. We should start using this API in the browser instead of JS_updateMallocCounter.
Comment 11•7 years ago
https://bugzilla.mozilla.org/show_bug.cgi?id=1472046
Moving all DOM bugs that haven't been updated in more than 3 years and have no one currently assigned to P5.
If you have questions, please contact :mdaly.
Priority: -- → P5
Assignee
Updated•6 years ago
Component: DOM → DOM: Core & HTML
Updated•3 years ago
Severity: normal → S3