JavaScript Engine


(Reporter: gal, Assigned: gal)


Mac OS X
Bug Flags:
wanted1.9.2 +

Firefox Tracking Flags

(Not tracked)



(1 obsolete attachment)



9 years ago
This seems to be some deprecated API that is only used by iterators. However, those allocations are already accounted for by JS_malloc anyway. I think this can safely be removed.

Comment 1

9 years ago
Created attachment 390156 [details] [diff] [review]
[Included in bug 506125]
Assignee: general → gal
Attachment #390156 - Flags: review?(igor)

Comment 2

9 years ago
(In reply to comment #0)
> This seems to be some deprecated API that is only used by iterators. However,
> those allocations are already accounted for by JS_malloc anyway.

JS_malloc just counts the total amount of allocated memory without paying attention to the corresponding free calls. It cannot be used for precise GC accounting and heuristics. js_AddAsGCBytes, on the other hand, counts memory whose allocation and release are both reported, so it can be used to enforce quotas and is better suited for GC tuning. Ideally all the memory allocated through JS_malloc should be accounted for in the same way, but that would require changing JS_free and JS_realloc to pass the size of the previous allocation.
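The asymmetry described above can be sketched as follows. This is purely illustrative: the counter and function names are hypothetical, not the actual SpiderMonkey fields or signatures.

```c
#include <stdlib.h>

/* Hypothetical sketch of the two accounting styles. These names are
 * made up for illustration; they are not the real SpiderMonkey API. */

static size_t gcMallocBytes = 0; /* JS_malloc-style: only ever grows   */
static size_t gcBytes       = 0; /* js_AddAsGCBytes-style: paired +/-  */

/* JS_malloc-style: the size is added on allocation, but the free call
 * carries no size argument, so nothing is ever subtracted. */
static void *sketch_malloc(size_t nbytes) {
    gcMallocBytes += nbytes;
    return malloc(nbytes);
}

static void sketch_free(void *p) {
    free(p); /* counter is NOT decremented: total only ever rises */
}

/* js_AddAsGCBytes-style: the caller reports both the allocation and
 * the matching release, so the counter tracks live memory and can be
 * used to enforce a quota. */
static void sketch_add_gc_bytes(size_t nbytes)    { gcBytes += nbytes; }
static void sketch_remove_gc_bytes(size_t nbytes) { gcBytes -= nbytes; }
```

With paired add/remove calls the counter returns to zero after the memory is released, while the JS_malloc-style counter keeps the stale total.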

Comment 3

9 years ago
We only use js_AddAsGCBytes for one specific, rare allocation. The vast majority of allocations just go through malloc. Is there any proof that we need this level of precision for the heuristics? Right now this is an almost unused API.

Comment 4

9 years ago
The reason for js_AddAsGCBytes is that an enumerator can be huge. If such an enumerator survives one GC and is then cached, then without js_AddAsGCBytes it would not contribute to the memory pressure, and the runtime would not know that a simple extra GC could release memory by cleaning the cache.

An alternative to js_AddAsGCBytes is an explicit bound on the cache size, but js_AddAsGCBytes is simple and avoids yet another arbitrary parameter for the runtime.
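The failure mode being described here can be sketched with a toy trigger check. The function and threshold logic are made up for illustration and are not the real heuristic; the point is only that folding cached-enumerator bytes into the accounted total lets a big cached enumerator trip the trigger for the collection that would clear the cache.

```c
#include <stddef.h>

/* Hypothetical GC-trigger predicate. If cachedEnumBytes (memory held
 * by cached enumerators, as reported via a js_AddAsGCBytes-like call)
 * is counted, a large cached enumerator can push the runtime over the
 * trigger and provoke the collection that frees it. If it is not
 * counted, that memory is invisible to the trigger. */
static int should_trigger_gc(size_t liveBytes, size_t cachedEnumBytes,
                             size_t triggerBytes) {
    return liveBytes + cachedEnumBytes >= triggerBytes;
}
```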

Comment 5

9 years ago
So what is the use case for this? The runtime explicitly inspecting the current GC bytes and then deciding to GC? Is this case ever actually hit in the browser?

Comment 6

9 years ago
I guess I just really don't understand why enumerators are any different from, for example, arrays. Arrays can also be huge, and they only contribute towards memory pressure once (via JS_malloc). In fact, I would assume arrays usually contribute far more to memory pressure than enumerators. Many arrays are no longer reachable and will be disposed of at the next GC, along with all the malloc-ed memory hanging off them; the runtime doesn't know that either. I think the right behavior for the embedding is to simply GC when memory is getting tight and see what happens. A forced GC is much more likely to recover memory by disposing of objects and arrays than of enumerators, so not running the GC because it looks like few enumerators are around seems like a bad strategy.

Comment 7

9 years ago
The difference between enumerators and other GC-controlled objects is that no-longer-used enumerators are cached. This cache is cleared during the GC, so the runtime should know that it can release some memory by running a collection cycle.

I added js_AddAsGCBytes after I saw this issue in a synthetic test case.

Comment 8

9 years ago
An alternative to js_AddAsGCBytes was to avoid caching big enumerators, but js_AddAsGCBytes made it possible to avoid putting yet another number, namely the enumerator size, into the runtime.

Comment 9

9 years ago
#7: The GC can never reliably know how much memory it can release without running a collection. Except in synthetic cases, a GC will always free up far more memory from dslots than from enumerators, and this API doesn't address dslots at all. If we are low on memory, it doesn't make sense to consult a heuristic about whether to GC or not. We just GC.

#8: I think it's fine to cache big enumerators. I don't see why we need an upper bound. All we have to ensure is that the GC runs when we hit memory pressure. Counting the bytes the GC allocates internally or via malloc is a very poor indicator of that, since the DOM independently allocates a ton of memory itself (which we don't see). We should detect actual memory pressure and then simply GC. I don't see the need for fancy heuristics in the memory pressure case. Incremental GC to avoid long pause times is a different story, but that doesn't seem to be the purpose of this API.
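The policy argued for here can be sketched in a few lines. The notification hook and GC stand-in below are hypothetical, not an existing embedding API; the sketch only shows the shape of "react to real pressure, skip the byte-counting heuristic".

```c
/* Hypothetical sketch: the embedding reacts to an actual memory
 * pressure signal (e.g. from the OS) by running a full collection,
 * rather than consulting VM-internal allocation counters. run_gc is
 * a stand-in for whatever GC entry point the embedding uses. */

static int gc_count = 0;

static void run_gc(void) {
    gc_count++; /* stand-in for an actual full collection */
}

/* Returns 1 if a collection was run, 0 otherwise. */
static int on_memory_pressure(int pressure_detected) {
    if (pressure_detected) {
        run_gc(); /* just collect; no allocation-counter heuristics */
        return 1;
    }
    return 0;
}
```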

I am still strongly in favor of dropping this API. It's yet another obscure accounting mechanism, and it further adds to the complexity of the GC heuristics without a clear use case or benefit. I really would like to reduce the overall code size and complexity of the GC.

So can we go ahead and rip this out?

Comment 10

9 years ago
(In reply to comment #9)
> #8: I think it's fine to cache big enumerators. I don't see why we need an
> upper bound. All we have to ensure is that the GC runs when we hit memory
> pressure.

js_AddAsGCBytes serves this purpose. Without one way or another of accounting for cached enumerators, one can construct an example where the runtime continues to allocate objects without running the GC even though there are over 100 MB of cached enumerators.

I agree that the implementation could be made simpler, but just dropping this is not the best option.

Comment 11

9 years ago
The problem we are trying to solve is that the GC isn't triggered when large chunks of malloc-ed memory hang off GCThings, and we can run into OOM.

This API solves the problem for a very specific, more or less irrelevant corner case, and is a distraction from any real solution. I can push the browser into OOM with a one-liner that just concatenates strings. No enumerators needed.

Memory pressure cannot be measured by observing allocations in the VM. In any non-trivial embedding, most allocations will happen and reside outside the VM. The API as-is simply can't work. I am all for true memory pressure measurement (let's file a bug), but in the meantime I would really like to reduce the complexity of the GC and take out whatever code isn't really helping. Our GC has 3x more code than a comparable implementation, is 3-5x slower, and has 3x worse pause times. The only way we will make progress on those fronts is by reducing complexity and throwing out dead weight.

Comment 12

9 years ago
Comment on attachment 390156 [details] [diff] [review]
[Included in bug 506125]

I agree, let's remove this.
Attachment #390156 - Flags: review?(igor) → review+
(In reply to comment #11)
> I am all for true memory pressure measurement
> (let's file a bug)

Bug 506125 already filed.



9 years ago
Depends on: 506125


9 years ago
Blocks: 505308


9 years ago
Last Resolved: 9 years ago
Resolution: --- → DUPLICATE
Duplicate of bug: 506125
No longer blocks: 505308
No longer depends on: 506125
Attachment #390156 - Attachment description: patch → patch [Included in bug 506125]
Attachment #390156 - Attachment is obsolete: true

Comment 15

8 years ago
With bug 512046 fixed, js_AddAsGCBytes and js_RemoveAsGCBytes are truly dead wood and can simply be removed.
Resolution: DUPLICATE → ---

Comment 16

8 years ago

I nominate this for 1.9.2, as bug 512046, which made js_AddAsGCBytes unused, is already wanted for 1.9.2.
Blocks: 512046
Flags: wanted1.9.2?

Comment 17

8 years ago
Last Resolved: 9 years ago → 8 years ago
Flags: wanted1.9.2? → wanted1.9.2+
Resolution: --- → FIXED