Closed Bug 317865 Opened 19 years ago Closed 18 years ago

JavaScript GC memory limit - let it be optional

Categories

(Core :: JavaScript Engine, defect, P1)

Tracking

RESOLVED FIXED
mozilla1.9alpha1

People

(Reporter: zizka, Assigned: brendan)

Details

(Keywords: fixed1.8.0.2, fixed1.8.1, Whiteboard: [tcn-dl])

Attachments

(2 files, 1 obsolete file)

User-Agent:       Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.7.12) Gecko/20050915 Firefox/1.0.7
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.7.12) Gecko/20050915 Firefox/1.0.7

At http://lxr.mozilla.org/seamonkey/source/js/src/xpconnect/src/xpcruntimesvc.cpp#129 :
   const uint32 gGCSize = 4L * 1024L * 1024L; /* pref? */
Could this comment be turned into an actual setting in preferences?

I think I have reached this limit at 
  http://ondra.zizka.cz/projekty/bugs/sm_array_limit.html
(It causes an "out of memory" exception.)
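The linked test page is not reproduced in this bug, but a minimal sketch of the kind of loop that hits such a limit (hypothetical, assumed from the bug description, not the actual page contents) might be:

```javascript
// Hypothetical reproduction sketch -- NOT the actual contents of the
// linked test page. Grows an Array of floats; with a ~4MB GC heap cap,
// a loop like this fails with "out of memory" well before a million
// elements.
function fillFloats(count) {
  var a = [];
  for (var i = 0; i < count; i++) {
    a.push(i * 0.5); // each element is a double
  }
  return a;
}

var arr = fillFloats(1000000); // ~1 million floats
console.log(arr.length);
```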

But I am not sure whether it is _that_ limit, because the documentation at
  http://www.mozilla.org/js/spidermonkey/apidoc/gen/api-JS_NewRuntime.html says:
"maxbytes specifies the number of allocated bytes after which garbage collection is run."

In any case, even if it is not the limit: 4 megabytes is really very little. Something should be increased to allow at least a million floats (which gives around 10 MB?). Firefox usually takes several thousand megabytes anyway, so why worry about a few MB for JavaScript. It is not allocated "physically" until you use it, anyway.

Thanks, Ondra

Reproducible: Always

Steps to Reproduce:
http://ondra.zizka.cz/projekty/bugs/sm_array_limit.html

Actual Results:  
An "out of memory" exception is thrown and Firefox (JavaScript) hangs (on Win32).

Expected Results:  
Like MSIE - take as much memory as needed. That could be optional. Use some limit otherwise, but more than 4 MB (one should be able to set this too).

Firefox & JavaScript make a good tool for first-scratch code to test algorithms. This limit prevents them from being used for larger apps.
Just to make it clear - there should be:
"But anyway, if it is not the limit: Around 400 000 floats is really very little."
instead of
"But anyway, if it is not the limit: 4 megabytes is really very little."

Because with the second it sounds like I care about the garbage collector, but what I actually care about is the number of floats I can store in an Array.
We've made it an env var for testing purposes. It really can't be a pref because prefs are loaded too late. Can you please provide a real-world scenario where you actually need to juggle that many floats?
Assignee: nobody → general
Component: Build Config → JavaScript Engine
Product: Firefox → Core
QA Contact: build.config → general
Version: unspecified → Trunk
Currently I work on neural networks, and that bunch of floats is needed mainly to track their history (for learning purposes). I admit I don't reach the limit often, but I plan to combine neural networks with a state-space search method, and that will increase the number of floats by about a hundred times.
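As an illustration of how history tracking multiplies memory use (hypothetical code, not the reporter's actual project), snapshotting a weight vector once per epoch quickly reaches a million stored floats:

```javascript
// Hypothetical illustration: a network with 10,000 weights trained for
// 100 epochs stores 100 * 10,000 = 1,000,000 floats of history.
function recordHistory(weights, epochs) {
  var history = [];
  for (var e = 0; e < epochs; e++) {
    // snapshot a copy of the (flattened) weight vector each epoch
    history.push(weights.slice(0));
  }
  return history;
}

var weights = new Array(10000);
for (var i = 0; i < weights.length; i++) weights[i] = Math.random();
var hist = recordHistory(weights, 100);
// total floats stored: hist.length * weights.length = 1,000,000
```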
*** Bug 324844 has been marked as a duplicate of this bug. ***
After talking with brendan and shaver (see bug 324844) we've decided that this limit needs to be either raised or removed. The bfcache is currently based on the amount of system memory available, so maybe that's the way to go here, too.
Status: UNCONFIRMED → NEW
Ever confirmed: true
Flags: blocking1.8.1?
Flags: blocking1.8.0.2?
Running out of time on 1.8.0.2 -- if you're serious about that release we need a trunk-baked patch landed ASAP.
Flags: blocking1.8.0.2? → blocking1.8.0.2-
Attached patch proposed fix (obsolete) — Splinter Review
This is a minimal, backward-API-compatible fix to allow Firefox and other apps to schedule their own GCs, yet still run a last-ditch GC after a sufficient number of bytes have been allocated by JS_malloc.

Suggested by dbaron, thanks to him for pushing on the issue -- my only concern was API compatibility, because many SpiderMonkey embedders rely on the last-ditch GC based on nominal heap size threshold.

We should have done this for 1.5.  The failure to raise or eliminate the threshold based on nominal heap size means greater memory pressure, sometimes putting the app on the edge of the cliff, where every allocation GCs.  The app's performance is miserable at that point, often bad enough that you have to kill it.

/be
Attachment #211211 - Flags: superreview?(shaver)
Attachment #211211 - Flags: review?(mrbkap)
Comment on attachment 211211 [details] [diff] [review]
proposed fix

>     uint32              gcMaxBytes;
>+    uint32              gcMaxMallocBytes;

If we want this on the branch, do we need to add this member to the end of the runtime struct to preserve binary compatibility with people trying to link with Firefox's version of SpiderMonkey? 


>+    rt->gcMaxBytes = rt->gcMaxMallocBytes = maxbytes;

A comment here mentioning that this provides needed backwards compat with pre-GCParameter SpiderMonkeys might not hurt.

>+        JS_SetGCParameter(mRuntime, JSGC_MAX_BYTES, 0xffffffff);

Another comment here as to why we're doing things this way might not hurt either. I might just want this comment because of the opacity of the previous value.

r=mrbkap
Attachment #211211 - Flags: review?(mrbkap) → review+
> If we want this on the branch, do we need to add this member to the end of the
> runtime struct to preserve binary compatibility with people trying to link with
> Firefox's version of SpiderMonkey?

If you mean the 1.8.0 branch, then "yes" ... I've seen some extensions that do link to spidermonkey in order to do fancy JS stuff.  If you mean 1.8 for 1.8.1, then "probably yes" based on our no interface change policy for Gecko 1.8, but strictly speaking "no" since extensions can differentiate between FF 1.5.0.* and 2.0.0.* easily enough.
(In reply to comment #9)
> > If we want this on the branch, do we need to add this member to the end of the
> > runtime struct to preserve binary compatibility with people trying to link with
> > Firefox's version of SpiderMonkey?
> 
> If you mean the 1.8.0 branch, then "yes" ... I've seen some extensions that do
> link to spidermonkey in order to do fancy JS stuff.

But JSRuntime is not part of the public API.  Sorry, I don't buy it.  We can add in the middle.  This is not an interface at all, it's a concrete struct defined in a private header file.

Blake, where's the opacity?  4 * 1024 * 1024 is less obviously a sane value than 0xffffffff?

Comments tomorrow, I'm tired.

/be
Assignee: general → brendan
> But JSRuntime is not part of the public API.

OK
Attachment #211211 - Attachment is obsolete: true
Attachment #211329 - Flags: superreview?(shaver)
Attachment #211329 - Flags: review+
Attachment #211211 - Flags: superreview?(shaver)
Comment on attachment 211329 [details] [diff] [review]
proposed fix, with more comments

sr=shaver
Attachment #211329 - Flags: superreview?(shaver) → superreview+
Waiting for trunk to reopen.

/be
Status: NEW → ASSIGNED
Flags: blocking1.8.0.2- → blocking1.8.0.2?
Priority: -- → P1
Summary: JavaScript memory limit - let it be optional → JavaScript GC memory limit - let it be optional
Target Milestone: --- → mozilla1.9alpha
(In reply to comment #10)
> But JSRuntime is not part of the public API.

We said that about supposedly opaque principal objects too, then had the Asgard plugin crashing fiasco.
(In reply to comment #15)
> (In reply to comment #10)
> > But JSRuntime is not part of the public API.
> 
> We said that about supposedly opaque principal objects too, then had the Asgard
> plugin crashing fiasco.

That's a different animal, nsIPrincipal, an XPCOM interface that can be called into and even implemented outside of caps.  This is a C struct that cannot be usefully implemented or called into outside the JS engine.  There is nothing to fear here, and unreasonable fears should not make us add at the end of structs that are opaque types in the JS API.

/be
Please land on trunk for baking so we can consider this for 1.8.0.2
Flags: blocking1.8.0.2? → blocking1.8.0.2+
Attachment #212701 - Flags: superreview+
Attachment #212701 - Flags: review+
Attachment #212701 - Flags: approval1.8.0.2?
Attachment #212701 - Flags: approval-branch-1.8.1+
Status: ASSIGNED → RESOLVED
Closed: 18 years ago
OS: Windows 2000 → All
Hardware: PC → All
Resolution: --- → FIXED
Comment on attachment 212701 [details] [diff] [review]
branch version of patch

Needs more bake time on the trunk.  It just landed yesterday.  minusing for 1.8.0.2.  Plusing for 1.8.0.3.
Attachment #212701 - Flags: approval1.8.0.3+
Attachment #212701 - Flags: approval1.8.0.2?
Attachment #212701 - Flags: approval1.8.0.2-
Comment on attachment 212701 [details] [diff] [review]
branch version of patch

I don't think this needs more bake time.  It removes a feedback loop that we believe is too low, which makes us hit vulnerabilities in our code.  It is not needed to schedule the garbage collector -- the DOM code does that.  Please reconsider.

/be
Attachment #212701 - Flags: approval1.8.0.2- → approval1.8.0.2?
(In reply to comment #20)
> (From update of attachment 212701 [details] [diff] [review] [edit])
> I don't think this needs more bake time.  It removes a feedback loop that we
> believe is too low,

s/low/short/

Point is we have enough GC scheduling in the DOM to collect garbage.  The problem with the 4MB threshold is actually two nasty bugs:

1.  You can hit it hard due to lots of JS objects, and you're toast -- no way around it.  Even if you have 1GB of RAM and can well afford hundreds of megs on JS objects.

2.  You can hit it transiently, forcing a GC that does collect enough garbage to back off from the threshold by a decent interval.  But hitting the threshold exposes more users to those bugs we'd rather find and fix by our own custom build stress-testing.

Both problems show up in 1.5 talkback (see bug 307317) and bug reports.  Neither should be shipped in 1.5.0.2.

The only downside of unconstraining the threshold is if the DOM fails to otherwise schedule GCs often enough.  But we actually GC quite a bit from several opportunistic places in DOM code.  And, the other GC parameter that this patch introduces will automatically, w/o DOM intervention, run a GC after 4MB worth of allocations that go through JS_malloc.  So we're doubly covered for GC scheduling.

Again, the bug to fix is unexpected "last ditch" GC nesting inside the GC's allocator.

/be
Comment on attachment 212701 [details] [diff] [review]
branch version of patch

approved for 1.8.0 branch, a=dveditz for drivers
Attachment #212701 - Flags: approval1.8.0.2? → approval1.8.0.2+
Fixed on the 1.8* branches.

/be
Flags: testcase-
See bug 319980 for a missing fix-part wanted by this bug.  I'll nominate the patch over there.

/be
Please provide testing guidance for verifying this fix.  I don't think just loading the above URL is sufficient, but I'm not sure where or how we need to poke around.
Whiteboard: [tcn-dl]
Attachment #212701 - Flags: approval1.8.0.3+
Flags: blocking1.8.1?