Bug 613551 - JSGC_MAX_MALLOC_BYTES is too large on mobile
Opened 15 years ago • Closed 15 years ago
Categories
(Core :: JavaScript Engine, defect)
Tracking
Status: RESOLVED FIXED

Tracking flag | Status
---|---
fennec | 2.0b4+
People
(Reporter: pavlov, Assigned: dougt)
References
Details
Attachments
(3 files, 2 obsolete files)
3.47 KB, patch — cdleary: review+
1.13 KB, patch — jst: review+
1.22 KB, patch — jag+mozilla: review+
Bug 598650 doubled the already large max GC allocation size to 128 MB, which is more than we want the whole browser to use on mobile. This needs to be made a pref so that it can be lowered.
Updated (Reporter) • 15 years ago
tracking-fennec: --- → 2.0b3+
Comment 1 • 15 years ago
This is already configurable through the JS API, so I think this is the wrong component; it should go over to DOM or whoever claims ownership of nsJSEnvironment.
Comment 2 • 15 years ago
The naming is very confusing, but JSGC_MAX_MALLOC_BYTES has nothing to do with the actual heap size. It is just a GC trigger that kicks in when more than JSGC_MAX_MALLOC_BYTES bytes have been dynamically allocated since the last GC run.
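Comment 2's distinction (a trigger, not a heap cap) can be sketched as a toy counter. This is illustrative code only, not SpiderMonkey internals; all names (ToyRuntime, toyMalloc) are hypothetical:

```cpp
#include <cstdint>
#include <cstdlib>

// Toy sketch of a JSGC_MAX_MALLOC_BYTES-style trigger: bytes malloc'd
// since the last GC are accumulated, and once the running total exceeds
// the threshold, a GC is requested and the counter resets.
struct ToyRuntime {
    uint32_t maxMallocBytes;   // the "JSGC_MAX_MALLOC_BYTES" analogue
    uint32_t mallocedSinceGC;  // bytes allocated since the last GC
    uint32_t gcCount;          // how many GCs the trigger has fired
};

void toyGC(ToyRuntime& rt) {
    ++rt.gcCount;
    rt.mallocedSinceGC = 0;    // the trigger resets after every GC run
}

void* toyMalloc(ToyRuntime& rt, uint32_t nbytes) {
    rt.mallocedSinceGC += nbytes;
    if (rt.mallocedSinceGC > rt.maxMallocBytes)
        toyGC(rt);             // too much malloc'd since the last GC
    return std::malloc(nbytes);
}
```

In the real engine an embedder tunes the trigger through the JSAPI, e.g. JS_SetGCParameter(rt, JSGC_MAX_MALLOC_BYTES, bytes), which is what the patches in this bug wire up to a pref.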
Comment 3 (Reporter) • 15 years ago
If 128 MB are allocated since we last GC'd, we're likely to be out of memory. I was thinking that we probably want this closer to 16 or 32 MB, to cause more GCing?
Comment 4 (Reporter) • 15 years ago
Not sure if this bug is related, but we're now crashing on the V8 test while it is doing regex due to out of memory.
Comment 5 • 15 years ago
(In reply to comment #4)
> Not sure if this bug is related, but we're now crashing on the V8 test while it
> is doing regex due to out of memory.
This can only happen if you don't set JSGC_MAX_BYTES accordingly and rely on JSGC_MAX_MALLOC_BYTES for triggering the GC "early enough".
Comment 6 • 15 years ago
(In reply to comment #3)
> If 128mb are allocated since we last GC'd, we're likely to be out of memory. I
> was thinking that we probably want this closer to 16 or 32, to cause more
> GCing?
The problem with a lower limit is that it triggers many GC runs during SunSpider at suboptimal points.
I don't think it makes sense to use the same limits for mobile and the desktop browser so you should definitely change it for mobile.
Updated (Reporter) • 15 years ago
tracking-fennec: 2.0b3+ → 2.0b4+
Comment 7 • 15 years ago
How about setting JSGC_MAX_MALLOC_BYTES to |PR_GetPhysicalMemorySize()/8| or some other multiple (on both desktop and mobile)? That way we wouldn't need to change a pref as mobile devices have more and more memory, each individual device would have a suitable value.
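A minimal sketch of comment 7's scaling idea, with the physical-memory size passed in as a parameter (in Gecko it would come from NSPR's PR_GetPhysicalMemorySize()). The divisor and the clamp bounds here are illustrative assumptions, not values anyone proposed in this bug:

```cpp
#include <cstdint>

// Derive the GC trigger from physical memory instead of a fixed pref:
// one eighth of RAM, clamped to a sane range so tiny devices still get
// a workable trigger and big desktops don't exceed the current default.
uint32_t MaxMallocBytesFor(uint64_t physicalBytes) {
    const uint64_t kMin = 8ULL * 1024 * 1024;    // floor: 8 MiB (assumed)
    const uint64_t kMax = 128ULL * 1024 * 1024;  // ceiling: the 128 MiB default
    uint64_t candidate = physicalBytes / 8;      // one eighth of RAM
    if (candidate < kMin) candidate = kMin;
    if (candidate > kMax) candidate = kMax;
    return static_cast<uint32_t>(candidate);
}
```

With this shape, a 256 MiB phone lands on a 32 MiB trigger while a 4 GiB desktop clamps to the existing 128 MiB default, so no per-device pref changes are needed.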
Updated (Assignee) • 15 years ago
Assignee: general → doug.turner
Comment 8 (Assignee) • 15 years ago
The defaults will remain the same, but this will allow applications to tweak their settings.
Also, should we provide setting control over the other contexts too? I am specifically thinking about XPConnect and Jetpack.
Attachment #494832 - Flags: review?(jst)
Comment 9 • 15 years ago
See also bug 617505.
Updated • 15 years ago
Attachment #494832 - Flags: review?(jst) → review+
Comment 10 (Assignee) • 15 years ago
Status: NEW → RESOLVED
Closed: 15 years ago
Resolution: --- → FIXED
Comment 11 (Assignee) • 15 years ago
There will be a separate bug to set the right value on Fennec.
Comment 12 • 15 years ago
cdleary's going to have to explain how this changed the default value on 64-bit builds, but I backed it out because it was causing bug 623105, which wasn't happening on TraceMonkey (where this was the only non-merged candidate), and bug 623105 indeed stopped happening once it was backed out.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Comment 13 (Assignee) • 15 years ago
-1 instead of 0xffffffff?
Comment 14 (Assignee) • 15 years ago
I think the "* 1024 * 1024" probably passed the wrong value. Fixing that.
Attachment #501267 - Flags: review?(cdleary)
Comment 15 • 15 years ago
I'm not sure, but I think -1 * 1024 * 1024 is effectively -1UL << 20, which is certainly different. It was just the most plausible explanation for the single-shot-memory-intensive js1_5/Regress/regress-290575.js to start failing, as it allocates a big array with arguments that are then used as arguments to a function.
If we can isolate the failure more I'd be happy to help out with the debugging! In the meantime, I'm going to mark 623105 as fixed until we can get this passing regressions on all platforms.
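The arithmetic comment 15 worries about can be checked directly. This hypothetical helper mirrors the suspected buggy path: a pref value of -1 is multiplied by 1024 * 1024 before being truncated to 32 bits, which yields 0xFFF00000 (-1 shifted left by 20 bits, truncated) rather than the intended "unlimited" 0xFFFFFFFF:

```cpp
#include <cstdint>

// Illustrative reconstruction of the wrong code path (not the actual
// patch): no special-casing of -1, so the multiplication happens first.
// -1 * 1024 * 1024 == -1048576, which as a uint32 is 0xFFF00000 --
// a 4095 MiB limit instead of "no limit".
uint32_t BuggyMaxFromPref(int32_t pref) {
    return static_cast<uint32_t>(pref * 1024 * 1024);
}
```

That 0xFFF00000 result is exactly the "-1UL << 20" value comment 15 describes, and explains why a memory-hungry test like regress-290575.js could start failing under what was meant to be an unlimited setting.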
Comment 16 (Assignee) • 15 years ago
These pref changes really should be reviewed. jst: basically these match up with what the C++ code does.
Attachment #501272 - Flags: review?(jst)
Updated (Assignee) • 15 years ago
Attachment #494832 - Attachment is obsolete: true
Comment 17 (Assignee) • 15 years ago
cdleary, with the above patch (v.2), I was able to run jsreftest on a Linux 64-bit machine without any problem.
Comment 18 • 15 years ago
Comment on attachment 501267 [details] [diff] [review]
patch v.2
A few suggestions:
- Turn all values max <= 0 into -1UL instead of just max == -1.
- Use a PRUint32 as a temporary (for clarity), to indicate it will be coerced to a uint32 at the JSAPI boundary.
- Check for overflow after multiplication if you feel like it -- doesn't seem like we sanitize these inputs much, so it's probably unnecessary.
P.S. What we used to do was really weird... if the value was >= 32 we pinned it to 128MiB max js_malloc bytes? Gross.
Attachment #501267 - Flags: review?(cdleary) → review+
Updated • 15 years ago
Attachment #501272 - Flags: review?(jst) → review+
Comment 19 (Assignee) • 15 years ago
http://hg.mozilla.org/mozilla-central/rev/c5428f83814e
http://hg.mozilla.org/mozilla-central/rev/031ad5f6f767
Status: REOPENED → RESOLVED
Closed: 15 years ago → 15 years ago
Resolution: --- → FIXED
Comment 20 • 15 years ago
I got a warning for this on my Mac, where unsigned long is 64 bits. In that case |max| can never be -1UL (0xFFFFFFFFFFFFFFFF).
On both 32-bit and 64-bit platforms, just using PRUint32 is enough to turn that -1 into 0xFFFFFFFF, rendering the then-branch unnecessary.
If you want to treat any negative value or too large a value as 0xFFFFFFFF, you could do something like this:
static int
SetMemoryMaxPrefChangedCallback(const char* aPrefName, void* aClosure)
{
  PRUint32 max = (PRUint32)nsContentUtils::GetIntPref(aPrefName, -1);
  if (max > 0xFFF) // catches negative values and overflow
    max = 0xFFFFFFFF;
  else
    max = max * 1024 * 1024;
  JS_SetGCParameter(nsJSRuntime::sRuntime, JSGC_MAX_BYTES, max);
  return 0;
}
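To make jag's guard checkable outside Gecko, here is a standalone restatement; the function name is hypothetical and the pref read is replaced by a plain parameter:

```cpp
#include <cstdint>

// Restatement of the guard from comment 20: any negative pref (which
// becomes a huge uint32 after the cast) and any value of 4096 MiB or
// more maps to "unlimited" 0xFFFFFFFF; everything else is scaled to
// bytes without overflowing 32 bits (4095 MiB = 0xFFF00000 bytes,
// which still fits in a uint32).
uint32_t MaxBytesFromPref(int32_t pref) {
    uint32_t max = static_cast<uint32_t>(pref);
    if (max > 0xFFF)          // catches negative values and overflow
        max = 0xFFFFFFFF;
    else
        max = max * 1024 * 1024;
    return max;
}
```

The single unsigned comparison does double duty: a negative pref casts to a value far above 0xFFF, and any in-range positive value at most 0xFFF multiplies out safely, which is why no separate overflow check is needed.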
Comment 21 • 15 years ago
I didn't explain my suggestions in comment 18 clearly enough. (Communication breakdown: it's always the same!)
The patch I originally r+'d had:
+ PRInt32 max = nsContentUtils::GetIntPref(aPrefName, -1);
+ if (max == -1)
+ max = 0xffffffff;
+ else
+ max = max * 1024L * 1024L;
This was correct, but I was suggesting that you test (max <= 0) instead of (max == -1). I also thought it might be nice to make a separate uint32 variable to pass to the API, since the API would otherwise coerce it implicitly.
The committed diff had:
+ PRUint32 max = nsContentUtils::GetIntPref(aPrefName, -1);
+ if (max == -1UL)
+ max = 0xffffffff;
+ else
+ max = max * 1024L * 1024L;
Which is dangerous for the reason that jag describes: max coerces the int32 into a uint32, which is okay, but then sign extension won't happen in the max == -1UL comparison. With an int32 max, max == -1 is a signed comparison that will always occur at the appropriate widths.
I think the ideal is:
+ PRInt32 pref = nsContentUtils::GetIntPref(aPrefName, -1);
+ PRUint32 max = (pref <= 0) ? -1UL : max * 1024 * 1024;
With that approach the -1UL will be truncated into the uint32 on 64b platforms.
What happens now on 64b platforms is that we get (-1UL << 20), which is not the result we want (it loses out on 1 MiB of space).
Since the result only loses out on 1 MiB of space this doesn't block b9, but somebody please check my reasoning. Type coercion is always tricky.
Status: RESOLVED → REOPENED
blocking2.0: --- → ?
Resolution: FIXED → ---
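The width pitfall jag and cdleary describe in comments 20 and 21 can be distilled into two tiny predicates (both names hypothetical): with an unsigned 32-bit value, a comparison against -1UL only matches where unsigned long is 32 bits, while the signed comparison behaves the same at every width:

```cpp
#include <cstdint>

// With a uint32_t |max|, the comparison converts |max| to unsigned
// long by zero extension. On LP64 platforms -1UL is
// 0xFFFFFFFFFFFFFFFF while |max| becomes at most 0x00000000FFFFFFFF,
// so this is never true there; it only "works" where unsigned long
// is 32 bits.
bool UnsignedCompareMatches(uint32_t max) {
    return max == -1UL;
}

// With an int32_t |max|, comparing against plain -1 sign-extends both
// sides consistently, so the test is true exactly when the pref
// really was -1, regardless of platform word size.
bool SignedCompareMatches(int32_t max) {
    return max == -1;
}
```

This is why cdleary's ideal in comment 21 keeps the pref in a signed variable for the -1 test and only converts to unsigned when producing the final byte count.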
Updated • 15 years ago
blocking2.0: ? → ---
Comment 22 • 15 years ago
While it's unlikely anyone will pick a max over 4095 MiB, I think you should still guard against overflow from multiplication, hence my suggested change.
Comment 23 (Assignee) • 15 years ago
Attachment #502823 - Flags: review?(cdleary)
Updated • 15 years ago
Attachment #502823 - Flags: review?(cdleary) → review+
Updated (Assignee) • 15 years ago
Attachment #502823 - Flags: review+ → review-
Comment 24 (Assignee) • 15 years ago
Attachment #502823 - Attachment is obsolete: true
Attachment #502911 - Flags: review?
Updated (Assignee) • 15 years ago
Attachment #502911 - Flags: review? → review?(jag-mozilla)
Comment 25 • 15 years ago
Comment on attachment 502911 [details] [diff] [review]
followup with more correcterness.
Doug said he'd add a comment to point out the second test is for overflow. For clarity, the previous patch was assigning from an uninitialized |max| instead of |pref|, hence a bit more churn here.
Attachment #502911 - Flags: review?(jag-mozilla) → review+
Comment 26 (Assignee) • 15 years ago
Status: REOPENED → RESOLVED
Closed: 15 years ago → 15 years ago
Resolution: --- → FIXED