Closed Bug 613551 Opened 15 years ago Closed 15 years ago

JSGC_MAX_MALLOC_BYTES is too large on mobile

Categories: Core :: JavaScript Engine (defect)
Hardware: ARM / All
Priority: Not set
Severity: normal
Tracking

Status: RESOLVED FIXED
tracking-fennec: 2.0b4+

People

(Reporter: pavlov, Assigned: dougt)

References

Details

Attachments

(3 files, 2 obsolete files)

Bug 598650 doubled the already large max GC allocation size to 128mb, which is more than we want the whole browser to be using on mobile. This needs to be made a pref so that it can be lowered.
tracking-fennec: --- → 2.0b3+
Depends on: 598650
This is already configurable through JS API so I think this is the wrong component and should go over to DOM or whoever claims ownership over nsJSEnvironment.
The naming is very confusing but JSGC_MAX_MALLOC_BYTES has nothing to do with the actual heap size. This is just a GC trigger that kicks in when more than "JSGC_MAX_MALLOC_BYTES" bytes are dynamically allocated since the last GC run.
If 128mb are allocated since we last GC'd, we're likely to be out of memory. I was thinking that we probably want this closer to 16 or 32, to cause more GCing?
Not sure if this bug is related, but we're now crashing on the V8 test while it is doing regex due to out of memory.
(In reply to comment #4)
> Not sure if this bug is related, but we're now crashing on the V8 test while
> it is doing regex due to out of memory.

This can only happen if you don't set JSGC_MAX_BYTES accordingly and rely on JSGC_MAX_MALLOC_BYTES for triggering the GC "early enough".
(In reply to comment #3)
> If 128mb are allocated since we last GC'd, we're likely to be out of memory. I
> was thinking that we probably want this closer to 16 or 32, to cause more
> GCing?

The problem with a lower limit is that it triggers many GC runs during SunSpider at suboptimal points. I don't think it makes sense to use the same limits for mobile and the desktop browser, so you should definitely change it for mobile.
tracking-fennec: 2.0b3+ → 2.0b4+
How about setting JSGC_MAX_MALLOC_BYTES to |PR_GetPhysicalMemorySize()/8| or some other multiple (on both desktop and mobile)? That way we wouldn't need to change a pref as mobile devices have more and more memory, each individual device would have a suitable value.
Assignee: general → doug.turner
The defaults will remain the same, but applications will be able to tweak their settings. Also, should we provide setting control over the other contexts? I am thinking specifically of XPConnect and Jetpack.
Attachment #494832 - Flags: review?(jst)
Attachment #494832 - Flags: review?(jst) → review+
Status: NEW → RESOLVED
Closed: 15 years ago
Resolution: --- → FIXED
There will be a separate bug to set the right value on Fennec.
cdleary is going to have to explain how this changed the default value on 64-bit builds, but I backed it out because it was causing bug 623105. That bug wasn't happening on TraceMonkey, where this was the only non-merged candidate, and it indeed stopped happening once this was backed out.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
-1 instead of 0xffffffff?
Attached patch: patch v.2
I think the "* 1024 * 1024" probably passed the wrong value. Fixing that.
Attachment #501267 - Flags: review?(cdleary)
I'm not sure, but I think -1 * 1024 * 1024 is effectively -1UL << 20, which is certainly different. It was just the most plausible explanation for the single-shot-memory-intensive js1_5/Regress/regress-290575.js to start failing, as it allocates a big array with arguments that are then used as arguments to a function. If we can isolate the failure more I'd be happy to help out with the debugging! In the meantime, I'm going to mark 623105 as fixed until we can get this passing regressions on all platforms.
these pref changes really should be reviewed. jst, basically these match up with what the C++ code does.
Attachment #501272 - Flags: review?(jst)
Attachment #494832 - Attachment is obsolete: true
cdleary, with the above patch (v.2), I was able to run jsreftest on a Linux 64-bit machine without any problem.
Blocks: 623072
Comment on attachment 501267 [details] [diff] [review]
patch v.2

A few suggestions:
- Turn all values max <= 0 into -1UL instead of just max == -1.
- Use a PRUint32 as a temporary (for clarity), to indicate it will be coerced to a uint32 at the JSAPI boundary.
- Check for overflow after multiplication if you feel like it -- doesn't seem like we sanitize these inputs much, so it's probably unnecessary.

P.S. What we used to do was really weird... if the value was >= 32 we pinned it to 128MiB max js_malloc bytes? Gross.
Attachment #501267 - Flags: review?(cdleary) → review+
Attachment #501272 - Flags: review?(jst) → review+
Status: REOPENED → RESOLVED
Closed: 15 years ago15 years ago
Resolution: --- → FIXED
I got a warning for this on my Mac, where UL is 64 bits. In that case |max| can never be -1UL (0xFFFFFFFFFFFFFFFF). On both 32-bit and 64-bit platforms just using PRUint32 is enough to turn that -1 into 0xFFFFFFFF, rendering the then case unnecessary. If you want to treat any negative value or too large a value as 0xFFFFFFFF, you could do something like this:

  static int
  SetMemoryMaxPrefChangedCallback(const char* aPrefName, void* aClosure)
  {
    PRUint32 max = (PRUint32)nsContentUtils::GetIntPref(aPrefName, -1);
    if (max > 0xFFF) // catches negative values and overflow
      max = 0xFFFFFFFF;
    else
      max = max * 1024 * 1024;
    JS_SetGCParameter(nsJSRuntime::sRuntime, JSGC_MAX_BYTES, max);
    return 0;
  }
I didn't explain my suggestions in comment 18 clearly enough. (Communication breakdown: it's always the same!)

The patch I originally r+'d had:

  +    PRInt32 max = nsContentUtils::GetIntPref(aPrefName, -1);
  +    if (max == -1)
  +      max = 0xffffffff;
  +    else
  +      max = max * 1024L * 1024L;

This was correct, but I was suggesting that you test (max <= 0) instead of (max == -1). I also thought it might be nice to make a separate uint32 variable to pass to the API, since the API would otherwise coerce it implicitly.

The committed diff had:

  +    PRUint32 max = nsContentUtils::GetIntPref(aPrefName, -1);
  +    if (max == -1UL)
  +      max = 0xffffffff;
  +    else
  +      max = max * 1024L * 1024L;

Which is dangerous for the reason that jag describes: the assignment coerces the int32 into a uint32, which is okay, but then sign extension won't happen in a max == -1UL comparison. With an int32 max, max == -1 is a signed comparison that will always occur at the appropriate widths.

I think the ideal is:

  +    PRInt32 pref = nsContentUtils::GetIntPref(aPrefName, -1);
  +    PRUint32 max = (pref <= 0) ? -1UL : max * 1024 * 1024;

With that approach the -1UL will be truncated into the uint32 on 64-bit platforms. What happens now on 64-bit platforms is that we get (-1UL << 20), which is not the result we want. Since the result only loses out on 1MiB of space this doesn't block b9, but somebody please check my reasoning. Type coercion is always tricky.
Status: RESOLVED → REOPENED
blocking2.0: --- → ?
Resolution: FIXED → ---
blocking2.0: ? → ---
While it's unlikely anyone will pick a max over 4095 MiB, I think you should still guard against overflow from multiplication, hence my suggested change.
Attached patch: followup (obsolete)
Attachment #502823 - Flags: review?(cdleary)
Attachment #502823 - Flags: review?(cdleary) → review+
Attachment #502823 - Flags: review+ → review-
Attachment #502823 - Attachment is obsolete: true
Attachment #502911 - Flags: review?
Attachment #502911 - Flags: review? → review?(jag-mozilla)
Comment on attachment 502911 [details] [diff] [review]
followup with more correcterness

Doug said he'd add a comment to point out that the second test is for overflow. For clarity, the previous patch was assigning from an uninitialized |max| instead of |pref|, hence a bit more churn here.
Attachment #502911 - Flags: review?(jag-mozilla) → review+
Status: REOPENED → RESOLVED
Closed: 15 years ago15 years ago
Resolution: --- → FIXED