6.26 KB, patch
|Details | Diff | Splinter Review|
Just to make it clear: it should say "But anyway, if it is not the limit: Around 400 000 floats is really very little." instead of "But anyway, if it is not the limit: 4 megabytes is really very little." With the second wording it sounds like I care about the garbage collector, but what I care about is the number of floats I can store in the Array.
We've made it an env var for testing purposes. It really can't be a pref, because prefs happen too late. Can you please provide a real-world scenario where you actually need to juggle that many floats?
Assignee: nobody → general
Product: Firefox → Core
QA Contact: build.config → general
Version: unspecified → Trunk
Currently I work on neural networks, and that bunch of floats is needed mainly to track their history (for learning purposes). I admit I don't reach the limit often, but I plan to combine neural networks with state-space search, and that will increase the number of floats by about a hundred times.
*** Bug 324844 has been marked as a duplicate of this bug. ***
After talking with brendan and shaver (see bug 324844) we've decided that this limit needs to be either raised or removed. The bfcache is currently based on the amount of system memory available, so maybe that's the way to go here, too.
Status: UNCONFIRMED → NEW
Ever confirmed: true
Running out of time on 1.8.0.2 -- if you're serious about that release we need a trunk-baked patch landed ASAP.
Flags: blocking1.8.0.2? → blocking1.8.0.2-
Created attachment 211211 [details] [diff] [review]
proposed fix

This is a minimal, backward-API-compatible fix to allow Firefox and other apps to schedule their own GCs, yet still run a last-ditch GC after a sufficient number of bytes have been allocated by JS_malloc.

Suggested by dbaron, thanks to him for pushing on the issue -- my only concern was API compatibility, because many SpiderMonkey embedders rely on the last-ditch GC based on the nominal heap-size threshold.

We should have done this for 1.5. The failure to raise or eliminate the threshold based on nominal heap size means greater memory pressure, sometimes putting the app on the edge of the cliff, where every allocation GCs. The app's performance is miserable at that point, often bad enough that you have to kill it.

/be
Comment on attachment 211211 [details] [diff] [review]
proposed fix

>     uint32              gcMaxBytes;
>+    uint32              gcMaxMallocBytes;

If we want this on the branch, do we need to add this member to the end of the runtime struct to preserve binary compatibility with people trying to link with Firefox's version of SpiderMonkey?

>+    rt->gcMaxBytes = rt->gcMaxMallocBytes = maxbytes;

A comment here mentioning that this provides needed backwards compat with pre-GCParameter SpiderMonkeys might not hurt.

>+    JS_SetGCParameter(mRuntime, JSGC_MAX_BYTES, 0xffffffff);

Another comment here as to why we're doing things this way might not hurt either. I might just want this comment because of the opacity of the previous value.

r=mrbkap
Attachment #211211 - Flags: review?(mrbkap) → review+
> If we want this on the branch, do we need to add this member to the end of the
> runtime struct to preserve binary compatibility with people trying to link with
> Firefox's version of SpiderMonkey?

If you mean the 1.8.0 branch, then "yes" ... I've seen some extensions that do link to SpiderMonkey in order to do fancy JS stuff. If you mean 1.8 for 1.8.1, then "probably yes" based on our no-interface-change policy for Gecko 1.8, but strictly speaking "no", since extensions can differentiate between FF 1.5.0.* and 2.0.0.* easily enough.
(In reply to comment #9)
> > If we want this on the branch, do we need to add this member to the end of the
> > runtime struct to preserve binary compatibility with people trying to link with
> > Firefox's version of SpiderMonkey?
>
> If you mean the 1.8.0 branch, then "yes" ... I've seen some extensions that do
> link to spidermonkey in order to do fancy JS stuff.

But JSRuntime is not part of the public API. Sorry, I don't buy it. We can add in the middle. This is not an interface at all; it's a concrete struct defined in a private header file.

Blake, where's the opacity? Is 4 * 1024 * 1024 less obviously a sane value than 0xffffffff?

Comments tomorrow, I'm tired.

/be
Assignee: general → brendan
> But JSRuntime is not part of the public API.

OK
Created attachment 211329 [details] [diff] [review] proposed fix, with more comments
Comment on attachment 211329 [details] [diff] [review]
proposed fix, with more comments

sr=shaver
Attachment #211329 - Flags: superreview?(shaver) → superreview+
Waiting for trunk to reopen. /be
Status: NEW → ASSIGNED
Flags: blocking1.8.0.2- → blocking1.8.0.2?
Priority: -- → P1
Target Milestone: --- → mozilla1.9alpha
(In reply to comment #10)
> But JSRuntime is not part of the public API.

We said that about supposedly opaque principal objects too, then had the Asgard plugin crashing fiasco.
(In reply to comment #15)
> (In reply to comment #10)
> > But JSRuntime is not part of the public API.
>
> We said that about supposedly opaque principal objects too, then had the Asgard
> plugin crashing fiasco.

That's a different animal: nsIPrincipal, an XPCOM interface that can be called into and even implemented outside of caps. This is a C struct that cannot be usefully implemented or called into outside the JS engine. There is nothing to fear here, and unreasonable fears should not make us add at the end of structs that are opaque types in the JS API.

/be
Please land on trunk for baking so we can consider this for 1.8.0.2
Created attachment 212701 [details] [diff] [review] branch version of patch
Status: ASSIGNED → RESOLVED
OS: Windows 2000 → All
Hardware: PC → All
Resolution: --- → FIXED
Comment on attachment 212701 [details] [diff] [review]
branch version of patch

Needs more bake time on the trunk. It just landed yesterday. Minusing for 1.8.0.2; plusing for 1.8.0.3.
Comment on attachment 212701 [details] [diff] [review]
branch version of patch

I don't think this needs more bake time. It removes a feedback loop that we believe is too low, which makes us hit vulnerabilities in our code. It is not needed to schedule the garbage collector -- the DOM code does that. Please reconsider.

/be
Attachment #212701 - Flags: approval1.8.0.2- → approval1.8.0.2?
(In reply to comment #20)
> (From update of attachment 212701 [details] [diff] [review])
> I don't think this needs more bake time. It removes a feedback loop that we
> believe is too low,

s/low/short/

Point is we have enough GC scheduling in the DOM to collect garbage. The problem with the 4MB threshold is actually two nasty bugs:

1. You can hit it hard due to lots of JS objects, and you're toast -- no way around it. Even if you have 1GB of RAM and can well afford hundreds of megs on JS objects.

2. You can hit it transiently, forcing a GC that does collect enough garbage to back off from the threshold by a decent interval. But hitting the threshold exposes more users to those bugs we'd rather find and fix by our own custom-build stress-testing.

Both problems show up in 1.5 talkback (see bug 307317) and bug reports. Neither should be shipped in 1.8.0.2.

The only downside of unconstraining the threshold is if the DOM fails to otherwise schedule GCs often enough. But we actually GC quite a bit from several opportunistic places in DOM code. And the other GC parameter that this patch introduces will automatically, w/o DOM intervention, run a GC after 4MB worth of allocations that go through JS_malloc. So we're doubly covered for GC scheduling.

Again, the bug to fix is unexpected "last ditch" GC nesting inside the GC's allocator.

/be
Comment on attachment 212701 [details] [diff] [review]
branch version of patch

Approved for 1.8.0 branch, a=dveditz for drivers
Attachment #212701 - Flags: approval1.8.0.2? → approval1.8.0.2+
Fixed on the 1.8* branches. /be
Keywords: fixed1.8.0.2, fixed1.8.1
See bug 319980 for a missing fix-part wanted by this bug. I'll nominate the patch over there. /be
Please provide testing guidance for verifying this fix. I don't think just loading the above URL is sufficient, but I'm not sure where or how we need to poke around.