
lots of time spent doing script destruction when lots of pages are open




JavaScript Engine
8 years ago
8 years ago


(Reporter: dbaron, Unassigned)




Firefox Tracking Flags

(blocking2.0 betaN+)


I started investigating why Firefox uses a lot of CPU in my session that has a lot of tabs open (probably around 600-700 right now, spread across 38 windows).

It turns out a decent part of the time is spent destroying scripts:  mrbkap says that when people pass a string to setTimeout, we'll go through js_DestroyScript for each timeout.

This bug has some similarities to bug 579653, but I want this bug report to focus on the script destruction performance issues.

An example of a page that's (needlessly) using a lot of timeouts is any article on http://hacks.mozilla.org/ , such as http://hacks.mozilla.org/2010/08/fun-with-fast-javascript/ .  If I open 20 instances of that article in tabs, I end up using about 20%-40% CPU to execute the timeouts for the sharethis.com code that changes the icon next to the "ShareThis" link near the upper right of the page.

A profile of just that situation shows about 10% of the time spent in js::PropertyCache::purgeForScript (the largest hit).

A profile of my real session shows that the dominant problem there is actually js::PurgeScriptFragments (over 20% of the time), and purgeForScript is much smaller (1.1%).

(I admit I'm profiling a debug build, since that's what I happen to be using.  However, I note that the performance problems with the simple testcase described above are also present in a nightly.)

What these two functions have in common seems to be that they loop over global data structures to find all the pieces in them that pertain to a particular script that is being destroyed.

Could we get a substantial performance gain here by keeping the script, or some aspect of it, alive until the next GC, running purgeForScript and PurgeScriptFragments only once per GC (and even then only if scripts are marked for destruction), and only after that point really destroying any such scripts?

Or, alternatively, could we use a data structure that allows removal of these things for one script without traversing those for all scripts?
blocking2.0: --- → ?
JSScript is gced now, last I checked (and Luke confirms).  If we're creating new scripts for timeouts, then we're going through JS_EvaluateUCScriptForPrincipals.  Unfortunately, that calls JS_DestroyScript, so it destroys the script synchronously.  Does it have to?

Using a smarter data structure would be a good idea anyway.
I was a bit loose with saying "JSScripts are gced".  JSScript itself isn't -- it's malloc'd -- but it is attached to either a JSObject or a JSFunction that is.
So, IIUC, the only concern is that the script's memory gets recycled and then we get a false prop-cache or fragment-table hit.  Is that right?

If so, then just not cx->free'ing the script memory at the end of js_DestroyScript, and instead adding it to a linked list that gets cx->free'd at GC (at which point we purge the tables) seems like it would do the trick.  This would have the added benefit that (1) we could do a single walk over these tables for multiple scripts (better locality) and (2) cx->free would do gcSweepTask->freeLater() instead of runtime->free.
Several points:

1. GC all scripts is bug 438633.

2. Scripts are usually owned by GC-things (objects, function objects) but the old JS API lets users New and Destroy without a GC-thing owning the script, and this is a problem to be solved by bug 438633.

3. Not every GC purges property cache and fragment tables, so we still need a way to know when the weak refs from those caches into JSScript bytecode are about to dangle.

4. JSScript is going to be compartment-local and mutable. We want it to hold the relevant cached data, just as in JM it holds its PICs. This may not be a bug to fix before Firefox 4 / Mozilla 2, though.

I think we need to fix bug 438633. It is on jimb's list but I'm not sure he has time for it atm. Whoever can fix it first should grab it if that's ok with Jim.

Depends on: 438633
(In reply to comment #4)
> 3. Not every GC purges property cache and fragment tables, so we still need a
> way to know when the weak refs from those caches into JSScript bytecode are
> about to dangle.

I guess we have two different uses of the word "purge".  PropertyCache::purge means flush the whole table, while PurgeScriptFragments means remove a single script's fragments from the fragment table, leaving the rest.  So, what I meant was: save up the JSScripts that are about to be cx->free'd and then purge *just those* from the fragment table with a (batched) PurgeScriptFragments.

Furthermore, unless I am missing an escape clause in the chain
  js_GC -> GCUntilDone -> PreGCCleanup -> js_PurgeThreads ->
  JSThreadData::purge -> PropertyCache::purge

then it looks like we already purge the property cache on every GC, so there is no additional work required there.  (Btw, this seems to contradict the comment in js_DestroyScript.)
One other thing: David told me that the whole vmfragments table went away in JMv1 and will go away again in JMv2 (but not until after the merge, to decrease merge pain).  So with that + the property cache being flushed on every GC (unless I missed something), all we would need to do is postpone cx->free'ing scripts until GC.
Luke: you're right, I forgot about bug 488731 and bug 506341 (which is nicely cited in the code by a FIXME; see jscntxt.cpp around line 571).

We are purging too much, but we should aim to fix all this, for PICs of any kind (which should hold weak refs). OTOH at least JSC (not sure about V8) holds strong refs from its PICs and just lets them get cycled out over time. Optimistic and simpler, avoids the purge prob.



8 years ago
blocking2.0: ? → betaN+
Is this still happening? I can open 40 copies of http://hacks.mozilla.org/2010/08/fun-with-fast-javascript/ without using more than 1-2% CPU steady-state on a TM nightly opt build. This bug has been around for a while; maybe it got fixed.
Well, it looks like a bunch of potential causes got fixed, and comment 8 failed to repro, so marking WFM.
Last Resolved: 8 years ago
Resolution: --- → WORKSFORME


8 years ago
No longer depends on: 584860


8 years ago
Depends on: 584860