Consider eagerly compiling scripts in the scriptloader




Reported 8 years ago; resolved 5 years ago
(Reporter: bz, Unassigned)
Platform: Mac OS X


Right now the scriptloader maintains its pending scripts (whether blocked on stylesheet loads or pending due to speculative parsing or whatever) as strings.  Might it not make sense to compile them eagerly and then only execute at the right time?  From what I've seen, script compilation is not exactly cheap (for large scripts), so if we can do it while we're idle waiting on the network, that might be nice...

We would need to make sure to compile with COMPILE_N_GO, which means adding an argument to nsIScriptContext::CompileScript and frobbing the context flags in there (since JS_CompileUCScriptForPrincipals doesn't have a separate flags argument), but I think we can do that, right?
I'm pretty confident that this would be a win.  It's too bad that we can't pre-record (obviously) or anything, but we could maybe do more background analysis in the future...
OK, two concerns I've thought of:

1)  Web compat concern.  Compiling can report errors, which fire the onerror handler.  Since speculative preloading is involved, firing onerror handlers for scripts that aren't actually supposed to be executed is ... possibly suboptimal.  Not sure what we can do about that.

2)  Is it safe to COMPILE_N_GO if there might be other script executing (and maybe adding/removing global props in particular) between the compile and execute operations?
COMPILE_N_GO does not prebind properties in the global object, it only prebinds parent slots in function objects the compiler creates.

For error handling, you could intercept the reporter and queue the reports.

Bah.  The python stuff is in the way again.  We don't know the script-type id until we know where the node will be in the DOM, and the preload doesn't even have a node.  I'm tempted to say "screw this; pre-compilation will assume JS, and if you happen to actually have python we'll deal later".
So I'm told that at least as of today with JM the answer to my question 2 in comment 2 is "No, it's not safe to do that." 

Given which, is this worth doing?  I don't think we want to lose the compile_n_go optimizations....
Depends on: 563375
Assume JS.

Compile-and-go is in flux for JM, but it wins even today, so precompiling with JSOPTION_COMPILE_N_GO set is worth keeping. We'll figure out a "linker" solution in due course. If that makes the COMPILE_N_GO option a no-op but adds a bit of linker cost before execution, that's probably ok (measurement needed).

For now use JSOPTION_COMPILE_N_GO if you split JS_Evaluate* into JS_Compile* and then (at most one) JS_Execute.

No longer depends on: 563375
Depends on: 563375
Does doing something like this make sense in the world SpiderMonkey is evolving towards?  I know we're planning to get rid of COMPILE_N_GO, right?  Do we expect bytecode compilation to continue to be noticeable in terms of time taken?  Do we have any plans for allowing compilation to bytecode on a background thread?
I think the world we want to move towards is bug 678037: don't compile scripts to bytecode until they are first executed.  That would, I think, preclude bytecode compilation on a background thread (since the main thread would just block until it finishes), but performance on the whole should be a lot better.

We should be getting rid of COMPILE_N_GO, but that's only because with compartment-per-global all scripts are now COMPILE_N_GO, so there's no sense in distinguishing the ones which are not.  Though a JSOPTION_NON_GLOBAL_SCOPE_CHAIN or something would then be needed, I think, for scripts which run against non-globals (event handlers, etc.).
OK.  So from the point of view of the browser, we'd still compile the toplevel script to bytecode when we do now (which is exactly when we execute it), but this will be a cheaper operation because it won't compile all the functions in that script, right?

Sounds like we should wontfix this bug, then.  Thanks!
Last Resolved: 5 years ago
Resolution: --- → WONTFIX