Bug 875174 - OdinMonkey: fall back to baseline jit for enormous functions
Product: Core
Classification: Components
Component: JavaScript Engine
Version: unspecified
Platform: All
OS: All
Importance: -- normal with 4 votes
Target Milestone: ---
Assigned To: Luke Wagner [:luke]
QA Contact: Jason Orendorff [:jorendorff]
Depends on:
Blocks: 881537
Reported: 2013-05-22 18:42 PDT by Luke Wagner [:luke]
Modified: 2014-01-19 03:13 PST
CC List: 7 users

bad compile times 1 (652.25 KB, application/zip)
2013-05-22 18:42 PDT, Luke Wagner [:luke]

Description: Luke Wagner [:luke] 2013-05-22 18:42:53 PDT
Created attachment 753071 [details]
bad compile times 1

When functions get incredibly large, Ion starts to exhibit quadratic behavior in time and space usage.  Normally this is fine, since such functions are simply denied by the Ion compilation heuristics.  Odin, OTOH, currently Ion-compiles everything.  Fortunately, most codebases don't come anywhere close to the limit, but we are starting to find cases that do.  (One is the bison-generated yyparse (bug 864587); another is Intel's face-detection code, attached.)

A straightforward solution is to just fall back to the baseline jit for these huge functions.  That's made a little difficult by the fact that we don't create bytecode for anything in an asm.js module.  However, we should be able to produce a JSScript on demand (for just the huge function) by reusing the lazy-bytecode work in bug 678037.
Comment 1: Jukka Jylänki 2013-12-20 01:45:10 PST
Will this mean there will be a max script size limit after which asm.js will not validate?
Comment 2: Luke Wagner [:luke] 2013-12-20 08:32:13 PST
Well, it'd "validate", but it'd just run in the baseline jit, which is 2-20x slower.  This would be good for the case of huge cold functions, but still bad for hot ones (like, say, big interpreter loops), so the developer would still need to cut the function into smaller pieces in that case.
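A minimal sketch of that splitting, with an invented module and invented trivial bodies (real asm.js code would of course be far larger): one oversized function is carved into helper functions so each piece stays small enough for Ion.

```javascript
// Hypothetical example: splitting one enormous asm.js function into
// smaller internal helpers. All names and bodies are invented.
function SplitModule(stdlib) {
  "use asm";
  function part1(x) { x = x|0; return (x + 1)|0; }   // first chunk of the old body
  function part2(x) { x = x|0; return (x << 1)|0; }  // second chunk
  function big(x) {        // formerly one huge function body
    x = x|0;
    x = part1(x|0)|0;
    x = part2(x|0)|0;
    return x|0;
  }
  return { big: big };
}

var split = SplitModule(this);
split.big(3); // (3 + 1) << 1 → 8
```

Each helper is now a separately compiled function, so no single function hits the quadratic regime.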

Emscripten's outlining feature made this much less of a priority, btw.  A related feature I wish Emscripten could grow is the ability to automatically move functions outside the asm.js module.  This would give developers another tool for managing the load time of cold code.  In my wildest dreams, Emscripten could generate a profiling build that produces a list of cold functions, which could then be moved outside of the asm.js module.  With asm.js caching, though, it's *still* possible that keeping these in the asm.js module would be faster overall.
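A sketch of the idea (module and function names invented for illustration): a cold function lives as ordinary JavaScript outside the module and is reached through the asm.js foreign-function interface, so only the hot code goes through Odin/Ion while the cold code is handled by the baseline jit or interpreter.

```javascript
// Hypothetical example: a cold function moved outside the asm.js module
// and imported back in via the FFI. All names here are invented.
function HotModule(stdlib, foreign, heap) {
  "use asm";
  var coldHelper = foreign.coldHelper;  // FFI import of the cold code
  function hot(x) {
    x = x|0;
    // FFI call: argument and return value are coerced to int.
    return ((x|0) + (coldHelper(x|0)|0))|0;
  }
  return { hot: hot };
}

// The cold code stays plain JavaScript, outside the asm.js module:
function coldHelper(x) { return x * 2; }

var mod = HotModule(this, { coldHelper: coldHelper }, new ArrayBuffer(65536));
mod.hot(21); // 21 + coldHelper(21) = 21 + 42 → 63
```

The trade-off is exactly the one described above: each FFI call crosses the module boundary with int coercions, so keeping cold functions inside a cached asm.js module may still win overall.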
