Bug 771285 (Closed): IonMonkey: don't treat JSOP_LABEL as a jump opcode during preliminary analysis
Opened 12 years ago, closed 12 years ago
Categories: Core :: JavaScript Engine, defect
Status: RESOLVED FIXED
People: Reporter: bhackett1024; Assignee: Unassigned
References: Blocks 1 open bug
Attachments (1 file): patch, 4.00 KB (dvander: review+)
Description (Reporter, 12 years ago)
JSOP_LABEL is a no-op annotation, but the analysis passes used by type inference (bytecode, lifetimes, SSA) treat it as a jump, degrading precision. This bites on the fannkuch benchmark in bug 771106, where the extra edges cause some locals to be treated as possibly-undefined, resulting in a lot of stub calls. Fixing this takes the time for --ion -n from 24s to 12s.
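For context, JSOP_LABEL is the annotation SpiderMonkey emits for a JavaScript labeled statement; the label itself transfers no control, only the `break`/`continue` that targets it does. A minimal sketch (function and variable names are illustrative, not from the bug):

```javascript
// A labeled statement gives `break` a named target; the label itself is a
// no-op annotation in the bytecode, while `break outer` is the real jump.
function firstNegativeRow(matrix) {
  let found = -1;
  outer:                                // annotation only, not a jump
  for (let i = 0; i < matrix.length; i++) {
    for (let j = 0; j < matrix[i].length; j++) {
      if (matrix[i][j] < 0) {
        found = i;
        break outer;                    // this is the actual control transfer
      }
    }
  }
  return found;
}

console.log(firstNegativeRow([[1, 2], [3, -4], [5, 6]])); // → 1
```

If the analysis treats the label line as a jump, it sees extra control-flow edges into the loop header, which is what made locals look possibly-undefined here.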
Attachment #639441 - Flags: review?(dvander)
Comment 1•12 years ago
Comment on attachment 639441 [details] [diff] [review] patch
Review of attachment 639441 [details] [diff] [review]: Sweet!
Attachment #639441 - Flags: review?(dvander) → review+
Reporter
Comment 2•12 years ago
https://hg.mozilla.org/projects/ionmonkey/rev/171cc91d4c18
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Comment 3•12 years ago
(In reply to Brian Hackett (:bhackett) from comment #0)
> Fixing this takes time for --ion -n from 24s to 12s

I don't see a big change locally or on awfy; was the test run on a modified version of fannkuch (maybe the hand-optimized one in bug 771106)?
Reporter
Comment 4•12 years ago
Yes, this fixes the hand-optimized fannkuch benchmark. The basic one has extra shifts that coerce the value the compiler thinks may be undefined into an int32, so we don't take stub calls. Though pretty soon I'd like to work on getting emscripten to generate code closer to the hand-optimized benchmark in bug 771106, as that is the pattern I'd like to optimize for base + index accesses.
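To illustrate the two patterns being contrasted (a sketch with hypothetical names; not the actual emscripten or benchmark output): the emscripten-style shifts coerce every operand to int32, so even a value the analysis thinks may be undefined never reaches a stub call, while the hand-optimized base + index form drops the shifts and relies on the engine proving the operands are defined int32s.

```javascript
// Illustrative typed-array heap, as emscripten-compiled code uses.
const HEAP32 = new Int32Array(16);

// Emscripten-style access: the << / >> shifts coerce the byte address to
// int32, masking imprecise "maybe undefined" analysis results.
function loadShifted(ptr, i) {
  return HEAP32[(ptr + (i << 2)) >> 2] | 0;
}

// Hand-optimized base + index access (the bug 771106 pattern): no coercing
// shifts, so the compiler must know the operands are defined int32s itself.
function loadDirect(base, index) {
  return HEAP32[base + index];
}

HEAP32[5] = 42;
console.log(loadShifted(8, 3));  // byte address 8 + 12 = 20, word index 5 → 42
console.log(loadDirect(2, 3));   // word index 5 directly → 42
```

The two functions read the same element; the difference is only in how much the shifts hand the type-inference engine for free.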
Comment 5•12 years ago
Ok, I see, thanks.

Regarding modifying emscripten to generate fewer << >> operations, there is a good chance we would want to do that in the C++ LLVM backend we are planning to write. The current compiler is written in JS and parses LLVM bitcode externally to LLVM; it does not perform well on 1M+ codebases, and there are various optimization benefits from implementing a C++ backend. So for new optimizations we might want to focus on that rather than on optimizing the current compiler, unless there is a very simple solution.

Regarding the C++ backend, the plan is for Rafael and me to start very soon, and hopefully it will not take too long.
Reporter
Comment 6•12 years ago
(In reply to Alon Zakai (:azakai) from comment #5)

Sounds good to me; can you loop me in on bugs/etc. for that new backend? I'd like to help. (My pre-Mozilla background is, after all, in C/C++ static analysis.)
Comment 7•12 years ago
Sure thing, very happy you are interested in this!