
IonMonkey: don't treat JSOP_LABEL as a jump opcode during preliminary analysis

Status: RESOLVED FIXED
Product: Core
Component: JavaScript Engine
Opened: 5 years ago
Last modified: 5 years ago

People

(Reporter: bhackett, Unassigned)

Tracking

(Blocks: 1 bug)
Version: Other Branch
Platform: x86 Mac OS X

Attachments

(1 attachment)

Created attachment 639441 [details] [diff] [review]
patch

JSOP_LABEL is a no-op annotation, but the analysis passes used by type inference (bytecode, lifetimes, SSA) treat it as a jump, degrading precision. This bites on the fannkuch benchmark in bug 771106, where the extra edges cause some locals to be treated as possibly-undefined, resulting in a lot of stub calls. Fixing this takes the time for --ion -n from 24s to 12s.
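For context, a hypothetical illustration (not taken from the patch or the benchmark): a labeled statement is the kind of source construct that causes SpiderMonkey to emit JSOP_LABEL. The label itself is only an annotation marking a break target; the actual control transfer is the `break outer` inside the loop.

```javascript
// Hypothetical example of a labeled statement. The `outer:` label compiles
// to a JSOP_LABEL annotation; it transfers no control by itself. Only the
// `break outer` below performs a real jump, targeting the labeled loop.
function firstPairSumming(values, target) {
  let found = null;
  outer:                          // annotation only: no control transfer here
  for (let i = 0; i < values.length; i++) {
    for (let j = i + 1; j < values.length; j++) {
      if (values[i] + values[j] === target) {
        found = [values[i], values[j]];
        break outer;              // the real jump out of both loops
      }
    }
  }
  return found;
}
```

If the analysis treats the label opcode itself as a jump, it adds a spurious control-flow edge, which is how locals can end up looking possibly-undefined even though every real path assigns them.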
Attachment #639441 - Flags: review?(dvander)
Comment on attachment 639441 [details] [diff] [review]
patch

Review of attachment 639441 [details] [diff] [review]:
-----------------------------------------------------------------

Sweet!
Attachment #639441 - Flags: review?(dvander) → review+
https://hg.mozilla.org/projects/ionmonkey/rev/171cc91d4c18
Status: NEW → RESOLVED
Last Resolved: 5 years ago
Resolution: --- → FIXED

Comment 3

5 years ago
(In reply to Brian Hackett (:bhackett) from comment #0)
> Fixing this takes time for --ion -n from 24s to 12s

I don't see a big change locally or on awfy, was the test on a modified version of fannkuch (maybe the hand-optimized one in bug 771106)?
Yes, this is fixing the hand-optimized fannkuch benchmark. The basic one has extra shifts that coerce the value the compiler thinks may be undefined into an int32, so we don't take stub calls. Pretty soon, though, I'd like to work on getting emscripten to generate code closer to the hand-optimized benchmark in bug 771106, as that is the pattern I'd like to optimize for base + index accesses.
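The coercion described above is visible at the language level: the bitwise shift operators apply ToInt32 to their operands, so even a possibly-undefined value comes out as an integer (undefined converts to NaN, and ToInt32(NaN) is 0). A minimal sketch:

```javascript
// Sketch of the int32 coercion provided by shift operators. The engine may
// consider `maybeUndefined` possibly-undefined, but << and >> force it
// through ToInt32, yielding a definite int32 result.
let maybeUndefined;                       // left undefined on purpose
const coerced = (maybeUndefined << 2) >> 2;
console.log(coerced);                     // 0: ToInt32(NaN) is 0
console.log(3.7 | 0);                     // 3: same ToInt32 coercion via |
```

This is why the basic (non-hand-optimized) benchmark avoided the stub calls: the emscripten-emitted shifts happened to launder the value's type before it was used.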

Comment 5

5 years ago
Ok, I see, thanks.

Regarding modifying emscripten to generate fewer << >> operations, there is a good chance we would want to do that in the C++ LLVM backend we are planning to write. The current compiler is written in JS and parses LLVM bitcode externally to LLVM; it does not perform well on 1M+ codebases, and there are various optimization benefits to implementing a C++ backend. So for new optimizations we might want to focus on that rather than on optimizing the current compiler, unless there is a very simple solution.

Regarding the C++ backend, the plan is for Rafael and me to start very soon, and hopefully it will not take too long.
(In reply to Alon Zakai (:azakai) from comment #5)
> Ok, I see, thanks.
> 
> Regarding modifying emscripten to generate fewer << >> operations, there is
> a good chance we would want to do that in the C++ LLVM backend we are
> planning to write. The current compiler is written in JS and parses LLVM
> bitcode externally to LLVM, it does not perform well on 1M+ codebases, and
> also there are various optimization benefits from implementing a C++
> backend. So for new optimizations we might want to focus on that as opposed
> to optimizing the current compiler. Unless there is a very simple solution.
> 
> Regarding the C++ backend, the plan is for Rafael and I to start very soon,
> and hopefully it will not take too long.

Sounds good to me. Can you loop me in on bugs/etc. for that new backend? I'd like to help. (My pre-Mozilla background is, after all, in C/C++ static analysis.)

Comment 7

5 years ago
Sure thing, very happy you are interested in this!