Closed Bug 785543 Opened 12 years ago Closed 12 years ago

Bug 778724 regressed emscripten-memops by 42%

Categories

(Core :: JavaScript Engine, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking


RESOLVED FIXED
mozilla17

People

(Reporter: azakai, Assigned: bhackett1024)

References

Details

Attachments

(3 files, 1 obsolete file)

Bug 778724 regressed emscripten-memops by 42%. I noticed this while verifying the fix for bug 784550: while that bug was in fact fixed, it was then regressed by this one. Unrelated, I guess? This is just an unlucky benchmark ;)

The benchmark appears in bug 784550. I get 2.114 seconds at the revision where this landed,

https://hg.mozilla.org/mozilla-central/rev/bf07c6253287

versus 1.488 in the previous revision.
Blocks: 778724
Running the whole emscripten benchmark suite I see some other regressions. They are small compared to the one I mentioned at the beginning (memops), but I thought I'd paste the data in case it's helpful.
Numbers after this regression, for comparison with the last comment.
The problem here is that bug 778724 changed things so that uint32 arrays are always treated as possibly producing doubles, which hurts JM code generation because it has no special handling for uint32s (and IonMonkey handling for them is some ways off, I think).  Fixing this is a one-liner (remove 'arrayKind == TypedArray::TYPE_UINT32' from PropertyAccess in jsinfer.cpp), but doing so may incur additional unnecessary recompilation in large programs.  Alon, does this affect more substantial programs?
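To illustrate why Uint32Array reads are "possibly double": the element range is [0, 2^32), but a signed int32 only covers up to 2^31 - 1, so the top half of the range cannot be represented as an int32 value. A minimal sketch:

```javascript
// Uint32Array elements in [2^31, 2^32) don't fit in a signed int32,
// so an engine that speculates "int32 result" must handle a double
// fallback for reads from such arrays.
const a = new Uint32Array(2);
a[0] = 0x7fffffff; // 2147483647 — still fits in int32
a[1] = 0xffffffff; // 4294967295 — exceeds the int32 range

console.log(a[0]);     // 2147483647
console.log(a[1]);     // 4294967295
console.log(a[1] | 0); // -1: forcing the value into int32 wraps it
```

This is why treating every uint32 read as a potential double is the conservative choice, at the cost of slower generated code on the common all-int32 path.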
It's harder to benchmark entire programs, but the benchmark suite does contain real-world code, not just tiny kernels. For example, dlmalloc, which is included in many large programs, regressed by 33%. So I would guess yes.

On the other hand, I just tested bullet and box2d from

https://github.com/kripken/misc-js-benchmarks

and I do *not* see a regression there; performance is unchanged. And they are much more substantial than the benchmark suite. But they are just two tests.

So I don't have a simple answer here I am afraid.
Your description of the problem scares me, though: it sounds like any code that does a lot of Uint32Array accesses will be slower? That should be quite a lot of code. Perhaps bullet and box2d avoid the problem since they are mainly float ops.

In fact this seems to make sense since dlmalloc and memops, the worst regressions in the benchmark suite, have no floats but just int operations and memory accesses.

So if code that does a lot of int operations with heap accesses is at risk, this is concerning. Perhaps I did not understand the explanation of the issue though.
Attached patch patch (obsolete) — Splinter Review
Fair enough, thanks for looking.  The real long-term fix here is to have a compiler that deals well with native uint32s; until then I think we just need to eat any recompilations we get from not knowing which uint32 array reads are int32 vs. uint32.
Assignee: general → bhackett1024
Attachment #655248 - Flags: review?(luke)
Attached patch patchSplinter Review
Oops, wrong patch.
Attachment #655248 - Attachment is obsolete: true
Attachment #655248 - Flags: review?(luke)
Attachment #655249 - Flags: review?(luke)
I know ~0 about the implementation here, but looking at the patch, it seems it no longer marks Uint32 arrays as potentially producing doubles. And I guess if/when doubles actually appear they would trigger recompilation? This seems like the right approach since values big enough to be doubles are fairly rare in compiled code, so the recompiles won't be so bad, and normal code without doubles will be fast like before. Did I understand that right?

Btw, v8 just landed good support for uint32s,

http://code.google.com/p/v8/issues/detail?id=2097

so it would be nice to eventually have that in IonMonkey.
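The speculation strategy described above can be sketched as follows. This is an illustration of the general idea, not the actual SpiderMonkey heuristic, and `sumFirstTwo` is a made-up function:

```javascript
// Sketch of optimistic int32 speculation for Uint32Array reads:
// compiled code assumes every element fits in int32 and takes the
// fast integer path; only when a value >= 2^31 actually appears does
// the engine deoptimize and recompile with double handling.
function sumFirstTwo(arr) {
  // On the fast path this is a pair of int32 loads plus an int32 add.
  return arr[0] + arr[1];
}

const small = new Uint32Array([1, 2]);
sumFirstTwo(small); // all values fit in int32: fast path holds

const big = new Uint32Array([0x80000000, 1]);
sumFirstTwo(big);   // 2147483648 doesn't fit in int32; a speculating
                    // engine would bail out and recompile here
```

Since compiled code rarely stores values at or above 2^31 into Uint32Arrays, the recompilation cost is paid only in the uncommon case, and the common all-int32 code stays as fast as before the regression.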
(In reply to Alon Zakai (:azakai) from comment #8)
> I know ~0 about the implementation here, but looking at the patch, it seems
> it no longer marks Uint32 arrays as potentially producing doubles. And I
> guess if/when doubles actually appear they would trigger recompilation? This
> seems like the right approach since values big enough to be doubles are
> fairly rare in compiled code, so the recompiles won't be so bad, and normal
> code without doubles will be fast like before. Did I understand that right?

Yes, this is right.
Attachment #655249 - Flags: review?(luke) → review+
I don't know if this is a coincidence or not, because it hasn't happened since, but this push had a Linux64 debug jit-test failure.

https://tbpl.mozilla.org/php/getParsedLog.php?id=14701711&tree=Mozilla-Inbound
TEST-UNEXPECTED-FAIL | jit_test.py -a -m -d -n    | /builds/slave/m-in-lnx64-dbg/build/js/src/jit-test/tests/basic/bug688939.js: Assertion failure: first - 1 <= last, at ../../../js/src/gc/Heap.h:340
https://hg.mozilla.org/mozilla-central/rev/1a6fa8dd582b
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla17
(In reply to Brian Hackett (:bhackett) from comment #9)
> (In reply to Alon Zakai (:azakai) from comment #8)
> > I know ~0 about the implementation here, but looking at the patch, it seems
> > it no longer marks Uint32 arrays as potentially producing doubles. And I
> > guess if/when doubles actually appear they would trigger recompilation? This
> > seems like the right approach since values big enough to be doubles are
> > fairly rare in compiled code, so the recompiles won't be so bad, and normal
> > code without doubles will be fast like before. Did I understand that right?
> 
> Yes, this is right.

And is the reason we were considering changing this to the less-performant but fewer-recompilation version because recompilations take memory?
Bug 778724 wasn't meant to be less performant here; this was a simple oversight.
Oh ok, thanks. Just curious about the sequence of events here.