Closed Bug 1688136 Opened 3 years ago Closed 3 years ago

Assertion failure: cx_->hadNondeterministicException(), at jit/WarpOracle.cpp:188

Categories

(Core :: JavaScript Engine: JIT, defect, P1)


Tracking

RESOLVED FIXED
87 Branch
Tracking Status
firefox87 --- fixed

People

(Reporter: anbu1024.me, Assigned: iain)

References

Details

Attachments

(2 files)

User Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:84.0) Gecko/20100101 Firefox/84.0

Steps to reproduce:

The following test case crashes the debug version of SpiderMonkey:

function f0(a, b, c) {
    var v0 = 1 + a + b;
    if (c)
        v0++;
    else
        v0--;
    return v0 + v0 | 0;
}
var v1 = 2147483647;
var v2 = 0;
for (var v3 = 0; v3 < 500000; ++v3)
    v2 = v2 + f0(v3, v1 - v3, v3 % 2) | 0;

build options: --enable-debug --disable-optimize

Actual results:

Assertion failure: cx_->hadNondeterministicException(), at gecko-dev/js/src/jit/WarpOracle.cpp:188
Segmentation fault (core dumped)

The stack backtrace is as follows:

[#0] 0x555557087713 → js::jit::WarpOracle::createSnapshot(this=0x7fffffff94e8)
[#1] 0x555556aea987 → js::jit::CreateWarpSnapshot(cx=0x7ffff5524000, mirGen=0x7ffff55f1170, script={
  <js::HandleBase<JSScript*, JS::Handle<JSScript*> >> = {
    <js::WrappedPtrOperations<JSScript*, JS::Handle<JSScript*> >> = {<No data fields>}, <No data fields>}, 
  members of JS::Handle<JSScript*>: 
  ptr = 0x7fffffff9968
})
[#2] 0x555556aea4e0 → js::jit::IonCompile(cx=0x7ffff5524000, script={
  <js::HandleBase<JSScript*, JS::Handle<JSScript*> >> = {
    <js::WrappedPtrOperations<JSScript*, JS::Handle<JSScript*> >> = {<No data fields>}, <No data fields>}, 
  members of JS::Handle<JSScript*>: 
  ptr = 0x7fffffff9968
}, osrPc=0x7ffff557c04e "\223\006")
[#3] 0x555556ad09f5 → js::jit::Compile(cx=0x7ffff5524000, script={
  <js::HandleBase<JSScript*, JS::Handle<JSScript*> >> = {
    <js::WrappedPtrOperations<JSScript*, JS::Handle<JSScript*> >> = {<No data fields>}, <No data fields>}, 
  members of JS::Handle<JSScript*>: 
  ptr = 0x7fffffff9968
}, osrFrame=0x7fffffff9a80, osrPc=0x7ffff557c04e "\223\006")
[#4] 0x555556aeae9f → BaselineCanEnterAtBranch(cx=0x7ffff5524000, script={
  <js::HandleBase<JSScript*, JS::Handle<JSScript*> >> = {
    <js::WrappedPtrOperations<JSScript*, JS::Handle<JSScript*> >> = {<No data fields>}, <No data fields>}, 
  members of JS::Handle<JSScript*>: 
  ptr = 0x7fffffff9968
}, osrFrame=0x7fffffff9a80, pc=0x7ffff557c04e "\223\006")
[#5] 0x555556ad0e34 → IonCompileScriptForBaseline(cx=0x7ffff5524000, frame=0x7fffffff9a80, pc=0x7ffff557c04e "\223\006")
[#6] 0x555556ad11a6 → js::jit::IonCompileScriptForBaselineOSR(cx=0x7ffff5524000, frame=0x7fffffff9a80, frameSize=0x50, pc=0x7ffff557c04e "\223\006", infoPtr=0x7fffffff9a48)
[#7] 0xd33c469b027 → add rsp, 0x8
[#8] 0x7ffff3f0d570 → movabs al, ds:0x2a00000d33c46df3
[#9] 0x7fffffff9a48 → add BYTE PTR [rax], al

My spidermonkey version:

gecko-dev$ js/src/build_debug/dist/bin/js --version
JavaScript-C86.0a1
gecko-dev$ git rev-parse HEAD
1ebc9745be02eebf7a694f5c527a44a045b8c97a

Bug 1687672 got fixed a few hours before your report - can you check if the issue persists with the latest mozilla-central revision?

Flags: needinfo?(anbu1024.me)

Hi, thanks for your effort. I have just tested the latest mozilla-central revision; however, the issue persists.

$ git rev-parse HEAD
7c03e28b3d065fa80839e9659fd50bf340913d5f
$ git show --stat
commit 7c03e28b3d065fa80839e9659fd50bf340913d5f
Author: Sebastian Hengst <archaeopteryx@coole-files.de>
Date:   Fri Jan 22 10:06:08 2021 +0000

    Bug 1688097 - update web-platform-test reftest annotations on Windows after latest wptsync. a=test-only
    
    Differential Revision: https://phabricator.services.mozilla.com/D102705

 .../web-platform/meta/css/css-writing-modes/direction-vrl-002.xht.ini   | 2 +-
 .../web-platform/meta/css/css-writing-modes/direction-vrl-004.xht.ini   | 2 ++
 .../meta/css/css-writing-modes/float-shrink-to-fit-vrl-008.xht.ini      | 2 +-
 3 files changed, 4 insertions(+), 2 deletions(-)

$ ./js/src/build_debug/dist/bin/js test.js
Assertion failure: cx_->hadNondeterministicException(), at gecko-dev/js/src/jit/WarpOracle.cpp:188
Segmentation fault (core dumped)

The stack backtrace is:

#0  0x000055555705b053 in js::jit::WarpOracle::createSnapshot (this=0x7fffffff94e8) at gecko-dev/js/src/jit/WarpOracle.cpp:188
#1  0x0000555556abdba7 in js::jit::CreateWarpSnapshot (cx=0x7ffff5524000, mirGen=0x7ffff55f1170, script=...) at gecko-dev/js/src/jit/Ion.cpp:1631
#2  0x0000555556abd700 in js::jit::IonCompile (cx=0x7ffff5524000, script=..., osrPc=0x7ffff557b14e "\223\006") at gecko-dev/js/src/jit/Ion.cpp:1708
#3  0x0000555556aa3c15 in js::jit::Compile (cx=0x7ffff5524000, script=..., osrFrame=0x7fffffff9a80, osrPc=0x7ffff557b14e "\223\006") at gecko-dev/js/src/jit/Ion.cpp:1931
#4  0x0000555556abe0bf in BaselineCanEnterAtBranch (cx=0x7ffff5524000, script=..., osrFrame=0x7fffffff9a80, pc=0x7ffff557b14e "\223\006") at /gecko-dev/js/src/jit/Ion.cpp:2132
#5  0x0000555556aa4054 in IonCompileScriptForBaseline (cx=0x7ffff5524000, frame=0x7fffffff9a80, pc=0x7ffff557b14e "\223\006") at gecko-dev/js/src/jit/Ion.cpp:2183
#6  0x0000555556aa43c6 in js::jit::IonCompileScriptForBaselineOSR (cx=0x7ffff5524000, frame=0x7fffffff9a80, frameSize=0x50, pc=0x7ffff557b14e "\223\006", infoPtr=0x7fffffff9a48) at gecko-dev/js/src/jit/Ion.cpp:2295
#7  0x00002747c3168027 in ?? ()
#8  0x00007ffff3f0d570 in ?? ()
#9  0x00007fffffff9a48 in ?? ()
#10 0x00002747c318bd90 in ?? ()
#11 0x0000000000000000 in ?? ()
Flags: needinfo?(anbu1024.me)

Thanks for the report!

This assertion was recently added to help us identify corner cases where we invalidate optimized code because one of our assumptions had been violated, and then recompile with the same assumptions. This bug is not a security concern or a correctness problem, but fixing it may improve performance.

It looks like you're doing fuzz-testing of some sort. If you're not interested in finding this sort of bug, you can use the --disable-bailout-loop-check shell flag to turn off this assertion. (We're happy to see this kind of report, though!)

Flags: needinfo?(iireland)

The bug here is caused by range analysis. Here's a slightly reduced testcase (with --no-threads --fast-warmup):

function f(a, c) {
    if (c) {
        a++;
    } else {
        a--;
    }
    return (a + a) | 0;
}

with ({}) {}
for (var i = 0; i < 100; i++) {
    f(2147483647, i % 2);
}

When we take the a++ branch, the value of a overflows Int32. We attach a stub with a DoubleAddResult for (a + a), and transpile it, producing MIR that looks like this after GVN:

17 phi toDouble15:Double add10:Double
22 add phi17:Double phi17:Double [double]
...
24 truncatetoint32 add22:Double

Because the only consumer of the add is a truncation, range analysis decides to convert it to Int32. We generate an overflow check, which bails out. We don't make any changes to CacheIR, though, because the existing stub already handles overflow. In release builds without the assertion, this will trigger a bailout/invalidation loop.
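The truncation semantics at play can be sketched in plain JS (this is an illustration of the language-level behavior, not of SpiderMonkey internals): once `a` overflows Int32, `a + a` is computed as a double, and the final `| 0` truncates it with ToInt32 wrap-around.

```javascript
// Plain-JS sketch of the overflow/truncation behavior in the testcase.
var a = 2147483647;   // INT32_MAX
a++;                  // now 2147483648, outside the Int32 range
var sum = a + a;      // 4294967296 (2^32), represented exactly as a double
console.log(sum | 0); // 0: ToInt32(2^32) wraps around to 0
```

An Int32 add with an overflow check bails out on `a + a` here, while the double add plus final truncation computes 0 without bailing; that is exactly the case the truncated MIR keeps bailing out on.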

The easiest fix here is to mark any TruncateKind::TruncateAfterBailout node with BailoutKind::EagerTruncation. If we bail out for that node, we will make range analysis more conservative when we recompile.

It's a little silly that we're doing this optimization at all, since baseline presumably had a good reason for attaching DoubleAddResult. It's probably overkill to rip TruncateAfterBailout out completely, though.

If an addition of integer values overflows, we will transpile a CacheIR stub containing a DoubleAddResult. In this testcase, after some optimization, we have MIR that looks like:

17 phi toDouble15:Double add10:Double
22 add phi17:Double phi17:Double [double]
...
24 truncatetoint32 add22:Double

Range analysis sees that the only consumer of add22 is a truncation, and truncates the add to Int32. When we overflow again and bail out, we don't make any changes to the CacheIR, because it already handles the overflow case. We end up in a bailout loop.

This patch fixes the problem by marking every truncated instruction with TruncateKind::TruncateAfterBailout as BailoutKind::EagerTruncation. If the instruction bails out, FinishBailoutToBaseline will invalidate the script and set the hadEagerTruncationBailout flag. When we recompile, that flag will prevent range analysis from truncating any instructions with TruncateKind::TruncateAfterBailout.

Assignee: nobody → iireland

(In reply to Iain Ireland [:iain] from comment #5)

It's a little silly that we're doing this optimization at all, since baseline presumably had a good reason for attaching DoubleAddResult. It's probably overkill to rip TruncateAfterBailout out completely, though.

It is not silly at all: some code can be proven to work well in the Int32 range despite Baseline rightly thinking that each operation has to run with double math. This is what you are able to achieve with an optimizer that has more contextual information than simply a single operation at a time.

Baseline only attaches stubs for cases that actually occur. If there's a DoubleAddResult, then this addition has previously produced a double. That either means the addition has overflowed (in which case we should probably not be optimizing as if overflow is unlikely), or that a double flowed in as an operand. But if so, where did that double come from?

Range analysis gives a static over-approximation of the possible runtime values. Baseline IC data is a sample of the actual runtime values. The latter should be a subset of the former.

(In reply to Iain Ireland [:iain] from comment #8)

Range analysis gives a static over-approximation of the possible runtime values. Baseline IC data is a sample of the actual runtime values. The latter should be a subset of the former.

But the former might not actually be useful:

function add3Int(x, y, z) {
  return ((x | 0) + (y | 0) + (z | 0)) | 0;
}

In this case Baseline can report two double additions, each with double inputs and double outputs, but the intent of the code is not to actually compute doubles.

I don't understand your example. JSOp::BitOr will produce an Int32 value in baseline. If you compile that function with IONFLAGS=warp-snapshots, you will see that all of the additions use Int32AddResult.

In general, baseline does not produce double values unless it has to. There may be exceptions, but I can't think of any off the top of my head.
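For context, the Int32 result of `JSOp::BitOr` follows from the language semantics: `x | 0` applies ToInt32 to its operand. A minimal sketch of that conversion (an illustration of the spec-level ToInt32, not SpiderMonkey's implementation):

```javascript
// Sketch of the spec's ToInt32: truncate toward zero, then reduce
// modulo 2^32 into the signed range [-2^31, 2^31).
function toInt32(x) {
  const TWO_32 = 2 ** 32;
  const TWO_31 = 2 ** 31;
  let n = Math.trunc(x) % TWO_32;
  if (n < 0) n += TWO_32;            // normalize into [0, 2^32)
  return n >= TWO_31 ? n - TWO_32 : n;
}

console.log(toInt32(2 ** 32)); // 0
console.log(toInt32(2 ** 31)); // -2147483648
console.log(toInt32(2 ** 31) === ((2 ** 31) | 0)); // true
```

Whatever double flows into `x | 0`, the result always lands back in the Int32 range, which is why baseline can attach an Int32 stub for the bitwise op itself.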

(In reply to Nicolas B. Pierron [:nbp] from comment #9)

function add3Int(x, y, z) {
  return ((x | 0) + (y | 0) + (z | 0)) | 0;
}

In the previous example:

  • x | 0 is in the range [-2^31, 2^31 - 1]
  • y | 0 is in the range [-2^31, 2^31 - 1]
  • z | 0 is in the range [-2^31, 2^31 - 1]
  • (x | 0) + (y | 0) is in the range [-2^32, 2^32 - 2] {computed with double}
  • (…) + (z | 0) is in the range [-3·2^31, 3·2^31 - 3] {computed with double}
  • ((x | 0) + (y | 0) + (z | 0)) | 0 is in the range [-2^31, 2^31 - 1]

Range Analysis is responsible for truncating the operations in the following order:

  • (…) + (z | 0)
  • (x | 0) + (y | 0)
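Why this truncation is legal can be shown in plain JS (an illustrative sketch, not SpiderMonkey code): doubles represent integers exactly up to 2^53, so summing three Int32 values as doubles and truncating once at the end agrees with wrap-around Int32 addition at every intermediate step.

```javascript
// Double math with a single final truncation (as written in the source).
function add3Int(x, y, z) {
  return ((x | 0) + (y | 0) + (z | 0)) | 0;
}

// Truncating after each addition, as range analysis would arrange.
function add3IntTruncated(x, y, z) {
  return ((((x | 0) + (y | 0)) | 0) + (z | 0)) | 0;
}

var big = 2 ** 30;
console.log(add3Int(big, big, big));          // -1073741824 (3 * 2^30 wraps)
console.log(add3IntTruncated(big, big, big)); // -1073741824: same result
```

Even though the intermediate double sums overflow the Int32 range, both orders of truncation produce the same final Int32 value, so the intermediate additions can safely run in Int32 with wrap-around.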

I agree that range analysis is useful. The specific optimization that I'm questioning is double truncation: specifically, the subset of double truncation that involves TruncateAfterBailouts. If we transpile a CacheIR op with a double result, that normally implies that the observed range of results for that op is outside [-2^31, 2^31 - 1]. If we truncate it anyway, then we are adding a bailout for a case that we've already observed, which seems like a mistake.

The important thing to realize here is that WarpBuilder gives us fairly precise information about the observed types of arithmetic operands, so there may be less value than there used to be in trying to "improve" types. In your example, we don't need range analysis to use integer addition: we already have that from the transpiled CacheIR. If you look at the ionlog, range analysis removes the unnecessary BitOrs, but doesn't truncate the addition, because it's already Int32.

I'm instead interested in the case where the MAdd (or whatever) is Double before range analysis, but Int32 afterwards. Given WarpBuilder, are there still any cases where that's a profitable transformation?

(In reply to Iain Ireland [:iain] from comment #12)

The important thing to realize here is that WarpBuilder gives us fairly precise information about the observed types of arithmetic operands, so there may be less value than there used to be in trying to "improve" types. In your example, we don't need range analysis to use integer addition: we already have that from the transpiled CacheIR. If you look at the ionlog, range analysis removes the unnecessary BitOrs, but doesn't truncate the addition, because it's already Int32.

I'm instead interested in the case where the MAdd (or whatever) is Double before range analysis, but Int32 afterwards. Given WarpBuilder, are there still any cases where that's a profitable transformation?

The attached picture highlights that WarpBuilder produced Double additions for MAdd 17 and MAdd 22, as predicted, both of which are truncated later on. Tested with:

function top() {
  with ({}) {};
  for (let z = 0; z < 33; z++) for (let y = 0; y < 33; y++) for (let x = 0; x < 33; x++) {
    add3Int(Math.pow(2, x), Math.pow(2, y), Math.pow(2, z));
  }
}

So, we do not need Range Analysis to ensure that we are using integer types here.

(In reply to Iain Ireland [:iain] from comment #12)

I agree that range analysis is useful. The specific optimization that I'm questioning is double truncation: specifically, the subset of double truncation that involves TruncateAfterBailouts. If we transpile a CacheIR op with a double result, that normally implies that the observed range of results for that op is outside [-2^31, 2^31 - 1]. If we truncate it anyway, then we are adding a bailout for a case that we've already observed, which seems like a mistake.

TruncateAfterBailouts is used to ensure that the inputs of a truncated operation are integers as well; if we could not truncate the inputs, we would have to bail out, as this would be a new case not encountered in previous compilations.

(In reply to Iain Ireland [:iain] from comment #5)

When we take the a++ branch, the value of a overflows Int32. We attach a stub with a DoubleAddResult for (a + a), and transpile it, producing MIR that looks like this after GVN:

The MAdd 10 instruction apparently does not check for the truncation kind and always bails out on overflow … https://searchfox.org/mozilla-central/source/js/src/jit/x86-shared/CodeGenerator-x86-shared.cpp#780

I presume this is where the problems come from: not from the fact that Range Analysis predicts that this is fine, but from the fact that the Code Generator never got instrumented to consider Range Analysis flags.

In this case we should only generate overflow checks if the AddI instruction has truncateKind() <= TruncateAfterBailout, similar to what is done in DivI: https://searchfox.org/mozilla-central/source/js/src/jit/x86-shared/CodeGenerator-x86-shared.cpp#1369-1372

Severity: -- → S4
Priority: -- → P1
Pushed by iireland@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/0f27a332d47f
Make range analysis more conservative after truncated node bails out r=nbp,jandem
Status: UNCONFIRMED → RESOLVED
Closed: 3 years ago
Resolution: --- → FIXED
Target Milestone: --- → 87 Branch