Closed Bug 1237410 Opened 8 years ago Closed 8 years ago

Access to SharedArrayBuffer stops working after JIT kicks in -- Nightly

Categories: Core :: JavaScript Engine (defect)
Version: 46 Branch
Platform: x86_64 Windows 7
Priority: Not set
Severity: normal
Status: RESOLVED INVALID
Reporter: jerryj
Assignee: Unassigned
References: Blocks 1 open bug
Attachments: 2 files

User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:46.0) Gecko/20100101 Firefox/46.0
Build ID: 20160104030217

Steps to reproduce:

1. Run Firefox Nightly on a 2/4 core (4/8 thread) processor
2. Visit http://www.duckware.com/test/chrome/spidermonkey.html
3. Open the console (press F12)
4. Click on the "Run Test" link in the web page


Actual results:

Values read from the shared memory 'lock up' -- presumably right at the point at which the JIT kicks in.

The code detects this 'lock up' and displays an error message.


Expected results:

The test should complete successfully in seconds (and does in Chrome with flags that turn SharedArrayBuffer on).
OS: Unspecified → Windows 7
Hardware: Unspecified → x86_64
Component: Untriaged → JavaScript Engine
Product: Firefox → Core
Flags: needinfo?(lhansen)
I'll look into this a little bit more, but I'm pretty sure the problem is that the test code is racy and that IonMonkey simply optimizes the code too well.  Neither the worker nor the master performs any synchronization; while the stores from the worker will be performed, the loads in the master loop will probably be hoisted out of the loop by the JIT.  I've seen this before; this is the reason emscripten must translate volatile accesses into atomic accesses, for example.
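
Here is a minimal sketch of the kind of pattern I mean -- the names and structure are illustrative only, not the actual test code:

  // main thread (illustrative only)
  var sab = new SharedArrayBuffer(4);
  var flag = new Int32Array(sab);
  var w = new Worker("worker.js");
  w.postMessage(sab);
  // A plain load in a hot loop.  With no synchronization the JIT may hoist
  // flag[0] out of the loop, so the loop never observes the worker's store.
  while (flag[0] === 0)
    ;  // spin

  // worker.js (illustrative only)
  onmessage = function (e) {
    var shared = new Int32Array(e.data);
    shared[0] = 1;   // plain, unsynchronized store
  };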

It's on my plate to figure out which JIT optimizations are illegal if the affected memory might be shared, but I'm pretty sure this is not one of them.
Flags: needinfo?(lhansen)
I'll attach a couple of files.  One is a spidermonkey shell test case that has the same problem as the HTML version (this is substantially the same as the original test case).  The other is the annotated disassembly of the jitted code for the loop in the main program with its pre-header.  This shows clearly that the load was hoisted out of the loop.

If the load in the loop is replaced by Atomics.load then the program will run as intended because Atomics.load is not hoisted out of the loop.  The program is still racy, though, since the store is not synchronizing.  To avoid the race you must use Atomics.store in the worker as well.
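
For illustration (this is a sketch, not the attached shelltest.js), the change looks roughly like this:

  // main thread: the Atomics.load stays inside the loop and is not hoisted
  while (Atomics.load(flag, 0) === 0)
    ;  // spin

  // worker: a synchronizing store that pairs with the load above
  Atomics.store(flag, 0, 1);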

Depending on the platform, Atomics.load and Atomics.store are more expensive than regular loads and stores.  Occasionally there is discussion about adding cheaper atomic loads and stores than these (notably release-acquire and relaxed variants) but that's really not something we're targeting for v1.
Attached file shelltest.js
Attached file disas.txt
Closing as invalid given the above analysis but tracking as a dependent on the shared-memory bug.
Status: UNCONFIRMED → RESOLVED
Closed: 8 years ago
Resolution: --- → INVALID
Create an example to show a JIT optimization error, and get blamed for the example, which is not real code, but which was created only to show the JIT optimization error.  Interesting.

While the interpreter is running, this 'racy' code actually WORKS.

As soon as the JIT kicks in, the code is converted to NON-RACY code.  The read/write to the same location is converted into a read/write to DIFFERENT memory locations.  There is NO race anymore when the code is run under the JIT.  That is a JIT optimization error.  You can't have your cake and eat it too.  If the code is racy and you blame 'racy code', the JIT output MUST also be racy.  But the JIT output is not racy.  The JIT has translated the original code into something that it is not.

This is a clear example of a JIT *highly* optimized under the presumption that variable access is only made by a single thread at a time.  That assumption breaks once shared memory is introduced into JavaScript.

If Firefox is going to support direct access to shared memory in the interpreter, but not the JIT, that is going to create nasty problems down the road.

If the JIT cannot figure out what memory is shared, is it time for 'volatile'?
> that is going to create nasty problems down the road.

Nastier than the equivalent problems you get with a C compiler with racy memory access that may or may not work as the programmer "intended" depending on the compiler optimization level?  This is the same exact problem: unoptimized racy code has one behavior but optimized racy code has a different behavior.
Let me restate my explanation in an attempt to be clearer.

Shared memory is supported both in the interpreter and in the JIT in Firefox, and with the same semantics.  According to the shared memory specification, the reads and writes to the shared location are racy because they are conflicting (i.e. to the same location) and are performed without any kind of synchronization.  One implication of the data race, according to the spec, is that it becomes unpredictable what is read from and written to the racy locations; any value may be written, and any value may be read.  Ergo the JIT is within its rights to optimize the load in the way it does, and it is well understood that jitting code may have this kind of effect (more on that below).  The only way to fix this is for the program to use atomic operations for the reads and writes.  As I wrote earlier, C/C++ volatile must be translated into JS atomic operations for correctness.

Even supposing that the spec were to change to mandate that TypedArray reads can't be hoisted out of loops if the memory might be shared, I'm not sure what good it would do.  A data race can still cause any value to be read or written, and the read that's performed does not have to see the value that was written.  On all modern hardware your example program would observe that the flag had changed, but it would not necessarily observe that the flag values increased monotonically (most non-x86 CPUs may reorder the writes).  So such a spec change would fix this particular test case but would do little for the problem in general.

I think you're misunderstanding the contract for the JIT.  The data race exists as a consequence not of any particular actual execution but of all possible executions of the program.  The program is racy whether it's jitted or not.  The program does not in fact work; it only "works" when we assume one particular implementation technology (the interpreter).

(Nobody's blaming you for anything; feedback is always helpful.  But what you've encountered is not a bug in Firefox, at least not until the spec changes.)