Closed Bug 965880 Opened 10 years ago Closed 10 years ago

OdinMonkey: support changing heaps in asm.js

Categories

(Core :: JavaScript Engine, defect)


Tracking


RESOLVED FIXED
mozilla35

People

(Reporter: bbouvier, Assigned: luke)

References

Details

Attachments

(4 files, 3 obsolete files)

For bug 927182, we first need to implement ArrayBuffer.resize in JS land.

I can see that AllocateArrayBufferContents allows us to do that, but as sfink noticed, it is too simplistic and doesn't update the buffer's ArrayViews pointers. We might need to invalidate Ion code (sfink pointed me to MarkObjectStateChange, to be used with a valid context), and for Odin we could just repatch all the array accesses (that will happen in bug 927182).

I think there will be some corner cases, e.g. if the ArrayBufferObject uses its slots for storage and we want to resize it to a size bigger than what can be stored in the slots (the other way around needs some thought too).

Regarding specification, I didn't find anything in the draft, and I admit I haven't read the es6 mailing lists for a while, so I'll just handle some obvious issues (like newSize <= 0). dherman, is there any proposal or specification so far?
Do we actually need to support newSize < oldSize at all?  Not needing to do it probably makes things a lot simpler in terms of defining the behavior...
Shrinking the array might make sense for long-running applications that can eventually free some of their memory using sbrk. But given C memory fragmentation it might not be that common I guess, and even though emscripten could do it (in non-asm.js mode), it never has, and no one has really asked for it - whereas they do ask all the time for growing the array (which we do support in non-asm.js mode).

So the case of growing the array is far more important, if it's simpler to ignore shrinking then that sounds ok to me.
The hard case with resize is if the user resizes to a length that doesn't match the asm.js link-time validation requirements.  Since resize can/will happen with asm.js on the stack, we have to deal with it; we can't just issue a link-time validation error and be done w/ it.  My tentative solution to this problem is to mprotect the entire heap and have the AsmJSFaultHandler do the right thing.  Given all this, we should be able to handle the shrinking case w/o any extra effort, I think.

(Note, for all the asm.js cases, the elements are not stored inline in ArrayBufferObject; it can only hold 16*sizeof(Value) bytes.  See also ArrayBufferObject::prepareForAsmJS.)
Also, I noticed there's already bug 927182, so one of these should probably be a dup of the other.
(In reply to Luke Wagner [:luke] from comment #4)
> Also, I noticed there's already bug 927182, so one of these should probably
> be a dup of the other.

I was wondering whether we could split up work between non-Odin and Odin, but it seems that Odin patching has to happen as soon as resizing lands in JS land.
Do we need bug 845899 fixed first?
(In reply to Benjamin Bouvier [:bbouvier] from comment #5)
> I was wondering whether we could split up work between non-Odin and Odin,
> but it seems that Odin patching has to happen as soon as resizing lands in
> JS land.

Yes, unfortunately it seems so, since one can always .resize() a linked asm.js module's buffer.

> Do we need bug 845899 fixed first?

All you need to do is call MarkObjectStateChange(cx, view) on all the views linked to the array buffer to invalidate all the associated jit code.  (See ArrayBufferObject::neuterViews.)
bz brought up an interesting design question for this API that I think we've discussed before, but I forgot the exact resolution: when you resize the ArrayBuffer, what do you do to the .length of all the dependent views?

The best option seems to me to be:
 - if one of the following, the view's .length is resized to be floor(new-byte-length/element-size):
  + the buffer was extracted from the view (int32view.buffer)
  + the view was created from the buffer without the length argument (e.g., new Int32Array(buffer, offset))
  + the ArrayBuffer would otherwise shrink past the .length of the view
 - else, the view's length stays the same

One question is what to do for resizes to non-multiples of view element length.  E.g., let's say I have:
  var i32 = new Int32Array(4);
  i32.buffer.resize(5);
One option is what I did above, using floor().  Another option is to throw, which is analogous to what we do when constructing a view from a non-multiple ArrayBuffer size.  I like the throwing option, but we can't just say "throw if it isn't a non-zero multiple of the largest element size of a dependent view", since which dependent views are around is GC-dependent.  Sub-options include (1) remembering the largest element size ever used on the buffer, or (2) just fixing a multiple of 8.  (1) is blech; (2) seems insufficient to handle future typed arrays with elements bigger than 8 bytes (Float32x4Array).

Thus, in the bad-multiple case, it seems like we'd just stick with the floor behavior implied above.  (Note this implies zero-length typed array views.)  Sound good, Dave?
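The floor() rule being proposed is just this arithmetic; here is an illustrative sketch (newViewLength is a hypothetical helper name, not a real API):

```javascript
// Proposed rule: after buffer.resize(newByteLength), a dependent view's
// .length becomes floor(newByteLength / elementSize), where elementSize
// is the view's BYTES_PER_ELEMENT. (Hypothetical helper for illustration.)
function newViewLength(newByteLength, elementSize) {
  return Math.floor(newByteLength / elementSize);
}

newViewLength(5, Int32Array.BYTES_PER_ELEMENT); // 1: the i32.buffer.resize(5) case above
newViewLength(3, 4);                            // 0: a zero-length typed array view
```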
Flags: needinfo?(dherman)
Could separate buffer and view classes be added that support resizing, to avoid the complexity of supporting resizing on the existing buffers, and to provide classes that are fixed in their size?

There appears to be little practical benefit in universally supporting resizing, and it adds a lot of complexity to implementations.  The language is frustrating the implementation.

The compiler could check the class of the buffers and dispatch to different compilation paths.  Odin would not need to deal with the complexity of buffers being resized after it has already optimized the code for a fixed buffer size.

The resizing is needed to support multiple apps, running in separate tabs, in the same process address space, and on 32 bit builds.  To be effective it will need to support shrinking, be able to reclaim the address space after shrinking, and support compacting to avoid fragmentation.  It will need to deal with thread safety if the resizeable buffers can be shared between threads.  The demands will result in the generated code being slower, and it will block further optimizations (does not allow pointers into the heap) so it will become increasingly slower.  Performance improvements will demand an asm.js JIT compiler, and it might be a better investment to develop the Ion JIT to better handle asm.js code.   This is a sizeable project that will be obsolete when tabs run in separate processes.
(In reply to Douglas Crosher [:dougc] from comment #9)
I don't think that having separate resizable buffer/view classes avoids any real complexities (assuming we want the resizable views to receive the same level of optimization as the nonresizable views): we'll need the code to handle on-stack resizing one way or the other.  Rather, I think the root of your concern is that resizing inhibits the heap-pointer-computation-hoisting optimization I know you're interested in.  However, this optimization isn't entirely disabled; we just lose the ability to hoist heap computations across function calls which seems like a relatively minor loss.  Also, SharedArrayBuffer definitely wouldn't have .resize so there wouldn't be thread safety issues.

On the need for resize:
 - given that the first milestone of e10s is one-process-for-all-content, and that is likely at least a year away, process-per-origin is not at all close at hand
 - all browsers that do support process-per-tab start putting multiple tabs in a single process after a few tabs (due to memory consumption), so virtual address usage on 32-bit would remain a problem for users with many tabs open (which is precisely IMVU's use case)
 - even if a single app is in its own process you could end up with several independent asm.js modules (with independent heaps) such that it would be a problem if each one allocates its own max-sized heap:
  + consider a modern page embedding code from different sources via iframes
  + an eventual goal of asm.js is to implement compiled components that can be used in handwritten JS.  If multiple components are used (mega.co.nz uses two already) you don't want to have each one declaring max-sized heaps.

The implementation won't be trivial, though, so it'd be nice to hear from dherman/brendan if other browsers would be game to implement resize before we invest too much time in it.
So we had some discussions and also included some Googlers.  One problem with resize() is that, while it drafts off of the existing work to support neuter(), it has future conflicts with the expected implementation of Typed Object views (which, apart from conflicting with resize(), I'm quite happy about: Typed Object views would be standardized as values, not objects, and the expected JIT optimization puts them on the stack, so there is no linked list to traverse during resize()).  Also, the above issue with resizing views (and other improvements we discussed) is still not super simple, so the spec cost isn't exactly free.

OTOH, it's always been a possibility to just extend asm.js to allow a carefully controlled swapping of buffers.  If we go this route, we can even avoid, by construction, all the issues of shrinking and resizing to non-multiple lengths:

    function f(global, foreign, buffer) {
      "use asm";
     
      var getByteLength = global.Function.prototype.call.bind(
                            global.Object.getOwnPropertyDescriptor(global.ArrayBuffer.prototype, 'byteLength').get);
     
      var I32A = global.Int32Array;
      ...
      var i32 = new I32A(buffer);
      ...
     
      function changeHeap(newBuffer) {
        if (getByteLength(newBuffer) < getByteLength(buffer))
            return false;
        if (getByteLength(newBuffer) % 0x1000000 != 0)
            return false;
     
        i32 = new I32A(newBuffer);
        ...
        buffer = newBuffer;
        return true;
      }
     
      ...
     
      return { ..., changeHeap:changeHeap };
    }

The runaround for getByteLength is necessary to avoid any proto-walking during changeHeap (at which point we cannot simply issue a validation error).  The asm.js spec would basically require you to write a function syntactically isomorphic to this one (the exact same parse tree modulo variable names, whitespace, parens, curlies, comments, etc).
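The runaround can be exercised in plain JS; this illustrative snippet (variable names are just for the demo) shows why the bound getter is immune to prototype walks and shadowing:

```javascript
// Standalone version of the getByteLength runaround from the pattern above.
// Binding ArrayBuffer.prototype.byteLength's getter up front means later
// calls invoke the original accessor directly, with no property lookup.
const getByteLength = Function.prototype.call.bind(
  Object.getOwnPropertyDescriptor(ArrayBuffer.prototype, 'byteLength').get);

const buf = new ArrayBuffer(8);
getByteLength(buf);  // 8

// Even if something shadows byteLength on the instance, the bound getter
// still reports the real length:
Object.defineProperty(buf, 'byteLength', { value: 999 });
buf.byteLength;      // 999 (the shadowing own property)
getByteLength(buf);  // still 8
```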

Overall, this should all be strictly easier to implement and lets us start using it right now (well, once the Odin optimization gets to release) instead of having to wait for other browsers to implement resize.  The only loss is that resize() would allow realloc, and realloc might be able to avoid some OOM situations that buffer-swapping would hit, but this is highly dependent on realloc's impl.  So, overall, I'm quite happy with this route.
Flags: needinfo?(dherman)
For what it's worth, I am also happy with this route because it means we would not have to implement any workarounds for browsers that don't implement ArrayBuffer.prototype.resize.

Long-term, for most efficient use of address and commit space, we'll want a more mmap-like memory allocation scheme anyway.
With the proposed solution, how would the memory usage be handled? At present most browsers on Win32 can't handle a typed array allocation of more than about ~1GB; this approach to buffer resizes would stop you from ever raising the heap size above 0.5GB because a resize needs to be able to have (oldBufferSize + newBufferSize) allocated.
realloc() wouldn't necessarily do better.  It might, if the impl was smart and we were lucky, but it highly depends on allocation patterns (esp. with ASLR).  Ultimately, 32-bit is just going to have problems with large contiguous allocations.  (There's a lot of work now to ship 64-bit FF on Win64 for this reason.)  Apps that predictably need a lot of memory should continue to do what they do now: allocate up front and catch OOM and present UI suggesting users restart their browser.  The intended use case for heap resizing is apps that usually have small heaps and are expected to be instantiated many times.
That approach might work, but the extremely specific form of the changeHeap function is something that concerns me. Since it looks like normal JS code, people will likely modify it, for example add a check that the size is a multiple of 1MB if they care about that, etc., and validation will fail, leading to confusion. This feels like a big departure from the current asm.js spec (which is very low-level except for some very simple necessary things like function tables, but even there it is hard to see alternatives to the current approach which is very natural).
(In reply to Alon Zakai (:azakai) from comment #15)
> This feels like a big departure from the current
> asm.js spec

This is exactly like the current asm.js spec: you're not going to just happen to write asm.js w/o (1) using Emscripten, (2) carefully studying the spec.  It's a compiler target.  People tinkering with changeHeap will get a very clear error message.
We were considering this again recently.  Here's my updated thinking on the best pattern.  Going to try to hack on this now:

function asmModule(stdlib, foreign, buffer) {
    "use asm";
    
    var I32 = stdlib.Int32Array;
    ...
    var i32 = new I32(buffer);
    
    // Must be the result of calling
    //  Function.prototype.call.bind(
    //    Object.getOwnPropertyDescriptor(ArrayBuffer.prototype, 'byteLength').get))
    var len = stdlib.bound.byteLength;
    
    function changeHeap(newBuffer) {
        if (len(newBuffer) & 0xffff)  // accept any constant that implies multiple of 64kb
            return false;
        i32 = new I32(newBuffer);
        ...
        buffer = newBuffer;
        return true;
    }

    return {...., changeHeap:changeHeap, ... }
}
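The mask test in changeHeap is just a cheap multiple-of-64KB check; in plain JS the bitwise form and the modulo form agree for 32-bit heap lengths (isMultipleOf64KB is a hypothetical name for illustration):

```javascript
// (len & 0xffff) === 0 exactly when len is a multiple of 64KB (0x10000),
// for lengths in the 32-bit range asm.js heaps use. The & form is what the
// changeHeap pattern above validates against.
function isMultipleOf64KB(len) {
  return (len & 0xffff) === 0;
}

isMultipleOf64KB(0x1000000); // true  (16MB heap)
isMultipleOf64KB(0x12345);   // false (not 64KB-aligned)
```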

Additionally, we must prevent calls inside heap-index expressions (e.g. H32[f()] = 0) since semantics say we must evaluate H32 before f() (which would mean the old heap if f() indirectly calls changeHeap).
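The hazard is visible in plain JS: the base of an indexed assignment is evaluated before the index expression, so a call in the index position that swaps the heap variable still writes into the old array. An illustrative sketch (not asm.js code; swapHeap stands in for changeHeap):

```javascript
let heap = new Int32Array(4);
const oldHeap = heap;

function swapHeap() {
  heap = new Int32Array(8); // stands in for an indirect changeHeap() call
  return 0;
}

// `heap` is evaluated to oldHeap BEFORE swapHeap() runs, per JS semantics.
heap[swapHeap()] = 42;

oldHeap[0]; // 42: the write landed in the old heap
heap[0];    // 0:  the new heap never saw it
```

This is exactly why calls must be banned inside heap-index expressions: compiled code would otherwise have to re-fetch the heap base after every index computation.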

As I said before, this will let us get something working right now without any new semantics (thinking about implementing now).  However, we're also proposing a new primitive ArrayBuffer.transfer [1] which would allow us to get the efficiency of realloc/mremap.  The cool part is that the asm.js pattern is the same; the glue code can do whatever it wants to produce the new heap (including feature testing for ArrayBuffer.transfer and falling back to copy).

[1] https://gist.github.com/andhow/95fb9e49996615764eff
Summary: Implement ArrayBuffer.prototype.resize → OdinMonkey: support changing heaps in asm.js
Attached patch import-viewsSplinter Review
As a preparatory step, this patch allows array view constructors to be imported.
Assignee: nobody → luke
Status: NEW → ASSIGNED
Attachment #8496298 - Flags: review?(benj)
Attached patch change-heap (WIP) (obsolete) — Splinter Review
This patch is almost done, just missing a bit of link-time validation (which could cause a crash if you shrink...).  Alon, could you check whether Emscripten can produce validating code with this patch in Odin?
Attachment #8496301 - Flags: feedback?(azakai)
I get errors when I try to apply that to inbound, and I don't see c6df107ebd0cfea9a589cc7107051f49908862a9 on inbound either - do I need other patches than that last one?
Comment on attachment 8496301 [details] [diff] [review]
change-heap (WIP)

Review of attachment 8496301 [details] [diff] [review]:
-----------------------------------------------------------------

I don't think this is going to solve the use cases people hope it will. While ArrayBuffer.transfer may allow the buffer elements to grow, it still has problems with fragmentation, and there is no handle to the elements for sharing between asm.js modules. For practical reasons there is a minimum size for the asm.js buffer, and ArrayBuffer.transfer zeros the length of the prior buffer, which may still be being used by multiple asm.js modules. At least with a buffer resize operation we still have a handle to the buffer.

The change-heap function exposes an ability to flip between buffers and this is not needed and will frustrate asm.js implementations. It would be better to restrict the asm.js module to working with the initial buffer, even if the size can change. But with ArrayBuffer.transfer we don't even have a handle to compare the buffers.

As an alternative, could a buffer with a lazy storage allocation length be implemented? The buffer would be allocated with the maximum length but the implementation would be free to allocate an appropriate storage size for the device. An attempt to access elements beyond the allocated storage size could invoke a realloc operation to grow the storage or just oom if not possible. Implementations would be free to use a range of strategies to reserve and grow the allocated storage, from lazy growing and copying, to pre-allocating the maximum length. The buffer length would define a maximum storage size allowing the implementation to reuse virtual memory beyond this - much better than having to keep virtual memory available for an unknown resizeable length. This would also be compatible with using index masking for heap access because the buffer length does not change.

::: js/src/asmjs/AsmJSValidate.cpp
@@ +6551,5 @@
> +    ParseNode *mask = BinaryRight(leftCond);
> +    if (!IsLiteralInt(m, mask, &u32))
> +        return m.fail(mask, "expecting integer literal mask");
> +    if ((u32 & 0xffffff) != 0xffffff)
> +        return m.fail(mask, "mask value must have the bits 0xffffff set");

Could this also require that the mask be 0*1* and not have any holes?

@@ +6563,5 @@
> +    ParseNode *limit = BinaryRight(rightCond);
> +    if (!IsLiteralInt(m, limit, &u32))
> +        return m.fail(limit, "expecting integer limit literal");
> +    if (u32 < 0xffffff)
> +        return m.fail(limit, "limit value must be >= 0xffffff");

Could this initialize the minimum required heap length used to validate the heap when linking?

Also could this lock the minimum required heap length so that constant index heap accesses no longer change it? This might even be necessary because the bounds checks are optimized away in these cases.

@@ +6775,5 @@
>  
>      if (!CheckFunctionHead(m, fn))
>          return false;
>  
> +    if (m.tryToValidateChangeHeap()) {

Could (or does) the change-heap function be required to be the first function?

It will (or could) set the minimum required heap length and the hasChangeHeap() flag that affects later code validation and generation. The minimum required heap length is also used by range analysis and to optimize away bounds check.

If there are multiple threads validating functions then there might be a race? If it is required to be the first function then it could be validated on the main thread before other threads start validating.

@@ +7688,5 @@
>          MOZ_CRASH("SIMD types shouldn't be returned from a FFI");
>      }
>  
> +    // The heap pointer may have changed during the FFI, so reload it.
> +    masm.loadAsmJSHeapRegisterFromGlobalData();

Conditional on hasChangeHeap()?

@@ +7904,5 @@
> +    masm.loadPtr(Address(StackPointer, savedGlobalOffset), GlobalReg);
> +#else
> +    JS_STATIC_ASSERT(MaybeSavedGlobalReg == 0);
> +#endif
> +    masm.loadAsmJSHeapRegisterFromGlobalData();

Looks good - doing this with and without hasChangeHeap(). Loading from the stack or global data probably has the same cost.

@@ +8118,5 @@
>      masm.mov(ABIArgGenerator::NonVolatileReg, StackPointer);
>  
>      // Restore the machine state to before the interrupt.
>      masm.PopRegsInMask(AllRegsExceptSP, AllRegsExceptSP.fpus()); // restore all GP/FP registers (except SP)
> +    masm.loadAsmJSHeapRegisterFromGlobalData();  // In case there was a changeHeap

Conditional on hasChangeHeap(), or is it just not a hot path?

@@ +8237,5 @@
>      masm.call(AsmJSImmPtr(AsmJSImm_HandleExecutionInterrupt));
>      masm.branchIfFalseBool(ReturnReg, throwLabel);
>  
> +    // Reload the heap register in case the callback changed heaps.
> +    masm.loadAsmJSHeapRegisterFromGlobalData();

Here too?
(In reply to Douglas Crosher [:dougc] from comment #22)
> While ArrayBuffer.transfer may allow the buffer elements to grow, it still
> has problems with fragmentation, 

I don't think we can solve that without 64-bit or at least a new process per tab.  I do think this solves the original problem that many potential asm.js users have expressed with wanting to start with a small heap and grow on demand.

> and there is no handle to the elements for
> sharing between asm.js modules.

I don't fully parse this statement, but in case this addresses your concern: you can still share a single ArrayBuffer between multiple modules; Emscripten just has to track which modules are linked to a given ArrayBuffer and call changeHeap on all of them.

> For practical reasons there is a minimum
> size for the asm.js buffer, and ArrayBuffer.transfer zeros the length of the
> prior buffer which is still being used by multiple asm.js modules. 

This is just the general neutering (now, "detachment") problem.  I plan to solve it correctly (instead of the current hack) in the followup bug that adds ArrayBuffer.transfer (which apparently is now stage 0 for ES7 :).

> At least with a buffer resize operation we still have a handle to the buffer.

resize has many broader JS engine implementation problems (we have to handle resize to non-good-for-asm.js sizes (trickier than I initially thought) and there is also significant (and well-reasoned) opposition from Dmitry Lomov on v8) and is a lot quirkier to standardize (need to add a concept of auto-resizing views of ArrayBuffers and audit all ArrayBuffer method spec/impls to handle resize-during-iteration correctly).  With ArrayBuffer.transfer, I think we mostly have feature parity with resize and, as a bonus, we don't have to wait for other browsers to implement.  The only downside is it's conceptually less elegant and we have to manually call changeHeap on all affected asm.js modules.

> The change-heap function exposes an ability to flip between buffers and this
> is not needed and will frustrate asm.js implementations.

I think there are actually use cases for flipping between heaps (imagine a single running asm.js app that is transferred between workers by just transferring heaps).  I don't see any implementation problems compared to resize (actually, resize has harder cases; viz., resize to non-good sizes).

> As an alternative, could a buffer with a lazy storage allocation length be
> implemented?
[...]

That's an interesting idea.  The main problem I see is that, for x64, to continue avoiding bounds checks, we'd need to use a signal handler which, upon fault, attempts to mremap and, on success, updates the base-register pointer and re-executes and, on failure, moves pc to a throw stub.  The problem is, ArrayBuffer contents can be accessed from arbitrary C++ which means we can't do this.  On non-x64, we'd need to give all typed arrays a second "mappedLength" which is <= byteLength and check this everywhere we index and then this becomes a large maintenance burden requiring audits of all JIT paths.

Another problem is that, if we're the only one that implements this lazy-mapping strategy, asm.js apps that attempt to allocate a large buffer a priori will simply fail on non-FF on 32-bit.

> - much better than having to keep virtual memory available for an unknown
> resizeable length.

This patch doesn't do that...

> This would also be compatible with using index masking
> for heap access because the buffer length does not change.

IIUC, we'd still have a problem on ARM/x86 with heap-masking: the goal of heap-masking is to avoid bounds checks with a mask that guarantees in-bounds.  But if we are growing the heap with realloc (not memory protection, because we don't want to allocate all the virtual address space up front; that's the whole point), we need a bounds check.

> Could this also require that the mask be 0*1* and not have any holes?

That would force a power of two which doesn't seem necessary or useful (given that we don't require this for the initial linking).

> Could this initialize the minimum required heap length used to validate the
> heap when linking?

Yeah, that's the plan and the part of the patch that is TODO.

> Also could this lock the minimum required heap length so that constant index
> heap accesses no longer change it? This might even be necessary because the
> bounds checks are optimized away in these cases.

That's the plan.  I should be more explicit about this in the specification RFC.

> Could (or does) the change-heap function be required to be the first
> function?

That is in the RFC (for the reasons you mention), and also a TODO in the patch.

> > +    // The heap pointer may have changed during the FFI, so reload it.
> > +    masm.loadAsmJSHeapRegisterFromGlobalData();
> 
> Conditional on hasChangeHeap()?

Yes, native won't clobber the non-volatile register, but I'd rather keep things uniform (catch bugs easier that way), esp. since the interp exit is quite slow so this won't make a significant difference.

> Conditional on hasChangeHeap(), or is it just not a hot path?
...
> Here too?

Again, I'd rather keep it uniform to get better test coverage and b/c these are all cold paths.
Attached patch combined.diff (obsolete) — Splinter Review
Oops, you need to apply both patches (the parent of the second patch is likely the cset of the first patch in my local mq).  Here's a rolled-up patch.  It applies to 82d07982831c.
Attachment #8496389 - Flags: feedback?(azakai)
Attachment #8496301 - Flags: feedback?(azakai)
(In reply to Luke Wagner [:luke] from comment #23)
> (In reply to Douglas Crosher [:dougc] from comment #22)
> > While ArrayBuffer.transfer may allow the buffer elements to grow, it still
> > has problems with fragmentation, 

Thank you for following up.

> I don't think we can solve that without 64-bit or at least a new process per
> tab.

Agreed. But when we have 'process per tab', and can solve the problem, we would also need only one large asm.js heap per 32 bit process. This patch is promoting and allowing multiple heaps per asm.js module - we'll never solve the problem on 32 bit.

> > and there is no handle to the elements for
> > sharing between asm.js modules.
> 
> I don't fully parse this statement, but in case this addresses your concern:
> you can still share a single ArrayBuffer between multiple modules;
> Emscripten just has to track which modules are linked to a given ArrayBuffer
> and call changeHeap on all of them.

Implementations are still burdened with having to support the transition period when modules that started using the same heap are using different heaps.
 
> > For practical reasons there is a minimum
> > size for the asm.js buffer, and ArrayBuffer.transfer zeros the length of the
> > prior buffer which is still being used my multiple asm.js modules. 
> 
> This is just the general neutering (now, "detachment") problem.  I plan to
> solve it correctly (instead of the current hack) in the followup bug that
> adds ArrayBuffer.transfer (which apparently is now stage 0 for ES7 :).

How? By reserving and protecting the virtual address space used by the old buffer? This is not going to work with growing the buffer in place. The fragmentation and lack of virtual address space issues are not solved.

> > The change-heap function exposes an ability to flip between buffers and this
> > is not needed and will frustrate asm.js implementations.
> 
> I think there are actually use cases for flipping between heaps (imagine a
> single running asm.js app that is transferred between workers by just
> transferring heaps).  I don't see any implementation problems compared to
> resize (actually, resize has harder cases; viz., resize to non-good sizes).

Heap flipping will not work for an implementation that reserves an area of virtual address space per-process for a single large asm.js heap. This seems the best solution for supporting large contiguous asm.js buffers on 32 bit implementations.

> > As an alternative, could a buffer with a lazy storage allocation length be
> > implemented?
> [...]
> 
> That's an interesting idea.  The main problem I see is that, for x64, to
> continue avoiding bounds checks, we'd need to use a signal handler which,
> upon fault, attempts to mremap and, on success, updates the base-register
> pointer and re-executes and, on failure, moves pc to a throw stub.
> The problem is, ArrayBuffer contents can be accessed from arbitrary C++ which
> means we can't do this.

64 bit implementations could choose to allocate or reserve the 4GB up front if they want to use this scheme to avoid bounds checks. This would be the 'appropriate' initial storage size for them.

> On non-x64, we'd need to give all typed arrays a
> second "mappedLength" which is <= byteLength and check this everywhere we
> index and then this becomes a large maintenance burden requiring audits of
> all JIT paths.
> 
> Another problem is that, if we're the only one that implement this
> lazy-mapping strategy, asm.js apps that attempt to allocate a large buffer a
> priori will simply fail on non-FF on 32-bit.

Yes, this is a problem. Perhaps it would be possible for some portable code to allocate an appropriate size for different browsers - to detect some FF feature and only then allocate the full size.
 
> > - much better than having to keep virtual memory available for an unknown
> > resizeable length.
> 
> This patch doesn't do that...

Not really. We don't know how large the heap could grow. A process per-tab implementation with a single asm.js heap would have to reserve a very large area of virtual memory to allow reliable resizing. If we know the maximum size then it need only allocate this area of virtual memory.

> > This would also be compatible with using index masking
> > for heap access because the buffer length does not change.
> 
> IIUC, we'd still have a problem on ARM/x86 with heap-masking: the goal of
> heap-masking is to avoid bounds checks with a mask that guarantees
> in-bounds.  But if we are growing the heap with realloc (not memory
> protection, because we don't want to allocate all the virtual address space
> up front; that's the whole point), we need a bounds check.

The point is that the application asm.js code would be compatible with heap-masking optimizations. The asm.js implementation could choose to pre-allocate the full buffer length in virtual memory and then optimize away the bounds checks, and I thought we agreed above that the only way to solve the fragmentation problems on 32 bit is with a process per app that allocates the full heap early rather than incrementally realloc'ing. Implementations could also choose other allocation strategies, but then they may need bounds checks.

The point is that the application code could be migrated to future implementations and run faster. The change-heap patch is creating two tiers of asm.js applications, it's not going to solve the fragmentation problems on 32 bit (we already agree that this needs process per app), and the specs will just have more baggage.
(In reply to Douglas Crosher [:dougc] from comment #25)
> > I don't think we can solve that without 64-bit or at least a new process per
> > tab.
> 
> Agreed. But when we have 'process per tab', and can solve the problem, we
> would also need only one large asm.js heap per 32 bit process.  This patch is
> promoting and allowing multiple heaps per asm.js module - we'll never solve
> the problem on 32 bit.

'process per tab' mitigates the problem on 32-bit, it doesn't solve it.  We want to allow people to use multiple independent components that use asm.js.  When multiple tabs get coalesced into one process (no browser really has process-per-tab, btw, they all coalesce after a few tabs) or a page contains random code in an iframe, we don't want unexpected OOMs. Also, this patch does not "promote multiple heaps per asm.js module".

> Implementations are still burdened with having to support the transition
> period when modules that started using the same heap are using different
> heaps.

I see no burden; be specific.

> How? By reserving and protecting the virtual address space used by the old
> buffer? This is not going to work with growing the buffer in place. The
> fragmentation and lack of virtual address space issues are not solved.

On 32-bit, by repatching lengths to zero.  On 64-bit, we've got tons of address space, so who cares.

> Heap flipping will not work for an implementation that reserves an area of
> virtual address space per-process for a single large asm.js heap. 

That's fine.  There are more use cases for asm.js than "One big ported C++ app".

> This seems the best solution for supporting large contiguous asm.js buffers on 32 bit
> implementations.

I don't see a problem if ArrayBuffer.transfer is used.  In the intermediate phase (before all modules are changeHeap'd), the modules would be in a detached state which requires no associated reservation on 32-bit.

> > Another problem is that, if we're the only one that implement this
> > lazy-mapping strategy, asm.js apps that attempt to allocate a large buffer a
> > priori will simply fail on non-FF on 32-bit.
> 
> Yes, this is a problem. Perhaps it would be possible for some portable code
> to allocate an appropriate size for different browsers - to detect some FF
> feature and only then allocate the full size.

But then they are still allocating a quite-large buffer which crowds the rest of the process, even if the allocation succeeds.  This is an unnecessary burden if most of the time the required heap is tiny, and it is precisely what we've heard people want to avoid.

Also, you've only addressed one of two problems listed, the other being:

> On non-x64, we'd need to give all typed arrays a
> second "mappedLength" which is <= byteLength and check this everywhere we
> index and then this becomes a large maintenance burden requiring audits of
> all JIT paths.

which is one of my biggest concerns with the auto-remapping buffer idea.

> We don't know how large the heap could grow. A process per-tab
> implementation with a single asm.js heap would have to reserve a very large
> area of virtual memory to allow reliable resizing. If we know the maximum
> size then it need only allocate this area of virtual memory.

When we get to this process-per-tab happy state, we'll need some sort of annotation/hint saying "definitely give me a new process" and "please reserve X bytes".  We need the former to inhibit normal process coalescing.  We need the latter because we can't assume that asm.js is the only thing that needs memory in the app.

> The asm.js implementation could choose to
> pre-allocate the full buffer length in virtual memory and then optimize away
> the bounds checks, 

Not a good idea on 32-bit.

> and I thought we agreed above that the only way to solve
> the fragmentation problems on 32 bit is with a processes per-app and it
> needs to allocate the full heap early not incrementally realloc.

That's 1-2 years away and we shouldn't assume too much about what will be the obvious thing to do at that point; things change quickly.  If we have this reliable mechanism for grabbing large hunks of virtual address space *and* an app doesn't care about crowding out the other contents of the window, then it can simply elide the change-heap function and use masking against a constant size.

> it's not going to solve the fragmentation
> problems on 32 bit (we already agree that this needs process per app),

We definitely have not agreed on that.  Allowing changing heap would solve real problems now and, as already described, would continue to be useful even when we eventually get to the process-per-app future.
Attached patch change-heap (obsolete) — Splinter Review
Here's the finished patch.  Overall, the actual implementation of heap-switching is trivial, it just reuses the existing code for re-linking modules.  The only important bit is that we need to refresh the GlobalReg on ARM/x64 when calling out of the module.

Most of the patch is just the simple but annoying validation of the change-heap function and the byteLength import.
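
For concreteness, here is a rough sketch (illustrative names; run here as plain JS, since asm.js is a subset of JS) of the module shape the validation accepts, pieced together from the tests in this patch: a byteLength import, a change-heap function that comes first and starts with the mask/min 'if', re-creates every view, and reassigns the buffer:

```javascript
function MyModule(glob, ffis, b) {
  "use asm";
  var I32 = glob.Int32Array;
  var i32 = new I32(b);
  var len = glob.byteLength;
  // change-heap function: must be the first function and start with this 'if'
  function changeHeap(b2) {
    if (len(b2) & 0xffffff || len(b2) <= 0xffffff)
      return false;
    i32 = new I32(b2); // every view must be re-created on the new buffer
    b = b2;            // and the buffer itself reassigned
    return true;
  }
  function get(i) { i = i | 0; return i32[i >> 2] | 0; }
  return { changeHeap: changeHeap, get: get };
}

// The byteLength import, as in the patch's tests:
var byteLength = Function.prototype.call.bind(
  Object.getOwnPropertyDescriptor(ArrayBuffer.prototype, "byteLength").get);
var glob = { Int32Array: Int32Array, byteLength: byteLength };
var m = MyModule(glob, {}, new ArrayBuffer(0x1000000));
console.log(m.changeHeap(new ArrayBuffer(0x2000000))); // true: passes mask/min
console.log(m.changeHeap(new ArrayBuffer(100)));       // false: fails the checks
```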
Attachment #8496301 - Attachment is obsolete: true
Attachment #8496942 - Flags: review?(benj)
Attached patch combined.diffSplinter Review
Updated combined patch.
Attachment #8496389 - Attachment is obsolete: true
Attachment #8496389 - Flags: feedback?(azakai)
Attachment #8496943 - Flags: feedback?(azakai)
Comment on attachment 8496943 [details] [diff] [review]
combined.diff

Emscripten output can validate with this patch (after I push a small patch to incoming later today).
Attachment #8496943 - Flags: feedback?(azakai) → feedback+
Comment on attachment 8496298 [details] [diff] [review]
import-views

Review of attachment 8496298 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good to me.

::: js/src/asmjs/AsmJSLink.cpp
@@ +222,5 @@
>  {
> +    // When a view is created from an imported constructor:
> +    //   var I32 = stdlib.Int32Array;
> +    //   var i32 = new I32(buffer);
> +    // the second import has nothing to validate and thus has a null field.

Maybe this comment should be closer to maybeViewImportField()? I was wondering *there* why the field would be null.

@@ +226,5 @@
> +    // the second import has nothing to validate and thus has a null field.
> +    if (!global.maybeViewImportField())
> +        return true;
> +
> +    RootedPropertyName field(cx, global.maybeViewImportField());

You could hoist this above the test (at the cost of rooting), and replace this test by

if (!field)
  return true;

::: js/src/asmjs/AsmJSModule.h
@@ +301,5 @@
>          uint32_t ffiIndex() const {
>              JS_ASSERT(pod.which_ == FFI);
>              return pod.u.ffiIndex_;
>          }
> +        PropertyName *maybeViewImportField() const {

Even if it was inconsistent with the rest of the *field functions, I kinda prefer the maybeViewName naming, as it makes it clear that the stored PropertyName is the name of the array view, not any particular "field".

@@ +302,5 @@
>              JS_ASSERT(pod.which_ == FFI);
>              return pod.u.ffiIndex_;
>          }
> +        PropertyName *maybeViewImportField() const {
> +            JS_ASSERT(pod.which_ == ArrayView || pod.which_ == ArrayViewCtor);

uber-nit: here and everywhere, s/JS_ASSERT/MOZ_ASSERT

::: js/src/asmjs/AsmJSValidate.cpp
@@ +3719,5 @@
>          if (field == m.cx()->names().NaN)
>              return m.addGlobalConstant(varName, GenericNaN(), field);
>          if (field == m.cx()->names().Infinity)
>              return m.addGlobalConstant(varName, PositiveInfinity<double>(), field);
> +        Scalar::Type type;

(hah, my luke-style-checker would have added a blank line before this :))
Attachment #8496298 - Flags: review?(benj) → review+
(In reply to Benjamin Bouvier [:bbouvier] from comment #30)
Thanks, all good points.

> (hah, my luke-style-checker would have added a blank line before this :))

Hah, you're right, it is more consistent to surround with \n.
Comment on attachment 8496942 [details] [diff] [review]
change-heap

Review of attachment 8496942 [details] [diff] [review]:
-----------------------------------------------------------------

(JS_ASSERT is dead, long live MOZ_ASSERT!)

That looks really nice! Being able to switch between different heaps at run time opens up great possibilities (quickly thinking about compiled VMs that will be able to open different pre-compiled-in-heap files and such things). First batch of comments; I'd like to see another version (or an interdiff) just to be sure I've understood everything.

::: js/src/asmjs/AsmJSLink.cpp
@@ +536,5 @@
>      module.setIsDynamicallyLinked();
>  
> +    HandleValue globalVal = args.get(0);
> +    HandleValue importVal = args.get(1);
> +    HandleValue bufferVal = args.get(2);

(nice)

@@ +609,5 @@
> +        return false;
> +    }
> +
> +    Rooted<ArrayBufferObject*> newBuffer(cx, &bufferArg.toObject().as<ArrayBufferObject>());
> +    bool rval = module.changeHeap(newBuffer, cx);

In the general case, when we pass an array buffer that has been detached (istr that's the new name of neutering, right?), we figure this out in CallAsmJS. Since ChangeHeap can be called from an FFI (and thus is not going back through the CallAsmJS path first), we should check here as well that the buffer isn't detached, right?

@@ +779,5 @@
>  {
>      RootedPropertyName name(cx, func.name());
> +    unsigned numArgs = func.isChangeHeap() ? 1 : func.numArgs();
> +    JSFunction *fun = NewFunction(cx, NullPtr(), CallAsmJS, numArgs, JSFunction::ASMJS_CTOR,
> +                                  cx->global(), name, JSFunction::ExtendedFinalizeKind);

An alternative would be to create a different JSFunction for changeHeap, that directly calls the ChangeHeap function (which would need to modify the signature). That would avoid the ChangeHeap call, but that's already in a slow path, so I don't know if that really matters.

::: js/src/asmjs/AsmJSModule.h
@@ +427,5 @@
>          PropertyName *name_;
>          PropertyName *maybeFieldName_;
>          ArgCoercionVector argCoercions_;
>          struct Pod {
> +            uint32_t isChangeHeap_;

For ideal padding (but less ideal logic positioning), this should be moved after returnType_. How about putting it at the end, so that it remains apart? By the way, why not simply a bool? There are other pods that use bool properties.

@@ +462,5 @@
> +            maybeFieldName_ = maybeFieldName;
> +            pod.isChangeHeap_ = true;
> +            pod.startOffsetInModule_ = startOffsetInModule;
> +            pod.endOffsetInModule_ = endOffsetInModule;
> +            JS_ASSERT_IF(maybeFieldName_, name_->isTenured());

This assertion looks weird (and the same in the other constructor as well), shouldn't it be name or maybeFieldName_ consistently in this assertion, here and above?

@@ +503,5 @@
> +            JS_ASSERT(pod.codeOffset_ == UINT32_MAX);
> +            pod.codeOffset_ = off;
> +        }
> +        void updateCodeOffset(jit::MacroAssembler &masm) {
> +            if (!isChangeHeap())

Feels weird that initCodeOffset and updateCodeOffset behave differently (one asserts if the function is change-heap, the other one explicitly tests). Actually, the assert is valid because initCodeOffset is indirectly called by GenerateEntry, which is not called for the changeHeap function because we explicitly check isChangeHeap there. How about just calling isChangeHeap at the only use of updateCodeOffset as well?

::: js/src/asmjs/AsmJSValidate.cpp
@@ +2468,5 @@
>  
> +    /*************************************************************************/
> +
> +    void enterHeapExpression() {
> +        heapExpressionDepth_++;

could it just be a boolean inHeapExpression? We don't seem to care about the depth itself... (and the assert below could be MOZ_ASSERT(inHeapExpression);)

@@ +4188,5 @@
>          if (byteOffset > INT32_MAX)
>              return f.fail(indexExpr, "constant index out of range");
>  
>          unsigned elementSize = 1 << TypedArrayShift(*viewType);
> +        if (!f.m().tryRequireHeapLengthToBeAtLeast(byteOffset + elementSize)) {

So if I've understood correctly, this means that heap accesses with indexes above the length declared in the change-heap function will lead to validation failures. Couldn't the change-heap function and the min-heap-length-for-constant-indexes be separate concepts?

@@ +6563,5 @@
> +    if (!CheckByteLengthCall(m, BinaryLeft(rightCond), newBufferName))
> +        return false;
> +
> +    ParseNode *minNode = BinaryRight(rightCond);
> +    uint32_t u32;

Could you please name this length, or something more talkative?

@@ +6569,5 @@
> +        return m.fail(minNode, "expecting integer limit literal");
> +    if (u32 < 0xffffff)
> +        return m.fail(minNode, "limit value must be >= 0xffffff");
> +
> +    // Add one to account for <=

This off-by-one is present because all comparisons made in the patch use <, not <= (also in the C++ ChangeHeap function). It seems easier to keep it like this (to be sure not to break already-working constant-index code), but the comment looks wrong at the moment.

@@ +6645,5 @@
> +        return true;
> +
> +    // We can now issue validation failures if we see something that isn't a
> +    // valid change-heap function.
> +    *validated = true;

If I understand correctly, the conditions that make us consider this may be the changeHeap function are that we have one non-coerced argument and the first statement is an if. Won't the error message be confusing if the user actually doesn't mean to implement the changeHeap function and just forgets the coercion of the arg?

@@ +6649,5 @@
> +    *validated = true;
> +
> +    PropertyName *bufferName = m.module().bufferArgumentName();
> +    if (!bufferName)
> +        return m.fail(fn, "to change heaps, the module must have a buffer argument");

If I understand correctly, the conditions that make us consider this may be the changeHeap function are that we have one non-coerced argument and the first statement is an if. Won't the error message be confusing if the user actually doesn't mean to implement the changeHeap function and just forgets the coercion of arg? (unfortunately, we would have to add something in all following error messages, I suppose)

@@ -7399,5 @@
> -    unsigned savedRegBytes = 2 * sizeof(void*);  // HeapReg, GlobalReg
> -#else
> -    unsigned savedRegBytes = 0;
> -#endif
> -

nice

@@ +8169,3 @@
>      masm.pop(HeapReg);
>      masm.as_jr(HeapReg);
> +    masm.loadAsmJSHeapRegisterFromGlobalData();

To be consistent, could you add the "// in case there was a changeHeap" comment everywhere, or remove it everywhere in this function?

@@ +8244,5 @@
>      masm.call(AsmJSImmPtr(AsmJSImm_HandleExecutionInterrupt));
>      masm.branchIfFalseBool(ReturnReg, throwLabel);
>  
> +    // Reload the heap register in case the callback changed heaps.
> +    masm.loadAsmJSHeapRegisterFromGlobalData();

Did you try to run the new timeout tests with JS_NO_SIGNALS ? :)

::: js/src/jit-test/tests/asm.js/testResize.js
@@ +52,5 @@
> +this['byteLength'] = Function.prototype.call.bind();
> +assertAsmLinkFail(m, this);
> +this['byteLength'] = Function.prototype.call.bind({});
> +assertAsmLinkFail(m, this);
> +this['byteLength'] = 

nit: trailing whitespace

@@ +53,5 @@
> +assertAsmLinkFail(m, this);
> +this['byteLength'] = Function.prototype.call.bind({});
> +assertAsmLinkFail(m, this);
> +this['byteLength'] = 
> +  Function.prototype.call.bind(Object.getOwnPropertyDescriptor(ArrayBuffer.prototype, 'byteLength').get);

Can you add a test in which you modify the byteLength accessor of ArrayBuffer?

@@ +66,5 @@
> +const IMPORT0 = BYTELENGTH_IMPORT;
> +const IMPORT1 = "var I8=glob.Int8Array; var i8=new I8(b); " + BYTELENGTH_IMPORT;
> +const IMPORT2 = "var I8=glob.Int8Array; var i8=new I8(b); var I32=glob.Int32Array; var i32=new I32(b); var II32=glob.Int32Array; " + BYTELENGTH_IMPORT;
> +
> +       asmCompile('glob', 'ffis', 'b', USE_ASM + IMPORT1 + 'function f() { return 42 } return f');

A lot of the asmCompile calls in this file are indented in a weird fashion; was that intentional, to have symmetry with the assertAsmTypeFail calls?

@@ +121,5 @@
> +assertAsmTypeFail('glob', 'ffis', 'b', USE_ASM + IMPORT2 + 'function ch(b2) { if(len(b2) & 0xffffff || len(b2) <= 0xffffff) return false; i32=new I32(b2); b=b2; return true } function f() { return 42 } return f');
> +assertAsmTypeFail('glob', 'ffis', 'b', USE_ASM + IMPORT2 + 'function ch(b2) { if(len(b2) & 0xffffff || len(b2) <= 0xffffff) return false; i32=new I32(b2); i8=new I8(b2); b=b2; return true } function f() { return 42 } return f');
> +       asmCompile('glob', 'ffis', 'b', USE_ASM + IMPORT2 + 'function ch(b2) { if(len(b2) & 0xffffff || len(b2) <= 0xffffff) return false; i8=new I8(b2); i32=new I32(b2); b=b2; return true } function f() { return 42 } return f');
> +assertAsmTypeFail('glob', 'ffis', 'b', USE_ASM + IMPORT2 + 'function ch(b2) { if(len(b2) & 0xffffff || len(b2) <= 0xffffff) return false; i8=new I32(b2); i32=new I8(b2); b=b2; return true } function f() { return 42 } return f');
> +       asmCompile('glob', 'ffis', 'b', USE_ASM + IMPORT2 + 'function ch(b2) { if(len(b2) & 0xffffff || len(b2) <= 0xffffff) return false; i8=new I8(b2); i32=new II32(b2); b=b2; return true } function f() { return 42 } return f');

Nice tests, I've also thought about a few other ones, if you're willing to add them:
- a test that tries to add the changeHeap function several times (two times).
- condition && instead of ||
- left/right conditions using another bitwise operator

@@ +178,5 @@
> +                   'var len=glob.byteLength;' +
> +                   'function ch(b2) { if(len(b2) & 0xffffff || len(b2) <= 0xffffff) return false; i32=new I32(b2); b=b2; return true }' +
> +                   'function get(i) { i=i|0; return i32[i>>2]|0 }' +
> +                   'function set(i, v) { i=i|0; v=v|0; i32[i>>2] = v }' +
> +                   'return {get:get, set:set, changeHeap:ch}');

(did you know we have template strings now, for multi-line strings:

var multiline_str = `line 1
line 2
line 3`;)

@@ +191,5 @@
> +assertEq(get(4), 13);
> +set(BUF_CHANGE_MIN, 262);
> +assertEq(get(BUF_CHANGE_MIN), 0);
> +var buf2 = new ArrayBuffer(2*BUF_CHANGE_MIN);
> +assertEq(changeHeap(buf2), true);

Could you (maybe at the end) add a test where the given buffer argument doesn't respect the mask/length conditions?

::: js/src/jit/x64/CodeGenerator-x64.cpp
@@ +248,5 @@
> +#ifdef DEBUG
> +    Register scratch = ABIArgGenerator::NonReturn_VolatileReg0;
> +    masm.movePtr(HeapReg, scratch);
> +    masm.loadAsmJSHeapRegisterFromGlobalData();
> +    Label ok2;

(why ok2 and not only ok?)
Attachment #8496942 - Flags: review?(benj)
(In reply to Benjamin Bouvier [:bbouvier] from comment #32)
Thanks Benjamin, lots of great catches and suggestions.  All fixed, with a few exceptions:

> In the general case, when we pass an array buffer that has been detached
> (istr that's the new name of neutering, right?), we figure this out in
> CallAsmJS.

In the detached case, byteLength will be zero so changeHeap will return false (I'll add an explicit test for this, though).

> An alternative would be to create a different JSFunction for changeHeap,
> that directly calls the ChangeHeap function (which would need to modify the
> signature). That would avoid the ChangeHeap call, but that's already in a
> slow path, so I don't know if that really matters.

I considered that.  Ultimately, the very minor reasons I went through CallAsmJS: (1) I get to reuse a tiny bit of prologue code (2) to avoid having to add a third case to the places that test if fun->native() == CallAsmJS || LinkAsmJS.  Ultimately I agree it hardly matters.

> For ideal padding (but less ideal logic positioning), this should be moved
> after returnType_.

ReturnType should be an int32, so I don't think the ordering matters.  Even if I make it a bool and move it to the end it shouldn't matter due to compiler padding ExportedFunction up to word length.  I will make it a bool, though; I don't know why I used uint32_t.

> could it just be a boolean inHeapExpression? We don't seem to care about the
> depth itself... (and the assert below could be MOZ_ASSERT(inHeapExpression);)

If it were a boolean, we wouldn't know when we could reset it to false in the nested case.  That is, in HEAP32[HEAP32[f()]], after leaving the inner HEAP32 we can't yet clear the flag; to know when we can, we need to know how deeply we are nested.
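
A minimal sketch (hypothetical helper names, not the actual validator code) of why a counter works where a boolean can't:

```javascript
// depth plays the role of heapExpressionDepth_; enter/leave bracket each
// heap access the validator walks into.
var depth = 0;
var trace = [];
function checkHeapAccess(indexThunk) {
  depth++;           // enterHeapExpression()
  indexThunk();      // validate the index expression (may contain nested accesses)
  depth--;           // leaveHeapExpression()
  trace.push(depth); // a boolean couldn't tell these two cases apart
}
// HEAP32[HEAP32[f()]]: leaving the inner access we're still inside the
// outer one (depth 1); leaving the outer access we're fully out (depth 0).
checkHeapAccess(function () {       // outer HEAP32[...]
  checkHeapAccess(function () {});  // inner HEAP32[...]
});
console.log(trace); // [1, 0]
```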

> So if i've understood correctly, this means that heap accesses with indexes
> above the length declared in the change-heap function will lead to
> validation failures. Couldn't the change-heap function and the
> min-heap-length-for-constant-indexes be separate concepts?

The problem is if you tried to resize to a size smaller than a constant heap access: we'd already have generated code w/o a bounds check.
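
A tiny sketch of the constraint (illustrative names, mirroring the byteOffset + elementSize computation quoted from CheckArrayAccess above): each constant-index access pins a minimum heap length, and the change-heap minimum can't go below it.

```javascript
// A constant-index access at byteOffset into a view with element shift
// 'shift' (e.g. 2 for Int32Array) touches bytes [byteOffset, byteOffset +
// elementSize), so the heap must always be at least that long.
function requiredHeapLength(byteOffset, shift) {
  var elementSize = 1 << shift;
  return byteOffset + elementSize;
}
// HEAP32[0x100 >> 2] reads bytes [0x100, 0x104): resizing the heap below
// 0x104 would invalidate code generated without a bounds check.
console.log(requiredHeapLength(0x100, 2).toString(16)); // "104"
```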

> This one-off is present because all comparisons made in the patch use <, not
> <= (also in the C++ ChangeHeap function). It seems easier to keep it like
> this (to be sure not to break constant-index already working code), but that
> comment looks wrong at the moment.

The "<=" in the comment refers to the fact that the user code wrote <=.  Thus, if our goal is to store a "min" (which is inclusive) and the user wrote "if (len <= C) return false" (which is exclusive), we have to add 1.  The reason for having the user write <= instead of < is so that both constants' default values can be 0xffffff.  But I could always switch to < and the minimum min-length constant would be 0x1000000.  Thoughts?
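
In plain JS the off-by-one can be checked directly (a sketch with illustrative names):

```javascript
// The user-written guard "if (len(b2) <= C) return false" admits lengths
// C+1 and up, so the validator's recorded *inclusive* minimum is C+1.
var C = 0xffffff;
function passesMinCheck(byteLength) {
  return !(byteLength <= C); // the user's guard, inverted
}
var storedMin = C + 1; // what the validator records: 0x1000000
console.log(passesMinCheck(0xffffff));  // false: just below the minimum
console.log(passesMinCheck(0x1000000)); // true: exactly the minimum
console.log(storedMin.toString(16));    // "1000000"
```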

> Won't the error message be confusing if the user
> actually doesn't mean to implement the changeHeap function and just forgets
> the coercion of arg?

That's a good question; and one I considered.  They'll only get the change-heap-oriented error message if: (1) they have a byteLength import, (2) it's the first function, (3) it starts with an 'if' (if no 'if', it'll fall back to normal function validation and still give a coercion error).

> Did you try to run the new timeout tests with JS_NO_SIGNALS ? :)

I did.  In fact I added a signals.enable=0 test in testTimeout7.js.

> Can you add a test in which you modify the byteLength accessor of
> ArrayBuffer?

That should be equivalent to passing an arbitrary function to bind(); but I noticed I don't have a test for passing function objects (only non-function objects) so I added two.

> A lot of asmCompile in this file are just indented in a weird fashion, was
> that intentional to have symmetry with the assertAsmTypeFail calls?

Yeah, it was done so you can see the progression of invalidation errors (it makes a nice ripply diagonal if you open it on a wide terminal :).

> Could you (maybe at the end) add a test where the given buffer argument
> doesn't respect the mask/length conditions?

Indeed, there are some of those under the comment 'Tests for validation of heap length' near the end.
Attached patch change-heapSplinter Review
Attachment #8496942 - Attachment is obsolete: true
Attachment #8499839 - Flags: review?(benj)
Comment on attachment 8499839 [details] [diff] [review]
change-heap

Review of attachment 8499839 [details] [diff] [review]:
-----------------------------------------------------------------

Thanks for your explanations, this looks great.

> > So if i've understood correctly, this means that heap accesses with indexes
> > above the length declared in the change-heap function will lead to
> > validation failures. Couldn't the change-heap function and the
> > min-heap-length-for-constant-indexes be separate concepts?
> 
> The problem is if you tried to resize to a size smaller than a constant heap
> access: we'll already have generated code w/o a bounds checks.
That makes sense in that case. I forgot that one of the use cases was to shrink the heap, in case of memory pressure.

> The "<=" in the comment is referring to that the user code wrote <=.  Thus,
> if our goal is to store a "min" (which is inclusive) and the user wrote "if
> (len <= C) return false" (which is exclusive), we have to add 1.  The reason
> for having the user write <= instead of < is so both constants' default
> values could be 0xffffff.  But I could always switch to < and the minimum
> min-length constant would be 0x1000000.  Thoughts?
Seems nice to keep both constants being the same value, thanks for making that explicit.

> > Won't the error message be confusing if the user
> > actually doesn't mean to implement the changeHeap function and just forgets
> > the coercion of arg?
> 
> That's a good question; and one I considered.  They'll only get the
> change-heap-oriented error message if: (1) they have a byteLength import,
> (2) it's the first function, (3) it starts with an 'if' (if no 'if', it'll
> fall back to normal function validation and still give a coercion error).
Indeed there's almost no chance this happens, but when it does, users will get *very* confused. Hopefully, there aren't too many users :) We'll see how that goes.

::: js/src/asmjs/AsmJSModule.h
@@ +436,5 @@
>              pod.startOffsetInModule_ = startOffsetInModule;
>              pod.endOffsetInModule_ = endOffsetInModule;
> +        }
> +
> +        ExportedFunction(PropertyName *name,

Nothing in the name or the arguments says it's the changeHeap function, could we make it more explicit here?

@@ +1041,5 @@
>  
> +    void addChangeHeap(uint32_t mask, uint32_t min) {
> +        MOZ_ASSERT(isFinishedWithModulePrologue());
> +        MOZ_ASSERT(!pod.hasFixedMinHeapLength_);
> +        MOZ_ASSERT(IsValidAsmJSHeapLength(mask+1));

nit: spaces between the '+' operands

@@ +1453,5 @@
>      // Functions that can be called after dynamic linking succeeds:
>  
>      CodePtr entryTrampoline(const ExportedFunction &func) const {
>          MOZ_ASSERT(isDynamicallyLinked());
> +        MOZ_ASSERT(!func.pod.isChangeHeap_);

how about using the public function func.isChangeHeap() here?

::: js/src/asmjs/AsmJSValidate.cpp
@@ +1658,5 @@
> +        Global *global = moduleLifo_.new_<Global>(Global::ByteLength);
> +        return global && globals_.putNew(name, global);
> +    }
> +    bool addChangeHeap(PropertyName *name, ParseNode *fn, uint32_t mask, uint32_t min) {
> +        hasChangeHeap_ = true;

above that, maybe MOZ_ASSERT(canValidateChangeHeap_);?

@@ +6532,5 @@
> +    if (!CheckByteLengthCall(m, BinaryLeft(rightCond), newBufferName))
> +        return false;
> +
> +    ParseNode *minLengthNode = BinaryRight(rightCond);
> +    uint32_t u32;

nit: naming of u32 doesn't seem ideal here, how about minLength, or minLengthValue, or limit or limitValue?

::: js/src/jit-test/tests/asm.js/testResize.js
@@ +33,5 @@
>  assertAsmLinkAlwaysFail(m, this, null, null);
>  assertAsmLinkFail(m, this, null, new ArrayBuffer(100));
>  assertEq(asmLink(m, this, null, BUF_64KB)(), undefined);
> +
> +// Tests for link-time validation of byteLength import

Could you please add one test to that file or testToSource, that checks that the changeHeap function is exactly rendered when applying toSource() on the exported function (one test) and the module (another test)?

@@ +171,5 @@
> +var changeHeap = asmLink(m, this, null, new ArrayBuffer(0x2000000));
> +assertEq(changeHeap(new ArrayBuffer(0x1000000)), false);
> +assertEq(changeHeap(new ArrayBuffer(0x2000000)), true);
> +assertEq(changeHeap(new ArrayBuffer(0x2000001)), false);
> +assertEq(changeHeap(new ArrayBuffer(0x2000001)), false);

nit: same test twice
Attachment #8499839 - Flags: review?(benj) → review+
(In reply to Benjamin Bouvier [:bbouvier] from comment #35)
Thanks, good points.

> Nothing in the name or the arguments says it's the changeHeap function,
> could we make it more explicit here?

I feel the same way, but I don't see any good options other than introducing dummy arguments.  I feel like that is a bit overkill, though, given that the ExportedFunction constructor is private, intended only for use by the public addChangeHeap function.

> above that, maybe MOZ_ASSERT(canValidateChangeHeap_);?

Unfortunately, tryOnceToValidateChangeHeap has already cleared canValidateChangeHeap_ by this point.  I considered having addChangeHeap clear canValidateChangeHeap_ instead, but this would allow functions after the first to be validated as change-heap functions.
Oops, bizarrely lost a masm.storePtr some time between initial combined.diff and final patch:
https://hg.mozilla.org/integration/mozilla-inbound/rev/e91da4515772
Depends on: 1079826
This is great. Is it possible to feature-detect this? (Feature detection is a general question for asm.js updates, like v2.)
Unfortunately no.  My advice for Emscripten would be to wait for this feature to get into a release and then start shipping resizable heaps by default.  Feature detection is tricky as long as asm.js stays a pure subset of JS, although it is not unthinkable that one day something will get standardized (esp. if IE decides to ship full asm.js optimizations [1]).

[1] https://blog.mozilla.org/luke/2014/09/18/asm-js-on-status-modern-ie/
I understand most of this comment but I don't understand why it couldn't be a vendor-prefixed or just implicit attribute on navigator (navigator.asmjs.version, navigator.asmjs.changeHeap, etc). And I know it has been discussed before whether you would have a way to just check whether a module was successfully AOT-compiled, in which case you could feature detect by trying the features you use and be able to show an informative message - 'this is slow because your browser doesn't support the version of asm we use'.

Should I file a bug about this general issue?
There is already bug 952847 on the general issue of "making asm.js semantically visible".  It's a cross-the-rubicon sort of thing where we can no longer say "asm.js is a subset of JS", so it's not likely something we'll do soon.
Depends on: 1080358
Attached patch fix-bugsSplinter Review
Running change-heap on a real workload in the browser revealed that changeHeap is missing AutoUnprotectCode.  While coding this, I realized we're also missing a call to prepareForAsmJS on the new buffer (!!).  In this patch, the two length checks are moved to ChangeHeap because they have to go before prepareForAsmJS (which has to go before AsmJSModule::changeHeap); this is also more symmetric with LinkModuleToHeap, which does the checking before calling initHeap.
Attachment #8502946 - Flags: review?(benj)
Comment on attachment 8502946 [details] [diff] [review]
fix-bugs

Review of attachment 8502946 [details] [diff] [review]:
-----------------------------------------------------------------

Good catch!

::: js/src/asmjs/AsmJSLink.cpp
@@ +570,5 @@
>      }
>  
>      Rooted<ArrayBufferObject*> newBuffer(cx, &bufferArg.toObject().as<ArrayBufferObject>());
> +    uint32_t heapLength = newBuffer->byteLength();
> +    if (heapLength & module.heapLengthMask() || heapLength < module.minHeapLength()) {

if you feel like it, can you make a getter for module.hasFixedMinHeapLength_ and MOZ_ASSERT this above this if?

::: js/src/asmjs/AsmJSModule.cpp
@@ +773,5 @@
>      }
>  #endif
>  }
>  
> +// This method assumes the caller has a live AutoUnprotectCode.

And you can even MOZ_ASSERT(!codeIsProtected_) here and below, to prove it.
Attachment #8502946 - Flags: review?(benj) → review+