Closed Bug 635068 Opened 9 years ago Closed 9 years ago

WebGL test array-unit-tests.html fails

Categories

(Core :: JavaScript Engine, defect)

x86_64
Linux
defect
Not set

Tracking


RESOLVED FIXED
Tracking Status
blocking2.0 --- .x+

People

(Reporter: bjacob, Unassigned)

References

Details

Attachments

(2 files)

An Int32 typed array is constructed from a JS array containing the value

   -2^31-1

and it gets truncated to

   -2^31

instead of the expected behavior of wrapping it to

   2^31-1
Assignee: nobody → general
Component: Canvas: WebGL → JavaScript Engine
QA Contact: canvas.webgl → general
Attachment #513339 - Attachment is patch: false
Here is the problem:

    nativeFromValue(JSContext *cx, const Value &v)
    {
        if (v.isInt32())
            return NativeType(v.toInt32());

        if (v.isDouble()) {
            double d = v.toDouble();
            if (!ArrayTypeIsFloatingPoint() && JS_UNLIKELY(JSDOUBLE_IS_NaN(d)))
                return NativeType(int32(0));
            return NativeType(d);
            ^^^^^^^^^^^^^^^^^^^^^
        }

In the failing case, NativeType is int32. The C++ spec says that converting a double outside the range of an integer type to that type is undefined behavior. In this case, MSVC returns -2^31 (clamping?).

The typed array spec says this should use js_DoubleToECMAInt32 instead. Patch forthcoming.
blocking2.0: --- → ?
Bah, stupid spec. That's going to be slow.
Beta or final? Hard or soft?
blocking2.0: ? → .x
(In reply to comment #4)
> Bah, stupid spec. That's going to be slow.

It's what already happens on assignment (see obj_setProperty) -- it just isn't happening correctly when constructed from an array.  However, for the assignment case, we should check to make sure that this gets correctly jitted...

Also, this is the case of a big number (doesn't fit into an int) being assigned to an int array type.  So we're only likely to hit this case when the author is already doing something less than ideal.
Attached patch: proposed patch
Here is a little patch implementing David's idea. It makes the test pass here.

The only thing I wonder is what should we do for smaller integer types such as Int8? This patch converts to ECMA Int32 / Uint32 depending on signedness, and then casts to NativeType. Is that what's wanted?
Attachment #515202 - Flags: review?
Attachment #515202 - Flags: review? → review?(dmandelin)
Attachment #515202 - Attachment is patch: true
Attachment #515202 - Attachment mime type: application/octet-stream → text/plain
Comment on attachment 515202 [details] [diff] [review]
proposed patch

I think this way of handling the smaller types is right--the typed array spec doesn't seem to explain it explicitly, but this way seems to follow the original intent. We should check with Vlad, though.
Attachment #515202 - Flags: review?(vladimir)
Attachment #515202 - Flags: review?(dmandelin)
Attachment #515202 - Flags: review+
Dave and Patrick were discussing the issue in comment 7 just the other day.

/be
In structs.js I used these definitions, translated here into C and assuming input float64_t x:

    // double -> int32
    int32_t result = ToInt32(x); // using the Ecma-262 definition of ToInt32
    
    // double -> int16
    int32_t y = ToInt32(x); // using the Ecma-262 definition of ToInt32
    // [ .... sign-fill .... ] [ ......... mask ........ ]
    // [ 31 ] --------> [ 15 ] X X X X X X X X X X X X X X
    int16_t result = ((y & 0x80000000) >> 16) | (y & 0x7fff);
    
    // double -> int8
    int32_t y = ToInt32(x); // using the Ecma-262 definition of ToInt32
    // [ ........... sign-fill ........... ] [ .. mask . ]
    // [ 31 ] -----------------------> [ 7 ] X X X X X X X
    int8_t result = ((y & 0x80000000) >> 24) | (y & 0x7f);
    
    // double -> uint32
    uint32_t result = ToUint32(x); // using the Ecma-262 definition of ToUint32
    
    // double -> uint16
    uint32_t y = ToUint32(x); // using the Ecma-262 definition of ToUint32
    uint16_t result = y & 0xffff;
    
    // double -> uint8
    uint32_t y = ToUint32(x); // using the Ecma-262 definition of ToUint32
    uint8_t result = y & 0xff;

I hope I got the C notation right there. These are defined in JS in:

    https://github.com/dherman/structsjs/blob/master/lib/number.js

I haven't tested these definitions enough, so I won't swear by them. HTH.

Dave
Duplicate of this bug: 554464
Comment on attachment 515202 [details] [diff] [review]
proposed patch

># HG changeset patch
># Parent 228ce2cdb5e8bf9c0684095957404043496d39fc
>diff --git a/js/src/jstypedarray.cpp b/js/src/jstypedarray.cpp
>--- a/js/src/jstypedarray.cpp
>+++ b/js/src/jstypedarray.cpp
>@@ -1119,17 +1119,22 @@ class TypedArrayTemplate
>     {
>         if (v.isInt32())
>             return NativeType(v.toInt32());
> 
>         if (v.isDouble()) {
>             double d = v.toDouble();
>             if (!ArrayTypeIsFloatingPoint() && JS_UNLIKELY(JSDOUBLE_IS_NaN(d)))
>                 return NativeType(int32(0));
>-            return NativeType(d);
>+            if (TypeIsFloatingPoint<NativeType>())
>+                return NativeType(d);
>+            else if (TypeIsUnsigned<NativeType>())
>+                return NativeType(js_DoubleToECMAUint32(d));
>+            else
>+                return NativeType(js_DoubleToECMAInt32(d));

Yep, this looks fine, but I'm surprised brendan didn't have comments about else-after-return style here :-)  Get rid of all the "else"s here, just if () return; if () return; return;.

    - Vlad
Attachment #515202 - Flags: review?(vladimir) → review+
Thanks, Vlad. Those elses are arbitrary considering the if-return just before the first added if did not merit an "else" :-P.

/be
http://hg.mozilla.org/mozilla-central/rev/195a7aceb72b
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED