Closed Bug 465980 Opened 13 years ago Closed 12 years ago

crash in array push exceeding length limit [@ @0xffff07c7 - InitArrayElements]


(Core :: JavaScript Engine, defect, P2)






(Reporter: shaver, Assigned: Waldo)



(4 keywords, Whiteboard: [sg:critical?] fixed-in-tracemonkey)

Crash Data


(6 files, 9 obsolete files)

4.34 KB, text/plain
55.87 KB, text/plain
14.01 KB, patch (review+, review+)
6.50 KB, text/plain
2.39 KB, text/plain
41.09 KB, patch
Signature: @0xffff07c7 
UUID aaacb445-1390-479a-995e-1c4c20081120 
Time 2008-11-20 07:25:33-08 
Uptime 72919 
Product Firefox 
Version 3.1b2pre 
Build ID 20081119053142 
OS Mac OS X 
OS Version 10.5.5 9F33 
CPU x86 
CPU Info GenuineIntel family 6 model 15 stepping 6 
Crash Address 0xffff07c7 
Comments javascript:a = new Array(4294967294); a.push("foo", "bar"); a[4294967294] 

Crashing Thread
Frame Module Signature Source 
0  @0xffff07c7  
1 libmozjs.dylib InitArrayElements js/src/jsarray.cpp:1498  
2 libmozjs.dylib array_push_slowly js/src/jsarray.cpp:2109  
3 libmozjs.dylib js_Interpret js/src/jsinterp.cpp:5118  
4 libmozjs.dylib js_Execute js/src/jsinvoke.cpp:1559  
5 libmozjs.dylib JS_EvaluateUCScriptForPrincipals js/src/jsapi.cpp:5187  
6 XUL nsJSContext::EvaluateString dom/src/base/nsJSEnvironment.cpp:1586  
7 XUL nsJSThunk::EvaluateScript dom/src/jsurl/nsJSProtocolHandler.cpp:337  
8 XUL nsJSChannel::EvaluateScript dom/src/jsurl/nsJSProtocolHandler.cpp:734  
9 XUL nsRunnableMethod<nsJSChannel>::Run dom/src/jsurl/nsJSProtocolHandler.cpp:264
Severity: normal → critical
Summary: crash in array push exceeding length limit → crash in array push exceeding length limit [@ @0xffff07c7 - InitArrayElements]
Whiteboard: crash
Flags: blocking1.9.1+
Whiteboard: crash → crash, testcase
Assignee: general → jwalden+bmo
Fun stuff.  Old length + added element count == jsuint(0), so we go off the end of the vector of new elements, of course.  We call InitArrayElements with the very large start, but |end == 0|, and in the non-dense case we have a != check for termination rather than a > or <= check.

Most fun of all, if the object is a dense array (possible only if 64-bit, I think), we ensure an overflowed length, set the length to that overflowed value, then copy from our input vector of new elements to the end of the old start of data (so from |start| elements into the dense vector).  The key is we copy |sizeof(jsval) * (end - start)| bytes of data in the memcpy -- and |end - start| is a large "negative" value, so we start reading off the end of vector and writing potentially past the end of the allocated |obj->dslots|.  This is a fragile situation; I'd love to see the successful exploit that could write off the end of |obj->dslots| into interesting memory and read off the end of |vector| without dying.  Then again, I'd really rather not; marking security-sensitive to be safe.

Is that all?  Wait till I get going!

Array.prototype.push models the index of each element via the value n, but according to clause 5.2 this value is an exact mathematical real number.  So, then, are we supposed to set length to, and return, a value greater than 2**32 - 1, the maximum array length?  Everything in the current array implementation says length is always modified to be an unsigned 32-bit integer.  Is that a bug?  If so, a lot of length-setting code in the mutating array methods will need modification, and going over 2**32 - 1 is likely to convert a very large length into a small one via ToUint32's modulo-2**32 requirement hidden somewhere inside an array method.  Or is it an erratum which must be addressed by the ECMA committee?

I got most of the way through the handle-overlarge-index part of the patch before realizing this last problem; I'll try to separate that into its own patch and leave the non-uint32 length property stuff to further patches or future bugs, in the interests of fixing the writes off the end of |obj->dslots|.
Group: core-security
Keywords: crash, testcase
Whiteboard: crash, testcase → [sg:critical?]
Hm.  I think I'm wrong about the first part of this, maybe.  It looks like there's a case where arrays are slow-ified that's not in the comment at the top: if an element is set at an index that's "too large".  The max such index is 2**29 - 1, and since the max arg number for push is 2**16 - 1, we can't overflow from that.  Looking at other calls to InitArrayElements, it looks like only array_splice might overflow, and even there I think it might not.  (But it's late, so I'll look again in the morning when I know I can think straight.)
Oh, forgot to note that spec says [[Put]]("length", nonUint32Value) throws, so the array would have the non-index properties but its length would be the max length, as [[Put]](index, v) also changes length if index >= length.  So arrays would still have the invariant maintained, but everything else is going to have to deal with non-uint32 indexes.  Which really means the non-specialized code does have to deal with non-uint32 indexes.
I just keep being wronger and wronger here; we're dense when we hit InitArrayElements (because we're new and empty), so we do ensure a small length and then potentially write past the end of obj->dslots.  I noticed this when I added an assertion that start+count wouldn't overflow in the dense case and ended up tripping it when testing the patch.  This isn't just a 64-bit overwrite concern after all.

My patch here is indeed going to have to address the possibility of the final calculated length not being a uint32, by the way; more spec reading and #jslang discussion have convinced me that's what the spec wants.
Attached patch WIP (obsolete) — Splinter Review
There's more to track down; array_unshift is a bit shifty still with its length overflowing, but at least it's going through stuff that'll slowify the array and not write outside a dense array's dslots.

This part, tho, makes us behave correctly on the to-be-posted testcase, joining Opera and Chrome.  WebKit seems to fail the same way we did with this patch sans the jsdouble cast in InitArrayElements, at a glance.
Attached file Testcase (obsolete) —
This was mis-commented in bug 463320, reproducing comment below:

> I mentioned a recent es-discuss thread on this topic, and Waldo recalled ES3.1
> addressing the issue (or else that it should). If we can catch length overflow
> early before effects are committed, and track or get that into 3.1, we should.
> /be

I skimmed too aggressively; it was a back-and-forth between two people, nothing actually official-ish.
OK -- can you revive that thread and drive it to match the early-exception code you hack here? ;-)

Priority: -- → P2
The es- lists haven't been responsive to the push to move the array-index-handling changes along, so here's my patch as it currently stands.  I think it's in good shape to go forward, but a set of eyes on the code to check I hit all the necessary points would be good.  Tests to follow...
Attachment #351307 - Attachment is obsolete: true
Attachment #354350 - Flags: review?(brendan)
Attached file Tests (obsolete) —
The push tests all pass with this patch in place.  The slice tests fail with out-of-memory for me, which I think might be expected here, but I'm not 100% certain -- trying a different twist on running them to see whether this was just user error or not...
Attachment #351308 - Attachment is obsolete: true
Attachment #354350 - Flags: review?(brendan)
Comment on attachment 354350 [details] [diff] [review]
Patch implementing current ECMA-262

I figured out how to get the testcase to run and not stop, and it hits one of the assertions I added, eventually.  Need to figure that out first before this is good to go...

Assertion failure: index < MAXINDEX, at /Users/jwalden/moz/js-tm/js/src/jsarray.cpp:449

Hello slow-n.tests!
Waldo, what's up here?
Hm, I forgot I made comment 12 after I posted the patch -- will rerun tests to see where the failure lies...
For backup purposes:

running testArrayPush(4294967294, [foo], false)...
running testArrayPush(4294967294, [foo, bar], true)...
running testArrayPush(4294967294, [foo, bar, baz], true)...
running testArrayPush(4294967295, [foo], true)...
running testArrayPush(4294967295, [foo, bar], true)...
running testArrayPush(4294967295, [foo, bar, baz], true)...
running testArrayUnshift(4294967294, [foo], false)...
running testArrayUnshift(4294967294, [foo, bar], true)...
Assertion failure: index < MAXINDEX, at /Users/jwalden/moz/js-tm/js/src/jsarray.cpp:449
Trace/BPT trap
Attached file Better tests
These tests are basically the past ones, but they include a little more length-testing to give (hopefully) better error messages if they fail.
Attachment #354353 - Attachment is obsolete: true
The better tests pass on OS X for me, at least after I let them run for the necessary eight or nine hours (I think).  Running on Windows now, since I saw some weird failures there with an intermediate version of the patch (failures not reproducible in OS X, oddly)...
any update here?
Attached patch Latest updated patch (obsolete) — Splinter Review
So, honestly, I don't have any idea why the previous patch worked on Mac, because on Windows it hit clear failures that shouldn't have been platform-specific.  This patch should address the problems Windows was encountering -- almost.

There's only one remaining problem with this, at least going from how the tests above run -- we OOM running the unshift tests because, I think, the ids we create for very large indexes aren't being GC'd correctly.  I still haven't figured out what's going wrong with that yet, but it seems like the ids, not being connected to any scope, should go away on their own.  Making the array slow makes the problem go away, but I'm not sure what difference that makes just yet.
Attachment #354350 - Attachment is obsolete: true
Attached file massif report
Attachment #366935 - Flags: review?(jorendorff)
Comment on attachment 366935 [details] [diff] [review]
Latest updated patch

>diff --git a/js/src/jsarray.cpp b/js/src/jsarray.cpp
>--- a/js/src/jsarray.cpp
>+++ b/js/src/jsarray.cpp
>@@ -237,13 +237,14 @@ js_GetLengthProperty(JSContext *cx, JSOb
> }
> static JSBool
>-IndexToValue(JSContext *cx, jsuint index, jsval *vp)
>+IndexToValue(JSContext *cx, jsdouble index, jsval *vp)
> {
>-    if (index <= JSVAL_INT_MAX) {
>-        *vp = INT_TO_JSVAL(index);
>+    jsuint i;
>+    if (JSDOUBLE_IS_INT(index, i)) {
>+        *vp = INT_TO_JSVAL(i);
>         return JS_TRUE;
>     }
>-    return JS_NewDoubleValue(cx, (jsdouble)index, vp);
>+    return JS_NewDoubleValue(cx, index, vp);
> }

This still needs to check that i <= JSVAL_INT_MAX.

>@@ -390,39 +391,48 @@ EnsureCapacity(JSContext *cx, JSObject *
>  * properly rooted and can be used as GC-protected storage for temporaries.
>  */
> static JSBool
>-GetArrayElement(JSContext *cx, JSObject *obj, jsuint index, JSBool *hole,
>+GetArrayElement(JSContext *cx, JSObject *obj, jsdouble index, JSBool *hole,
>                 jsval *vp)
> {
>+    JSAutoTempIdRooter idr(cx);
>     if (index <= JSVAL_INT_MAX) {
>-        id = INT_TO_JSID(index);
>+        *idr.addr() = INT_TO_JSID(index);
>+    } else if (index > jsuint(-1)) {
>+        JSAutoTempValueRooter dval(cx);
>+        if (!js_NewDoubleInRootedValue(cx, index, dval.addr()) ||
>+            !js_ValueToStringId(cx, dval.value(), idr.addr())) {
>+            return JS_FALSE;
>+        }
>+        GC_POKE(cx, ID_TO_JSVAL(idr.id()));
>     } else {
>-        if (!BigIndexToId(cx, obj, index, JS_FALSE, &id))
>+        JS_ASSERT(jsuint(index) == index);
>+        if (!BigIndexToId(cx, obj, jsuint(index), JS_FALSE, idr.addr()))
>             return JS_FALSE;
>-        if (JSVAL_IS_VOID(id)) {
>+        if (JSVAL_IS_VOID(idr.id())) {
>             *hole = JS_TRUE;
>             *vp = JSVAL_VOID;
>             return JS_TRUE;
>         }
>     }

Can you factor this code and the suspiciously similar code in SetArrayElement and DeleteArrayElement into a static function, maybe DoubleToId (or, ReallyBigIndexToId)?  There are a few trivial differences that seem inadvertent to me.

>@@ -433,20 +443,23 @@ SetArrayElement
>     if (OBJ_IS_DENSE_ARRAY(cx, obj)) {
>+        jsuint idx = jsuint(index);
>         /* Predicted/prefeched code should favor the remains-dense case. */
>+        if (index <= jsdouble(jsuint(-1)) && !INDEX_TOO_SPARSE(obj, idx)) {

Micro-nit: can we say it like this?

  if (index <= jsdouble(jsuint(-1))) {
      jsuint idx = jsuint(index);
      if (!INDEX_TOO_SPARSE(obj, idx)) {

No good reason, I just like to see the cast happening after we check that the result is going to be sensible.  Silencing the warnings in my head.

>+InitArrayElements(JSContext *cx, JSObject *obj, jsuint start, jsuint count, jsval *vector)
> {
>+        if (start > MAXINDEX - count || INDEX_TOO_SPARSE(obj, start + count))
>+            goto sparse_case;

I think INDEX_TOO_SPARSE is not quite what you want in this case.  I think it assumes you're adding only one actual element (and possibly a lot of holes) whereas here we might be adding a large solid block of elements.  So I think you'd end up slowifying unnecessarily.

>+    for (; count > 0 && start < MAXINDEX; start++, count--, vector++) {

I prefer the *vector++ idiom used in the preexisting code.  Get a second opinion from Brendan.

You might need to slowify when INDEX_TOO_BIG(start), which is well before you reach MAXINDEX.

>+    do {
>+        tmp[1] = *vector;
> ...
>+        vector++;
>+    } while (--count > 0);

I would prefer *vector++ here too.

>@@ -2399,13 +2475,13 @@ array_unshift(JSContext *cx, uintN argc,
>         if (!InitArrayElements(cx, obj, 0, argc, argv))
>             return JS_FALSE;
>-        length += argc;
>+        newlen = jsdouble(length) + argc;

That could just be `newlen += argc;` unless I'm missing something.

>diff --git a/js/src/shell/js.cpp b/js/src/shell/js.cpp
>--- a/js/src/shell/js.cpp
>+++ b/js/src/shell/js.cpp
>@@ -114,7 +114,7 @@
>  * Limit the timeout to 30 minutes to prevent an overflow on platfoms
>  * that represent the time internally in microseconds using 32-bit int.
>  */
>-static jsdouble MAX_TIMEOUT_INTERVAL = 1800.0;
>+static jsdouble MAX_TIMEOUT_INTERVAL = 9999.0;//1800.0;

You didn't mean to commit this, right?
(In reply to comment #21)

> >+    for (; count > 0 && start < MAXINDEX; start++, count--, vector++) {
> I prefer the *vector++ idiom used in the preexisting code.  Get a second
> opinion from Brendan.

Yeah, one too many induction variables there. Bring back end as a local, avoid count-- on each loop step.

> You might need to slowify when INDEX_TOO_BIG(start), which is well before you
> reach MAXINDEX.
> >+    do {
> >+        tmp[1] = *vector;
> > ...
> >+        vector++;
> >+    } while (--count > 0);
> I would prefer *vector++ here too.

Yes, that is canonical C going back to K&R and we do it everywhere. Inc/dec-op statements arise but if the pointer is being used they should be fused.

Attachment #366935 - Flags: review?(jorendorff)
Attached patch v2 (obsolete) — Splinter Review
Okay, here's a new patch with the requested changes.  I'm guessing only Brendan will get to this, but if you can get to this, Jason, during a vacation, all the better.
Attachment #366935 - Attachment is obsolete: true
Attachment #367826 - Flags: review?(jorendorff)
Attachment #367826 - Flags: review?(brendan)
Comment on attachment 367826 [details] [diff] [review]

> JS_NewNumberValue(JSContext *cx, jsdouble d, jsval *rval)
> {
>-    jsint i;
>-    CHECK_REQUEST(cx);
>-    if (JSDOUBLE_IS_INT(d, i) && INT_FITS_IN_JSVAL(i)) {
>-        *rval = INT_TO_JSVAL(i);
>-        return JS_TRUE;
>-    }
>-    return JS_NewDoubleValue(cx, d, rval);
>+    CHECK_REQUEST(cx);
>+    return js_NewNumberValue(cx, d, rval);

Igor made a split between js_NewDoubleInRootedValue and js_NewWeaklyRootedDouble, to make callers think twice about rooting. This factoring of js_NewNumberValue goes against that. More below.

> static JSBool
>+IndexToValue(JSContext *cx, jsdouble index, jsval *vp)
> {
>-    if (index <= JSVAL_INT_MAX) {
>-        *vp = INT_TO_JSVAL(index);
>-        return JS_TRUE;
>-    }
>-    return JS_NewDoubleValue(cx, (jsdouble)index, vp);
>+    return js_NewNumberValue(cx, index, vp);
> }

Here there's no relative harm, of course, in the factoring out of js_NewNumberValue. The caller must protect *vp.

>+static JSBool
>+ReallyBigIndexToId(JSContext* cx, jsdouble index, jsid* idp)
>+{
>+    JSAutoTempValueRooter dval(cx);
>+    if (!js_NewDoubleInRootedValue(cx, index, dval.addr()) ||
>+        !js_ValueToStringId(cx, dval.value(), idp)) {

Here we see that the caller of *IndexTo* must protect *vp or *idp.

>+        return JS_FALSE;
>+    }
>+    GC_POKE(cx, ID_TO_JSVAL(*idp));

GC_POKE is going away (bug 482038), so don't add it here (if it weren't, we'd either want to pass the old value that was already overwritten in *idp, or a dummy).

>+    JSAutoTempIdRooter idr(cx);
>     if (index <= JSVAL_INT_MAX) {
>+        *idr.addr() = INT_TO_JSID(index);
>+    } else if (index <= jsuint(-1)) {
>+        JS_ASSERT(jsuint(index) == index);
>+        if (!BigIndexToId(cx, obj, jsuint(index), JS_FALSE, idr.addr()))
>             return JS_FALSE;
>+        if (JSVAL_IS_VOID(idr.id())) {
>             *hole = JS_TRUE;
>             *vp = JSVAL_VOID;
>             return JS_TRUE;
>         }
>+    } else {
>+        if (!ReallyBigIndexToId(cx, index, idr.addr()))
>+            return JS_FALSE;
>     }

Something like this if-else three-case structure repeats multiple times (with varying hold handling) -- make a helper?

Good rooting.
>+SetArrayElement(JSContext *cx, JSObject *obj, jsdouble index, jsval v)
> {
>+    JS_ASSERT(index >= 0);
>     if (OBJ_IS_DENSE_ARRAY(cx, obj)) {
>+        /* Predicted/prefetched code should favor the remains-dense case. */
>+        if (index <= jsdouble(jsuint(-1))) {

There's no need for jsdouble casting here (note lack of it in GetArrayElement when testing index < js_DenseArrayCapacity(obj)). uint32 promotes to double losslessly.

>+            jsuint idx = jsuint(index);
>+            if (!INDEX_TOO_SPARSE(obj, idx)) {
>+                JS_ASSERT(idx + 1 > idx);
>+                if (!EnsureCapacity(cx, obj, idx + 1))
>+                    return JS_FALSE;
>+                if (index >= (uint32)obj->fslots[JSSLOT_ARRAY_LENGTH])

C-style cast alert!

>+DeleteArrayElement(JSContext *cx, JSObject *obj, jsdouble index)
> {
>+    JS_ASSERT(index >= 0);
>     if (OBJ_IS_DENSE_ARRAY(cx, obj)) {
>+        jsuint idx = jsuint(index);
>+        if (index <= jsuint(-1) && !INDEX_TOO_SPARSE(obj, idx) &&
>+            idx < js_DenseArrayCapacity(obj)) {

Move index <= jsuint(-1) test into an outer if to delay declaration and init of idx.

>+InitArrayElements(JSContext *cx, JSObject *obj, jsuint start, jsuint count, jsval *vector)
> {
>+    JS_ASSERT(count < MAXINDEX);
>     if (OBJ_IS_DENSE_ARRAY(cx, obj)) {
>+        /*
>+         * If we're treading into non-array indexes or into indexes which are
>+         * sparse, bail to the slow case (which also handles the array
>+         * conversion).
>+         */
>+        if (start > MAXINDEX - count || INDEX_TOO_BIG(start + count))
>+            goto sparse_case;

Downward gotos are more harmful in C++ than C, but even in old SpiderMonkey this is not-so-good disjuctive control flow. Multi-level loop breaking, sure. Here it seems better to put the following forward-predicted/prefetched code in the "then clause" and avoid the goto.

>+  sparse_case:
>+    jsval* end = vector + count;
>+    while (vector != end && start < MAXINDEX) {
>         if (!JS_CHECK_OPERATION_LIMIT(cx) ||
>             !SetArrayElement(cx, obj, start++, *vector++)) {
>             return JS_FALSE;
>         }
>     }
>+    if (vector == end)
>+        return JS_TRUE;

This is good disjunctive control flow, since the subsequent code is unlikely, abnormal.

>+ * Create an integer or double jsval as appropriate for the given jsdouble.
>+ */
>+extern JSBool
>+js_NewNumberValue(JSContext *cx, jsdouble d, jsval *vp);

Comment should say the new double is weakly rooted, so callers should make sure vp is a root (or *vp is rooted, same meaning).

To come back to the initial comment above, js_NewWeaklyRootedNumber might be the better name, even though there's no need to root an int jsval. Thoughts?

Everything in previous comment done save the index-to-id unification, into which I seem to have introduced a bug that I'm working on ferreting out now.  More on that later tonight.
js_NewWeaklyRootedNumber is a good name, makes nice parallels to existing names.

I considered unifying the index-to-id paths in the previous patch but decided against it because it was going to get hairy for big-but-not-really-big indexes when holes could be detected.  I've implemented it here, and I don't really think it's a win -- still pretty complicated, and you have to pass in a number of useless arguments to make it work.  Read and decide for yourselves, but I don't think it's useful.
Attachment #367826 - Attachment is obsolete: true
Attachment #368199 - Flags: review?(jorendorff)
Attachment #368199 - Flags: review?(brendan)
Attachment #367826 - Flags: review?(jorendorff)
Attachment #367826 - Flags: review?(brendan)
Comment on attachment 368199 [details] [diff] [review]
With previous comments addressed

>+static JSBool
>+IndexToId(JSContext* cx, JSObject* obj, jsdouble index, JSBool* hole, JSBool setting, jsid* idp)

Nit: use bool now in statics and other contexts where we don't run into ABI issues. Hope I am not missing a Windows reason not to do this!

Three calls, with hole/setting combos (hole, false), (null, false), and (null, true). This suggests making setting a trailing param with default value false. Also, I think it's best to rename setting to createAtom to match what BigIndexToId does.

Looks great otherwise, thanks.

Attachment #368199 - Flags: review?(brendan) → review+ (with bool in new method signatures and with createAtom trailing defaulted argument)
OS: Mac OS X → All
Hardware: x86 → All
Whiteboard: [sg:critical?] → [sg:critical?] fixed-in-tracemonkey
Closed: 12 years ago
Resolution: --- → FIXED
Regression bug should block regressing bug, so release drivers know the patch for the regression bug should be taken along with the patch for the regressing bug.

Blocks: 484750
No longer depends on: 484750
Of course, I was reading and updating the wrong bug... D'oh!

No longer blocks: 484750
Depends on: 484750
Attachment #368199 - Flags: review?(jorendorff) → review+
Depends on: 488989
Flags: in-testsuite+
catch range error
Attachment #373914 - Attachment is obsolete: true
Verified fixed with testcase given in comment 0 on trunk and 1.9.1 with the
following debug builds:

Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2a1pre)
Gecko/20090422 Minefield/3.6a1pre ID:20090422224452

Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1b4pre)
Gecko/20090422 Shiretoko/3.5b4pre ID:20090422122043
Target Milestone: --- → mozilla1.9.2a1
Depends on: 486326
on 1.9.0 !exploitable reports:
Exploitable - User Mode Write AV starting at MSVCR80!memcpy
Flags: blocking1.9.0.11?
We definitely want this on 1.9.0.x. Will this patch (and the ones in the regressions) apply as-is or will it require back-porting due to trace-monkey changes? Code-freeze for is Wednesday May 6 so might be safer and more realistic.
Flags: wanted1.9.0.x+
Needs backporting, and may need bug 486326 first as well depending on how much of a concern performance parity is on a stable branch...
Flags: blocking1.9.0.11? → blocking1.9.0.12?
Flags: blocking1.9.0.12? → blocking1.9.0.12+
Attached patch Backport to 190, v1 (obsolete) — Splinter Review
Okay, this is kind of a mess.  Lack of autotvrs, and tvrs that just work with jsids rather than with jsvals (look for the hideous jsval punning I use at one point to simplify a bit of code) messes things up pretty well for starters.  Next, js_PrototypeHasIndexedProperties had to be backported for the dense array optimizations to be valid, which means small but potentially complex scope-modification changes.  The terminology is dense length, not capacity, in 190.

Most of the changes couldn't really be removed, but I did omit the array_reverse changes on the grounds that they're probably not important on the web and that they're not worth the trouble for people to review.

The one plus to all of this is that several of the deps are TM-only and not relevant to 190.

Careful eyes appreciated on this, particularly on the rooting and early-returns when leaving via goto out -- the current rooting of large array index jsids actually looks a bit dodgy as there's nothing rooting them around, but I don't know whether a premature collection of the underlying JSStrings could actually have bad results or not right now.
Attachment #379282 - Flags: review?(jorendorff)
Attachment #379282 - Flags: review?(igor)
Attachment #379282 - Flags: review?(brendan)
The previous patch regressed no JS tests in the 190 tree, and I manually tested that the test fodder in this bug and deps work correctly.  bc's help in running against a test suite with confidential tests in it, to double-check that I didn't miss anything in that testing, would be superb.
I see no regressions with the patch on mac os x 10.5.7. js1_5/Array/regress-465980-02.js no longer crashes on 1.9.0 with the patch but still fails with:
Expected value '', Actual value 'unexpected modified-array length for testArrayPush(4294967294, [foo], false, 4294967295)'

Note that 1.9.1 and 1.9.2 do not crash but do fail js1_5/Array/regress-465980-02.js with either a time out, no test results reported or an out of memory error.
Can 190 take a tactical fix for the crash instead of this whole patch? It's huge and IIRC it changes behavior in some odd corner cases.

Separately:  Weeks of *not* thinking about this bug have rekindled suspicions that surely this fix (the one we checked in) is a bit much.  I'll chat with Jeff about it.
I repeat what I said at the time: the array methods are not specified to be implemented using unsigned 32-bit integers, and the wraparound behavior we have now with them is simply incorrect.  The necessary fix for this is to use doubles to represent the array indexes during calculations, as this bug implemented.

Of the three regressions, one is a trivial assertion-botch easily fixed, the second is a similarly small thinko, and the third is a performance regression due to floating-point arithmetic and conversions being slow.  Only the third is a meaningful change, but if performance is to be maintained we must avoid those floating-point conversions when easily possible, as that bug implements.  If we don't care about a noticeable performance regression in 3.0 that happens on our own perf tests and elsewhere -- I think we do care -- then we can omit those parts of the patch.  I'm amenable to doing so if we want to reduce complexity, but I don't think we've fully considered the consequences of doing so, not enough to make an informed choice.
The whole faux-uint32 index and length business in Arrays is a waste, isn't it? It might allow uint32 storage but that's it. Really a botch. It was a committee compromise, IIRC. The old Netscape implementation didn't use uint32 length or anything like that, IIRC (my memory of that code has faded, though).

I agree (save for the ECMA and Netscape comments, where I lack the knowledge to do so).  The spec is what it is now, and the days when we could have made it closer to our implementation (better, something saner) are gone -- need to make the implementation conform to the spec now (and just as important for this bug's sake, fix the security hole).
Whiteboard: [sg:critical?] fixed-in-tracemonkey → [sg:critical?][needs review or new patch?] fixed-in-tracemonkey
Comment on attachment 379282 [details] [diff] [review]
Backport to 190, v1

Whew.  I only have a few nits.

>+    if (*hole || id == JSVAL_VOID) {

JSVAL_IS_VOID(id).  Also I think *hole here is redundant: if *hole is true, we
already know that id is void, so there's no need to check both.

>+    if (JSID_IS_ATOM(id))
>+  out:
>+    if (JSID_IS_ATOM(id))
>+        JS_POP_TEMP_ROOT(cx, &tvr);

It might be better to push a single-value root for ID_TO_VALUE(id) unconditionally.

>+     JS_ASSERT(index >= 0);

Extra space of indentation here.

>     * Walk up the prototype chain and see if this indexed element already

"this" doesn't make sense in context.  (The same comment is in tm tip and it
doesn't make sense there either, unless I'm missing something.)
Attachment #379282 - Flags: review?(jorendorff) → review+
Attached patch Addresses comments (obsolete) — Splinter Review
I agree about the redundancy, just checked *hole; also just pushed a jsval unconditionally, far simpler; removed the extra space; and changed "this" to "an" in the comment.  Reran 190 JS tests, same results with them as before, plus these changes are fairly simple.  Think we're good now, just need approval!
Attachment #379282 - Attachment is obsolete: true
Attachment #383592 - Flags: review+
Attachment #383592 - Flags: approval1.9.0.12?
Attachment #379282 - Flags: review?(igor)
Attachment #379282 - Flags: review?(brendan)
Redundancy in assertions is good, and often JS_ASSERT_IF comes in handy:

    JS_ASSERT_IF(*hole, id == JSVAL_VOID);

Please do prefer to add assertions such as this where they are not vacuous.

I didn't add the assert previously because, from code inspection, it did seem vacuous, but it does take a minute to track through IndexToId to verify that, so at the least it isn't obvious that it's *not* worth doing, so might as well do it.  Here's a new patch posted for the benefit of any interested parties who will end up applying the patch manually so they don't have to do post-application modifications.
Attachment #383592 - Attachment is obsolete: true
Attachment #383764 - Flags: approval1.9.0.12?
Attachment #383592 - Flags: approval1.9.0.12?
Comment on attachment 383764 [details] [diff] [review]
With JS_ASSERT(*hole == (id == JSVAL_VOID)) added

Approved for, a=dveditz for release-drivers
Attachment #383764 - Flags: approval1.9.0.12? → approval1.9.0.12+
Attachment 383764 [details] [diff] checked into 190, 20090617 15:59.
Keywords: fixed1.9.0.12
Whiteboard: [sg:critical?][needs review or new patch?] fixed-in-tracemonkey → [sg:critical?] fixed-in-tracemonkey
Does not affect 1.8.
Flags: wanted1.8.1.x-
Group: core-security

/cvsroot/mozilla/js/tests/js1_5/Array/regress-465980-01.js,v  <--  regress-465980-01.js
initial revision: 1.1

/cvsroot/mozilla/js/tests/js1_5/Array/regress-465980-02.js,v  <--  regress-465980-02.js
initial revision: 1.1
Depends on: 568870
Crash Signature: [@ @0xffff07c7 - InitArrayElements]