Extending dense array beyond 0xf9ffffe fails silently.

RESOLVED FIXED in Firefox 44





(Reporter: arai, Assigned: arai)




Firefox Tracking Flags

(firefox43 wontfix, firefox44 fixed)


(Whiteboard: [post-critsmash-triage][adv-main44-])


(3 attachments, 5 obsolete attachments)

Steps To Reproduce:
  evaluate the following in the js shell:
    let a = []; a.length = 0x0f9fffff; let b = [...a]; b.length;

Expected Result:
  either it prints the length of b (= 0x0f9fffff), or it throws an exception if the operation fails

Actual Result:
  it prints nothing and the next prompt is shown
    js> let a = []; a.length = 0x0f9fffff; let b = [...a]; b.length;

In the JSOP_INITELEM_INC operation, if the index is 0x0f9ffffe, NativeObject::growElements fails because newCapacity >= NELEMENTS_LIMIT, but nothing reports an error for that failure.

I guess this bug itself won't be security-critical, but it might be related to bug 1201783, so I'm marking it as a security bug just in case.

I'll post the trace for this issue in the next comment.
Here is the trace for problematic JSOP_INITELEM_INC operation:

> Interpret(JSContext* cx, RunState& state)
> ...
> {
> ...
>     uint32_t index = REGS.sp[-2].toInt32();
>     if (!InitArrayElemOperation(cx, REGS.pc, obj, index, val))
>         goto error;

index is 0x0f9ffffe.

> InitArrayElemOperation(JSContext* cx, jsbytecode* pc, HandleObject obj, uint32_t index, HandleValue val)
> ...
>         if (!DefineElement(cx, obj, index, val, nullptr, nullptr, JSPROP_ENUMERATE))
>             return false;

> js::DefineElement(ExclusiveContext* cx, HandleObject obj, uint32_t index, HandleValue value,
> ...
>    return DefineProperty(cx, obj, id, value, getter, setter, attrs, result);

> js::DefineProperty(ExclusiveContext* cx, HandleObject obj, HandleId id, HandleValue value,
> ...
>     if (!DefineProperty(cx, obj, id, value, getter, setter, attrs, result))
>         return false;

> js::DefineProperty(ExclusiveContext* cx, HandleObject obj, HandleId id, HandleValue value,
> ...
>     return NativeDefineProperty(cx, obj.as<NativeObject>(), id, desc, result);

> js::NativeDefineProperty(ExclusiveContext* cx, HandleNativeObject obj, HandleId id,
> ...
>        if (!AddOrChangeProperty(cx, obj, id, desc))
>            return false;

> AddOrChangeProperty(ExclusiveContext* cx, HandleNativeObject obj, HandleId id,
> ...
>         uint32_t index = JSID_TO_INT(id);
>         DenseElementResult edResult = obj->ensureDenseElements(cx, index, 1);

index is still 0x0f9ffffe.

> NativeObject::ensureDenseElements(ExclusiveContext* cx, uint32_t index, uint32_t extra)
> ...
>     DenseElementResult result = extendDenseElements(cx, requiredCapacity, extra);
>     if (result != DenseElementResult::Success)
>         return result;

requiredCapacity is 0x0f9fffff (= index + 1).

> NativeObject::extendDenseElements(ExclusiveContext* cx,
> ...
>     if (!growElements(cx, requiredCapacity))
>         return DenseElementResult::Failure;

> NativeObject::growElements(ExclusiveContext* cx, uint32_t reqCapacity)
> ...
>        newAllocated = goodAllocated(reqAllocated, getElementsHeader()->length);

reqCapacity is 0x0f9fffff.
reqAllocated is 0x0fa00001 (=reqCapacity + ObjectElements::VALUES_PER_HEADER).

> NativeObject::goodAllocated(uint32_t reqAllocated, uint32_t length = 0)
> ...
>     static const uint32_t BigBuckets[] = {
> ...
>        143654912, 162529280, 183500800, 206569472, 232783872, 262144000,
>        295698432, 333447168, 375390208, 422576128, 476053504, 535822336,
> ...
>             if (b >= goodAllocated) {
>                 // Found the first bucket greater than or equal to
>                 // |goodAllocated|.
>                 goodAllocated = b;
>                 break;

reqAllocated is 0x0fa00001. goodAllocated is 0x11a00000, because 262144000(=0x0fa00000) < reqAllocated <= 295698432(=0x11a00000)
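The bucket lookup above can be modeled with a short sketch (only the two bucket values relevant to this trace are included, and the helper name `pickBucket` is hypothetical):

```javascript
// Sketch of goodAllocated's large-allocation path: pick the first
// BigBuckets entry that is >= the requested allocation amount.
const BigBuckets = [262144000, 295698432]; // 0x0fa00000, 0x11a00000

function pickBucket(reqAllocated) {
  for (const b of BigBuckets) {
    if (b >= reqAllocated)
      return b; // found the first bucket >= reqAllocated
  }
  return reqAllocated; // beyond the last bucket: no rounding in this sketch
}

// 0x0fa00001 is one past the 0x0fa00000 bucket, so it rounds up to 0x11a00000.
console.log(pickBucket(0x0fa00001).toString(16)); // "11a00000"
```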

> NativeObject::growElements(ExclusiveContext* cx, uint32_t reqCapacity)
> ...
>     // Don't let nelements get close to wrapping around uint32_t.
>     if (newCapacity >= NELEMENTS_LIMIT)
>         return false;

newCapacity is 0x119ffffe and NELEMENTS_LIMIT is 0x10000000, so |newCapacity >= NELEMENTS_LIMIT| is true and NativeObject::growElements fails.  But no function reports an error up to Interpret, and it fails silently.
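The whole failing chain can be replayed with plain arithmetic (a sketch; the goodAllocated value 0x11a00000 is taken from the bucket rounding shown in the trace above):

```javascript
// Replaying the trace: index -> reqCapacity -> reqAllocated -> newCapacity.
const VALUES_PER_HEADER = 2;     // ObjectElements::VALUES_PER_HEADER
const NELEMENTS_LIMIT = 1 << 28; // 0x10000000

const index = 0x0f9ffffe;
const reqCapacity = index + 1;                         // 0x0f9fffff
const reqAllocated = reqCapacity + VALUES_PER_HEADER;  // 0x0fa00001
const goodAllocated = 0x11a00000;                      // bucket rounding, per the trace
const newCapacity = goodAllocated - VALUES_PER_HEADER; // 0x119ffffe

// newCapacity >= NELEMENTS_LIMIT, so growElements returns false -- silently.
console.log(newCapacity.toString(16), newCapacity >= NELEMENTS_LIMIT); // "119ffffe" true
```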
It seems this is not limited to spread or holes; any long dense array is affected.
  js> let a = []; for (let i = 0; i <= 0x0f9fffff; i++) a[i] = i; a.length
  js> a.length.toString(16)

A sparse array throws OOM:
  js> let a = []; for (let i = 0x0e9fffff; i <= 0x0f9fffff; i++) a[i] = i; a.length
  uncaught exception: out of memory
Summary: Array spread with too many holes fails silently. → Extending dense array beyond 0xf9ffffe fails silently.
I guess the last element of BigBuckets before the sentinel should be 0xfffffff, as all other values beyond 0x10000000 fail because of the 28-bit bound.
>     /* Upper bound on the number of elements in an object. */
>     static const uint32_t NELEMENTS_LIMIT = JS_BIT(28);

Does this mean that we cannot have NELEMENTS_LIMIT elements in a dense array, and that storing the NELEMENTS_LIMIT-th (or a nearby) element should throw an exception?
I managed to allocate an array with length 0xffffffd (0xfffffff - ObjectElements::VALUES_PER_HEADER), and storing the 0xffffffd-th element fails silently, because it exceeds the 28-bit bound in growElements.

If it *should* fail, we need to report the error from somewhere in the call stack.
Assignee: nobody → arai.unmht
looks like we should make the array sparse when storing the 0xffffffd-th element, not the 0x10000000-th one, because of the ObjectElements::VALUES_PER_HEADER margin.
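The margin arithmetic, spelled out (a sketch using the constants quoted above):

```javascript
// The 28-bit bound applies to the whole allocation, header included,
// so the maximum usable dense length is smaller by VALUES_PER_HEADER.
const VALUES_PER_HEADER = 2;     // ObjectElements::VALUES_PER_HEADER
const NELEMENTS_LIMIT = 1 << 28; // 0x10000000

const maxAllocated = NELEMENTS_LIMIT - 1;                // 0xfffffff
const maxDenseLength = maxAllocated - VALUES_PER_HEADER; // 0xffffffd

console.log(maxDenseLength.toString(16)); // "ffffffd"
```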
Comment on attachment 8657401 [details] [diff] [review]
Part 1: Fix the maximum length of a dense array.

Prepared 3 patches.

Part 1 fixes the allocation size for |length >= 0xf9ffffd|; previously the next allocation size was 0x11a00000, which exceeds the 28-bit bound.  So I reduced it to 0xfffffff and removed the remaining entries.  Also changed dec to hex for readability.
Attachment #8657401 - Flags: review?(jwalden+bmo)
Attachment #8657401 - Attachment description: (WIP) Part 1: Fix the maximum length of a dense array. → Part 1: Fix the maximum length of a dense array.
Part 2 fixes the behavior when extending a dense array's length to 0xfffffff, which doesn't exceed the 28-bit bound, but which should become a sparse array because of VALUES_PER_HEADER.
Also, reduced the maximum array literal length from NELEMENTS_LIMIT to NELEMENTS_LIMIT - VALUES_PER_HEADER.  IIUC, an array literal without spread should always be dense, right?
Attachment #8657425 - Flags: review?(jwalden+bmo)
Part 3 adds an OOM error report when NativeObject::ensureElements fails because of exceeding DENSE_ARRAY_LIMIT.  This can happen in an array comprehension, which assumes that the result array is always dense.

I once prepared the following test, but it hits the timeout even on an opt build :/

  let a = [];
  a.length = 0x10000000;
  let b = [for (i of a) i];

   time[s]| options
     17.6 | ""
    100.7 | "--ion-eager --ion-offthread-compile=off"
    103.1 | "--ion-eager --ion-offthread-compile=off --non-writable-jitcode --ion-check-range-analysis --ion-extra-checks --no-sse3 --no-threads"
     17.9 | "--baseline-eager"
     18.1 | "--baseline-eager --no-fpu"
    366.4 | "--no-baseline --no-ion"    <- this hits timeout

So currently this patch has no test.

Apart from that one, jstests and jit-tests pass locally.
Attachment #8657426 - Flags: review?(jwalden+bmo)
Group: core-security → javascript-core-security
Keywords: sec-other
I started trying to review your patch, and a mess of nits piled up such that it seemed easiest to just fix the patch to do what I wanted instead.  So I did.

Salient distinctions:

* the small/large distinction is handled with an early-returning if, followed by the large-case code -- now, anyone caring about a small reqAllocation (say, when debugging) can entirely ignore the large-case code, including BigBuckets and explanatory comments
* NELEMENTS_MAX is gone, replaced by NELEMENTS_LIMIT - 1 directly
* I rearranged a bunch of the comments so the comment atop the function is small and topical, and the comments inside the function are detailed, without being repetitive
* BigBuckets isn't implicitly terminated, but rather treated with a safer range-based for
* |goodAllocated| the function no longer contains a variable named |goodAllocated| (!)
Attachment #8662771 - Flags: review?(arai.unmht)
Attachment #8657401 - Attachment is obsolete: true
Attachment #8657401 - Flags: review?(jwalden+bmo)
Comment on attachment 8662771 [details] [diff] [review]
Part 1: Fix the maximum length of a dense array (alterna-patch)

Review of attachment 8662771 [details] [diff] [review]:

Thank you for brushing up :)
This looks so much better, and easy to trace.
Attachment #8662771 - Flags: review?(arai.unmht) → review+
So far, I tried to confirm the behavior and what the comment says about the "small" case.

A) length = (8192 - 2) * 3 / 2 - 1

  let N = (8192 - 2) * 3 / 2 - 1;
  let a = new Array(N);
  for (let i = 0; i < N; i++) a[i] = i;

length is 12284.  When inserting the 4094-th element, reqCapacity is 4095 and goodCapacity is 8190, so this satisfies the |length >= reqCapacity && goodCapacity > (length / 3) * 2| condition, and goodAllocated (goodAmount) is changed *up* to 12286.

Through the execution, the allocation size grows as follows:
  2048 => 4096 => 12286
             triple (the comment says "at most triple")

Without that rule, it grows as follows:
  2048 => 4096 => 8192 => 16384
So, this rule saves one re-allocation and avoids allocating unnecessary elements.

B) length = (8192 - 2) * 3 / 2

  let N = (8192 - 2) * 3 / 2;
  let a = new Array(N);
  for (let i = 0; i < N; i++) a[i] = i;

length is 12285.  When inserting the 4094-th element, reqCapacity is 4095 and goodCapacity is 8190, so this doesn't satisfy the |length >= reqCapacity && goodCapacity > (length / 3) * 2| condition, and goodAllocated stays 8192.
When inserting the 8190-th element, reqCapacity is 8191 and goodCapacity is 16382, so this satisfies the |length >= reqCapacity && goodCapacity > (length / 3) * 2| condition, and goodAllocated is changed *down* to 12287.

Through the execution, the allocation size grows as follows:
  2048 => 4096 => 8192 => 12287

Without that rule, it grows as follows:
  2048 => 4096 => 8192 => 16384
So, this rule also avoids allocating unnecessary elements.

The lengths in the above examples won't be common, though; it looks like the rule works nicely when storing the i-th element where i < length.
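The two sequences can be reproduced with a simplified model of the growth rule (reconstructed from the traces in this comment; `nextAllocated` and `simulate` are hypothetical names, and the real code differs in details):

```javascript
// Simplified model: allocation doubles, but when the final length is known
// and the doubled capacity overshoots it, allocate length + header instead.
const VALUES_PER_HEADER = 2;

function nextAllocated(oldAllocated, reqCapacity, length) {
  const reqAllocated = reqCapacity + VALUES_PER_HEADER;
  let good = Math.max(oldAllocated * 2, 2048);
  while (good < reqAllocated)
    good *= 2;
  const goodCapacity = good - VALUES_PER_HEADER;
  if (length >= reqCapacity && goodCapacity > Math.floor(length / 3) * 2)
    good = length + VALUES_PER_HEADER; // clamp to the known final length
  return good;
}

// Record every allocation made while filling an array of length N.
function simulate(N) {
  const steps = [];
  let allocated = 0;
  for (let i = 0; i < N; i++) {
    const reqCapacity = i + 1;
    if (reqCapacity + VALUES_PER_HEADER > allocated) {
      allocated = nextAllocated(allocated, reqCapacity, N);
      steps.push(allocated);
    }
  }
  return steps;
}

console.log(simulate((8192 - 2) * 3 / 2 - 1)); // [ 2048, 4096, 12286 ]
console.log(simulate((8192 - 2) * 3 / 2));     // [ 2048, 4096, 8192, 12287 ]
```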
Comment on attachment 8657425 [details] [diff] [review]
Part 2: Make an array sparse when exceeds the limit of dense array length.

Review of attachment 8657425 [details] [diff] [review]:

::: js/src/vm/NativeObject.h
@@ +885,5 @@
>      /* Upper bound on the number of elements in an object. */
>      static const uint32_t NELEMENTS_LIMIT = JS_BIT(28);
> +    /* Dense array is allocated with length + ObjectElements::VALUES_PER_HEADER,
> +     * so it should become sparse when the number exceeds NELEMENTS_LIMIT. */
> +    static const uint32_t DENSE_ARRAY_LIMIT = NELEMENTS_LIMIT - ObjectElements::VALUES_PER_HEADER;

Hmm, these names seem easy to confuse now, i.e. using one name when the other was meant.  Can we make them more distinctively state their purposes?  

    // The maximum size, in sizeof(Value), of the allocation used for an
    // object's dense elements.  (This includes space used to store an
    // ObjectElements instance.)
    static const uint32_t MAX_DENSE_ELEMENTS_ALLOCATION = (1 << 28);

    // The maximum number of usable dense elements in an object.
    static const uint32_t MAX_DENSE_ELEMENTS_COUNT =

Currently, NELEMENTS_LIMIT is a nice binary power, *but* all the code I see treats it as a limit through >= comparisons.  So if someone requests a 2**28 allocation, we actually will treat this as an *error*.  That's nuts.  Making this the actual maximum, as I have it above, seems much better and should eliminate this problem.  (Although we'll have to change possibly all the comparisons to these two values to account for the differing meaning.)
Thank you for reviewing

I agree that these names are better :)
So, we're going to change these ">=" to ">", and one more element will be allowed to be allocated, right?  (now the allocation *limit* is (1 << 28) + 1)
Comment on attachment 8657425 [details] [diff] [review]
Part 2: Make an array sparse when exceeds the limit of dense array length.

Yeah.  (Or < comparisons will turn to <=, of course.)  Clearing reviews pending patches with the fresher names.
Attachment #8657425 - Flags: review?(jwalden+bmo)
Attachment #8657426 - Flags: review?(jwalden+bmo)
Does "slot" include ObjectElements::VALUES_PER_HEADER?

>        static_assert((NativeObject::NELEMENTS_LIMIT - 1) <= INT32_MAX / sizeof(JS::Value),
this will become the following (am I correct?):
        static_assert(NativeObject::MAX_DENSE_ELEMENTS_ALLOCATION <= INT32_MAX / sizeof(JS::Value),
but the assertion fails  (NativeObject::MAX_DENSE_ELEMENTS_ALLOCATION = (1 << 28), INT32_MAX = (1 << 31) - 1, sizeof(JS::Value) = 8)

If the above change is correct, we cannot change the *limit*, and we should define MAX_DENSE_ELEMENTS_ALLOCATION as |(1 << 28) - 1|
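The arithmetic behind the failing static_assert, as a quick check (sizeof(JS::Value) = 8, as noted above; integer division mirrors the C++ comparison):

```javascript
// (1 << 28) values of 8 bytes each is exactly 2^31 bytes, one more than
// INT32_MAX -- so the constant must be (1 << 28) - 1 for the assert to hold.
const INT32_MAX = 0x7fffffff; // (1 << 31) - 1
const sizeofValue = 8;        // sizeof(JS::Value)
const bound = Math.floor(INT32_MAX / sizeofValue); // 0x0fffffff

const fitsAtPower = (1 << 28) <= bound;        // false: assertion fails
const fitsBelowPower = (1 << 28) - 1 <= bound; // true: (1 << 28) - 1 fits

console.log(fitsAtPower, fitsBelowPower); // false true
```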
Flags: needinfo?(jwalden+bmo)
Huh, you're right.  I guess that's why the allocation size is odd that way.  Looks like adding back in the - 1 is the thing to do.
Flags: needinfo?(jwalden+bmo)
Changed the names, and LIMIT to MAX.  Also fixed some more usages that seem to intend count/capacity instead of allocation.
Attachment #8657425 - Attachment is obsolete: true
Attachment #8663221 - Flags: review?(jwalden+bmo)
Attachment #8657426 - Attachment is obsolete: true
Attachment #8663223 - Flags: review?(jwalden+bmo)
Comment on attachment 8663221 [details] [diff] [review]
Part 2: Make an array sparse when exceeds the limit of dense array length.

Review of attachment 8663221 [details] [diff] [review]:

As noted on IRC, I have *not* reviewed the NativeObject.cpp bits of this yet, but everything else is more or less fine modulo comments.  I'll pick up on NativeObject.cpp after dinner.

::: js/src/frontend/Parser.cpp
@@ +8764,5 @@
>          bool spread = false, missingTrailingComma = false;
>          uint32_t index = 0;
>          TokenStream::Modifier modifier = TokenStream::Operand;
>          for (; ; index++) {
> +            if (index == NativeObject::MAX_DENSE_ELEMENTS_COUNT) {

Could you make this >=, just in case something goes horrifically wrong somehow and |index| goes totally nutso?  We'd already be off the rails, but doing it this way might conceivably limit the damage in that case, if it happened.

::: js/src/jit/CodeGenerator.cpp
@@ +4302,5 @@
>      Register tempReg = ToRegister(lir->temp());
>      JSObject* templateObject = lir->mir()->templateObject();
>      DebugOnly<uint32_t> count = lir->mir()->count();
> +    MOZ_ASSERT(count <= NativeObject::MAX_DENSE_ELEMENTS_COUNT);

Could you rename |count| and |count()| in the line just above to |length()|?  The "count" of an array is not at all the normal terminology; length, I think the average reader would understand far more readily.

Probably a separate patch, landed before/beneath this one.  Post here and I'll stamp it quickly -- it's right at the edge of r=me territory, but we're talking arrays and lengths and out-of-bounds and then I start getting scared about moving too fast on this without a second set of eyes.  :-)

::: js/src/jit/MCallOptimize.cpp
@@ +439,5 @@
>          // The next several checks all may fail due to range conditions.
>          trackOptimizationOutcome(TrackedOutcome::ArrayRange);
>          // Negative lengths generate a RangeError, unhandled by the inline path.

Assert initLength is non-negative, for good measure?

::: js/src/jit/MIR.cpp
@@ +4115,5 @@
>          MOZ_ASSERT(templateObject()->as<UnboxedArrayObject>().capacity() >= count());
>          return !templateObject()->as<UnboxedArrayObject>().hasInlineElements();
>      }
> +    MOZ_ASSERT(count() <= NativeObject::MAX_DENSE_ELEMENTS_COUNT);

Gah, stupid count() name.  :-)

::: js/src/jit/RangeAnalysis.cpp
@@ +1733,5 @@
>  void
>  MInitializedLength::computeRange(TempAllocator& alloc)
>  {
> +    setRange(Range::NewUInt32Range(alloc, 0, NativeObject::MAX_DENSE_ELEMENTS_COUNT + 1));

I think this was wrong before.  Given

MClz::computeRange(TempAllocator& alloc)
    setRange(Range::NewUInt32Range(alloc, 0, 32));

in our current range analysis code, it looks like the bounds are inclusive -- and initializedLength is at most the count of dense elements.

So remove the | + 1| here, please.

::: js/src/jsgc.h
@@ +180,5 @@
>       * maximum number of fixed slots is needed then the fixed slots will be
>       * unused.
>       */
>      JS_STATIC_ASSERT(ObjectElements::VALUES_PER_HEADER == 2);
> +    if (numSlots > NativeObject::MAX_DENSE_ELEMENTS_ALLOCATION ||

These are the GetGCArrayKind hits:

[jwalden@find-waldo-now js]$ ack GetGCArrayKind
175:GetGCArrayKind(size_t numSlots)

3491:        return GetBackgroundAllocKind(GetGCArrayKind(nelements));

810:        return gc::GetGCArrayKind(numSlots);

The first is the definition, ignorable.

The second is:

        size_t nelements = aobj.getDenseCapacity();
        return GetBackgroundAllocKind(GetGCArrayKind(nelements));

But capacities are element-count-based, not slot-allocation-count-based.  So by this hit, the comparison should be against MAX_DENSE_ELEMENTS_COUNT (and s/numSlots/numElements/).

The third is in GuessArrayGCKind, with these hits:

[jwalden@find-waldo-now js]$ ack GuessArrayGCKind
807:GuessArrayGCKind(size_t numSlots)

3322:    gc::AllocKind allocKind = GuessArrayGCKind(length);
3456:    gc::AllocKind allocKind = GuessArrayGCKind(length);

The first is just definition, ignorable.

The second two are NewDenseFullyAllocatedArrayWithTemplate passing in an array length (number of elements), and NewArray again passing in an array length (number of elements).

So: GetGCArrayKind really takes a number of elements, and GuessArrayGCKind really takes a number of elements.  And this should be a comparison against MAX_DENSE_ELEMENTS_COUNT.
Changed the name of property, method, related local variable, and argument.
Attachment #8663364 - Flags: review?(jwalden+bmo)
Comment on attachment 8663221 [details] [diff] [review]
Part 2: Make an array sparse when exceeds the limit of dense array length.

Review of attachment 8663221 [details] [diff] [review]:

I guess the set of changes needed adds up enough that I should probably double-check this.  Particularly as we're in a sec bug, even if that's mostly out of paranoia.

::: js/src/jit-test/tests/arrays/dense-from-sparse.js
@@ +1,5 @@
> +// Appending elements to a dense array should make the array sparse when the
> +// length exceeds the limit, instead of throwing OOM error.
> +
> +function test() {
> +  const MAX_DENSE_ELEMENTS_ALLOCATION = 0xfffffff;

(1 << 28) - 1 is a better statement of this, nobody can eyeball those "f"s and know the number of bits we're talking here.  :-)

::: js/src/vm/NativeObject.cpp
@@ +396,5 @@
>       * the limited number of bits to store shape slots, object growth is
>       * throttled well before the slot capacity can overflow.
>       */
>      NativeObject::slotsSizeMustNotOverflow();

This raises the question (that I remember previously considering, when adding {slots,elements}SizeMustNotOverflow -- resolved by having both of them) of whether we should be treating slots and elements exactly identically (notwithstanding that we don't need to reserve ObjectElements space in slots allocations, so max dense elements count and max slots count are different) -- or at least using variable names tailored to one use only, in both contexts.  "slots" and "NELEMENTS" on adjacent lines was maybe mildly defensible.  The better naming here, seems a bit more jarring than "NELEMENTS" was.

Could we split this constant out to a slots-specific version, MAX_SLOTS_COUNT?  Then use that in the places, like here (maybe *only* here?  at least, that's what it seems like it might be), that are slots-specific.

This also suggests "goodAllocated" should be "goodElementsAllocationAmount" or something like that.

@@ +725,1 @@
>      };

Actually, it'd be nice to have

  MOZ_ASSERT(BigBuckets[ArrayLength(BigBuckets) - 2] <= MAX_DENSE_ELEMENTS_ALLOCATION);

just for general increasing-size sanity.

@@ +750,5 @@
>      CheckedInt<uint32_t> checkedOldAllocated =
>          CheckedInt<uint32_t>(oldCapacity) + ObjectElements::VALUES_PER_HEADER;
>      CheckedInt<uint32_t> checkedReqAllocated =
>          CheckedInt<uint32_t>(reqCapacity) + ObjectElements::VALUES_PER_HEADER;
> +    MOZ_ASSERT(checkedOldAllocated.isValid() && checkedReqAllocated.isValid());

Adding MOZ_ASSERT(checkedOldAllocated.value() <= MAX_DENSE_ELEMENT_ALLOCATION) would be nice, for sanity.
Attachment #8663221 - Flags: review?(jwalden+bmo) → review-
Comment on attachment 8663364 [details] [diff] [review]
Part 0: Rename MNewArray::count to MNewArray::length.

Review of attachment 8663364 [details] [diff] [review]:

::: js/src/jit/MIR.h
@@ +2904,5 @@
>      public NoTypePolicy::Data
>  {
>    private:
>      // Number of space to allocate for the array.
> +    uint32_t length_;

Number of *elements* to allocate
Attachment #8663364 - Flags: review?(jwalden+bmo) → review+
Comment on attachment 8663223 [details] [diff] [review]
Part 3: Report OOM error when ensureElements fails.

Review of attachment 8663223 [details] [diff] [review]:

r- on every single patch as originally posted.  Ugh.  Sorry 'bout that, this code's just that grody.  :-(

::: js/src/vm/NativeObject.cpp
@@ +662,5 @@
> +        ReportOutOfMemory(cx);
> +        return false;
> +    }
> +    if (capacity > getDenseCapacity())
> +        return growElements(cx, capacity);

Entrusting this too-big check to ensureElements, as opposed to shoving it into growElements, seems dodgy.  That sort of requires that every caller make sure to call ensureElements, not growElements, to be safe or so.  Why not put the too-big check in growElements?  Looks like there might be some simplifications we can do to NO::growElements if we do that, too -- like probably eliminating that CheckedInt stuff that seems a little overkillish, to be honest.
Attachment #8663223 - Flags: review?(jwalden+bmo) → review-
Thank you for reviewing :)
Addressed review comments.
Also, merged Part 3 into Part 2, and moved ReportOutOfMemory to growElements.

then, MAX_SLOTS_COUNT is |(1 << 28) - 1|, right? (instead of |- ObjectElements::VALUES_PER_HEADER|)
Attachment #8663221 - Attachment is obsolete: true
Attachment #8663223 - Attachment is obsolete: true
Attachment #8663992 - Flags: review?(jwalden+bmo)
Comment on attachment 8663992 [details] [diff] [review]
Part 2: Make an array sparse when exceeds the limit of dense array length.

Review of attachment 8663992 [details] [diff] [review]:

::: js/src/jit/MCallOptimize.cpp
@@ +446,2 @@
>              return InliningStatus_NotInlined;
> +        MOZ_ASSERT(int32_t(initLength) >= 0);

Compare against INT32_MAX without casting -- casting an overlarge uint32_t to int32_t is slightly dubious as implementation-defined behavior.

::: js/src/jsgc.h
@@ +187,3 @@
>          return AllocKind::OBJECT2;
> +    }
> +    return slotsToThingKind[numElements + 2];

Hmm, why are there literal 2s here if ObjectElements::VALUES_PER_HEADER is capable of being used in a static_assert?  Please use that directly and remove the static assertion (that uses a deprecated macro and so was doubly bad).
Attachment #8663992 - Flags: review?(jwalden+bmo) → review+
Oh, for what it's worth, I think probably it might be good to shove the too-big reqAllocated check into goodElementsAllocationAmount and make that function fallible, returning the good amount via outparam.  But it seems best to pocket the wins so far, then do that atop all this once it's all landed -- this is dragging on long enough as it is.
it throws OOM here

>ReallocateObjectBuffer(ExclusiveContext* cx, JSObject* obj, T* oldBuffer,
>                       uint32_t oldCount, uint32_t newCount)
>    if (cx->isJSContext()) {
>        Nursery& nursery = cx->asJSContext()->runtime()->gc.nursery;
>        T* buffer =  static_cast<T*>(nursery.reallocateBuffer(obj, oldBuffer,
>                                                              oldCount * sizeof(T),
>                                                              newCount * sizeof(T)));
>        if (!buffer)
>            ReportOutOfMemory(cx);

so it seems to be a pure OOM.
we might have to allow OOM in the test, while we're testing that no wrong OOM happens :/
Group: javascript-core-security → core-security-release
oh, no. these changesets were backed out.
am I missing some field?
Resolution: FIXED → ---
Do you have any idea other than just removing the testcase (dense-from-sparse.js) or adding the allow-oom flag?
(adding allow-oom there seems to be nonsense...)
Flags: needinfo?(jwalden+bmo)
Reset target milestone and so on. Should this be backed out of m-c so it doesn't hit nightly?

dveditz/waldo, do we want to re-add this to js-security?
Flags: needinfo?(dveditz)
Flags: needinfo?(arai.unmht)
Target Milestone: mozilla44 → ---
(In reply to Tooru Fujisawa [:arai] from comment #32)
> Do you have any idea than just removing the testcase (dense-from-sparse.js)
> or adding allow-oom flag?
> (adding allow-oom there seems to be nonsense...)

As discussed on IRC, I think we can add this, sensibly.  Failure on OOM is a return-false with a pending "out of memory" string set as the pending exception.  That failure mode is distinguishable from the one currently in play, of returning false *without* setting a pending exception.  So I think adding this is quite fine.

That said, a test that allocates $gobs memory like this is also a very reasonable candidate for not landing, and/or not landing as an enabled test.  And I wouldn't be at all surprised if someone started yelling *very* shortly after this lands, such that we disabled anyway.  So if you land with allow-oom in place, be prepared to disable the test even so.
Flags: needinfo?(jwalden+bmo)
(In reply to :Gijs Kruitbosch from comment #33)
> dveditz/waldo, do we want to re-add this to js-security?

Nah, we'll just hope the relanding sticks.
Flags: needinfo?(dveditz)
Closed: 4 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla44
Whiteboard: [post-critsmash-triage]
Whiteboard: [post-critsmash-triage] → [post-critsmash-triage][adv-main44-]
Group: core-security-release