Closed Bug 1342045 Opened 4 years ago Closed 4 years ago

wasm: Memory maximum limit checks are off by one

Categories

(Core :: JavaScript Engine: JIT, defect, P3)


Tracking


RESOLVED FIXED
mozilla54
Tracking Status
firefox54 --- fixed

People

(Reporter: bbouvier, Assigned: bbouvier)

References

Details

Attachments

(1 file, 1 obsolete file)

Attached patch wip.patch (obsolete) — Splinter Review
So the spec core test that requests a Memory with initial: 0, maximum: 65536 is actually correct. In this case the maximum memory size, in bytes, is 65536 pages of 64 KiB each, which is precisely 2**32, while we check against UINT32_MAX, which is... 2**32 - 1. So we're off by one here. Fixing this will require having a uint64 maximum limit, only for this edge case, which sounds very unfortunate.

I'm deferring this issue, which seems minor at the moment. Posting a WIP patch with explorations, but it crashes because more places probably want a uint64, and there are more important issues right now.
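For concreteness, here is the arithmetic behind the off-by-one as a standalone C++ sketch (the names are illustrative, not the actual SpiderMonkey identifiers): a maximum of 65536 wasm pages at 64 KiB per page is exactly 2**32 bytes, one past UINT32_MAX, so the byte count only fits in a 64-bit integer.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative constants; the real ones live in js/src/wasm/.
static const uint64_t PageSize = 65536;  // wasm page size: 64 KiB
static const uint64_t MaxPages = 65536;  // 2**16 pages allowed by the spec

// Maximum memory size in bytes, computed in 64 bits so it cannot wrap.
uint64_t maximumBytes(uint64_t pages) {
    return pages * PageSize;
}

// The existing check compared against UINT32_MAX, i.e. 2**32 - 1:
// one less than the 2**32 bytes that 65536 pages actually denote.
bool fitsInUint32(uint64_t bytes) {
    return bytes <= UINT32_MAX;
}
```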
Priority: -- → P3
I think it shouldn't be semantically visible if we clamp maximums to UINT32_MAX internally.  This would allow us to keep everything as uint32_t.
That's right: the only semantically visible change would be when growing memory from 65535 to 65536 pages (since we check against the max limit), but growing fails anyhow when the new requested byte size exceeds INT32_MAX, clamping or not.
Right, the meaning of maximum is that the engine is always allowed to fail growing before maximum; it just can't grow *past* maximum.
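A sketch of why the clamp would be unobservable (illustrative names only; the real grow path goes through the ArrayBuffer machinery): any grow whose new byte size exceeds INT32_MAX already fails, so the sliver between the clamped UINT32_MAX and the true 2**32-byte maximum can never be reached.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch only, not the real SpiderMonkey grow path.
static const uint64_t PageSize = 65536;

// Internal clamp: a declared maximum of 65536 pages (2**32 bytes) is
// stored as UINT32_MAX so everything stays uint32_t.
uint32_t clampedMaximum(uint64_t maxPages) {
    uint64_t bytes = maxPages * PageSize;
    return bytes > UINT32_MAX ? UINT32_MAX : uint32_t(bytes);
}

// Growing already fails once the new total byte size exceeds INT32_MAX,
// far below the clamped region, so the clamp is never observable.
bool growCanSucceed(uint64_t newPages) {
    return newPages * PageSize <= uint64_t(INT32_MAX);
}
```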
Attached patch memory.patch — Splinter Review
This makes writing the patch much easier, but we'll have to be very careful if/when we're allowed to create ArrayBufferObjects with a length bigger than 2**32 in the future.
Assignee: nobody → bbouvier
Attachment #8840419 - Attachment is obsolete: true
Status: NEW → ASSIGNED
Attachment #8840471 - Flags: review?(luke)
Comment on attachment 8840471 [details] [diff] [review]
memory.patch

Review of attachment 8840471 [details] [diff] [review]:
-----------------------------------------------------------------

Thanks!  I think we can get rid of all the uint64_t's though:

::: js/src/wasm/WasmBinaryConstants.h
@@ +458,5 @@
>  static const unsigned MaxLocals              =    50000;
>  static const unsigned MaxParams              =     1000;
>  static const unsigned MaxBrTableElems        =  1000000;
>  static const unsigned MaxMemoryInitialBytes  = 1024 * 1024 * 1024;
> +static const uint64_t MaxMemoryMaximumBytes  = 4UL * uint64_t(MaxMemoryInitialBytes);

How about making this MaxMemoryMaximumPages and doing the limit check before the `maximumBytes *= PageSize`?  For symmetry, it'd be good to do likewise with MaxMemoryInitialBytes.  Also, I'd spell out the max value directly instead of in terms of initial.
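A standalone sketch of that suggestion (the constant names echo the review; the helpers are hypothetical): check the limit in pages before multiplying by the page size, so only the single boundary case of 65536 pages needs clamping and no uint64_t is required.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helpers following the review suggestion; only the names
// MaxMemoryMaximumPages and PageSize echo the discussion above.
static const uint32_t PageSize = 65536;
static const uint32_t MaxMemoryMaximumPages = 65536;

bool maximumPagesValid(uint32_t maxPages) {
    // Limit check done in pages, before any multiplication by PageSize.
    return maxPages <= MaxMemoryMaximumPages;
}

uint32_t maximumBytesFor(uint32_t maxPages) {
    // Precondition: maximumPagesValid(maxPages). Only the boundary case
    // (2**16 pages == 2**32 bytes) overflows uint32_t; clamp it.
    return maxPages == MaxMemoryMaximumPages ? UINT32_MAX
                                             : maxPages * PageSize;
}
```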

::: js/src/wasm/WasmValidate.cpp
@@ +836,5 @@
>  
>      memory.initial = initialBytes.value();
>  
>      if (memory.maximum) {
> +        CheckedInt<uint64_t> maximumBytes = *memory.maximum;

Then this can stay a uint32_t.

@@ +849,5 @@
> +        if (maxBytesValue == MaxMemoryMaximumBytes)
> +            maxBytesValue = UINT32_MAX;
> +
> +        MOZ_ASSERT(uint64_t(uint32_t(maxBytesValue)) == maxBytesValue);
> +        memory.maximum = Some(uint32_t(maxBytesValue));

And then I think this logic could be:
  memory.maximum = Some(maximumBytes.isValid() ? maximumBytes.value() : UINT32_MAX);
Attachment #8840471 - Flags: review?(luke) → review+
Pushed by bbouvier@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/3453a55b3053
Allow wasm::Memory of size 2**32; r=luke
https://hg.mozilla.org/mozilla-central/rev/3453a55b3053
Status: ASSIGNED → RESOLVED
Closed: 4 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla54