Open Bug 791428 Opened 7 years ago Updated 2 years ago

Setting canvas width or height to huge values via JavaScript gives unexpected results

Categories

(Core :: Canvas: 2D, defect, P3)

17 Branch
x86
macOS
defect

Tracking

REOPENED
Tracking Status
firefox57 --- wontfix

People

(Reporter: GPHemsley, Unassigned)

References

(Blocks 1 open bug)

Details

(Whiteboard: [gfx-noted])

Attachments

(4 files, 3 obsolete files)

I originally set out to debug why I was receiving the following error when setting a canvas width to 0x1000000:

NS_ERROR_OUT_OF_MEMORY: Component returned failure code: 0x8007000e (NS_ERROR_OUT_OF_MEMORY) [nsIDOMHTMLCanvasElement.width]

In my attempt to produce a reduced testcase (which I've actually still been unable to do), I discovered that setting canvas width or height to huge values via JavaScript makes the canvas do strange things.

The most obvious thing is that the computed values for the attributes do not match what was requested. This appears to be because some precision is lost, and the values are rounded. For example, requesting a height of 1048576 results in a computed height of 1048580; requesting 16777216 gives 16777200; and requesting 33362175 gives 17895700 (!).

Stranger than that, though, is that certain large values of one property will cause the computed value of the other property to decrease! For example, despite all canvas elements having a set width of 64, requesting a height of 18974736 results in a computed height of 17895700 and a computed width of 60.35! And this doesn't occur with all values higher than that; only certain ones.

And sometimes, rather than a rounding issue, the value of the property is computed as 0. For example, requesting a height of 49787136 yields a computed height of 0.

One other thing that I noticed in the Web Inspector involved setting the property to a negative number. For example, setting the height attribute to -1 via JavaScript results in the height attribute in the Inspector displaying a value of 4294967295, instead of -1 or the default value that it is actually computed to have. (Again, in this case, the conflict is not apparent in the viewport; it is only apparent in the Inspector HTML. I can also replicate the issue in the equivalent Firebug module.)

Since this appears to involve integer overflow, I'm marking this bug as security just in case. I'm also not sure if this component is the proper place for this bug, given the number of factors in play here.

Attached are two test cases (one for width and one for height) that loop through a bunch of large values. I'm running these in Aurora 17 on 64-bit Mac OS X 10.6.8.
Image frames are capped at 32K in each dimension, in part to prevent integer overflows (and who has the 12-foot monitors it would take to display that?). Are canvases not similarly capped?
(In reply to Daniel Veditz [:dveditz] from comment #2)
> Image frames are capped at 32K in each dimension, in part to prevent integer
> overflows

FWIW, the CheckedInt class that we now have in mfbt (mfbt/CheckedInt.h) is a better way to prevent integer overflows.
Assigning to Anthony for investigation :-)
Matt said he could take a look at this.
QA Contact: mwobensmith
Looks like window.getComputedStyle(canvas).getPropertyValue("width") is returning 1.67772e+7px
This value is being generated in nsROCSSPrimitiveValue::GetCssText(), where it's displayed as a floating-point number of pixels:

    case CSS_PX :
      {
        float val = nsPresContext::AppUnitsToFloatCSSPixels(mValue.mAppUnits);
        tmpStr.AppendFloat(val);
        tmpStr.AppendLiteral("px");
        break;
      }
AppUnitsToFloatCSSPixels converts its parameter to a float and does the division in float space. I think that loses precision compared to converting the parameter to a double, dividing, and then converting to float.
Removing an unnecessary malloc call in number formatting to offset any performance cost of the other patch.
Comment on attachment 668972 [details] [diff] [review]
Part 1: Clamp pixel sizing to match nscoord_MAX

Review of attachment 668972 [details] [diff] [review]:
-----------------------------------------------------------------

Split out the nsTSubstring.h change into its own patch
Comment on attachment 668974 [details] [diff] [review]
Part 2: Remove malloc from number format

Review of attachment 668974 [details] [diff] [review]:
-----------------------------------------------------------------

Why not just declare the temporary buffer in Modified_cnvtf and remove the parameter?
I wanted to see what it looked like when you allocate both buffers in the same place in order to reduce mistakes. I agree that it looks yuck. It would make more sense to leave the temporary buffer in Modified_cnvtf and assert that the size matches.
What is the result of having these sizes wrong? does it only mess up display, show uninitialized memory, or something even more dangerous like write outside our buffer?
Comment on attachment 669386 [details] [diff] [review]
Part 2: Remove malloc from number format v2

Review of attachment 669386 [details] [diff] [review]:
-----------------------------------------------------------------

::: xpcom/string/src/nsTSubstring.cpp
@@ +799,4 @@
>    char *bufp = buf;
>    char *endnum;
>  
> +  MOZ_ASSERT(sizeof(num) == bufsz);

just assert sizeof(num) <= bufsz.
Attachment #669386 - Flags: review?(roc)
Attachment #669386 - Flags: review?(benjamin)
Attachment #669386 - Flags: review+
Comment on attachment 669386 [details] [diff] [review]
Part 2: Remove malloc from number format v2

Instead of asserting, why not just change the parameter? e.g.

Modified_cnvtf(char[40]& buf, int prcsn, double fval). And remove `bufsz` altogether?
Attachment #669386 - Flags: review?(benjamin) → review-
(In reply to Benjamin Smedberg  [:bsmedberg] from comment #20)
> Modified_cnvtf(char[40]& buf, int prcsn, double fval). And remove `bufsz`
> altogether?

Good suggestion. I didn't know you could do that. Now I do. Thanks.
Does that syntax actually prevent you passing a char[1] as the first parameter? I had thought it did not.
So it appears that you have to use a typedef to actually declare this, but if you do:

typedef char charbuf[40];

static void
DoSomething(charbuf& buf);

int main()
{
  char b[2];
  DoSomething(b);
}

You do get compile errors:

sizedref.cc:14:16: error: invalid initialization of reference of type ‘char (&)[40]’ from expression of type ‘char [2]’
It doesn't need a typedef if you get the brackets in the right place.
Attachment #669386 - Attachment is obsolete: true
Comment on attachment 669845 [details] [diff] [review]
Part 2: Remove malloc from number format v3

Arguably the typedef would be better code hygiene rather than repeating magic "40" several places in the file, but I'll accept this.
Attachment #669845 - Flags: review?(benjamin) → review+
Comment on attachment 669385 [details] [diff] [review]
Part 1: Clamp pixel sizing to match nscoord_MAX v2

[Security approval request comment]
I can't see how this bug is exploitable.


How easily can the security issue be deduced from the patch?

Very easily.


Do comments in the patch, the check-in comment, or tests included in the patch paint a bulls-eye on the security problem?

The problem isn't hard to figure out. I can't see how to exploit it though.


Which older supported branches are affected by this flaw?

Probably all of them.


If not all supported branches, which bug introduced the flaw?



Do you have backports for the affected branches? If not, how different, hard to create, and risky will they be?

Backporting this patch should be straightforward.


How likely is this patch to cause regressions; how much testing does it need?

This patch is low risk because dimensions in the millions of pixels have little or no practical application at this stage.
Attachment #669385 - Flags: sec-approval?
Comment on attachment 669845 [details] [diff] [review]
Part 2: Remove malloc from number format v3

Part 2 does not relate to a security issue.
(In reply to Anthony Jones (:kentuckyfriedtakahe) from comment #28)
> How likely is this patch to cause regressions; how much testing does it need?
> 
> This patch is low risk because dimensions in the millions of pixels have
> little or no practical application at this stage.

FWIW, I discovered this when I was processing TIFF images, which tend to be high quality and large.
(In reply to Gordon P. Hemsley [:gphemsley] from comment #30)
> FWIW, I discovered this when I was processing TIFF images, which tend to be
> high quality and large.

It's unlikely to regress anything because it is unlikely that these large images have ever worked. I'm adding better precision in bug 799815.

Also note that bug 591358 delays memory allocation so you won't see the same out of memory errors.
(In reply to Anthony Jones (:kentuckyfriedtakahe) from comment #28)
> [Security approval request comment]
> I can't see how this bug is exploitable.

What happens when the size is grossly different (larger or smaller) than the size of the source image?

If it's smaller, is it just clipped, or do we stomp outside an allocated buffer? (I'm assuming clipping based on your answer above.)

If it's larger what happens to the part of the canvas for which we don't have the expected amount of data? Do we draw stuff based on uninitialized memory? Leave the contents of whatever existed before in the canvas (and if so, is that data accessible to scripts in the page)? Blank it out?
(In reply to Daniel Veditz [:dveditz] from comment #32)
> (In reply to Anthony Jones (:kentuckyfriedtakahe) from comment #28)
> > [Security approval request comment]
> > I can't see how this bug is exploitable.
> 
> What happens when the size is grossly different (larger or smaller) than the
> size of the source image?

The issue that is fixed in this bug is the reporting of the canvas size. If you request a really large canvas, it will still try to allocate the buffer (and most likely silently fail). However, it was possible to overflow the size in app units. The code was still internally using the correct size but reporting a bogus value back as the layout size in pixels.

The fix is to clamp the maximum reported value rather than overflow it and report a negative value. The precision issue relating to this value is covered by bug 799815.

The issue is in the layers above the graphics subsystem and has nothing to do with the actual buffer dimensions.
Comment on attachment 669385 [details] [diff] [review]
Part 1: Clamp pixel sizing to match nscoord_MAX v2

Thanks for the answers. I agree this doesn't sound exploitable and doesn't need to be hidden.
Attachment #669385 - Flags: sec-approval? → sec-approval+
Group: core-security
https://hg.mozilla.org/mozilla-central/rev/d587ce3af2f3
https://hg.mozilla.org/mozilla-central/rev/9e1b8da160a2
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla19
Backed out because of performance regressions: https://hg.mozilla.org/integration/mozilla-inbound/rev/1494dd3c2315
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
If this was backed out, then I don't think it made it into mozilla19.

Is there a new ETA on a fix that doesn't have performance regressions?
Target Milestone: mozilla19 → ---
Assignee: ajones → nobody
Is this still a valid bug?
Flags: needinfo?(mstange)
Flags: needinfo?(mstange) → needinfo?(milan)
Testcases are still showing red for me in Aurora 55.
Sounds like we should keep it around.
Flags: needinfo?(milan)
Whiteboard: [gfx-noted]
Attachment #669385 - Flags: sec-approval+