Open Bug 1282074 Opened 8 years ago Updated 3 months ago

Support very large canvases

Categories

(Core :: Graphics, enhancement, P3)

People

(Reporter: nical, Unassigned)

References

(Blocks 1 open bug)

Details

(Keywords: feature, Whiteboard: [gfx-noted])

We currently have size limitations for canvas: the allocated size must be below 500000000 bytes, and each dimension must be below 32767 pixels.

We inherit the 32767 pixels limitation from some of the 3rd party libraries we use internally to render content; changing that would be a tremendous amount of work, so let's focus on the other limit.
The 500000000 bytes limitation is more of a safeguard, because attempting to allocate a contiguous buffer of that size is a pretty good way to bring the platform to its knees.
Our current architecture is not designed to support canvases that big without causing instability, and I think that we should actually impose a stricter size limitation unless we make some changes to how we handle huge canvases.
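For illustration, here is a minimal sketch (in TypeScript) of how those two limits combine from web content's point of view. The constants are the ones quoted above; the 4-bytes-per-pixel estimate is an assumption about the cost of a canvas pixel, not a statement about what Gecko actually allocates.

    // Sketch only: rough check of the two limits described above.
    // Assumes 4 bytes per pixel (RGBA); the real allocation may differ.
    const MAX_ALLOC_BYTES = 500_000_000; // byte limit quoted in this bug
    const MAX_DIMENSION = 32767;         // per-dimension limit quoted in this bug

    function canvasWithinLimits(width: number, height: number): boolean {
      if (width > MAX_DIMENSION || height > MAX_DIMENSION) {
        return false; // trips the per-dimension (3rd-party library) limit
      }
      return width * height * 4 < MAX_ALLOC_BYTES; // otherwise the byte limit decides
    }

    // Example: 11180 x 11180 is just under 500 MB at 4 bytes per pixel,
    // while 16384 x 16384 (about 1 GB) fails even though each dimension fits.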

Canvases (even when drawn on the CPU) need to be uploaded to a GPU texture for compositing on most platforms. Currently we always upload the entire canvas, which causes problems if the canvas is big (spoiler alert: our current limit is already way past what most drivers out there consider reasonable, hence my position on the matter). If we wanted to support large canvases, we would need to split the canvas into smaller textures (we kinda do that already in some cases), and only upload the textures that intersect the display port (we don't do that but we should).
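A rough sketch of that tiling idea follows; the tile size, the rect type and the intersection test are illustrative and not how Gecko's compositor is actually structured.

    // Sketch: split a large canvas into fixed-size tiles and only keep the
    // tiles that intersect the display port for upload. Purely illustrative.
    interface Rect { x: number; y: number; width: number; height: number; }

    const TILE_SIZE = 512; // hypothetical tile size

    function intersects(a: Rect, b: Rect): boolean {
      return a.x < b.x + b.width && b.x < a.x + a.width &&
             a.y < b.y + b.height && b.y < a.y + a.height;
    }

    function tilesToUpload(canvas: Rect, displayPort: Rect): Rect[] {
      const tiles: Rect[] = [];
      for (let y = 0; y < canvas.height; y += TILE_SIZE) {
        for (let x = 0; x < canvas.width; x += TILE_SIZE) {
          const tile: Rect = {
            x, y,
            width: Math.min(TILE_SIZE, canvas.width - x),
            height: Math.min(TILE_SIZE, canvas.height - y),
          };
          // Only tiles visible in the display port would be uploaded to the GPU.
          if (intersects(tile, displayPort)) {
            tiles.push(tile);
          }
        }
      }
      return tiles;
    }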

Once we have solved the GPU issues, allocating the CPU surface for a huge canvas is also something that may or may not work depending on the amount of contiguous address space available (let alone the amount of RAM) on both the content and compositor processes. One solution could be to split the CPU canvas into tiles as well (maybe just rely on DrawTargetTiled to dispatch commands). We have to think about whether we want to expose things that will work on an expensive MacBook Pro but not on a (kinda expensive too) phone or an older laptop. And that question will only make sense if we decide to put engineering effort into making that thing not cause instability.
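As a very rough illustration of the CPU-side idea, here is a sketch of backing a huge logical surface with independent per-tile buffers instead of one contiguous allocation. The class and its methods are hypothetical and not how DrawTargetTiled is implemented; this only illustrates the memory-layout argument.

    // Sketch: a large logical surface backed by small, independent tile
    // buffers, so no single huge contiguous allocation is required.
    class TiledSurface {
      private tiles = new Map<string, Uint8ClampedArray>();

      constructor(readonly width: number, readonly height: number,
                  readonly tileSize = 512) {}

      // Lazily allocate the tile containing (x, y).
      private tileFor(x: number, y: number): Uint8ClampedArray {
        const key = `${Math.floor(x / this.tileSize)},${Math.floor(y / this.tileSize)}`;
        let tile = this.tiles.get(key);
        if (!tile) {
          tile = new Uint8ClampedArray(this.tileSize * this.tileSize * 4);
          this.tiles.set(key, tile);
        }
        return tile;
      }

      setPixel(x: number, y: number, rgba: [number, number, number, number]): void {
        const tile = this.tileFor(x, y);
        const offset = ((y % this.tileSize) * this.tileSize + (x % this.tileSize)) * 4;
        tile.set(rgba, offset);
      }
    }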

To summarize my (opinionated) stance on the subject:
 * Enabling very large allocations is currently a good way to make the browser unstable, and often not only the browser, when we ask GPU drivers to do more than they can stomach. A year ago, a web page was able to kill the X server and all of the open applications on Linux with one large canvas (this is just one example).
 * Supporting very large canvases is something that we could do if we decide that we want to spend the required engineering effort (a tiled canvas sounds like a fun project to work on). I am not against it, although I am against just changing the current size limit without implementing the proper infrastructure.
 * If we solve the stability issues, do we care about exposing capabilities to the web that reliably work only on high-end hardware?
Thank you for this new discussion! Just a question from an uninitiated person: how do large images work? Are they tiled as you describe?
And why do canvases that work in the current stable not work in the latest Nightly? Was the 500000000 bytes limitation higher? Why was it changed?
And would it be possible to make the 500,000,000-byte limit user-configurable? That would allow all users to be on the safe side, and the ones with specific needs and recent hardware (like the users of my web application for downloading large images) to be able to use Firefox for it.
(In reply to pere.jobs from comment #1)
> Just a question from an uninitiated
> person: how do large images work? Are they tiled as you describe?

Large images have the same limitations as large canvases.
A large image is allocated in a large buffer on the CPU in the content process. The buffer is sent to the compositor process, and the compositor should try to tile it into smaller textures before sending it to the GPU (canvas should do the same). This means we can run out of GPU memory, but we should no longer run into cases where we upload one big texture at once.

After checking the code, I think I overstated the issue of creating big GPU textures. Things were different six months ago, but we have improved in that area: we should now have a tiling fallback in the compositor in most cases, and we should fix the remaining cases regardless of the canvas discussion. We still have the issue that we fill the GPU with textures that are way outside the viewport, though. And the issue that if we make it easy for web content to make the platform run out of memory, the crash rate of the browser increases.

(In reply to pere.jobs from comment #2)
> And why do canvases that work in the current stable not work in the latest
> Nightly? Was the 500000000 bytes limitation higher? Why was it changed?

I just checked, and this size limit already applies on stable. I don't know what is causing the difference between Nightly and stable.

(In reply to pere.jobs from comment #3)
> And would it be possible to make the 500,000,000-byte limit
> user-configurable?

It actually is configurable, although a bit hidden, because you can easily break your browser if you set a limit so low that, for instance, a surface the size of the screen can no longer be allocated. You can add a preference in about:config named "gfx.max-alloc-size" and set its value to whatever you want. This setting takes effect after you restart the browser. But be careful if you ask users to fiddle with this; it's easy to break the rest of your browsing experience.
(In reply to pere.jobs from comment #3)
> And would it be possible to make the 500,000,000-byte limit
> user-configurable? That would allow all users to be on the safe side, and
> the ones with specific needs and recent hardware (like the users of my web
> application for downloading large images) to be able to use Firefox for it.

We should at least do something like this.  There is nothing more annoying than getting yourself a really good machine, and not being able to take advantage of it.
Whiteboard: [gfx-noted]
> in about:config named "gfx.max-alloc-size" and set its value to whatever you want. This setting takes effect after you restart the browser.

I just tried setting that to very high values, and it works. I'm going to add this setting to the FAQ of my application, with the due warning that it may make your browser unusable.
(In reply to Nicolas Silva [:nical] from comment #0)
> canvases, we would need to split the canvas into smaller textures (we kinda
> do that already in some cases), and only upload the textures that intersect
> the display port (we don't do that but we should).

May I know more about the cases in which we split the canvas into smaller textures?
Is there any reason why we don't split the canvas in all cases?
Flags: needinfo?(nical.bugzilla)
(In reply to pere.jobs from comment #2)
> And why do canvases that work in the current stable not work in the latest
> Nightly? Was the 500000000 bytes limitation higher? Why was it changed?

Hi pere,

On Linux, I found that the canvas backend of stable Firefox is cairo, which is different from Nightly.
Nightly currently defaults to Skia. Please go to about:config and check the value below.

   gfx.canvas.azure.backends

I tried changing the backend to cairo in Nightly, and it can draw the canvas.
You may want to try it and see if you get the same result.

But on Mac, I saw different behavior even after setting the backend to cairo. That will take some time to dig into.
(In reply to Vincent Liu[:vliu] from comment #7)
> May I know more about the cases in which we split the canvas into smaller
> textures?

The TextureHost does that when uploading to a texture, if it supports it and if we detect that the surface is bigger than some threshold. You can grep for BigImageIterator to see how it is used on the compositor side. In some cases the BigImage stuff is implemented by TextureImage (used by BufferTextureHost, for example).

> Is there any reason why we don't split the canvas in all cases?

We don't actually split the canvas per se; we only split something that we try to upload to a texture on the compositor side (be it the result of a canvas, an image or something else), and only if the TextureHost implementation has support for that. The canvas surface that we draw into is always one surface at the moment.

There are also TextureHost types that don't implement BigImageIterator, either because we haven't run into a case where we needed it, or because it doesn't make sense in that particular case (for example, a DXGITextureHostD3D11 is already a texture on the GPU, so splitting it does not make sense; if it is too big we should have used another texture type in the first place).
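For readers unfamiliar with the compositor code, the usage pattern looks roughly like the sketch below (TypeScript-flavoured pseudocode; the real BigImageIterator is a C++ interface and its exact method names may differ).

    // Loose sketch of iterating over the sub-images of a big texture at
    // upload time. Names mirror the pattern described above, illustratively.
    interface Rect { x: number; y: number; width: number; height: number; }

    interface BigImageIteratorLike {
      beginBigImageIteration(): void;
      getTileRect(): Rect;
      nextTile(): boolean; // false once every tile has been visited
      endBigImageIteration(): void;
    }

    function uploadInTiles(it: BigImageIteratorLike,
                           uploadTile: (rect: Rect) => void): void {
      it.beginBigImageIteration();
      do {
        uploadTile(it.getTileRect()); // one reasonably sized upload per tile
      } while (it.nextTile());
      it.endBigImageIteration();
    }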
Flags: needinfo?(nical.bugzilla)
(In reply to Milan Sreckovic [:milan] from comment #5)
> (In reply to pere.jobs from comment #3)
> > And would it be possible to make the 500,000,000-byte limit
> > user-configurable? That would allow all users to be on the safe side, and
> > the ones with specific needs and recent hardware (like the users of my web
> > application for downloading large images) to be able to use Firefox for it.
> 
> We should at least do something like this.  There is nothing more annoying
> than getting yourself a really good machine, and not being able to take
> advantage of it.

I've filed bug 1282656 to let gfx.max-alloc-size become user-configurable.
Blocks: 633936
(In reply to Nicolas Silva [:nical] from comment #0)
> [...]

I think we need to be very careful with this. Although in 64-bit builds there may be semi-infinite address space available, the same is not true for physical RAM or even pagefile. In other words if we don't somehow limit the maximum amount of canvas memory a process (or a domain perhaps?) can use, we're making it easy for websites to make a user's computer unbearably slow in general. I'm not certain whether that's desirable.
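A purely hypothetical sketch of the kind of per-origin accounting suggested here; nothing like this exists as written, and the budget value is an arbitrary example.

    // Hypothetical: track how much canvas memory each origin has reserved and
    // refuse allocations past a budget. Not an existing Gecko mechanism.
    const budgets = new Map<string, number>();
    const PER_ORIGIN_BUDGET = 1_000_000_000; // 1 GB, arbitrary example value

    function tryReserveCanvasMemory(origin: string, bytes: number): boolean {
      const used = budgets.get(origin) ?? 0;
      if (used + bytes > PER_ORIGIN_BUDGET) {
        return false; // the allocation would exceed this origin's budget
      }
      budgets.set(origin, used + bytes);
      return true;
    }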
DevTools has been hoping for something like this for a while now.  We allow users to take screenshots of pages in a "full page" mode that includes the entire height of the page (not just the viewport height).  As you might guess, it's pretty easy to hit the 32767 pixel height limit for long pages (see bug 766661).

We could potentially craft some kind of manual tiling of the page in DevTools JS code, but having the platform handle it would be much nicer.
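For what it's worth, the manual-tiling fallback could look something like the sketch below: split the page into horizontal slices that each stay under the per-dimension limit and capture each slice into its own canvas. The slice height and the idea of a capture step are placeholders, not an existing DevTools API.

    // Sketch only: compute horizontal slices of a tall page so that each
    // slice fits within the canvas height limit; how a slice is actually
    // captured and stitched together is left out.
    interface SliceRect { x: number; y: number; width: number; height: number; }

    const MAX_CANVAS_DIMENSION = 32767;

    function pageSlices(pageWidth: number, pageHeight: number,
                        sliceHeight = 8192): SliceRect[] {
      if (sliceHeight > MAX_CANVAS_DIMENSION) {
        throw new Error("slice height must stay under the canvas dimension limit");
      }
      const slices: SliceRect[] = [];
      for (let y = 0; y < pageHeight; y += sliceHeight) {
        slices.push({ x: 0, y, width: pageWidth,
                      height: Math.min(sliceHeight, pageHeight - y) });
      }
      return slices;
    }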
Blocks: 766661
Yes, 32767 pixels is really small. Many webpages are taller than this, especially on HiDPI screens, and it would be sad if full-page screenshots didn't work for them.
Taking the screenshot of a very large page is a hard problem and rendering it all in one huge canvas is really not a good solution. Whether we allocate manpower to fix this problem in the platform (through some sort of virtual canvas or a dedicated implementation) or in the devtools is not mine to decide, but it would be a lot of work to do this without bringing the user's computer to its knees.
I think it would already be a great improvement if the byte limit were not set as a hard number but as a percentage of the user's available memory. It would at the same time prevent crashes, be more future-proof, and allow users with better hardware to take advantage of it.
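A tiny sketch of that suggestion; the 5% fraction and the choice to keep the current value as a floor are arbitrary examples, not something that has been agreed on.

    // Hypothetical: derive the allocation cap from the machine's memory
    // instead of a fixed constant. totalMemoryBytes would have to come from
    // a privileged API; 5% and the 500 MB floor are arbitrary examples.
    function maxAllocFromMemory(totalMemoryBytes: number,
                                fraction = 0.05,
                                floor = 500_000_000): number {
      return Math.max(floor, Math.floor(totalMemoryBytes * fraction));
    }

    // e.g. on a 16 GB machine: maxAllocFromMemory(16 * 1024 ** 3) ≈ 859 MB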
(In reply to Nicolas Silva [:nical] from comment #0)
> 
> We inherit the 32767 pixels limitation from some of the 3rd party libraries
> we use internally to render content...

I believe the limit open to discussion was the 500,000,000-byte one, which we could (just over) double to get to the 32k x 32k size; but the 32k per dimension, which I imagine comes from a short int, seems to be a more difficult thing to deal with.

Nical, what libraries are we talking about?
As a first step, maybe FF could take screenshots of big pages in slices of manageable size (taking partial screenshots is an existing functionality) and just write them as-is, in separate files. The user could glue the pictures together with ImageMagick or something; this would still be better than nothing.
(In reply to Milan Sreckovic [:milan] from comment #16)
> 
> Nical, what libraries are we talking about?

Cairo and pixman most definitely, and probably Skia (I would need to have a close look at exactly where SkFixed is used internally, even though the public API mostly advertises SkScalar, which is a 32-bit float). For D2D I have no idea.
Depends on: 1434490
Please be aware that setting gfx.max-alloc-size to large values can expose you to a security bug, as described in https://bugzilla.mozilla.org/show_bug.cgi?id=1434490. Please don't use a profile with such a setting for general web browsing.
Type: defect → enhancement

Hello,
Just a little update to say that I still have users of dezoomify hitting this bug regularly. One recent example: https://github.com/lovasoa/dezoomify/issues/359

I've run into the height limitation with the graph display of Mercurial (e.g. https://hg.mozilla.org/mozilla-central/graph), which apparently uses one large canvas for drawing the commit graph on the left-hand side of the page.

Severity: normal → S3
Component: Graphics: Layers → Graphics