Bug 767064 - WebGL should prefer 565 (or native) on mobile for alpha-less contexts
Status: RESOLVED FIXED
Whiteboard: [games:p1]
Product: Core
Classification: Components
Component: Canvas: WebGL
Version: unspecified
Hardware: ARM Android
Importance: -- normal
Target Milestone: mozilla17
Assigned To: Vladimir Vukicevic [:vlad] [:vladv]
Blocks: gecko-games 775222
Reported: 2012-06-21 11:22 PDT by Vladimir Vukicevic [:vlad] [:vladv]
Modified: 2013-11-06 14:13 PST
CC: 5 users


Attachments
use 4444/565 when requested on ES2, and have WebGL request 4444/565 (3.77 KB, patch)
2012-06-21 14:08 PDT, Vladimir Vukicevic [:vlad] [:vladv]
jgilbert: review-
use 4444/565 when requested on ES2, and have WebGL request 4444/565 (3.94 KB, patch)
2012-07-04 08:13 PDT, Vladimir Vukicevic [:vlad] [:vladv]
jgilbert: review+
jacob.benoit.1: review+
allow quick testing of 4444 vs 565 vs 888 (not for landing) (2.32 KB, patch)
2012-07-10 14:40 PDT, Benoit Jacob [:bjacob] (mostly away)
no flags
updated with prefs (6.29 KB, patch)
2012-07-11 13:23 PDT, Vladimir Vukicevic [:vlad] [:vladv]
jacob.benoit.1: review+

Description Vladimir Vukicevic [:vlad] [:vladv] 2012-06-21 11:22:22 PDT
See bug 728524 comment 75 and comment 76.  Currently, we end up creating an 8888 context or an 8880 one for WebGL, even on mobile.  At best, this is costing us 2x memory bandwidth; at worst, we're hitting format conversions (e.g. rendering 888 -> 565) and other badness along the way.

This seems to have been caused by bug 739775.  ChooseGLFormats ignores 565 unless OES_rgb8_rgba8 isn't supported, and even then it'll *still* choose 888/UNSIGNED_BYTE -- rbColor is only used for creating an MSAA renderbuffer (I'm actually confused about how we create a color renderbuffer when samples = 0 and aNeedsReadBuffer = false).

I'll try to create a patch shortly.
Comment 1 Benoit Jacob [:bjacob] (mostly away) 2012-06-21 12:35:09 PDT
Since alpha is the default, a solution that only covers alpha-less contexts will affect only a minority of content. Would it be acceptable to default to 4444 for contexts with alpha?
Comment 2 Benoit Jacob [:bjacob] (mostly away) 2012-06-21 12:45:49 PDT
In fact, DITHER is enabled by default (GLES 2.0.25 page 145, table 6.11) so 4444 should give acceptable quality by default.
Comment 3 Vladimir Vukicevic [:vlad] [:vladv] 2012-06-21 14:08:43 PDT
Created attachment 635458 [details] [diff] [review]
use 4444/565 when requested on ES2, and have WebGL request 4444/565

Two bits here:

One, it reworks GLContext::ChooseGLFormats so that it takes into account the requested bit depth.  If we're on ES2 hardware and the number of color bits is <= 16 or we don't support rgb8/rgba8, we'll do 4444/565.

Two, it changes the webgl format logic to request 4444/565 if the platform screen depth is 16.

Note that on the clear test (with postMessage), 4444 is a *tiny* bit slower than 565 (like 1ms, if that).  Likely due to extra compositing needed when blitting the texture to the output.
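
For concreteness, here is a minimal standalone sketch of the 16-bit selection described above, using plain GLES2 enums; the struct and function names are invented for illustration and are not the GLContext.cpp ones:

    #include <GLES2/gl2.h>

    // Illustration of choosing 4444 (alpha) or 565 (no alpha) backings on ES2.
    // ChosenFormats/Choose16BitFormats are hypothetical names, not Gecko's.
    struct ChosenFormats {
        GLenum texFormat;  // internalformat/format for glTexImage2D
        GLenum texType;    // type for glTexImage2D
        GLenum rbFormat;   // internalformat for glRenderbufferStorage
    };

    ChosenFormats Choose16BitFormats(bool wantAlpha)
    {
        ChosenFormats f;
        if (wantAlpha) {
            f.texFormat = GL_RGBA;
            f.texType   = GL_UNSIGNED_SHORT_4_4_4_4;
            f.rbFormat  = GL_RGBA4;
        } else {
            f.texFormat = GL_RGB;
            f.texType   = GL_UNSIGNED_SHORT_5_6_5;
            f.rbFormat  = GL_RGB565;
        }
        return f;
    }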
Comment 4 Jeff Gilbert [:jgilbert] 2012-06-21 15:18:27 PDT
Comment on attachment 635458 [details] [diff] [review]
use 4444/565 when requested on ES2, and have WebGL request 4444/565

Review of attachment 635458 [details] [diff] [review]:
-----------------------------------------------------------------

I think we should really propose a color depth hint to the WG. Something like 'full-color' which defaults to false. If true, try to use 8-bit-color backings. Else, it's up to the impl, with the goal being that we back the context with a device-appropriate bit-depth.

R- for the forcing of non-8-bit-color textures where 8-bit-color RBs are unsupported.

Handling BGR(A) on desktop and sized formats for low-bit-depth desktops should probably wait until a follow-up bug.

::: gfx/gl/GLContext.cpp
@@ +1278,5 @@
>      GLFormats formats;
>  
> +    // If we're on ES2 hardware and we have an explicit request for 16 bits of color or less
> +    // OR we don't support full 8-bit color, return a 4444 or 565 format.
> +    if (mIsGLES2 && (aCF.colorBits() <= 16 || !IsExtensionSupported(OES_rgb8_rgba8))) {

Do we want to use 565/4444 with ANGLE (and other upcoming desktop GLES)? I think then we should add support for 16bpp rendering on desktop GL as well, uncommon as that is.

Also this precludes support for 8888 textures on >16bit-color system, if GL lacks OES_rgb8_rgba8. While unlikely, there's no reason to preclude this.

Particularly since we just don't use renderbuffers on mobile, currently.

@@ +1301,5 @@
> +
> +        if (aCF.alpha) {
> +            // prefer BGRA8888 on ES2 hardware; if the extension is supported, it
> +            // should be faster.
> +            // XXX why don't we prefer BGRA on desktop?

To support BGRA on desktop, you can't call texImage2d with internalFormat == format == BGR(A). Just add a texColorInternal to GLFormats and make sure that while it's BGR,BGR for GLES, it's RGB,BGR for desktop. To my knowledge, there's no way to specify a texture as BGR-order on desktop GL.
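
To illustrate the distinction being drawn (a hedged sketch, not the patch itself): desktop GL accepts BGRA as the pixel-transfer format, but the storage (internal format) stays RGB(A). The width/height/pixels parameters below are placeholders:

    #include <GL/gl.h>
    #include <GL/glext.h>

    // Desktop GL sketch: storage (internalFormat) stays RGBA8; only the
    // |format| and |type| describing the uploaded |pixels| say "BGRA".
    void UploadBGRA(GLsizei width, GLsizei height, const void* pixels)
    {
        glTexImage2D(GL_TEXTURE_2D, 0,
                     GL_RGBA8,                     // internalFormat: actual storage
                     width, height, 0,
                     GL_BGRA,                      // format of the incoming data
                     GL_UNSIGNED_INT_8_8_8_8_REV,  // type of the incoming data
                     pixels);
    }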
Comment 5 Vladimir Vukicevic [:vlad] [:vladv] 2012-06-21 15:36:03 PDT
(In reply to Jeff Gilbert [:jgilbert] from comment #4)
> Comment on attachment 635458 [details] [diff] [review]
> use 4444/565 when requested on ES2, and have WebGL request 4444/565
> 
> Review of attachment 635458 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> I think we should really propose a color depth hint to the WG. Something
> like 'full-color' which defaults to false. If true, try to use 8-bit-color
> backings. Else, it's up to the impl, with the goal being that we back the
> context with a device-appropriate bit-depth.
> 
> R- for the forcing of non-8-bit-color textures where 8-bit-color RBs are
> unsupported.

That's true.  The problem is that ChooseGLFormats doesn't have enough information; I don't really like the entire machinery, tbh.  My patch at least makes it consistent -- if it'll choose a 565 renderbuffer, it'll choose a 565 texture and the same for 888/8888.  Otherwise, as is the current case, you'll get vastly different performance characteristics depending on whether you end up using the texture format or the renderbuffer format.  Is there any current situation where using a 565 texture instead of an 888 one isn't wanted?

Maybe ChooseGLFormats needs to be split into something with a bit more smarts, with information about what will actually be used?


> ::: gfx/gl/GLContext.cpp
> @@ +1278,5 @@
> >      GLFormats formats;
> >  
> > +    // If we're on ES2 hardware and we have an explicit request for 16 bits of color or less
> > +    // OR we don't support full 8-bit color, return a 4444 or 565 format.
> > +    if (mIsGLES2 && (aCF.colorBits() <= 16 || !IsExtensionSupported(OES_rgb8_rgba8))) {
> 
> Do we want to use 565/4444 with ANGLE (and other upcoming desktop GLES)? I
> think then we should add support for 16bpp rendering on desktop GL as well,
> uncommon as that is.

I doubt ANGLE exports 565/4444; we won't get it (we'll pick an 8888 config).  If it does export one, well, then we'll get it if requested -- but WebGL won't, since the screen depth will be 24, so it won't request 565/4444.

> Also this precludes support for 8888 textures on >16bit-color system, if GL
> lacks OES_rgb8_rgba8. While unlikely, there's no reason to preclude this.

Yep, see above.  Given the unlikely scenario, I'd rather untangle this in a followup...

> @@ +1301,5 @@
> > +
> > +        if (aCF.alpha) {
> > +            // prefer BGRA8888 on ES2 hardware; if the extension is supported, it
> > +            // should be faster.
> > +            // XXX why don't we prefer BGRA on desktop?
> 
> To support BGRA on desktop, you can't call texImage2d with internalFormat ==
> format == BGR(A). Just add a texColorInternal to GLFormats and make sure
> that while it's BGR,BGR for GLES, it's RGB,BGR for desktop. To my knowledge,
> there's no way to specify a texture as BGR-order on desktop GL.

EXT_bgra?  But yes, the internal format should still be RGB/RGBA, that's true.  The 'format' should be BGR/BGRA.  We only expose one 'texColor' in this formats struct, where we should probably expose both.  (However, EXT_bgra only matters for upload, and we assume we have it given GetOptimalReadFormats() usage.  We do the BGR swap in a shader anyway, so that we don't have to depend on it, but it'd be nice if our textures just had the right colors by default.)
Comment 6 Jeff Gilbert [:jgilbert] 2012-06-21 16:17:14 PDT
EXT_bgra is core for GL 2+, but it only allows passing BGR(A) as |format| to texImage2D. If you don't use a sized |internalFormat|, GL can guess that you want it stored in BGR(A) and do so if it's capable. At their core, though, |format| and |type| just specify the format of the |data| being uploaded, while |internalFormat| designates the actual storage format.

The ANGLE thing actually is a problem, I think. If someone's running a 16bpp display (for whatever reason), and we have an AA context, our MSAA draw FBO will be either 565 or 4444, even though our backing PBuffer is 888(8).

I think there are a few cases for 8-bit colors on <8-bit-color devices, such as offline rendering and full-quality screenshots. If we're doing readback anywhere, I think we want 24 or 32bpp backings anyway.

I don't understand what you mean when you say ChooseGLFormats doesn't have enough information. What more does it need?
Comment 7 Jeff Gilbert [:jgilbert] 2012-06-21 16:20:38 PDT
I should add that the implementation can store textures however it likes, so long as it's functionally transparent. What isn't performance-transparent is that BGRA is more useful for readback, and indeed seems to reflect the way textures are stored internally on desktop GL for most drivers.

The only reason we care about BGR(A) is in case of readback. For other cases, it's irrelevant how texels are stored internally.
Comment 8 Vladimir Vukicevic [:vlad] [:vladv] 2012-06-21 17:19:58 PDT
(In reply to Jeff Gilbert [:jgilbert] from comment #6)
> EXT_bgra is core for GL 2+, but it only allows passing BGR(A) as |format| to
> texImage2D. If you don't use a sized |internalFormat|, GL can guess that you
> want it stored in BGR(A) and do so if it's capable. At their core, though,
> |format| and |type| just specify the format of the |data| being uploaded,
> while |internalFormat| designates the actual storage format.
> 
> The ANGLE thing actually is a problem, I think. If someone's running a 16bpp
> display (for whatever reason), and we have an AA context, our MSAA draw FBO
> will be either 565 or 4444, even though our backing PBuffer is 888(8).

Hrm, why will the backing pbuffer be 888?  (I just checked -- ANGLE does actually expose 565, but not 4444 [no such d3d format].)  We'll create it using the same config... and honestly, I don't see why we'd overcomplicate this to support someone on Windows with a 16bpp display.

> I think there are a few cases for 8-bit colors on <8-bit-color devices, such
> as offline rendering and full-quality screenshots. If we're doing readback
> anywhere, I think we want 24 or 32bpp backings anyway.

I absolutely agree.  But I think that's beyond the scope of this specific bug -- we need a way for content to say "I want high quality" for webgl, and perhaps for others, which we don't have currently.  We should figure out how to do that in another bug; I'm happy to add an experimental mozHighQuality or something to the webgl context creation attribs.

> I don't understand what you mean when you say ChooseGLFormats doesn't have
> enough information. What more does it need?

Well, it has enough info, as long as we make it consistent like my patch does -- returning 565 formats for both textures and renderbuffers, instead of returning 8-bit for one and 565 for the other.
Comment 9 Vladimir Vukicevic [:vlad] [:vladv] 2012-06-21 17:21:34 PDT
(In reply to Jeff Gilbert [:jgilbert] from comment #7)
> I should add that the implementation can store textures however it likes, so
> long as it's functionally transparent. What isn't performance-transparent is
> that BGRA is more useful for readback, and indeed seems to reflect
> the way textures are stored internally on desktop GL for most drivers.
> 
> The only reason we care about BGR(A) is in case of readback. For other
> cases, it's irrelevant how texels are stored internally.

It can do whatever it wants, but if the internal format is actually BGR (which it often is, due to compat with Windows and D3D), then uploading texture data in BGR, as in ThebesTextureLayer/BasicTextureImage, can provide a perf boost.  Though when I looked at it last on the desktop, it ended up not mattering much, and it was simplest to just swap BGR in the shaders, which is what I ended up doing.  But, for this bug, I'm happy to take out those XXX comments :D
Comment 10 Jeff Gilbert [:jgilbert] 2012-06-21 19:12:01 PDT
Yeah, but we're not actually uploading here, so that part doesn't matter too much.
Comment 11 Vladimir Vukicevic [:vlad] [:vladv] 2012-06-22 07:33:11 PDT
Right.  So ignore the BGR issue, I'll take out the comments. :)

So, coming back though -- what do you want me to change here?  I don't think that the renderbuffer issue is something that we should fix (or even can, without reintroducing this exact same bug).  I'd maintain that the function should give similar RB and texture formats if it can.

ANGLE going into 16bpp mode would only happen on a windows desktop in 16bpp, which I think is going to be extremely rare, so I'd rather not try to handle things there either.
Comment 12 Jeff Gilbert [:jgilbert] 2012-06-22 13:28:29 PDT
I don't have confidence that out of our half a billion users, none will hit this ANGLE+16bpp issue. It will cause antialiased contexts to fail to blit from the MSAA FBO into the PBuffer backing, since the color formats won't match.

The PBuffer path in ProviderEGL supports asking for 565 (and it looks like 4444 should work too), but I'm not sure whether it'll actually work. Also, currently, we try to request 8-bit color first, regardless of the ContextFormat. Not only that, but if mCreationFormat is 565/4444 and we create an 888(8) PBuffer anyway, we don't update either it or mActualFormat.

The minimal fix for this bug is to give ANGLE an exemption (for now), such that it always takes the >16bpp path. A follow-up bug can go through and correct this logic in ProviderEGL, and remove the exemption.
Comment 13 Vladimir Vukicevic [:vlad] [:vladv] 2012-06-22 13:46:40 PDT
Sure, I can add an exemption for ANGLE.  Simple enough :)
Comment 14 Benoit Jacob [:bjacob] (mostly away) 2012-07-01 04:28:02 PDT
Comment on attachment 635458 [details] [diff] [review]
use 4444/565 when requested on ES2, and have WebGL request 4444/565

r- from jgilbert -> waiting for new version
Comment 15 Vladimir Vukicevic [:vlad] [:vladv] 2012-07-04 08:13:01 PDT
Created attachment 639104 [details] [diff] [review]
use 4444/565 when requested on ES2, and have WebGL request 4444/565

Updated; added XP_WIN ifndef around 565 check to avoid ANGLE.
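
Roughly, the exemption amounts to something like the following sketch; the helper and parameter names here are illustrative stand-ins for the actual GLContext.cpp condition:

    // Sketch of the ANGLE exemption: on Windows the GLES backend is ANGLE, so
    // the 16-bit path is skipped there and the MSAA FBO keeps matching the
    // 888(8) PBuffer. Helper and parameter names are not the real code.
    static bool ShouldUse16BitBacking(bool isGLES2, int requestedColorBits,
                                      bool hasRgb8Rgba8)
    {
    #ifdef XP_WIN
        // ANGLE: always take the >16bpp path.
        return false;
    #else
        return isGLES2 && (requestedColorBits <= 16 || !hasRgb8Rgba8);
    #endif
    }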
Comment 16 Jeff Gilbert [:jgilbert] 2012-07-05 17:23:34 PDT
Comment on attachment 639104 [details] [diff] [review]
use 4444/565 when requested on ES2, and have WebGL request 4444/565

Review of attachment 639104 [details] [diff] [review]:
-----------------------------------------------------------------

We should do BGRA (maybe BGR?) on desktop in a follow-up bug.

::: gfx/gl/GLContext.cpp
@@ +1301,5 @@
> +
> +        if (aCF.alpha) {
> +            // prefer BGRA8888 on ES2 hardware; if the extension is supported, it
> +            // should be faster.
> +            // XXX why don't we prefer BGRA on desktop?

BGRA is not a valid internal format on desktop. We could (and should) add a texColorInternal (or similar) to GLFormats, for which <texColorInternal,texColor> should be <RGB(A),BGR(A)> on GL and <BGR(A),BGR(A)> on GLES.

Our texColorType on desktop should probably be GL_UNSIGNED_INT_8888_REV for BGRA, also. Probably doesn't matter, though.

@@ +1315,2 @@
>          } else {
> +            // XXX why don't we prefer BGR on desktop?

We should probably do it for symmetry's sake. (see above for why we don't do it yet)

I somehow doubt it'll be any faster, though.
Comment 17 Vladimir Vukicevic [:vlad] [:vladv] 2012-07-06 06:21:14 PDT
https://hg.mozilla.org/integration/mozilla-inbound/rev/1f9c66d66df4

Nuked the BGR comments.  We should file a separate bug to check on desktop, though you're probably right that it won't matter much.
Comment 18 Vladimir Vukicevic [:vlad] [:vladv] 2012-07-06 08:09:23 PDT
Backed out -- gfxPlatform::getScreenDepth isn't implemented on some platforms, and asserts!  Grrrrr.  I am going to just change it to an NS_WARNING and have it return 0 by default.
Comment 19 Benoit Jacob [:bjacob] (mostly away) 2012-07-06 14:39:58 PDT
That would be my fault, sorry about that :)
Comment 20 Benoit Jacob [:bjacob] (mostly away) 2012-07-06 14:48:25 PDT
Comment on attachment 639104 [details] [diff] [review]
use 4444/565 when requested on ES2, and have WebGL request 4444/565

Review of attachment 639104 [details] [diff] [review]:
-----------------------------------------------------------------

Nothing to add to Jeff's comments + the gfxPlatform issue.
Comment 21 Vladimir Vukicevic [:vlad] [:vladv] 2012-07-09 10:14:40 PDT
Once again, https://hg.mozilla.org/integration/mozilla-inbound/rev/a516a86f854d
Comment 22 Ryan VanderMeulen [:RyanVM] 2012-07-09 18:03:51 PDT
https://hg.mozilla.org/mozilla-central/rev/a516a86f854d
Comment 23 Phil Ringnalda (:philor, back in August) 2012-07-09 22:27:17 PDT
Backed out in https://hg.mozilla.org/mozilla-central/rev/02b26fb307b4 - turns out that it takes about 12 hours to notice Android mochitest bustage, but eventually Ryan did notice that on the few times native mochitest-1 managed to run, and managed to start up, and managed not to crash, it was failing 422 WebGL mochitests every time, a la https://tbpl.mozilla.org/php/getParsedLog.php?id=13372355&tree=Firefox
Comment 24 Vladimir Vukicevic [:vlad] [:vladv] 2012-07-10 07:28:59 PDT
Ugh... it might be that the webgl tests are not safe for 5/6/5 contexts, especially the ones that check for exact pixel values.  This is going to be a problem :/
Comment 25 Benoit Jacob [:bjacob] (mostly away) 2012-07-10 14:40:42 PDT
Created attachment 640793 [details] [diff] [review]
allow quick testing of 4444 vs 565 vs 888 (not for landing)

We have a nontrivial battle to win here: the current WebGL spec actually requires 8 bits per channel, so we need it changed. To that effect, we need some solid data on a few different phones. This patch helps get that data by setting some env vars (check the code) to switch between 4444, 565, and 888.
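
The attachment's actual variable names are not quoted in this bug ("check the code"), so the sketch below uses a purely hypothetical env var just to illustrate the testing hook:

    #include "prenv.h"   // NSPR: PR_GetEnv
    #include <cstdlib>

    // Hypothetical override hook: the variable name below is invented for
    // illustration and is not the one in the attachment.
    static int ColorDepthOverride()
    {
        const char* v = PR_GetEnv("MOZ_WEBGL_DEPTH_OVERRIDE");
        if (!v || !*v)
            return 0;            // no override requested
        return std::atoi(v);     // e.g. 16 -> 4444/565, 32 -> 888(8)
    }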
Comment 26 Jeff Gilbert [:jgilbert] 2012-07-10 14:51:16 PDT
Yep, I hadn't realized it, but the spec currently requires 8-bits per color channel.
I agree that we should land this pref'd off (or similar), but be able to switch to it to demonstrate its advantages.
Comment 27 Benoit Jacob [:bjacob] (mostly away) 2012-07-10 14:51:35 PDT
On try, on top of Oleg's inbound push so this should have the fast-omtc patches as well as Vlad's patch here.
https://tbpl.mozilla.org/?tree=Try&rev=2c27ac9c6e60
Comment 28 Benoit Jacob [:bjacob] (mostly away) 2012-07-10 14:52:40 PDT
Landing this preffed off is probably a good short-term move, but we really want to get enough data to
 - either confirm it's worth it, and convince others it is
 - or show that it isn't worth it
Comment 29 Vladimir Vukicevic [:vlad] [:vladv] 2012-07-10 15:49:32 PDT
I'll recreate this patch and add two prefs for testing:

webgl.prefer_16bit -- basically what this patch does: use 565/4444 on 16-bit displays
webgl.default_no_alpha -- unless alpha is explicitly specified, create contexts with no alpha

That'll give us a way to test content out and see what perf is like.  I'd rather not do the env var route since that's a pain on Android, and makes it hard to let people do demos with the flags set if we decide we'd like to get there in the future.
Comment 30 Vladimir Vukicevic [:vlad] [:vladv] 2012-07-11 13:23:18 PDT
Created attachment 641178 [details] [diff] [review]
updated with prefs

Same patch, but does the 16-bit change behind a pref, and adds a new pref for default-alpha:

webgl.prefer-16bit
webgl.default-no-alpha

Both default to false, so should be no change from today.
Comment 31 Vladimir Vukicevic [:vlad] [:vladv] 2012-07-17 18:21:38 PDT
https://hg.mozilla.org/integration/mozilla-inbound/rev/41eb4e7896e5
Comment 32 Ed Morley [:emorley] 2012-07-18 05:53:45 PDT
https://hg.mozilla.org/mozilla-central/rev/41eb4e7896e5
Comment 33 Jeff Gilbert [:jgilbert] 2012-08-14 19:25:58 PDT
(In reply to Vladimir Vukicevic [:vlad] [:vladv] from comment #30)
> Created attachment 641178 [details] [diff] [review]
> updated with prefs
> 
> Same patch, but does the 16-bit change behind a pref, and adds a new pref
> for default-alpha:
> 
> webgl.prefer-16bit
> webgl.default-no-alpha
> 
> Both default to false, so should be no change from today.

It's actually "webgl.prefer-16bpp". ("bpp" not "bit")
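
For reference, a minimal sketch of reading the two prefs with their shipped names (only the pref strings are taken from this bug; the helper names and call sites are illustrative):

    #include "mozilla/Preferences.h"

    // Both prefs default to false, matching comment 30.
    static bool PreferLowBitDepthBacking()
    {
        // 565/4444 backing on 16-bit displays (note the "bpp" spelling).
        return mozilla::Preferences::GetBool("webgl.prefer-16bpp", false);
    }

    static bool DefaultToNoAlpha()
    {
        // If the page did not explicitly ask for alpha, create an alpha-less context.
        return mozilla::Preferences::GetBool("webgl.default-no-alpha", false);
    }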
