Closed Bug 1077183 Opened 10 years ago Closed 10 years ago

Untangle effective vs. unsized internalformats in the WebGL implementation

Categories

(Core :: Graphics: CanvasWebGL, defect)

Other Branch
defect
Not set
normal

Tracking


RESOLVED FIXED
mozilla35

People

(Reporter: bjacob, Assigned: bjacob)

References

Details

Attachments

(1 file, 3 obsolete files)

Attached patch WIP unfinished patch (obsolete) — Splinter Review
      No description provided.
Attachment #8499235 - Attachment description: effective → WIP unfinished patch
Attached patch WIP unfinished patch (obsolete) — Splinter Review
Attachment #8499235 - Attachment is obsolete: true
Renaming the bug, because the final working form of the patch makes it clear that the problem was much bigger than just storing the wrong data in WebGLTexture::ImageInfo.

The OpenGL ES 3.0.3 spec clearly, explicitly distinguishes between two kinds of internal formats: "effective" vs "unsized" ones. The correspondence is given in Table 3.2.

It's interesting that the terminology "effective internal format" was absent from the ES 3.0.0 spec and so is a new addition in some point release between 3.0.0 and 3.0.3. Apparently we're not the only ones to have been confused until recently :-)

Anyway, this bug is the continuation of bug 1072680. There, we untangled "format" vs. "internalformat" values. Here, within "internalformat"s, we're untangling the two different things that are both called "internalformat": "effective" internalformats fully describe a texel format, including the type and size of each component (e.g. RGBA32F vs. RGBA16F vs. RGBA16I), while "unsized" internalformats only describe which components are present (e.g. RGBA vs. RGB).
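For reference, here is a rough sketch of a few rows of that kind of correspondence. This is an illustration of the concept only, not code from the patch, and it assumes Gecko's GLDefs.h/GLTypes.h for the LOCAL_GL_* constants and GLenum:

// Illustration only: a few rows of the (unsized internalformat, type) ->
// effective internalformat correspondence.
static GLenum
ExampleEffectiveFromUnsizedAndType(GLenum unsizedInternalFormat, GLenum type)
{
    if (unsizedInternalFormat == LOCAL_GL_RGBA) {
        if (type == LOCAL_GL_UNSIGNED_BYTE)          return LOCAL_GL_RGBA8;
        if (type == LOCAL_GL_UNSIGNED_SHORT_4_4_4_4) return LOCAL_GL_RGBA4;
        if (type == LOCAL_GL_UNSIGNED_SHORT_5_5_5_1) return LOCAL_GL_RGB5_A1;
    } else if (unsizedInternalFormat == LOCAL_GL_RGB) {
        if (type == LOCAL_GL_UNSIGNED_BYTE)          return LOCAL_GL_RGB8;
        if (type == LOCAL_GL_UNSIGNED_SHORT_5_6_5)   return LOCAL_GL_RGB565;
    }
    // ... remaining rows of the spec table ...
    return LOCAL_GL_NONE;
}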

It turns out that the latter confusion was just as big as the one fixed in bug 1072680 --- so here's another big patch to review!

This is needed for WebGL2 because WebGL2 exposes effective internalformats in its API: entry points that specify texture images now accept effective internalformats for their 'internalformat' parameters. What's more, texStorage2D and texStorage3D accept *only* effective internalformat values. All of that is a big departure from WebGL1, which only knew unsized internalformats (e.g. RGBA8 wasn't part of the WebGL1 API).
Blocks: 1071335
Depends on: 1072680
Summary: Use single effective internalformat fields instead of (unsized internalformat, type) pairs internally in the WebGL impl → Untangle effective vs. unsized internalformats in the WebGL implementation
See above comment for context.
Attachment #8499969 - Flags: review?(jgilbert)
Attachment #8499969 - Flags: review?(dglastonbury)
Attachment #8499241 - Attachment is obsolete: true
Note that I have carefully checked that this patch doesn't regress our conformance status on the 1.0.3 beta conformance suite at all. So when reviewing, if you wonder whether something might be a conformance problem, see if you can trust that the 1.0.3 tests are exhaustive enough!
Rebased, went through tryserver iterations, should be ready to land (note that prereq bugs have all landed by now), hopefully final try:

https://treeherder.mozilla.org/ui/#/jobs?repo=try&revision=a313f873aab4
Attachment #8499969 - Attachment is obsolete: true
Attachment #8499969 - Flags: review?(jgilbert)
Attachment #8499969 - Flags: review?(dglastonbury)
Attachment #8500135 - Flags: review?(jgilbert)
Attachment #8500135 - Flags: review?(dglastonbury)
This is green! Can land without further bitrotting when it's r+!
Comment on attachment 8500135 [details] [diff] [review]
Untangle effective vs. unsized internalformats

Review of attachment 8500135 [details] [diff] [review]:
-----------------------------------------------------------------

Can we talk about how to do this?

It seems really clear to me that the most fool-proof way to do this is to maintain a map of (GLenum effective format)->(format info), such that most checks become:
FormatInfo(format).IsFloat32(), or FormatInfo(format).HasDepth().
Why do we need a more complicated method for this? Using macros as arguments to injected #includes should be a huge red flag.

I think we should be very averse to trying to convert (effective format)->(format, type), expecting that it matches (format, type)->(effective format), which is well defined, but non-1-to-1.
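A minimal sketch of the kind of static map being suggested here; all names are hypothetical, this is not code from the patch or from Gecko, and it assumes GLDefs.h/GLTypes.h for the LOCAL_GL_* constants and GLenum:

#include <map>

struct EffectiveFormatInfo {
    GLenum unsizedFormat;   // e.g. LOCAL_GL_RGBA
    GLenum componentType;   // e.g. LOCAL_GL_FLOAT
    bool hasDepth;

    bool IsFloat32() const { return componentType == LOCAL_GL_FLOAT; }
    bool HasDepth() const { return hasDepth; }
};

static const EffectiveFormatInfo&
FormatInfo(GLenum effectiveFormat)
{
    // One explicit table of effective format -> info, queried by every check.
    static const std::map<GLenum, EffectiveFormatInfo> sInfoMap = {
        { LOCAL_GL_RGBA8,             { LOCAL_GL_RGBA,            LOCAL_GL_UNSIGNED_BYTE, false } },
        { LOCAL_GL_RGBA32F,           { LOCAL_GL_RGBA,            LOCAL_GL_FLOAT,         false } },
        { LOCAL_GL_DEPTH_COMPONENT24, { LOCAL_GL_DEPTH_COMPONENT, LOCAL_GL_UNSIGNED_INT,  true  } },
        // ...
    };
    return sInfoMap.at(effectiveFormat);
}

// Usage, as described above:
//   if (FormatInfo(format).IsFloat32()) { ... }
//   if (FormatInfo(format).HasDepth())  { ... }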

::: dom/canvas/WebGLContextUtils.cpp
@@ +124,5 @@
> +    if (internalformat == table_effectiveinternalformat) { \
> +        return table_type; \
> +    }
> +
> +#include "WebGLInternalFormatsTable.h"

I really don't like this.
Why can't we keep a static map of (effective format)->(info) and query that?
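For context, here is a rough outline of the include-the-table pattern this hunk is objecting to. The macro name and table rows below are hypothetical; only table_effectiveinternalformat, table_type, and the WebGLInternalFormatsTable.h name come from the quoted patch:

// WebGLInternalFormatsTable.h holds one macro invocation per format row,
// roughly of this shape:
//
//   HANDLE_INTERNAL_FORMAT(LOCAL_GL_RGBA8,  LOCAL_GL_RGBA, LOCAL_GL_UNSIGNED_BYTE)
//   HANDLE_INTERNAL_FORMAT(LOCAL_GL_RGB565, LOCAL_GL_RGB,  LOCAL_GL_UNSIGNED_SHORT_5_6_5)
//   ...
//
// Each user of the table defines the row macro to extract what it needs,
// includes the table so the macro expands once per row, then undefines it.
// 'internalformat' here is the enclosing function's parameter, as in the
// quoted hunk:

#define HANDLE_INTERNAL_FORMAT(table_effectiveinternalformat,    \
                               table_internalformat, table_type) \
    if (internalformat == table_effectiveinternalformat) {       \
        return table_type;                                       \
    }
#include "WebGLInternalFormatsTable.h"
#undef HANDLE_INTERNAL_FORMAT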

@@ +171,5 @@
> +
> +void
> +UnsizedInternalFormatAndTypeFromEffectiveInternalFormat(TexInternalFormat effectiveinternalformat,
> +                                                        TexInternalFormat* out_internalformat,
> +                                                        TexType* out_type)

I do not like this function.
Effective format does not map one-to-one onto internal format and type.
Why is this desired? Can we not satisfy the callee's use cases in a more well-defined way?

::: dom/canvas/WebGLStrongTypes.h
@@ +300,5 @@
>      STRONG_GLENUM_VALUE(LUMINANCE),
>      STRONG_GLENUM_VALUE(LUMINANCE_ALPHA),
> +    STRONG_GLENUM_VALUE(ALPHA8),
> +    STRONG_GLENUM_VALUE(LUMINANCE8),
> +    STRONG_GLENUM_VALUE(LUMINANCE8_ALPHA8),

I don't super love this. Strictly speaking, this should be a different TexEffectiveFormat.
Attachment #8500135 - Flags: review?(jgilbert) → review-
This is really subjective. I would be interested in Dan's review, because if I'm going to rewrite my patch to adapt to one person's taste, I'd rather wait until I know the taste of everyone involved.

Just replying on the factual parts:

> I think we should be very adverse to trying to convert (effective
> format)->(format, type), expecting that it matches (format,
> type)->(effective format), which is well defined, but non-1-to-1.

What do you mean? A look at the table shows readily that the mapping is 1-to-1 in both directions.

What's not 1-to-1 is the mapping of API params to effective internalformats, as the API supports two modes of specifying internalformats (sized or unsized+type). But that's not relevant here. Here we are explicitly referring to 'effective' and 'unsized' internalformats, removing this ambiguity.

But the first question is, does it matter? Do you see any code around here relying on that 1-to-1 assumption?

> @@ +171,5 @@
> > +
> > +void
> > +UnsizedInternalFormatAndTypeFromEffectiveInternalFormat(TexInternalFormat effectiveinternalformat,
> > +                                                        TexInternalFormat* out_internalformat,
> > +                                                        TexType* out_type)
> 
> I do not like this function.
> Effective format does not map one-to-one onto internal format and type.

Same as above - I don't understand what's not one-to-one here, and I don't understand why it would matter if it weren't.

> Why is this desired? Can we not satisfy the callee's usecases with in a more
> well-defined way?

A look at the patch will show how this is used. For instance, in many places we want to know whether a format is a float format or a half-float format, whether it is a depth format, or whether it has alpha.

For these use cases, we used to have lots of complex boolean expressions or switch statements that had to be correctly updated every time a new matching format was added.

Now they just use that function. For instance, we now determine whether a format is float by getting its GL type and checking that. We now determine whether a format has alpha by getting its unsized version and checking that, which leaves 10x fewer things to check against (e.g. RGBA alone covers RGBA8, RGBA8I, RGBA16I, etc.).
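A sketch of that pattern follows. The helper names here are hypothetical; the decomposition function is the one from the quoted patch, and it is assumed that the strong types from WebGLStrongTypes.h construct from and compare with raw LOCAL_GL_* enums:

static bool
IsFloatFormat(TexInternalFormat effectiveFormat)
{
    TexInternalFormat unsizedFormat = LOCAL_GL_NONE;
    TexType type = LOCAL_GL_NONE;
    UnsizedInternalFormatAndTypeFromEffectiveInternalFormat(effectiveFormat,
                                                            &unsizedFormat,
                                                            &type);
    // Checking the GL type covers RGBA32F, RGB32F, R32F, ... in one test.
    return type == LOCAL_GL_FLOAT;
}

static bool
FormatHasAlpha(TexInternalFormat effectiveFormat)
{
    TexInternalFormat unsizedFormat = LOCAL_GL_NONE;
    TexType type = LOCAL_GL_NONE;
    UnsizedInternalFormatAndTypeFromEffectiveInternalFormat(effectiveFormat,
                                                            &unsizedFormat,
                                                            &type);
    // One check per unsized format instead of one per effective format:
    // RGBA alone covers RGBA8, RGBA8I, RGBA16I, RGBA32F, ...
    return unsizedFormat == LOCAL_GL_RGBA ||
           unsizedFormat == LOCAL_GL_LUMINANCE_ALPHA ||
           unsizedFormat == LOCAL_GL_ALPHA;
}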

I think that alone is a huge improvement that justifies keeping this. The API is moving away from (unsized format, type) pairs, because they don't belong there, but internally this split is very useful, as evidenced by our own code.

Moreover, there is the function computing DriverFormats, which explicitly wants to do this mapping back to the old (format, type) pairs.

> 
> ::: dom/canvas/WebGLStrongTypes.h
> @@ +300,5 @@
> >      STRONG_GLENUM_VALUE(LUMINANCE),
> >      STRONG_GLENUM_VALUE(LUMINANCE_ALPHA),
> > +    STRONG_GLENUM_VALUE(ALPHA8),
> > +    STRONG_GLENUM_VALUE(LUMINANCE8),
> > +    STRONG_GLENUM_VALUE(LUMINANCE8_ALPHA8),
> 
> I don't super love this. Strictly speaking, this should be a different
> TexEffectiveFormat.

I agree, but this is good enough, and it already goes most of the way by making the logic clearly distinguish effective vs. unsized internalformats, so it makes it much easier for someone else to write a follow-up patch introducing an EffectiveInternalFormat type.

Note that it's not trivial to get right. Technically there are 3 different internalformat types:
 * effective, i.e. sized
 * unsized
 * API internalformat (can be either of the two above kinds)
with obviously nasty duplication between them (see the sketch below).

So this is work for a follow-up bug if needed.
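For illustration, here is a purely hypothetical sketch of what three separate types could look like (none of this is in the patch; it assumes GLDefs.h/GLTypes.h for the LOCAL_GL_* constants and GLenum), showing where the duplication comes from:

enum class TexUnsizedInternalFormat : GLenum {
    RGB             = LOCAL_GL_RGB,
    RGBA            = LOCAL_GL_RGBA,
    LUMINANCE_ALPHA = LOCAL_GL_LUMINANCE_ALPHA,
    // ...
};

enum class TexEffectiveInternalFormat : GLenum {
    RGB8              = LOCAL_GL_RGB8,
    RGBA8             = LOCAL_GL_RGBA8,
    LUMINANCE8_ALPHA8 = LOCAL_GL_LUMINANCE8_ALPHA8,
    // ...
};

// The type for API 'internalformat' parameters must accept both kinds,
// so its value list repeats the two lists above:
enum class TexApiInternalFormat : GLenum {
    RGB   = LOCAL_GL_RGB,
    RGBA  = LOCAL_GL_RGBA,
    RGB8  = LOCAL_GL_RGB8,
    RGBA8 = LOCAL_GL_RGBA8,
    // ...
};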
(post-vidyo)
Ok, since this is blocking a bunch of stuff, and does centralize a bunch of this stuff, let's review it. We can propose improvements later.
Comment on attachment 8500135 [details] [diff] [review]
Untangle effective vs. unsized internalformats

Review of attachment 8500135 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/canvas/WebGLContextUtils.cpp
@@ +124,5 @@
> +    if (internalformat == table_effectiveinternalformat) { \
> +        return table_type; \
> +    }
> +
> +#include "WebGLInternalFormatsTable.h"

We can deal with this for now.

@@ +171,5 @@
> +
> +void
> +UnsizedInternalFormatAndTypeFromEffectiveInternalFormat(TexInternalFormat effectiveinternalformat,
> +                                                        TexInternalFormat* out_internalformat,
> +                                                        TexType* out_type)

I'll take a look at this afterwards, in my own time.

::: dom/canvas/WebGLFramebuffer.h
@@ +52,5 @@
>          bool IsDefined() const;
>  
>          bool IsDeleteRequested() const;
>  
> +        TexInternalFormat EffectiveInternalFormat() const;

Right, so it's not so much a TexInternalFormat as just an InternalFormat, since it also applies to renderbuffers.

::: dom/canvas/WebGLStrongTypes.h
@@ +300,5 @@
>      STRONG_GLENUM_VALUE(LUMINANCE),
>      STRONG_GLENUM_VALUE(LUMINANCE_ALPHA),
> +    STRONG_GLENUM_VALUE(ALPHA8),
> +    STRONG_GLENUM_VALUE(LUMINANCE8),
> +    STRONG_GLENUM_VALUE(LUMINANCE8_ALPHA8),

This is close enough for now.
Attachment #8500135 - Flags: review- → review+
Comment on attachment 8500135 [details] [diff] [review]
Untangle effective vs. unsized internalformats

(comments still welcome!)
Attachment #8500135 - Flags: review?(dglastonbury)
Assignee: nobody → bjacob
https://hg.mozilla.org/mozilla-central/rev/97068dca449e
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla35
Depends on: 1156585