Use the same GL Texture for all gralloc buffer bindings on the compositor side

RESOLVED FIXED in mozilla24

Status


defect
RESOLVED FIXED

People

(Reporter: nical, Assigned: nical)

Tracking

Version: 23 Branch
Target Milestone: mozilla24
Hardware: ARM
OS: Gonk (Firefox OS)



Attachments

(2 attachments, 1 obsolete attachment)

At the moment we use a hack to unlock gralloc buffers on the compositor side: we bind them to a null EGL image. It does the job, but can be terribly slow. A better way to do it is to use a single texture for all the gralloc buffers on the compositor side, just like Android does.

I propose that CompositorOGL hold a reference to a GL texture, and that in the gralloc case TextureSourceOGL ask the compositor for this texture handle when executing BindTexture, rather than using its own GL texture.

This should be easy to implement since all the TextureHostOGL/TextureSourceOGL classes can hold a reference to the compositor.
Right now we only keep a reference to the GLContext when SetCompositor is called, but we can keep a ref to the compositor instead.

Doing that should improve performance on at least some hardware, because we would be doing the same thing as Android does.
That would be very nice, and it is the way that Android works, as explained here:
https://wiki.mozilla.org/Platform/GFX/Gralloc#SurfaceTexture
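A minimal sketch of the proposed ownership, with illustrative names (Compositor, GetSharedTexture, GrallocTextureSource) that are assumptions for this sketch rather than the actual Gecko API, and with the GL/EGL calls stubbed out:

```cpp
#include <cstdint>

typedef uint32_t GLuint;

// The compositor owns a single shared GL texture, created lazily.
struct Compositor {
  GLuint mSharedTexture = 0;
  GLuint GetSharedTexture() {
    if (!mSharedTexture) {
      mSharedTexture = 1; // stands in for glGenTextures(1, &mSharedTexture)
    }
    return mSharedTexture;
  }
};

// The gralloc texture source asks the compositor for the texture at
// bind time instead of owning one per gralloc buffer.
struct GrallocTextureSource {
  Compositor* mCompositor = nullptr;
  GLuint BindTexture() {
    // Real code would bind the compositor-owned texture and then attach
    // the gralloc buffer's EGLImage to it (fBindTexture +
    // fEGLImageTargetTexture2D, omitted here).
    return mCompositor->GetSharedTexture();
  }
};
```

Every gralloc-backed source then binds through the same handle, so unbinding a buffer no longer requires rebinding a null EGL image per texture.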
Assignee: nobody → nical.bugzilla
Comment on attachment 756043 [details] [diff] [review]
Use one global GLTexture per texture unit rather than one per GrallocTextureHost (for use when compositing only)

Review of attachment 756043 [details] [diff] [review]:
-----------------------------------------------------------------

Very, very happy to see this patch coming together. Looks like we just need another round.

::: gfx/layers/opengl/TextureHostOGL.cpp
@@ +30,5 @@
> +  MOZ_ASSERT(aUnit < 3);
> +  switch (aUnit) {
> +    case LOCAL_GL_TEXTURE0: return 0;
> +    case LOCAL_GL_TEXTURE1: return 1;
> +    case LOCAL_GL_TEXTURE2: return 2;

Hey, you could do instead:

  return aUnit - LOCAL_GL_TEXTURE0;

It's OK. Everyone does it. We do it already in our WebGL impl. The GL symbolic constants have the right values for this to work, and they're not going to change, ever.

So in fact, this is a standard GL idiom and doesn't really need a helper function.
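For illustration, the idiom works because the GL_TEXTUREi enum values are contiguous; the constants below are the standard values from the GL headers (LOCAL_GL_* in Gecko):

```cpp
#include <cstdint>

// Standard contiguous values from the GL headers.
const uint32_t LOCAL_GL_TEXTURE0 = 0x84C0;
const uint32_t LOCAL_GL_TEXTURE1 = 0x84C1;
const uint32_t LOCAL_GL_TEXTURE2 = 0x84C2;

// The reviewer's suggestion: a subtraction replaces the whole switch.
uint32_t TexUnitToIndex(uint32_t aUnit) {
  return aUnit - LOCAL_GL_TEXTURE0;
}
```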

@@ +794,5 @@
> +  if (!sGLTexture[TexUnitToIndex(aTextureUnit)]) {
> +    mGL->fGenTextures(1, &sGLTexture[TexUnitToIndex(aTextureUnit)]);
> +  }
> +
> +  mGL->fActiveTexture(LOCAL_GL_TEXTURE0);

I am unclear on why you want to do this on TEXTURE0 specifically. Also, I don't understand why it's OK for this function to modify the texture binding on TEXTURE0 even when aTextureUnit is different.

@@ +846,5 @@
> +GrallocTextureHostOGL::DeleteStaticTextures(gl::GLContext* aGL)
> +{
> +  if (!aGL) {
> +    if (sGLTexture[0] || sGLTexture[1] || sGLTexture[2]) {
> +      NS_WARNING("trying to release textures without a GLContext!");

Couldn't this be a hard assertion? It doesn't seem like something that is user-triggerable or something that would happen in a sane browser.

@@ +930,5 @@
> +
> +  if (!sGLTexture[0]) {
> +    mGL->fGenTextures(1, &sGLTexture[0]);
> +  }
> +  mGL->fActiveTexture(LOCAL_GL_TEXTURE0);

Is it OK for this function to modify the TEXTURE0 binding without restoring it? Just asking. We don't seem to have very consistent semantics about that kind of thing.

::: gfx/layers/opengl/TextureHostOGL.h
@@ +655,3 @@
>    EGLImage mEGLImage;
> +  // when drawing all
> +  static GLuint sGLTexture[3];

sGLTextures?

Also, you unconditionally declare sGLTexture, but only define it #ifdef MOZ_WIDGET_GONK. So when linking outside of B2G, I expect that you'll get undefined symbols.
Attachment #756043 - Flags: review?(bjacob)
I forgot one important nit: why does sGLTextures need to be a static-storage thing? It seems that by moving it to a suitable other class, you could let it be a regular member, which would be less hacky and more future-proof (what if future b2g phones have 2 screens with 2 compositors running at the same time?).
(In reply to Benoit Jacob [:bjacob] from comment #3)
> ::: gfx/layers/opengl/TextureHostOGL.cpp
> @@ +30,5 @@
> > +  MOZ_ASSERT(aUnit < 3);
> > +  switch (aUnit) {
> > +    case LOCAL_GL_TEXTURE0: return 0;
> > +    case LOCAL_GL_TEXTURE1: return 1;
> > +    case LOCAL_GL_TEXTURE2: return 2;
> 
> Hey, you could do instead:
> 
>   return aUnit - LOCAL_GL_TEXTURE0;
> 

Good to know, thx.

> 
> So in fact, this is a standard GL idiom and doesn't really need a helper
> function.
> 
> @@ +794,5 @@
> > +  if (!sGLTexture[TexUnitToIndex(aTextureUnit)]) {
> > +    mGL->fGenTextures(1, &sGLTexture[TexUnitToIndex(aTextureUnit)]);
> > +  }
> > +
> > +  mGL->fActiveTexture(LOCAL_GL_TEXTURE0);
> 
> I am unclear on why you want to do this on TEXTURE0 specifically. Also, I
> don't understand why it's OK for this function to modify the texture binding
> on TEXTURE0 even when aTextureUnit is different.

My bad: I merged two methods, one that knew about the texture unit and one that did not, so the code originating from the latter was using unit 0 by default.

> @@ +930,5 @@
> > +
> > +  if (!sGLTexture[0]) {
> > +    mGL->fGenTextures(1, &sGLTexture[0]);
> > +  }
> > +  mGL->fActiveTexture(LOCAL_GL_TEXTURE0);
> 
> Is it OK for this function to modify the TEXTURE0 binding without restoring
> it? Just asking. We don't seem to have very consistent semantics about that
> kind of thing.
I think it is okay: our compositor code doesn't make assumptions about the state of texture bindings, and this method is a debug facility that should not be called in the middle of, say, a DrawQuad call.

> Also, you unconditionally declare sGLTexture, but only define it #ifdef
> MOZ_WIDGET_GONK. So when linking outside of B2G, I expect that you'll get
> undefined symbols.

GrallocTextureHostOGL is only declared inside MOZ_WIDGET_GONK so it looks okay to me (or did I miss something?)

Anyway, I moved these textures into CompositorOGL. I was trying to avoid leaking gralloc-specific concepts into CompositorOGL, but the point about being future-proof wins.
Comment on attachment 756477 [details] [diff] [review]
Use one global GLTexture per texture unit rather than one per GrallocTextureHost (for use when compositing only)

Review of attachment 756477 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/layers/opengl/TextureHostOGL.cpp
@@ +907,5 @@
>  GrallocTextureHostOGL::GetAsSurface() {
> +  gl()->MakeCurrent();
> +
> +  GLuint tex = mCompositor->GetTemporaryTexture(LOCAL_GL_TEXTURE0);
> +  gl()->fActiveTexture(LOCAL_GL_TEXTURE0);

Can you explain the discrepancy between this function not restoring the active texture unit, and BindTexture restoring it?

::: gfx/layers/opengl/TextureHostOGL.h
@@ +647,3 @@
>    void DeleteTextures();
>  
> +  RefPtr<CompositorOGL> mCompositor;

This gave me pause, because I was afraid that that might create a cycle; but I verified, it doesn't --- the compositor only holds on to the GLContext.
Attachment #756477 - Flags: review?(bjacob) → review+
(In reply to Benoit Jacob [:bjacob] from comment #8)
> Comment on attachment 756477 [details] [diff] [review]
> Use one global GLTexture per texture unit rather than one per
> GrallocTextureHost (for use when compositing only)
> 
> Review of attachment 756477 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> ::: gfx/layers/opengl/TextureHostOGL.cpp
> @@ +907,5 @@
> >  GrallocTextureHostOGL::GetAsSurface() {
> > +  gl()->MakeCurrent();
> > +
> > +  GLuint tex = mCompositor->GetTemporaryTexture(LOCAL_GL_TEXTURE0);
> > +  gl()->fActiveTexture(LOCAL_GL_TEXTURE0);
> 
> Can you explain the discrepancy between this function not restoring the
> active texture unit, and BindTexture restoring it?

I am confused, they both finish with GL_TEXTURE0 as the active texture unit.
BindTexture doesn't really restore the texture unit; it just sets it back to the default value (unit 0) regardless of the previous state.
GetAsSurface just uses unit 0.

> 
> ::: gfx/layers/opengl/TextureHostOGL.h
> @@ +647,3 @@
> >    void DeleteTextures();
> >  
> > +  RefPtr<CompositorOGL> mCompositor;
> 
> This gave me pause, because I was afraid that that might create a cycle; but
> I verified, it doesn't --- the compositor only holds on to the GLContext.

Yes, the compositor is not meant to hold on to anything but what it needs to perform drawing commands. If we ever want Compositor to hold on to a layer or a WhateverHost, it would mean making serious changes to the design.
Whoops, I busted the build with a warning; this should fix it:
https://hg.mozilla.org/integration/mozilla-inbound/rev/5dda8a9a6c2d
...and had another bustage fix to submit:
https://hg.mozilla.org/integration/mozilla-inbound/rev/9ce19443a9dc
Wait, just a question --- why do we have up to 3 texture units here? Is this for planar textures?
(In reply to Benoit Jacob [:bjacob] from comment #14)
> Wait, just a question --- why do we have up to 3 texture units here? Is this
> for planar textures?

Because we use up to three textures at the same time. Even though in the case of gralloc we actually never use more than two, I felt like making it more general, because we may in the future go up to three or use this facility for something that is not gralloc. If a texture unit is never used we won't create the texture, so the only overhead is storing one extra GLuint per compositor.
Where does the number '3' come from? Is it just the max number of textures that any of our ShaderPrograms will try to sample from?
Yes, but actually no, I got it wrong: it is actually 4 if we have a Shmem YCbCr texture + a mask (but still at most 2 with gralloc), my bad.

I'll fix this in a followup for consistency.
(and add a comment since it's apparently not obvious)
Actually, I am more concerned now. What if, in the future, a new kind of path is added where we use 5 textures, and we forget to update this? I would like you to make this an nsTArray, or an nsAutoTArray, and have it dynamically upsized as needed.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Comment on attachment 759706 [details] [diff] [review]
use a resizable array for temporary textures

Review of attachment 759706 [details] [diff] [review]:
-----------------------------------------------------------------

r+ with nits.

::: gfx/layers/opengl/CompositorOGL.cpp
@@ +286,5 @@
> +  // lazily grow the array of temporary textures
> +  if (mTextures.Length() <= index) {
> +    int prevLength = mTextures.Length();
> +    mTextures.SetLength(index + 1);
> +    for(unsigned i = prevLength; i <= index; ++i) {

For unsigned array indices, please use size_t. It's what the language wants you to use for this, it's more explicit and it helps the compiler a tiny bit for array arithmetic on x86-64 (otherwise in some cases the compiler has to add an instruction to expand 32bit values to 64bit to use them as offsets).

@@ +306,3 @@
>      gl()->MakeCurrent();
> +    gl()->fDeleteTextures(mTextures.Length(), &mTextures[0]);
> +    mTextures.SetLength(0);

You probably want the SetLength call outside of the if (gl()).
Attachment #759706 - Flags: review?(bjacob) → review+
https://hg.mozilla.org/mozilla-central/rev/72ccd2cc8ff4
Status: REOPENED → RESOLVED
Closed: 6 years ago
Resolution: --- → FIXED
This doesn't work for different texture targets.
Correct! Please open a new bug and make sure that Nical sees it.
Blocks: 906671