
implement gralloc and IPC backend for gl streaming

RESOLVED FIXED in mozilla24

Status


Core
Graphics
RESOLVED FIXED

People

(Reporter: vlad, Assigned: vlad)

Tracking

(Blocks: 2 bugs)

Version: unspecified
Target Milestone: mozilla24
Platform: x86_64
OS: Windows 8
Points: ---
Dependency tree / graph

Firefox Tracking Flags

(Not tracked)

Details

Attachments

(4 attachments, 6 obsolete attachments)

34.92 KB, patch
jrmuizel
: review+
jgilbert
: review+
nical
: review+
8.56 KB, patch
nical
: review+
4.13 KB, patch
nical
: review+
4.13 KB, text/plain
Now that we have GL streaming, we need to finish up and land the IPC/gralloc back end for it.   We have a working patch in progress which I will attach shortly.
Created attachment 716541 [details] [diff] [review]
existing patch
Blocks: 835165
One major problem this has is that the R and B channels are swapped.  We're using the BGR shader when we should be using the RGB shader.
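For illustration only (this is not code from the patch), here is a minimal sketch of why reading RGBA-ordered bytes with a BGRA interpretation exchanges red and blue — the symptom of picking the wrong shader:

```cpp
#include <cstdint>

// Interpret the same four bytes under two channel orders. This is a
// self-contained illustration of the symptom, not code from the patch.
struct Color { uint8_t r, g, b, a; };

// Bytes laid out as R,G,B,A read with the matching order.
static Color ReadRGBA(const uint8_t* p) { return {p[0], p[1], p[2], p[3]}; }
// The same bytes read as if they were B,G,R,A: red and blue trade places.
static Color ReadBGRA(const uint8_t* p) { return {p[2], p[1], p[0], p[3]}; }
```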
Jeff (G), can you take a look at this today, especially the RGB shader issue?  We'll need to get that resolved so that we can make builds for MWC.
Assignee: nobody → jgilbert
Blocks: 843685
(In reply to Vladimir Vukicevic [:vlad] [:vladv] from comment #3)
> Jeff (G), can you take a look at this today, especially the RGB shader
> issue?  We'll need to get that resolved so that we can make builds for MWC.

Yes. I did most of the code for this yesterday, but couldn't test it before end of day.
Created attachment 716992 [details] [diff] [review]
use RGB (not BGR) programs for GL-generated gralloc buffers

I'm not really clear on what gUseBackingSurface is, so I might have added more checks than we need. This flips us back over to having the correct colors, though.
Attachment #716992 - Flags: feedback?(vladimir)
I'm not sure what's up with the patch set I have, but I don't seem to be getting frame updates properly with or without this patch. I'll try reapplying these things.
Hm, that doesn't seem to apply on top of the other patch that's here -- in particular, your follow-on has a SharedSurface_Gralloc constructor that takes a "buffer" parameter as the second-to-last that's not in the previous patch.
Blocks: 844166
Created attachment 740892 [details] [diff] [review]
Part 1, use Gralloc buffers for WebGL rendering

(This is on top of m-c github git commit 306d5023693cef5348d5a4869e904c824a7c6db0; the Apr 16 m-c merge to birch.)

Part 1 reworks and simplifies the original patch in this bug.  It removes all synchronization -- the assumption is that we'll be able to rely on genlock or other synchronization mechanisms to get implicit synchronization.  (Currently, this seems to be broken on all b2g devices except for hamachi/buri; on the others, any fast WebGL rendering will cause severe flicker.  This is ok!  That's being worked on at the drive/base layer.)

It adds a SharedSurfaceGralloc and uses the existing TripleBuffer stream types.

After this patch, WebGL works, but the red and blue channels are swapped which is fixed in part 2.
Assignee: jgilbert → vladimir
Attachment #716541 - Attachment is obsolete: true
Attachment #740892 - Flags: review?(jmuizelaar)
Attachment #740892 - Flags: review?(jgilbert)
Created attachment 740895 [details] [diff] [review]
Part 2, add explicit usage/format graphicbuffer allocators, and a swapRB flag

(This is on top of Part 1.)

This patch attempts to bring some sanity to our graphicbuffer & thebes formats.  Previously, we were allocating an RGBA_8888 or RGBX_8888 GraphicBuffer, and telling thebes that it was ImageFormatARGB32, which actually has the RB bytes swapped compared to the android pixel format.  Then on the host side, we were unconditionally assuming that RGBA_8888 was actually B8G8R8A8 and using a BGRA pixel shader.  This caused WebGL to have inverted colors (and likely would have been a problem for SkiaGL as well), because when rendering with GL, it uses the actual format of the GraphicBuffer.

Here we add an explicit "swapRB" flag to SurfaceDescriptorGralloc.  It's taken into account on the host side when reading from the surface (or rather, when figuring out what our 2D format is that corresponds to the descriptor's format).  It's set by default unless a descriptor is created for HW rendering only, in which case it's left off.
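A rough sketch of the host-side mapping this describes; the enum and function names are hypothetical stand-ins, not Gecko's actual types:

```cpp
#include <string>

// Hypothetical host-side mapping: the 2D format depends on both the gralloc
// pixel format and the isRBSwapped flag. Names are illustrative, not Gecko's.
enum class GrallocFormat { RGBA_8888, RGBX_8888 };

static std::string TwoDFormatFor(GrallocFormat format, bool isRBSwapped) {
    switch (format) {
    case GrallocFormat::RGBA_8888:
        // Content-rendered buffers carry BGRA-ordered bytes, so the flag is
        // set; GL-rendered (HW-only) buffers are true RGBA and leave it off.
        return isRBSwapped ? "B8G8R8A8" : "R8G8B8A8";
    case GrallocFormat::RGBX_8888:
        return isRBSwapped ? "B8G8R8X8" : "R8G8B8X8";
    }
    return "unknown";
}
```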
Attachment #716992 - Attachment is obsolete: true
Attachment #716992 - Flags: feedback?(vladimir)
Attachment #740895 - Flags: review?(nical.bugzilla)
Attachment #740895 - Flags: review?(jmuizelaar)
I'm going to look at doing some refactoring on top of Part 2 as a Part 3 -- we have way too many slightly-similar gralloc-allocation functions, both on PLayers and on PImageBridge.  We should be able to get to one function and clean up a bunch of code.
The B2G builds for unagi/otoro/inari use "mozilla-b2g / hardware_qcom_display", which disables genlock. That might be contributing to the flicker.

https://github.com/mozilla-b2g/hardware_qcom_display/commit/30a429c4cec89547edaf22338b1d132886ea72d6

https://github.com/mozilla-b2g/hardware_qcom_display
Sadly no, there's more to it than just that :/  I've had a genlock-enabled unagi build and it shows the same issues.
Comment on attachment 740892 [details] [diff] [review]
Part 1, use Gralloc buffers for WebGL rendering

Review of attachment 740892 [details] [diff] [review]:
-----------------------------------------------------------------

A bunch of questions, mostly.

::: gfx/gl/SharedSurfaceGralloc.cpp
@@ +139,5 @@
> +void
> +SharedSurface_Gralloc::Fence()
> +{
> +    // no need to fence -- we're relying on genlock
> +    // write locks/read locks

Any chance we could get a (cached) preference here to glFinish? It would make it trivial to see if an implementation is not giving us the locking we need, if we can flip the pref and the artifacts go away.
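The cached-pref idea could look roughly like this; the lookup function here is a stand-in for a real preference read, and the names are hypothetical:

```cpp
// Counts how often the underlying pref is read; the hot Fence() path should
// pay the lookup cost only once. LookupForceFinishPref is a stand-in for a
// real preference read, not Gecko's Preferences API.
static int sLookupCount = 0;

static bool LookupForceFinishPref() {
    ++sLookupCount;
    return false;
}

static bool ShouldForceFinishOnFence() {
    static const bool sCached = LookupForceFinishPref();  // evaluated once
    return sCached;
}
```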

::: gfx/gl/SharedSurfaceGralloc.h
@@ +37,5 @@
> +
> +protected:
> +    GLLibraryEGL* const mEGL;
> +    layers::ShadowLayerForwarder* const mSLF;
> +    layers::SurfaceDescriptorGralloc mDesc;

Does this patch work? It doesn't look like it's holding onto an android::sp<GraphicBuffer>, like I believe we found we needed to do previously.

::: gfx/layers/basic/BasicCanvasLayer.cpp
@@ +250,1 @@
>                                            screen->PreserveBuffer());

Switching to 2-space misaligned this argument.

@@ +261,5 @@
> +#ifdef MOZ_B2G
> +            // [Basic/OGL Layers, OOPC]
> +            factory = new SurfaceFactory_Gralloc(mGLContext, screen->Caps(), BasicManager());
> +#else
> +            NS_NOTREACHED("isCrossProcess but not on B2G!");

Prefer MOZ_NOT_REACHED.

::: gfx/layers/client/CanvasClient.cpp
@@ +11,5 @@
>  #include "nsXULAppAPI.h"
>  #include "GLContext.h"
>  #include "SurfaceStream.h"
> +#ifdef MOZ_B2G
> +#include "SharedSurfaceGralloc.h"

I don't think we need to guard against including this in non-b2g builds.

@@ +97,5 @@
> +  if (isCrossProcess) {
> +    // swap staging -> consumer so we can send it to the compositor
> +    SharedSurface* surf = stream->SwapConsumer();
> +    if (!surf) {
> +      printf_stderr("surf is null post-SwapConsumer!\n");

This should be expected for the first frame (or two), so we probably don't even want to spew about it in debug builds.

@@ +107,5 @@
> +      mTextureClient->SetDescriptor(grallocSurf->GetDescriptor());
> +    } else {
> +      // XXX we can't do anything here, without doing readback.  All
> +      // the other types depend on passing a SurfaceStreamDescriptor,
> +      // e.g.  a raw SurfaceStream pointer.

Does this fall back to readback? That's what we should do in this case. The first frame (or so) will always be a Basic SharedSurface (readback), so we definitely need to handle this.

::: gfx/layers/ipc/ShadowLayerUtilsGralloc.cpp
@@ +205,5 @@
>      new GraphicBuffer(aSize.width, aSize.height, format,
>                        GraphicBuffer::USAGE_SW_READ_OFTEN |
>                        GraphicBuffer::USAGE_SW_WRITE_OFTEN |
> +                      GraphicBuffer::USAGE_HW_TEXTURE |
> +                      GraphicBuffer::USAGE_HW_RENDER));

Would it not be better to specifically only ask for what we need, rather than asking for everything?
For WebGL gralloc buffers, we should only need HW_TEXTURE and HW_RENDER.
Attachment #740892 - Flags: review?(jgilbert) → review-
FWIW, I think 'isRBSwapped' would be better than the command-sounding 'swapRB'.
(In reply to Jeff Gilbert [:jgilbert] from comment #13)
> A bunch of questions, mostly.
> 
> ::: gfx/gl/SharedSurfaceGralloc.cpp
> @@ +139,5 @@
> > +void
> > +SharedSurface_Gralloc::Fence()
> > +{
> > +    // no need to fence -- we're relying on genlock
> > +    // write locks/read locks
> 
> Any chance we could get a (cached) preference here to glFinish? It would
> make it trivial to see if an implementation is not giving us the locking we
> need, if we can flip the pref and the artifacts go away.

Sadly, it doesn't -- the only thing that seems to reliably truly force the proper kind of sync is calling glReadPixels.  I guess I could do that and read just a 1x1 pixel maybe; I'll give that a try.

> ::: gfx/gl/SharedSurfaceGralloc.h
> @@ +37,5 @@
> > +
> > +protected:
> > +    GLLibraryEGL* const mEGL;
> > +    layers::ShadowLayerForwarder* const mSLF;
> > +    layers::SurfaceDescriptorGralloc mDesc;
> 
> Does this patch work? It doesn't look like it's holding onto an
> android::sp<GraphicBuffer>, like I believe we found we needed to do
> previously.

The patch works, but that's a good question.  The SurfaceDescriptor holds on to just a gralloc actor, and I believe the GraphicBuffer itself is held alive by the EGLImage/GL Texture or maybe the GrallocBufferActor.  The actor becomes a real GraphicBuffer on the b2g (compositor) side again, via a GrallocBufferActor -- which has the sp<GraphicBuffer> in it. (Actually, I believe it may be a GrallocBufferActor on the child side too, in which case that's where the sp<> lives.)


> @@ +107,5 @@
> > +      mTextureClient->SetDescriptor(grallocSurf->GetDescriptor());
> > +    } else {
> > +      // XXX we can't do anything here, without doing readback.  All
> > +      // the other types depend on passing a SurfaceStreamDescriptor,
> > +      // e.g.  a raw SurfaceStream pointer.
> 
> Does this fall back to readback? That's what we should do in this case. The
> first frame (or so) will always be a Basic SharedSurface (readback), so we
> definitely need to handle this.

This one doesn't, because we just don't pass anything over.  I'm not sure if there's an easy way to make it do readback here.

> ::: gfx/layers/ipc/ShadowLayerUtilsGralloc.cpp
> @@ +205,5 @@
> >      new GraphicBuffer(aSize.width, aSize.height, format,
> >                        GraphicBuffer::USAGE_SW_READ_OFTEN |
> >                        GraphicBuffer::USAGE_SW_WRITE_OFTEN |
> > +                      GraphicBuffer::USAGE_HW_TEXTURE |
> > +                      GraphicBuffer::USAGE_HW_RENDER));
> 
> Would it not be better to specifically only ask for what we need, rather
> than asking for everything?
> For WebGL gralloc buffers, we should only need HW_TEXTURE and HW_RENDER.

Yes, that's done in the part 2 patch here, which lets us pass that info down.  Technically this isn't needed; passing HW_RENDER doesn't seem to change any behaviour, but the SW_* flags definitely do; we'll get a speedup by not using them.

(In reply to Jeff Gilbert [:jgilbert] from comment #14)
> FWIW, I think 'isRBSwapped' would be better than the command-sounding
> 'swapRB'.

It is actually isRBSwapped; I just put the wrong name in the patch description.
Comment on attachment 740895 [details] [diff] [review]
Part 2, add explicit usage/format graphicbuffer allocators, and a swapRB flag

Review of attachment 740895 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/layers/ipc/ShadowLayerUtilsGralloc.cpp
@@ +230,5 @@
> +  uint32_t format = aFormat;
> +  uint32_t usage = aUsage;
> +
> +  if (format == 0 && usage == 0) {
> +    // If not specified, assume software rendering with hw 

nit: trailing space

@@ +236,5 @@
> +    format = PixelFormatForContentType(aContent);
> +    usage = GraphicBuffer::USAGE_SW_READ_OFTEN |
> +            GraphicBuffer::USAGE_SW_WRITE_OFTEN |
> +            GraphicBuffer::USAGE_HW_TEXTURE;
> +  } else if (!format || !usage) {

Nit: a few lines above you use the "if (var == 0)" syntax and here "if (!var)"; it's better to stay consistent.
Attachment #740895 - Flags: review?(nical.bugzilla) → review+
The "use Gralloc buffers for WebGL rendering" patch works with SkiaGL enabled.

But I found a content-process crash when the app resumed from the background.

During the resume stage, the TextureHost received the "send_delete" signal and tried to free graphic buffers still in use by the content app, which then crashed the content process.

I attached the detailed log at http://www.pastebin.mozilla.org/2358957.

I will update later if I find a good solution.
Yeah, that crash is known -- I was waiting until bug 862324 landed because it makes things a bit clearer there.
Attachment #740892 - Flags: review?(bjacob)
Attachment #740895 - Flags: review?(bjacob)
Comment on attachment 740895 [details] [diff] [review]
Part 2, add explicit usage/format graphicbuffer allocators, and a swapRB flag

Review of attachment 740895 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/layers/ipc/PLayers.ipdl
@@ +59,3 @@
>     */
> +  sync PGrallocBuffer(gfxIntSize size, gfxContentType content,
> +                      uint32_t format, uint32_t usage)

Let's just drop content completely. The callers can figure out what content type they want. I think it also makes sense for callers to be explicit about their usage flags.
Attachment #740895 - Flags: review?(jmuizelaar) → review-
Comment on attachment 740892 [details] [diff] [review]
Part 1, use Gralloc buffers for WebGL rendering

Review of attachment 740892 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/gl/SharedSurfaceGralloc.h
@@ +37,5 @@
> +
> +protected:
> +    GLLibraryEGL* const mEGL;
> +    layers::ShadowLayerForwarder* const mSLF;
> +    layers::SurfaceDescriptorGralloc mDesc;

This is indeed super evil as we found out while debugging bug 862324.

A SurfaceDescriptorGralloc just has a raw pointer to a PGrallocBufferParent and a raw pointer to a PGrallocBufferChild.

These PGrallocBuffer* are managed by IPDL-generated code and can go away at *any* time an IPC message is received. In particular, if a channel error occurs while receiving an IPC message (which happens typically on a content process crash), then the PGrallocBuffer* objects are destroyed. At this point, any SurfaceDescriptorGralloc in the B2G main process becomes just a pair of dangling pointers (erm, that sounded funny).

So the consequence of this is that the lifespan of a SurfaceDescriptorGralloc, or really of a SurfaceDescriptor in general, should ALWAYS be limited to the handling of a single IPC message. Do not store SurfaceDescriptors long-term, where long-term means over a period of time during which another IPC message could be handled.
Attachment #740892 - Flags: review?(bjacob) → review-
Note: the way to hold on to a gralloc buffer is android::sp<android::GraphicBuffer>
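The lifetime rule above can be illustrated with std::shared_ptr as a stand-in for android::sp<GraphicBuffer>; the types here are illustrative only:

```cpp
#include <memory>

// Stand-in for android::GraphicBuffer; the real type is refcounted via sp<>.
struct Buffer { int id; };

// A raw pointer (what SurfaceDescriptorGralloc effectively holds) dangles as
// soon as the owning side lets go; a strong reference (like sp<>) does not.
static bool StrongRefSurvivesOwnerReset() {
    std::shared_ptr<Buffer> owner = std::make_shared<Buffer>(Buffer{1});
    Buffer* rawView = owner.get();              // a descriptor-style raw pointer
    std::shared_ptr<Buffer> strongRef = owner;  // an sp<>-style strong ref
    (void)rawView;  // would dangle after the next line; never dereference it
    owner.reset();  // e.g. the child side goes away on a channel error
    return strongRef != nullptr && strongRef->id == 1;
}
```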
Comment on attachment 740892 [details] [diff] [review]
Part 1, use Gralloc buffers for WebGL rendering

Review of attachment 740892 [details] [diff] [review]:
-----------------------------------------------------------------

I don't have any additional objections.
Attachment #740892 - Flags: review?(jmuizelaar) → review+
Attachment #740895 - Flags: review?(bjacob)
(In reply to Jeff Muizelaar [:jrmuizel] from comment #19)
> ::: gfx/layers/ipc/PLayers.ipdl
> @@ +59,3 @@
> >     */
> > +  sync PGrallocBuffer(gfxIntSize size, gfxContentType content,
> > +                      uint32_t format, uint32_t usage)
> 
> Let's just drop content completely. The callers can figure out what content
> type they want. I think it also makes sense for callers to be explicit about
> their usage flags.

Hmm, can they decide between 16bpp and 32bpp properly on the content side?
(In reply to Vladimir Vukicevic [:vlad] [:vladv] from comment #18)
> Yeah, that crash is known -- I was waiting until bug 862324 landed because
> it makes things a bit clearer there.

Forgot to mention that the crash was generated with the bug 862324 patch applied.
As Benoit Jacob said, I will check the life cycle of the graphic buffer usage.
Created attachment 743561 [details] [diff] [review]
renew surfacestream

I found that the TextureClient (CompositableClient) of CanvasClientWebGL was re-created when the content app switched to the foreground.

If we limit the life cycle of the SurfaceStream to match that of the TextureClient, the content app crash issue from comment 17 could be solved.

Attached a patch for reference.
(In reply to Vladimir Vukicevic [:vlad] [:vladv] from comment #23)
> (In reply to Jeff Muizelaar [:jrmuizel] from comment #19)
> > ::: gfx/layers/ipc/PLayers.ipdl
> > @@ +59,3 @@
> > >     */
> > > +  sync PGrallocBuffer(gfxIntSize size, gfxContentType content,
> > > +                      uint32_t format, uint32_t usage)
> > 
> > Let's just drop content completely. The callers can figure out what content
> > type they want. I think it also makes sense for callers to be explicit about
> > their usage flags.
> 
> Hmm, can they decide between 16bpp and 32bpp properly on the content side?

I think that's a reasonable requirement. If we need to expose some additional information to make that possible we should do that.
(In reply to pchang from comment #25)
> Created attachment 743561 [details] [diff] [review]
> renew surfacestream
> 
> I found that the TextureClient(CompositableClient) of CanvasClientWebGL was
> re-created when content app switched to foreground.
> 
> If we limited the life cycle of SurfaceStream as the same as TextureClient,
> the content app crashed issue on comment 17 could be solved.
> 
> Attached patch for reference.

Ah ha, looks good; thanks Peter.  I'll update the patches later today and integrate Jeff and others' comments.
Wait, doesn't look good -- with that addition, we're recreating the gralloc buffer every frame.  That's not good for performance :)
During my testing, I didn't see the buffer recreated for each frame; see the log at http://www.pastebin.mozilla.org/2364551.

The gralloc buffers are re-created only when the app switches to the foreground.
Another thought is to free the SurfaceStream in the ClearCachedResources call. Resources could be freed when the app switches to the background, which is good for B2G with its limited memory.

But I'm not sure whether that is the correct behavior.
Created attachment 746881 [details] [diff] [review]
renew surfacestream v2

Attached a cleaned-up version.

https://mxr.mozilla.org/mozilla-central/source/gfx/layers/client/ClientCanvasLayer.cpp#22
In the source code above, the CanvasClient (CompositableClient) is re-initialized for every IPC transaction,
and the SurfaceDescriptorGralloc held by the old SurfaceStream becomes invalid.

Therefore, this patch renews the SurfaceStream whenever a new IPC transaction happens, to solve the PGrallocBuffer* crash.

For example, when the app goes to the foreground, it only recreates mConsumer (which was held by B2G in the last IPC transaction) inside GLScreenBuffer::Morph().
It won't create a new gralloc buffer during frame updates.
Attachment #743561 - Attachment is obsolete: true
Flags: needinfo?(jgilbert)
Comment on attachment 746881 [details] [diff] [review]
renew surfacestream v2

This patch couldn't fix the crash in Cut the Rope, because that app creates multiple canvas layers that share the same GLContext (and thus the same SurfaceStream), and handling the shared SurfaceStream still crashed.

Created bug 869347 to fix the canvas crash issues.
Attachment #746881 - Attachment is obsolete: true
Created attachment 750512 [details] [diff] [review]
Updated combined patch

This, amazingly, seems to work well.

It incorporates the above feedback, as well as pchang's patch from bug 844166 (to not have TextureHost nuke the buffer from under us when it goes away).

nical, I know you're changing some of the texture host/client code soon; hopefully you can take a look at what we have here and keep it in mind for the refactor.
Attachment #750512 - Flags: review?(nical.bugzilla)
Attachment #750512 - Flags: review?(jmuizelaar)
Attachment #750512 - Flags: review?(jgilbert)
Er, pchang's patch from bug 869347 (which is the same bug as 844166)
Comment on attachment 750512 [details] [diff] [review]
Updated combined patch

Review of attachment 750512 [details] [diff] [review]:
-----------------------------------------------------------------

Aside from the formatting I would love changed, and the question which I'm pretty sure I know the answer to, it looks good.

That you say it seems to work is the most surprising part, but I suppose we had to get it right eventually.

::: gfx/gl/SharedSurfaceGralloc.h
@@ +82,5 @@
> +class SurfaceFactory_Gralloc
> +    : public SurfaceFactory_GL
> +{
> +protected:
> +    layers::ISurfaceAllocator* const mAllocator;

A bare pointer here means that the allocator will always outlast the factory, right?

::: gfx/layers/client/CanvasClient.cpp
@@ +112,4 @@
>  
> +    if (surf->Type() == SharedSurfaceType::Gralloc) {
> +      SharedSurface_Gralloc* grallocSurf = SharedSurface_Gralloc::Cast(surf);
> +      mTextureClient->SetDescriptor(grallocSurf->GetDescriptor());

You should be able to condense the #ifdef down to these two lines. If you want more assurance that isCrossProcess is never true on non-b2g, can we use asserts instead of #ifdef-ing off big blocks of code, and introducing weird structuring like "} else \n #endif \n {"?

@@ +119,5 @@
> +      // e.g.  a raw SurfaceStream pointer.
> +    }
> +  } else
> +#endif
> +  {

I really dislike this pattern of blocks leaking across #ifdef boundaries.
Attachment #750512 - Flags: review?(jgilbert) → review+
Flags: needinfo?(jgilbert)
Comment on attachment 750512 [details] [diff] [review]
Updated combined patch

Review of attachment 750512 [details] [diff] [review]:
-----------------------------------------------------------------

Thanks for keeping me in the loop with texture changes

::: content/media/omx/OmxDecoder.h
@@ +23,4 @@
>    public:
>      VideoGraphicBuffer(const android::wp<android::OmxDecoder> aOmxDecoder,
>                         android::MediaBuffer *aBuffer,
> +                       SurfaceDescriptor aDescriptor);

nit: a SurfaceDescriptor is 68 bytes, so it's probably better to pass it as a const reference. Even if it doesn't matter in this case, I know that I, for instance, tend to reproduce what I read in the code base. So people should probably not be given a reason to be tempted to pass SurfaceDescriptors by copy in hotter parts of the code.
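A small sketch of the cost difference nical points out; the Descriptor type here is a stand-in whose copy constructor just counts copies:

```cpp
// Counts copies to show what passing a ~68-byte descriptor by value costs.
static int sCopies = 0;

struct Descriptor {
    char payload[68];  // roughly the size mentioned in the review
    Descriptor() = default;
    Descriptor(const Descriptor&) { ++sCopies; }  // counting only; no payload copy
};

static void TakeByValue(Descriptor) {}            // copy on every call
static void TakeByConstRef(const Descriptor&) {}  // no copy
```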

::: gfx/gl/SharedSurfaceGralloc.h
@@ +63,5 @@
> +
> +public:
> +    virtual ~SharedSurface_Gralloc();
> +
> +    virtual void Fence();

nit: please mark methods MOZ_OVERRIDE when applicable
Attachment #750512 - Flags: review?(nical.bugzilla) → review+
Attachment #750512 - Flags: review?(jmuizelaar) → review+
Comment on attachment 750512 [details] [diff] [review]
Updated combined patch

Review of attachment 750512 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/gl/SharedSurfaceGralloc.cpp
@@ +72,5 @@
> +
> +    EGLDisplay display = egl->Display();
> +    EGLClientBuffer clientBuffer = buffer->getNativeBuffer();
> +    EGLint attrs[] = {
> +        LOCAL_EGL_IMAGE_PRESERVED, LOCAL_EGL_TRUE,

no need for IMAGE_PRESERVED

::: gfx/gl/SharedSurfaceGralloc.h
@@ +37,5 @@
> +
> +protected:
> +    GLLibraryEGL* const mEGL;
> +    layers::ISurfaceAllocator* const mAllocator;
> +    layers::SurfaceDescriptorGralloc mDesc;

Check if this is correct -- should we keep this around, or have a GraphicBuffer?

::: gfx/layers/basic/BasicCanvasLayer.cpp
@@ +15,5 @@
>  #include "SharedSurfaceGL.h"
>  #include "SharedSurfaceEGL.h"
> +#ifdef MOZ_B2G
> +#include "SharedSurfaceGralloc.h"
> +#endif

Why is this here?

::: gfx/layers/client/CanvasClient.cpp
@@ +115,5 @@
> +      mTextureClient->SetDescriptor(grallocSurf->GetDescriptor());
> +    } else {
> +      // XXX we can't do anything here, without doing readback.  All
> +      // the other types depend on passing a SurfaceStreamDescriptor,
> +      // e.g.  a raw SurfaceStream pointer.

We should never get this.  GLScreenBuffer::CreateScreenBuffer should just create a gralloc factory right at the start for B2G. (And create the right thing.)

::: gfx/layers/client/ClientCanvasLayer.cpp
@@ +44,5 @@
> +            factory = new SurfaceFactory_Gralloc(mGLContext, screen->Caps(), ClientManager());
> +#else
> +            // we could do readback here maybe
> +            NS_NOTREACHED("isCrossProcess but not on B2G!");
> +#endif

This should assert that the right thing was created, if we create the right thing.

Some Gaia apps can run non-cross-process in Gaia; we need to check on that!  "No WebGL in the root process" should be a rule, and we need to enforce it.
Created attachment 753155 [details] [diff] [review]
Additional fixes

Fixes on top of the combined patch, based on review comments and discussions.
Attachment #740892 - Attachment is obsolete: true
Attachment #740895 - Attachment is obsolete: true
Attachment #753155 - Flags: review?(nical.bugzilla)
Comment on attachment 753155 [details] [diff] [review]
Additional fixes

Review of attachment 753155 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/gl/SharedSurfaceGralloc.h
@@ +8,4 @@
>  
>  #include "SharedSurfaceGL.h"
>  #include "mozilla/layers/LayersSurfaces.h"
> +#include "mozilla/layers/ShadowLayers.h"

I'd prefer that SurfaceFactory_Gralloc's constructor be implemented in the .cpp and that you add a bunch of forward declarations so that we can avoid including ShadowLayers.h in a header. ShadowLayers.h has quite a lot of include dependencies.

::: gfx/layers/client/CanvasClient.cpp
@@ +117,5 @@
> +#ifdef MOZ_B2G
> +    SharedSurface_Gralloc* grallocSurf = SharedSurface_Gralloc::Cast(surf);
> +    mTextureClient->SetDescriptor(grallocSurf->GetDescriptor());
> +#else
> +    MOZ_ASSERT(false);

nittiest nit: maybe a little error message would be nice, since we will eventually have non-B2G cross process stuff some day.
Attachment #753155 - Flags: review?(nical.bugzilla) → review+
https://hg.mozilla.org/integration/mozilla-inbound/rev/0f0c080ed1ba

with nical's comments fixed above.  If I missed anything, let's do it as a followup because I got tired of carrying around this giant patch. :)
Aaaand backed out, because of OSX assertion failure on my MOZ_ASSERT that we only had one ShadowLayerForwarder active at a time.
Ok, yeah, we do have multiple ShadowLayerForwarders active at one time.  This is really annoying, because it means I'm going to have to add a whole lot more plumbing just to pass the ShadowLayerForwarder down.  We don't even want the SLF, we just want a way to allocate gralloc buffers... but we have to be able to pass them down via that PLayers, so it has to be allocated by it for now.
Created attachment 758038 [details] [diff] [review]
Additional fixes, part 2

I know you guys will hate this patch.  It's the simplest thing that works, though, and it's not all that awful.

This removes the global static ShadowLayerForwarder, since that wasn't valid; there can be more than one.  Instead, it passes the ISurfaceAllocator as an optional part of SurfaceCaps.  With this patch, OSX works and passes tests, and B2G works, including for the first frame (which it didn't before).
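A minimal sketch of carrying the allocator as an optional field of the caps struct instead of a global reference (names are illustrative, not Gecko's actual SurfaceCaps):

```cpp
#include <cstddef>

// Stand-in for ISurfaceAllocator; only the pattern matters here.
struct FakeAllocator { int id; };

// The caps struct carries the allocator as an optional field, defaulting to
// null; callers that don't need gralloc allocation simply leave it unset.
struct Caps {
    bool alpha = true;
    FakeAllocator* surfaceAllocator = nullptr;  // optional; null when unused
};
```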
Attachment #758038 - Flags: review?(nical.bugzilla)

Attachment #758038 - Flags: review?(nical.bugzilla) → review+
I think it's better than having a global reference to the forwarder, actually.
Once more, with feeling!

 https://hg.mozilla.org/integration/mozilla-inbound/rev/a33154335e37
https://hg.mozilla.org/mozilla-central/rev/a33154335e37
Status: NEW → RESOLVED
Last Resolved: 4 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla24

Blocks: 880095
It seems that with this change, camera preview and hw video codec playback do not work on b2g master.
Created attachment 758964 [details]
stack trace of crash when allocating buffers for hw video codec

During allocation of buffers for the hw video codec, the b2g process crashes. The attached file is a stack trace.
By temporarily changing the "MOZ_NOT_REACHED("Unknown gralloc pixel format");" part in BytesPerPixelForPixelFormat(), video could play back.
(In reply to Sotaro Ikeda [:sotaro] from comment #47)
> It seems that by this change, camera preview and hw video codec playback
> does not work on b2g master.

The camera problem (bug 879478) might be a separate issue; I am not sure about it.
(In reply to Sotaro Ikeda [:sotaro] from comment #47)
> It seems that by this change, camera preview and hw video codec playback
> does not work on b2g master.

Created Bug 880268 for video playback.
Depends on: 880268
With SkiaGL, a 2D canvas app crashed at SharedSurface_Gralloc::Create.
Created bug 881169 to fix it.
(In reply to Sotaro Ikeda [:sotaro] from comment #47)
> It seems that by this change, camera preview and hw video codec playback
> does not work on b2g master.

Bug 880780 is the camera preview.  WebRTC work week folks seem to be blocked on this.
Blocks: 880780
Flags: needinfo?(vladimir)
Flags: needinfo?(vladimir)