Closed Bug 757341 Opened 13 years ago Closed 12 years ago

Hardware accelerated camera preview

Categories

(Firefox OS Graveyard :: General, defect, P1)

All
Gonk (Firefox OS)
defect

Tracking

(blocking-kilimanjaro:+, blocking-basecamp:+)

RESOLVED FIXED
blocking-kilimanjaro +
blocking-basecamp +

People

(Reporter: kanru, Assigned: kanru)

References

Details

Attachments

(3 files, 17 obsolete files)

19.22 KB, patch
roc
: review+
BenWa
: review+
cjones
: feedback+
Details | Diff | Splinter Review
6.95 KB, patch
roc
: review+
Details | Diff | Splinter Review
63.95 KB, patch
Details | Diff | Splinter Review
Just like in bug 714408, we can get a buffer handle from the camera that we can convert to a texture and feed to the GPU directly.
Attached patch GonkIOSurface (obsolete) — Splinter Review
Patch extracted from bug 714408 to support the SurfaceTexture image type in ImageLayerOGL. Do we really have to use SurfaceTextureGonk? Modifying ISurfaceTexture.h and GraphicBuffer.h in the Android source is not good.
Assignee: nobody → kchen
WIP. Patch based on bug 740997, with the CameraNativeWindow removed. Instead it uses SurfaceTexture, which also implements ANativeWindow and knows how to queue buffers and share buffers across processes. TODO: create the texture on the rendering side, figure out why the texture crop is [0,0,0,0], and track down the many GL 0x500 errors.
(In reply to Kan-Ru Chen [:kanru] from comment #1) > Created attachment 625913 [details] [diff] [review] > GonkIOSurface > > Patch extract from bug 714408 to support SurfaceTexture image type in > ImageLayerOGL. Do we really have to use SurfaceTextureGonk? Modifying > ISurfaceTexture.h and GraphicBuffer.h in Android source is not good. I also thought that modifying the source was not good, so at first I tried to implement bug 714408 without modifying SurfaceTexture and SurfaceTextureClient. But for the following reasons, I created SurfaceTextureGonk and SurfaceTextureClientGonk: - SurfaceTexture needs to use gecko's GL infrastructure. - SurfaceTexture needs to use gecko's EGL infrastructure. - SurfaceTexture gets the ISurfaceComposer (SurfaceFlinger) interface via android's service manager and uses it. - SurfaceTexture uses a GraphicBufferAlloc instance within SurfaceFlinger's process to allocate GraphicBuffer. - SurfaceTextureClient gets the ISurfaceComposer (SurfaceFlinger) interface via android's service manager within SurfaceTextureClientGonk::query().
(In reply to Kan-Ru Chen [:kanru] from comment #2) > > TDO: Create the texture on rendering side, figure out why texture crop is > [0,0,0,0], lots of GL error 0x500 "texture crop is [0,0,0,0]" says there is no actual data for texture/rendering, but a GraphicBuffer is present for rendering. This situation does not happen normally; it looks like an error condition.
(In reply to Sotaro Ikeda from comment #4) > (In reply to Kan-Ru Chen [:kanru] from comment #2) > > > > TDO: Create the texture on rendering side, figure out why texture crop is > > [0,0,0,0], lots of GL error 0x500 > > "texture crop is [0,0,0,0]" says there is no actual data for > texture/rendering. But a GraphicBuffer is present for rendering. The > situation do not happen in normal situations. It seems an error situation. According to the SurfaceTextureGonk log, setCrop is only called once, by the camera: V/SurfaceTextureGonk( 5562): [unnamed-5562-0] setCrop: crop=[0,0,0,0] Does it need some initialization from our side?
http://androidxref.com/source/xref/frameworks/base/libs/gui/SurfaceTextureClient.cpp#468 SurfaceTextureClient sets the crop region to zero when the window geometry is changed. I'm starting to wonder what this crop region is for and whether we should use it to set the texture coord or not.
(In reply to Sotaro Ikeda from comment #4) > (In reply to Kan-Ru Chen [:kanru] from comment #2) > > "texture crop is [0,0,0,0]" says there is no actual data for > texture/rendering. But a GraphicBuffer is present for rendering. The > situation do not happen in normal situations. It seems an error situation. Correction: the above was my misunderstanding, sorry... http://androidxref.com/source/xref/frameworks/base/libs/gui/SurfaceTextureClient.cpp#468 SurfaceTexture::computeCurrentTransformMatrix() shows that when the texture crop is [0,0,0,0], the crop region is not set and the entire GraphicBuffer region is valid. So the code that sets the texture area needs to change along the following lines: > if (!mCurrentCrop.isEmpty()) { // when crop is set. > > } else { // when crop is not set. entire region is valid > > }
> SurfaceTexture::computeCurrentTransformMatrix() says, when texture crop is > [0,0,0,0], it says crop region is not set and the entire GraphicBuffer > region is valid. So need to change the code around setting texture area like > following. > The following is the correct reference for SurfaceTexture::computeCurrentTransformMatrix(): http://androidxref.com/source/xref/frameworks/base/libs/gui/SurfaceTexture.cpp#804
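[Editor's note] For illustration only, a minimal sketch of how texture coordinates could be derived from the crop rectangle, following the computeCurrentTransformMatrix() logic referenced above. The Rect fields (left/top/right/bottom) and GraphicBuffer::getWidth()/getHeight() are assumed from the AOSP headers; texLeft and friends are hypothetical names.

    // Sketch: map the crop rect to normalized texture coordinates.
    float texLeft = 0.0f, texTop = 0.0f, texRight = 1.0f, texBottom = 1.0f;
    if (!mCurrentCrop.isEmpty()) {
      // Crop is set: only the cropped sub-rectangle of the buffer is valid.
      const float bufW = static_cast<float>(graphicBuffer->getWidth());
      const float bufH = static_cast<float>(graphicBuffer->getHeight());
      texLeft   = mCurrentCrop.left   / bufW;
      texTop    = mCurrentCrop.top    / bufH;
      texRight  = mCurrentCrop.right  / bufW;
      texBottom = mCurrentCrop.bottom / bufH;
    }
    // else: crop is [0,0,0,0], so the entire GraphicBuffer region is valid
    // and the default [0,1] texture coordinate range can be used as-is.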
Attached patch Draw GraphicBuffer directly (obsolete) — Splinter Review
This patch adds the BindNativeBuffer method to GLContext and uses that in ImageLayerOGL to avoid passing SurfaceTexture around. The advantage of this approach is that we can pass each GraphicBuffer across processes and we don't have to modify Android source code.
Attachment #625913 - Attachment is obsolete: true
Attachment #628259 - Flags: feedback?(ikeda.sohtaroh)
This patch is based on the ICS camera patch in bug 740997. In this version I kept the CameraNativeWindow and used that to maintain the buffer queue.
Attachment #625915 - Attachment is obsolete: true
(In reply to Kan-Ru Chen [:kanru] from comment #9) > Created attachment 628259 [details] [diff] [review] > Draw GraphicBuffer directly > > This patch adds the BindNativeBuffer method to GLContext and uses that in > ImageLayerOGL to avoid passing SurfaceTexture around. > > The advantage of this approach is that we can pass each GraphicBuffer across > processes and we don't have to modify Android source code. Forgot to mention that we probably should create a new shader program instead of modifying the original one.
Comment on attachment 628259 [details] [diff] [review] Draw GraphicBuffer directly Review of attachment 628259 [details] [diff] [review]: ----------------------------------------------------------------- This implementation is better than modifying the android source code :-) We found out that we cannot use SurfaceTexture in gecko without modifying it, so we should not use the SurfaceTexture class within gecko; instead we should implement an ANativeWindow-derived class and use it like attachment 628262 [details] [diff] [review] does.
blocking-basecamp: --- → +
blocking-kilimanjaro: --- → +
Kan-Ru, I plan on leveraging CameraNativeWindow for video playback (bug 759506) to create a shared GonkNativeWindow. I suspect the CameraNativeWindow contained here and in bug 740997 are out of date. If so, could you please update it so that we don't diverge?
(In reply to Diego Wilson from comment #13) > Kan-Ru, I plan on leveraging CameraNativeWindow for video playback (bug > 759506) to create a shared GonkNativeWindow. I suspect the > CameraNativeWindow contained here and in bug 740997 are out of date. If so, > could you please update it so that we don't diverge? Will upload soon.
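[Editor's note] As a rough illustration of the shared native window being discussed here, the consumer-facing side might look like the sketch below. Only lockCurrentBuffer() is taken from code quoted later in this bug; returnBuffer() and the exact signatures are hypothetical and are shown just to make the producer/consumer hand-off concrete.

    // Sketch, not the real GonkNativeWindow header. The window implements
    // ANativeWindow for the producer (camera HAL or OMX decoder) and hands
    // finished frames to gecko as ref-counted GraphicBufferLocked wrappers.
    class GonkNativeWindow /* : ANativeWindow glue */ {
    public:
      // Called by gecko when the producer signals a new frame. The returned
      // wrapper keeps the gralloc buffer out of the producer's queue until
      // its Unlock() is called (or the wrapper is destroyed).
      nsRefPtr<mozilla::layers::GraphicBufferLocked> lockCurrentBuffer();

      // Hypothetical: called from GraphicBufferLocked::Unlock() to put the
      // buffer back into the producer's queue so it can be filled again.
      void returnBuffer(android::GraphicBuffer* aBuffer);
    };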
Attached patch Draw GraphicBuffer directly (obsolete) — Splinter Review
refresh
Attachment #628259 - Attachment is obsolete: true
Attachment #628259 - Flags: feedback?(ikeda.sohtaroh)
refresh
Attachment #628262 - Attachment is obsolete: true
Attached patch Draw GraphicBuffer directly (obsolete) — Splinter Review
A little refactoring and updated shader program.
Attachment #630423 - Attachment is obsolete: true
Updated patch. I added some comments. I'm not sure if we want to keep the slow path.
Attachment #630424 - Attachment is obsolete: true
Attachment #631326 - Flags: review?(bgirard)
Comment on attachment 631326 [details] [diff] [review] Draw GraphicBuffer directly Rob says he can take a look. Thanks!
Attachment #631326 - Flags: review?(bgirard) → review?(roc)
Comment on attachment 631326 [details] [diff] [review] Draw GraphicBuffer directly Review of attachment 631326 [details] [diff] [review]: ----------------------------------------------------------------- ::: gfx/layers/ImageLayers.cpp @@ +449,5 @@ > +void > +GonkIOSurfaceImage::SetData(const Data& aData) > +{ > + mGraphicBuffer = aData.mGraphicBuffer; > + mSize = aData.mPicSize; We need some comments explaining what the GraphicBuffer is and how we handle ownership of it. @@ +455,5 @@ > + > +already_AddRefed<gfxASurface> > +GonkIOSurfaceImage::GetAsSurface() > +{ > + return 0; This will need to be fixed at some point. Please add a comment saying that. For now, return nsnull. ::: gfx/layers/ImageLayers.h @@ +888,5 @@ > #endif > > +#ifdef MOZ_WIDGET_GONK > +class GraphicBufferLocked { > + NS_INLINE_DECL_THREADSAFE_REFCOUNTING(GraphicBufferLocked) Add a comment explaining what this class is for. @@ +908,5 @@ > +protected: > + android::GraphicBuffer* mGraphicBuffer; > +}; > + > +class THEBES_API GonkIOSurfaceImage : public Image { Image objects are supposed to be immutable, i.e. they don't change while we keep the Image alive. Is that true for GonkIOSurfaceImages? @@ +935,5 @@ > + { > + return mGraphicBuffer->GetNativeBuffer(); > + } > + > + virtual void Unlock() Why is Unlock() virtual? What does it actually mean, i.e., when should someone call it? I see that we call it after rendering. That seems wrong since we have to be able to render the same Image more than once. Something might keep an Image around for a long time. ::: gfx/layers/opengl/LayerManagerOGLShaders.txt @@ +235,5 @@ > } > #endif > @end > > +// Single texture in RGBA format, but uses external image Explain what "external image" means.
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) (away June 9-19) from comment #21) > @@ +908,5 @@ > > +protected: > > + android::GraphicBuffer* mGraphicBuffer; > > +}; > > + > > +class THEBES_API GonkIOSurfaceImage : public Image { > > Image objects are supposed to immutable, i.e. they don't change while we > keep the Image alive. is that true for GonkIOSurfaceImages? They don't change until we call Unlock(). > @@ +935,5 @@ > > + { > > + return mGraphicBuffer->GetNativeBuffer(); > > + } > > + > > + virtual void Unlock() > > Why is unlock virtual? What does it actually mean, i.e., when should someone > call it? It could be non-virtual. > I see that we call it after rendering. That seems wrong since we have to be > able to render the same Image more than once. Something might keep an Image > around for a long time. How could we tell when the Image will be released? The image buffer is shared with the camera producer thread and is a limited resource; we have to release it, otherwise the camera thread will stall quickly. I had been bitten by a bug where video was not playing and the camera thread was stalled. I originally put Unlock() in the destructor; then I tried unlocking right after we rendered instead. Later it turned out the bug was that we forgot to call video.play() in camera.js. I should try my original approach again.
Attached patch Draw GraphicBuffer directly v1.1 (obsolete) — Splinter Review
Attachment #631326 - Attachment is obsolete: true
Attachment #631326 - Flags: review?(roc)
Attachment #631800 - Flags: review?(roc)
Comment on attachment 631800 [details] [diff] [review] Draw GraphicBuffer directly v1.1 Review of attachment 631800 [details] [diff] [review]: ----------------------------------------------------------------- ::: gfx/layers/ImageLayers.cpp @@ +452,5 @@ > +void > +GonkIOSurfaceImage::SetData(const Data& aData) > +{ > + mGraphicBuffer = aData.mGraphicBuffer; > + mSize = aData.mPicSize; This could just go inline in the class declaration. @@ +459,5 @@ > +already_AddRefed<gfxASurface> > +GonkIOSurfaceImage::GetAsSurface() > +{ > + // We need to fix this and return a ASurface at some point. > + return nsnull; So could this. Yes, we do need to fix this! ::: gfx/layers/ImageLayers.h @@ +894,5 @@ > + * mGraphicBuffer is owned by the producer thread, but when it is > + * wrapped by GraphicBufferLocked and passed to the compositor, the > + * buffer content is guaranteed to not change until Unlock() is > + * called. Each producer must maintain their own buffer queue and > + * implement the GraphicBufferLocked::Unlock() intrerface. interface Why does GraphicBufferLocked need to be a separate object from the GonkIOSurfaceImage? Could GonkIOSurfaceImage directly reference GraphicBuffer* and do the unlock itself? An explanation of why this object has to be its own object could go into the comment here. @@ +904,5 @@ > + GraphicBufferLocked(android::GraphicBuffer* aGraphicBuffer) > + : mGraphicBuffer(aGraphicBuffer) > + {} > + > + virtual ~GraphicBufferLocked() {} Why does this need to be virtual? @@ +906,5 @@ > + {} > + > + virtual ~GraphicBufferLocked() {} > + > + virtual void Unlock() {} Why does this need to be virtual? @@ +910,5 @@ > + virtual void Unlock() {} > + > + virtual void* GetNativeBuffer() > + { > + return mGraphicBuffer->getNativeBuffer(); Why does this need to be virtual? ::: gfx/layers/opengl/ImageLayerOGL.cpp @@ +459,5 @@ > + > + gl()->fActiveTexture(LOCAL_GL_TEXTURE0); > + gl()->BindExternalBuffer(data->mTexture.GetTextureID(), ioImage->GetNativeBuffer()); > + // Unlocked image might be reused immediately by the producer thread. > + // ioImage->Unlock(); Should just delete this comment? @@ +472,5 @@ > + program->SetLayerTransform(GetEffectiveTransform()); > + program->SetLayerOpacity(GetEffectiveOpacity()); > + program->SetRenderOffset(aOffset); > + program->SetTextureUnit(0); > + program->LoadMask(GetMaskLayer()); I'd really like to avoid copying all this code! Maybe we need to separate it out into a helper function?
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #24) > Comment on attachment 631800 [details] [diff] [review] > Draw GraphicBuffer directly v1.1 > > Review of attachment 631800 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: gfx/layers/ImageLayers.h > @@ +894,5 @@ > > + * mGraphicBuffer is owned by the producer thread, but when it is > > + * wrapped by GraphicBufferLocked and passed to the compositor, the > > + * buffer content is guaranteed to not change until Unlock() is > > + * called. Each producer must maintain their own buffer queue and > > + * implement the GraphicBufferLocked::Unlock() intrerface. > > interface Nice catch. > Why does GraphicBufferLocked need to be a separate object from the > GonkIOSurfaceImage? Could GonkIOSurfaceImage directly reference > GraphicBuffer* and do the unlock itself? An explanation of why this object > has to be its own object could go into the comment here. Because each producer might have its own unlocking method. Therefore they have to create a subclass of the GraphicBufferLocked class. > @@ +904,5 @@ > > + GraphicBufferLocked(android::GraphicBuffer* aGraphicBuffer) > > + : mGraphicBuffer(aGraphicBuffer) > > + {} > > + > > + virtual ~GraphicBufferLocked() {} > > Why does this need to be virtual? This class will be subclassed. > @@ +906,5 @@ > > + {} > > + > > + virtual ~GraphicBufferLocked() {} > > + > > + virtual void Unlock() {} > > Why does this need to be virtual? This class will be subclassed. > @@ +910,5 @@ > > + virtual void Unlock() {} > > + > > + virtual void* GetNativeBuffer() > > + { > > + return mGraphicBuffer->getNativeBuffer(); > > Why does this need to be virtual? This class will be subclassed. I foresee there will be different methods to get the native buffer, or not. > ::: gfx/layers/opengl/ImageLayerOGL.cpp > @@ +459,5 @@ > > + > > + gl()->fActiveTexture(LOCAL_GL_TEXTURE0); > > + gl()->BindExternalBuffer(data->mTexture.GetTextureID(), ioImage->GetNativeBuffer()); > > + // Unlocked image might be reused immediately by the producer thread. > > + // ioImage->Unlock(); > > Should just delete this comment? Yeah, a leftover! I removed it locally. > @@ +472,5 @@ > > + program->SetLayerTransform(GetEffectiveTransform()); > > + program->SetLayerOpacity(GetEffectiveOpacity()); > > + program->SetRenderOffset(aOffset); > > + program->SetTextureUnit(0); > > + program->LoadMask(GetMaskLayer()); > > I'd really like to avoid copying all this code! Maybe we need to separate it > out into a helper function? Me too! We could do it in a followup bug.
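[Editor's note] To make the subclassing pattern described above concrete, here is a minimal sketch of a producer-specific wrapper. The base-class shape follows the patch quoted in the review; CameraGraphicBufferLocked, the GonkCameraHardware owner pointer, and ReturnBuffer() are hypothetical names used only for illustration.

    // Sketch only: the camera preview thread owns a small pool of gralloc
    // buffers; this wrapper lends one buffer to the compositor and puts it
    // back in the pool when the compositor is done with it.
    class CameraGraphicBufferLocked : public mozilla::layers::GraphicBufferLocked {
    public:
      CameraGraphicBufferLocked(android::GraphicBuffer* aBuffer,
                                GonkCameraHardware* aHw)  // hypothetical owner type
        : GraphicBufferLocked(aBuffer)
        , mHw(aHw)
        , mLocked(true)
      {}

      virtual ~CameraGraphicBufferLocked()
      {
        Unlock();  // return the buffer even if nobody called Unlock() explicitly
      }

      virtual void Unlock()
      {
        if (mLocked) {
          mLocked = false;
          mHw->ReturnBuffer(mGraphicBuffer);  // hypothetical: requeue for the camera HAL
        }
      }

    protected:
      GonkCameraHardware* mHw;
      bool mLocked;
    };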
--- a/gfx/gl/GLContextProviderEGL.cpp
+++ b/gfx/gl/GLContextProviderEGL.cpp
@@ -377,16 +377,35 @@ public:
                                        LOCAL_EGL_BACK_BUFFER);
         if (success == LOCAL_EGL_FALSE)
             return false;
         mBound = false;
         return true;
     }
+    bool BindExternalBuffer(GLuint texture, void* buffer)
+    {
+#if defined(MOZ_WIDGET_GONK) || defined(MOZ_WIDGET_ANDROID)
+        EGLint attrs[] = {
+            LOCAL_EGL_IMAGE_PRESERVED_KHR, LOCAL_EGL_TRUE,
+            LOCAL_EGL_NONE,
+        };
+        EGLImageKHR image = sEGLLibrary.fCreateImageKHR(EGL_DISPLAY(),
+                                                        EGL_NO_CONTEXT,
+                                                        LOCAL_EGL_NATIVE_BUFFER_ANDROID,
+                                                        buffer, attrs);

The "KHR" suffix was dropped at the tip. You can replace this with:

EGLImage image = sEGLLibrary.fCreateImage(EGL_DISPLAY(),
                                          EGL_NO_CONTEXT,
                                          EGL_NATIVE_BUFFER_ANDROID,
                                          buffer, attrs);

See bug 762259 for details
Comment on attachment 631800 [details] [diff] [review] Draw GraphicBuffer directly v1.1 Review of attachment 631800 [details] [diff] [review]: ----------------------------------------------------------------- ::: gfx/gl/GLContextProviderEGL.cpp @@ +388,5 @@ > + EGLint attrs[] = { > + LOCAL_EGL_IMAGE_PRESERVED_KHR, LOCAL_EGL_TRUE, > + LOCAL_EGL_NONE, > + }; > + EGLImageKHR image = sEGLLibrary.fCreateImageKHR(EGL_DISPLAY(), The "KHR" suffix was dropped at the tip. You can replace this with: EGLImage image = sEGLLibrary.fCreateImage(EGL_DISPLAY(), EGL_NO_CONTEXT, EGL_NATIVE_BUFFER_ANDROID, buffer, attrs); See bug 762259 for details
(In reply to Kan-Ru Chen [:kanru] from comment #25) > > Why does GraphicBufferLocked need to be a separate object from the > > GonkIOSurfaceImage? Could GonkIOSurfaceImage directly reference > > GraphicBuffer* and do the unlock itself? An explanation of why this object > > has to be its own object could go into the comment here. > > Because each producer might have its own unlocking method. Therefore they > have to create a subclass of the GraphicBufferLocked class. Right, OK. > > @@ +472,5 @@ > > > + program->SetLayerTransform(GetEffectiveTransform()); > > > + program->SetLayerOpacity(GetEffectiveOpacity()); > > > + program->SetRenderOffset(aOffset); > > > + program->SetTextureUnit(0); > > > + program->LoadMask(GetMaskLayer()); > > > > I'd really like to avoid copying all this code! Maybe we need to separate it > > out into a helper function? > > Me too! We could do it in followup bug. I think you should do it now rather than making a copy of all this code.
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #28) > > > @@ +472,5 @@ > > > > + program->SetLayerTransform(GetEffectiveTransform()); > > > > + program->SetLayerOpacity(GetEffectiveOpacity()); > > > > + program->SetRenderOffset(aOffset); > > > > + program->SetTextureUnit(0); > > > > + program->LoadMask(GetMaskLayer()); > > > > > > I'd really like to avoid copying all this code! Maybe we need to separate it > > > out into a helper function? > > > > Me too! We could do it in followup bug. > > I think you should do it now rather than making a copy of all this code. OK.
Found that we can use BindAndDrawQuadWithTextureRect (maybe BindAndDrawQuad?) directly. Maybe this belongs to another bug? It was last modified by bug 736716.
Attachment #631800 - Attachment is obsolete: true
Attachment #631800 - Flags: review?(roc)
Attachment #635678 - Flags: review?(roc)
Attachment #635678 - Flags: feedback?(jones.chris.g)
Comment on attachment 635678 [details] [diff] [review] Draw GraphicBuffer directly v1.2 Review of attachment 635678 [details] [diff] [review]: ----------------------------------------------------------------- r+ with that fixed. This needs review from a GL person. Trying Ali Juma. ::: gfx/layers/ImageLayers.h @@ +890,5 @@ > > +#ifdef MOZ_WIDGET_GONK > +/** > + * The gralloc buffer maintained by android GraphicBuffer can be > + * shared between the compositor thread and the producer thread. The Let's put this Gonk code in its own header file, GonkIOSurfaceImage.h, and just #include where we need it.
Attachment #635678 - Flags: review?(roc)
Attachment #635678 - Flags: review?(ajuma)
Attachment #635678 - Flags: review+
Comment on attachment 635678 [details] [diff] [review] Draw GraphicBuffer directly v1.2 Passing this to BenWa.
Attachment #635678 - Flags: review?(ajuma) → review?(bgirard)
Comment on attachment 635678 [details] [diff] [review] Draw GraphicBuffer directly v1.2 My feedback is basically useless here, but note that we'll want this work to dovetail with bug 765734. IIUC, this is the missing "magically bind GraphicBuffer to texture" that I don't know how to do, which is fantastic.
Attachment #635678 - Flags: feedback?(jones.chris.g) → feedback+
Comment on attachment 635678 [details] [diff] [review] Draw GraphicBuffer directly v1.2 Review of attachment 635678 [details] [diff] [review]: ----------------------------------------------------------------- ::: gfx/gl/GLContextProviderEGL.cpp @@ +418,5 @@ > + EGL_NO_CONTEXT, > + EGL_NATIVE_BUFFER_ANDROID, > + buffer, attrs); > + fBindTexture(GL_TEXTURE_EXTERNAL_OES, texture); > + sEGLLibrary.fImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, image); There is a bug in the ICS graphics drivers of the Akami. It doesn't bind the texture correctly unless fImageTargetTexture2DOES() is called before fBindTexture(). AFAICT the order shouldn't matter for any other device. Could the order of the calls above be switched?
BTW, other than my comment above, the latest GONK_IO_SURFACE patch works like a charm for OMX video zero-copy (bug 759506). Hopefully I'll have something to share on that front soon.
Comment on attachment 635678 [details] [diff] [review] Draw GraphicBuffer directly v1.2 Review of attachment 635678 [details] [diff] [review]: ----------------------------------------------------------------- ::: gfx/gl/GLContextProviderEGL.cpp @@ +418,5 @@ > + EGL_NO_CONTEXT, > + EGL_NATIVE_BUFFER_ANDROID, > + buffer, attrs); > + fBindTexture(GL_TEXTURE_EXTERNAL_OES, texture); > + sEGLLibrary.fImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, image); Also I believe the EGLImage should be destroyed here by adding: sEGLLibrary.fDestroyImage(EGL_DISPLAY(), image); image = nsnull; See CreateBackingSurface() for another example. Otherwise the EGLImage may not be released properly
(In reply to Diego Wilson from comment #36) > Comment on attachment 635678 [details] [diff] [review] > Draw GraphicBuffer directly v1.2 > > Review of attachment 635678 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: gfx/gl/GLContextProviderEGL.cpp > @@ +418,5 @@ > > + EGL_NO_CONTEXT, > > + EGL_NATIVE_BUFFER_ANDROID, > > + buffer, attrs); > > + fBindTexture(GL_TEXTURE_EXTERNAL_OES, texture); > > + sEGLLibrary.fImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, image); > > Also I believe the EGLImage should be destroyed here by adding: > > sEGLLibrary.fDestroyImage(EGL_DISPLAY(), image); > image = nsnull; > > See CreateBackingSurface() for another example. Otherwise the EGLImage may > not be released properly Thanks! I'll update the patch.
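[Editor's note] Putting the review suggestions above together, the binding path would look roughly like the sketch below. The call names are taken from the patch context quoted in this bug; the early-return check and the note about call order are illustrative, and this is not the code that actually landed.

    // Sketch of BindExternalBuffer() with the feedback applied: create the
    // EGLImage, attach it to the external texture, then release our handle --
    // the texture keeps the underlying gralloc buffer contents alive.
    bool BindExternalBuffer(GLuint texture, void* buffer)
    {
      EGLint attrs[] = {
        LOCAL_EGL_IMAGE_PRESERVED_KHR, LOCAL_EGL_TRUE,
        LOCAL_EGL_NONE,
      };
      EGLImage image = sEGLLibrary.fCreateImage(EGL_DISPLAY(),
                                                EGL_NO_CONTEXT,
                                                EGL_NATIVE_BUFFER_ANDROID,
                                                buffer, attrs);
      if (!image) {
        return false;
      }
      // Standard order: bind the external texture, then attach the image.
      // (An earlier review comment reports one ICS driver that wants the
      // opposite order; that is a device-specific workaround.)
      fBindTexture(GL_TEXTURE_EXTERNAL_OES, texture);
      sEGLLibrary.fImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, image);
      // Per the review comment above, destroy the EGLImage handle so it is
      // not leaked; the texture retains a reference to the buffer data.
      sEGLLibrary.fDestroyImage(EGL_DISPLAY(), image);
      return true;
    }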
Comment on attachment 635678 [details] [diff] [review] Draw GraphicBuffer directly v1.2 Review of attachment 635678 [details] [diff] [review]: ----------------------------------------------------------------- Looks good, just want to check the shader extension usage. ::: gfx/layers/ImageLayers.h @@ +895,5 @@ > + * mGraphicBuffer is owned by the producer thread, but when it is > + * wrapped by GraphicBufferLocked and passed to the compositor, the > + * buffer content is guaranteed to not change until Unlock() is > + * called. Each producer must maintain their own buffer queue and > + * implement the GraphicBufferLocked::Unlock() intrerface. interface* ::: gfx/layers/opengl/LayerManagerOGLShaders.txt @@ +241,5 @@ > +// supported by OpenGL ES. It is up to the implementation exactly what > +// formats are accepted. It is specified in the OES_EGL_image_external > +// extension. > +@shader sRGBAExternalTextureLayer<mask:,Mask,Mask3D>FS > +#extension GL_OES_EGL_image_external : require I want to check with gal about this. I remember we discussed this on a bug that it wasn't always required.
Attachment #635678 - Flags: review?(bgirard)
Attachment #635678 - Flags: review+
Attachment #635678 - Flags: feedback?(gal)
Comment on attachment 635678 [details] [diff] [review] Draw GraphicBuffer directly v1.2 The code is correct as written but you won't be able to render a gralloc buffer we drew into ourselves with OES. That is intended for external sources with vendor defined formats (such as NV12 in this case). For regular RGBA gralloc buffers you have to use a regular image. We can double check this with our chipset vendor friends.
Attachment #635678 - Flags: feedback?(gal) → feedback+
I re-read attachment 635678 [details] [diff] [review]. It seems that the current implementation requires calling the following functions on every rendering pass: - sEGLLibrary.fCreateImage() - sEGLLibrary.fDestroyImage() I do not know whether such use is safe, but it is very different from how android uses EGLImages. In android's case, EGLImage re-creation happens only when the GraphicBuffer is re-created.
Creating and destroying images is, at the very least, slow in many implementations and should be avoided.
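[Editor's note] One way to avoid the per-frame create/destroy, similar in spirit to how Android keeps an EGLImage per buffer slot, is to cache the image keyed by the native buffer and only create it when a new buffer shows up. A hedged sketch; the cache member and helper name are hypothetical, not part of the patch.

    #include <map>

    // Sketch: reuse one EGLImage per native buffer so fCreateImage /
    // fDestroyImage are not called on every composited frame.
    std::map<void*, EGLImage> mEGLImageCache;  // hypothetical member, keyed by native buffer

    EGLImage GetOrCreateEGLImage(void* nativeBuffer)
    {
      std::map<void*, EGLImage>::iterator it = mEGLImageCache.find(nativeBuffer);
      if (it != mEGLImageCache.end()) {
        return it->second;  // same buffer as before: reuse the cached image
      }
      EGLint attrs[] = { LOCAL_EGL_IMAGE_PRESERVED_KHR, LOCAL_EGL_TRUE, LOCAL_EGL_NONE };
      EGLImage image = sEGLLibrary.fCreateImage(EGL_DISPLAY(), EGL_NO_CONTEXT,
                                                EGL_NATIVE_BUFFER_ANDROID,
                                                nativeBuffer, attrs);
      if (image) {
        mEGLImageCache[nativeBuffer] = image;
      }
      return image;
    }
    // The cached images must still be released with fDestroyImage() when the
    // corresponding gralloc buffer (or the layer) goes away.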
FYI: in the SurfaceTexture class there is a function to check whether a color format is external: SurfaceTexture::isExternalFormat(uint32_t format) http://androidxref.com/source/xref/frameworks/base/libs/gui/SurfaceTexture.cpp#849
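[Editor's note] Connecting this helper with the earlier feedback about external vs. regular RGBA buffers, the consumer could in principle choose the texture target and shader from the buffer format, along these lines. This is a sketch: whether isExternalFormat() is reachable from gecko, and the name of the regular RGBA program type, are assumptions.

    // Sketch: pick how to sample a gralloc buffer based on its pixel format.
    // Vendor/external formats (e.g. the camera's NV12) must go through
    // GL_TEXTURE_EXTERNAL_OES and the OES_EGL_image_external shader; plain
    // RGBA buffers can be bound as an ordinary 2D texture.
    bool external = android::SurfaceTexture::isExternalFormat(
        graphicBuffer->getPixelFormat());

    GLenum target = external ? GL_TEXTURE_EXTERNAL_OES : GL_TEXTURE_2D;
    ShaderProgramType programType = external ? RGBAExternalLayerProgramType
                                             : RGBALayerProgramType;  // assumed name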
No longer depends on: 759506
Blocks: 774552
No longer blocks: 774552
Depends on: 774552
Attachment #647349 - Attachment is obsolete: true
Note this does not work with non-OMTC.
Attachment #630422 - Attachment is obsolete: true
Attachment #631331 - Attachment is obsolete: true
Attachment #648407 - Attachment is obsolete: true
Attachment #648900 - Flags: review?(jones.chris.g)
(In reply to Kan-Ru Chen [:kanru] from comment #46) > Created attachment 648900 [details] [diff] [review] > Camera direct texturing with async video > > Note this does not work with non-OMTC. This is taking me a while to review, but quick question: what happens if the camera preview fails to alloc a gralloc buffer?
Comment on attachment 648900 [details] [diff] [review] Camera direct texturing with async video >diff --git a/dom/camera/GonkCameraHwMgr.cpp b/dom/camera/GonkCameraHwMgr.cpp >+#define CAMERA_USE_DIRECT_TEXTURE 0 >+ Why do we want to disable this by default? When would we ever not use it? >+void >+GonkCameraHardware::OnNewFrame() >+ android::GonkNativeWindow* window = static_cast<android::GonkNativeWindow*>(mWindow.get()); >+ nsRefPtr<mozilla::layers::GraphicBufferLocked> buffer = window->lockCurrentBuffer(); using namespace android; using namespace mozilla::layers; >diff --git a/gfx/layers/ipc/PGrallocBuffer.ipdl b/gfx/layers/ipc/PGrallocBuffer.ipdl > async protocol PGrallocBuffer { >- manager PImageContainer or PLayers; >+ manager PImageBridge or PImageContainer or PLayers; Remove the WIP support for gralloc in PImageContainer, since we're not going to use it. >diff --git a/gfx/layers/ipc/PImageBridge.ipdl b/gfx/layers/ipc/PImageBridge.ipdl >+ sync PGrallocBuffer(gfxIntSize size, uint32_t format, uint32_t usage) Need to document |usage| here or point at where docs can be found. >diff --git a/gfx/layers/ipc/ShadowLayerUtilsGralloc.h b/gfx/layers/ipc/ShadowLayerUtilsGralloc.h >+ static PGrallocBufferParent* >+ Create(const gfxIntSize& aSize, const uint32_t& aFormat, const uint32_t& aUsage, >+ MaybeMagicGrallocBufferHandle* aOutHandle); Document the |aUsage| param --- size/format are standard in the rest of gecko gfx/, but usage isn't. >diff --git a/gfx/layers/ipc/SharedImageUtils.h b/gfx/layers/ipc/SharedImageUtils.h Are we not going to add gralloc support here? Do want to do that in a followup? >diff --git a/gfx/layers/opengl/ImageLayerOGL.cpp b/gfx/layers/opengl/ImageLayerOGL.cpp >+ } else if (img >+ && (img->type() == SharedImage::TSurfaceDescriptor) >+ && (img->get_SurfaceDescriptor().type() == SurfaceDescriptor::TSurfaceDescriptorGralloc)) { >+ gl()->MakeCurrent(); >+ gl()->fActiveTexture(LOCAL_GL_TEXTURE0); >+ gl()->BindExternalBuffer(mGLTexture.GetTextureID(), graphicBuffer->getNativeBuffer()); Does this Bind() call end up holding a strong reference to the EGL resource that the gralloc buffer maps to? The rule for the rest of the ImageBridgeParent images is that they can only be held during one Render() call, because the lifetime of ImageBridgeParent resources doesn't match up with any LayerManager. When the underlying gralloc buffer is destroyed, how do we unbind the image from this texture? >@@ -927,16 +940,33 @@ ShadowImageLayerOGL::RenderLayer(int aPr >+ } else if (mGLTexture.IsAllocated()) { >+ ShaderProgramOGL *program = mOGLManager->GetProgram(RGBAExternalLayerProgramType, GetMaskLayer()); >+ Are you sure we only hit this path with gralloc buffers? That's not clear to me. It seems like if we try to use the external-buffer GLSL program with a non-externally-bound texture, bad things might happen ... If this is always the case, we need to document or add an assert here describing why. ... after reading below I think that this is true. If so, let's name this texture more descriptively, something like mExternalBufferTexture. >diff --git a/gfx/layers/opengl/ImageLayerOGL.h b/gfx/layers/opengl/ImageLayerOGL.h >+ GLTexture mGLTexture; This code is getting complicated enough that we need to document these members better. Please rename this member to something more descriptive (I suggested mExternalBufferTexture above), and document when it's used. This isn't a complete review, but there's quite a bit going on in this patch, and I don't understand how it all fits together. 
Let's catch up on IRC later today.
Attachment #648900 - Flags: review?(jones.chris.g)
(In reply to Chris Jones [:cjones] [:warhammer] from comment #47) > (In reply to Kan-Ru Chen [:kanru] from comment #46) > > Created attachment 648900 [details] [diff] [review] > > Camera direct texturing with async video > > > > Note this does not work with non-OMTC. > > This is taking me a while to review, but quick question: what happens if the > camera preview fails to alloc a gralloc buffer? Then the camera probably won't work, because the camera preview needs the gralloc buffer even if it doesn't use direct texturing.
(In reply to Chris Jones [:cjones] [:warhammer] from comment #48) > Comment on attachment 648900 [details] [diff] [review] > Camera direct texturing with async video > > >diff --git a/gfx/layers/ipc/PGrallocBuffer.ipdl b/gfx/layers/ipc/PGrallocBuffer.ipdl > > > async protocol PGrallocBuffer { > >- manager PImageContainer or PLayers; > >+ manager PImageBridge or PImageContainer or PLayers; > > Remove the WIP support for gralloc in PImageContainer, since we're not > going to use it. PImageContainer can't be removed from here, since we have the PImageContainer -> SharedImage -> SurfaceDescriptor -> SurfaceDescriptorGralloc -> PGrallocBuffer dependency. > >diff --git a/gfx/layers/ipc/ShadowLayerUtilsGralloc.h b/gfx/layers/ipc/ShadowLayerUtilsGralloc.h > > >+ static PGrallocBufferParent* > >+ Create(const gfxIntSize& aSize, const uint32_t& aFormat, const uint32_t& aUsage, > >+ MaybeMagicGrallocBufferHandle* aOutHandle); > > Document the |aUsage| param --- size/format are standard in the rest > of gecko gfx/, but usage isn't. I'm afraid that format also needs documentation, as it's a gralloc format, not a format defined in gecko gfx/. > >diff --git a/gfx/layers/ipc/SharedImageUtils.h b/gfx/layers/ipc/SharedImageUtils.h > > Are we not going to add gralloc support here? Do want to do that in a > followup? Probably in bug 767480. But we don't want to release the gralloc buffer here in the case of the hardware decoder and the camera, since they manage the buffer lifetime. > >diff --git a/gfx/layers/opengl/ImageLayerOGL.cpp b/gfx/layers/opengl/ImageLayerOGL.cpp > > >+ } else if (img > >+ && (img->type() == SharedImage::TSurfaceDescriptor) > >+ && (img->get_SurfaceDescriptor().type() == SurfaceDescriptor::TSurfaceDescriptorGralloc)) { > >+ gl()->MakeCurrent(); > >+ gl()->fActiveTexture(LOCAL_GL_TEXTURE0); > >+ gl()->BindExternalBuffer(mGLTexture.GetTextureID(), graphicBuffer->getNativeBuffer()); > > Does this Bind() call end up holding a strong reference to the EGL > resource that the gralloc buffer maps to? The rule for the rest of > the ImageBridgeParent images is that they can only be held during one > Render() call, because the lifetime of ImageBridgeParent resources > doesn't match up with any LayerManager. > > When the underlying gralloc buffer is destroyed, how do we unbind the > image from this texture? Binding the texture adds a reference to the EGL resource. The resource is freed when the number of references drops to zero. Therefore when (1) the camera frees the gralloc buffer and (2) ImageLayerOGL deletes the texture, all resources are freed.
Priority: -- → P1
Attachment #648900 - Attachment is obsolete: true
Attachment #650418 - Flags: review?(jones.chris.g)
Comment on attachment 650418 [details] [diff] [review] Camera direct texturing with async video Review of attachment 650418 [details] [diff] [review]: ----------------------------------------------------------------- ::: ipc/glue/ProtocolUtils.h @@ +108,5 @@ > +// #if defined(DEBUG) > +// return !!PR_GetEnv("MOZ_IPC_MESSAGE_LOG"); > +// #else > +// return false; > +// #endif Oops. This was included by accident.
Kan-Ru, I really wanted to get this landed today but ended up being occupied by other things. Please enjoy your Saturday :), but if you'll be around tomorrow we can work together to get this landed. Otherwise, I can apply my suggested changes and then land myself. Let me know! :)
Fix two resource release bugs.
Attachment #650418 - Attachment is obsolete: true
Attachment #650418 - Flags: review?(jones.chris.g)
Attachment #650840 - Flags: review?(jones.chris.g)
I have a question. In which process is GonkCameraHardware instantiated? In the chrome process or the content process? IMHO, GonkCameraHardware should be a singleton within the whole b2g system and needs to live outside the content process.
These patches no longer compile, fyi. (In reply to Kan-Ru Chen [:kanru] from comment #50) > (In reply to Chris Jones [:cjones] [:warhammer] from comment #48) > > Comment on attachment 648900 [details] [diff] [review] > > Camera direct texturing with async video > > > > >diff --git a/gfx/layers/ipc/PGrallocBuffer.ipdl b/gfx/layers/ipc/PGrallocBuffer.ipdl > > > > > async protocol PGrallocBuffer { > > >- manager PImageContainer or PLayers; > > >+ manager PImageBridge or PImageContainer or PLayers; > > > > Remove the WIP support for gralloc in PImageContainer, since we're not > > going to use it. > > PImageContainer can't be removed from here, since we have the > PImageContainer -> SharedImage -> SurfaceDescriptor -> > SurfaceDescriptorGralloc -> PGrallocBuffer dependency. > What's the problem here?
Comment on attachment 650840 [details] [diff] [review] Camera direct texturing with async video. v1.1 >diff --git a/dom/camera/GonkCameraHwMgr.cpp b/dom/camera/GonkCameraHwMgr.cpp >+#define CAMERA_USE_DIRECT_TEXTURE 1 Let's remove this ifdef. Per IRC chat we already assume that the camera HAL is ICS and can work with gralloc buffers. If either assumption went wrong, we would need to check dynamically anyway. >diff --git a/dom/camera/GonkCameraPreview.cpp b/dom/camera/GonkCameraPreview.cpp >+void >+GonkCameraPreview::ReceiveFrame(mozilla::layers::GraphicBufferLocked* aBuffer) >+{ >+ nsRefPtr<Image> image = mImageContainer->CreateImage(&format, 1); >+ image->AddRef(); nsRefPtr already AddRef's this: where does the extra AddRef go? >diff --git a/gfx/layers/ipc/ImageBridgeChild.cpp b/gfx/layers/ipc/ImageBridgeChild.cpp >+bool >+ImageBridgeChild::DeallocSurfaceDescriptorGralloc(const SurfaceDescriptor& aBuffer) >+ ReentrantMonitor barrier("DeallocSurfaceDescriptor Lock"); >+ ReentrantMonitorAutoEnter autoMon(barrier); In all the code here, the synchronous proxies can't (shouldn't!) be reentered, so please use the regular Monitor. I'm not leaving comments on the other uses. >diff --git a/gfx/layers/ipc/ImageContainerChild.h b/gfx/layers/ipc/ImageContainerChild.h >+ nsTArray<nsRefPtr<Image> > mImageQueue; I'm not sure I understand why we need this. The descriptor returned by GonkIOSurfaceImage is a SurfaceDescriptorGralloc, right? That means it's backed by a GrallocBufferActor, which keeps a strong ref to the underlying GraphicBuffer. Why do we need the additional references here? What bug did you run into? >diff --git a/gfx/layers/ipc/PGrallocBuffer.ipdl b/gfx/layers/ipc/PGrallocBuffer.ipdl > async protocol PGrallocBuffer { >- manager PImageContainer or PLayers; >+ manager PImageBridge or PImageContainer or PLayers; Shouldn't need to do this ... let me know what error you're seeing. >diff --git a/gfx/layers/ipc/SharedImageUtils.h b/gfx/layers/ipc/SharedImageUtils.h > template<typename Deallocator> > void DeallocSharedImageData(Deallocator* protocol, const SharedImage& aImage) > { >- } else if (aImage.type() == SharedImage::TSurfaceDescriptor) { >+ } else if (aImage.type() == SharedImage::TSurfaceDescriptor && >+ aImage.get_SurfaceDescriptor().type() == SurfaceDescriptor::TShmem) { It seems to me that we shouldn't be calling this for gralloc buffers used for camera (and I guess HW-decoded video?), when GonkIOSurface owns the buffer. ShadowLayer*::DeallocSurface() will __delete__() the underlying actor, but this code doesn't do that. We'll have another problem too: we can't tell the difference here between gralloc buffers used for camera/HW-decoded video and those used for CPU-decoded video, which aren't "special" (not owned by GonkIOSurface). The rest of this looks pretty good, but I need to understand the questions above before I can r+.
Attachment #650840 - Flags: review?(jones.chris.g)
(In reply to Chris Jones [:cjones] [:warhammer] from comment #57) > Comment on attachment 650840 [details] [diff] [review] > Camera direct texturing with async video. v1.1 > > >+ nsRefPtr<Image> image = mImageContainer->CreateImage(&format, 1); > >+ image->AddRef(); > > nsRefPtr already AddRef's this: where does the extra AddRef go? I have the same question! This code is merely copied from the original ReceiveFrame. VideoSegment::AppendFrame wants an already_AddRefed<Image>, so I think that's why. In other places in gecko we pass image.forget() instead. > >diff --git a/gfx/layers/ipc/ImageContainerChild.h b/gfx/layers/ipc/ImageContainerChild.h > > >+ nsTArray<nsRefPtr<Image> > mImageQueue; > > I'm not sure I understand why we need this. The descriptor returned > by GonkIOSurfaceImage is a SurfaceDescriptorGralloc, right? That > means it's backed by a GrallocBufferActor, which keeps a strong ref to > the underlying GraphicBuffer. > > Why do we need the additional references here? What bug did you run > into? GonkIOSurfaceImage needs to know when to return the buffer to the producing thread. The buffer is returned when GonkIOSurfaceImage destructs. > >diff --git a/gfx/layers/ipc/PGrallocBuffer.ipdl b/gfx/layers/ipc/PGrallocBuffer.ipdl > > > async protocol PGrallocBuffer { > >- manager PImageContainer or PLayers; > >+ manager PImageBridge or PImageContainer or PLayers; > > Shouldn't need to do this ... let me know what error you're seeing. As discussed on IRC, this is a bug in IPDL, because PImageContainer operates on SharedImage which contains PGrallocBuffer, so it has to manage PGrallocBuffer. > >diff --git a/gfx/layers/ipc/SharedImageUtils.h b/gfx/layers/ipc/SharedImageUtils.h > > > template<typename Deallocator> > > void DeallocSharedImageData(Deallocator* protocol, const SharedImage& aImage) > > { > > >- } else if (aImage.type() == SharedImage::TSurfaceDescriptor) { > >+ } else if (aImage.type() == SharedImage::TSurfaceDescriptor && > >+ aImage.get_SurfaceDescriptor().type() == SurfaceDescriptor::TShmem) { > > It seems to me that we shouldn't be calling this for gralloc buffers > used for camera (and I guess HW-decoded video?), when GonkIOSurface > owns the buffer. ShadowLayer*::DeallocSurface() will __delete__() the > underlying actor, but this code doesn't do that. > > We'll have another problem too: we can't tell the difference here > between gralloc buffers used for camera/HW-decoded video and those > used for CPU-decoded video, which aren't "special" (not owned by > GonkIOSurface). This indeed is a problem, as I'm now implementing the gralloc-buffer-backed video buffer for CPU-decoded video. Maybe we can add a "bool managed" field to SurfaceDescriptorGralloc and depend on it?
(In reply to Kan-Ru Chen [:kanru] from comment #58) > (In reply to Chris Jones [:cjones] [:warhammer] from comment #57) > > Comment on attachment 650840 [details] [diff] [review] > > Camera direct texturing with async video. v1.1 > > > > >+ nsRefPtr<Image> image = mImageContainer->CreateImage(&format, 1); > > >+ image->AddRef(); > > > > nsRefPtr already AddRef's this: where does the extra AddRef go? > > I have the same question! This code is merely copied from the original > ReceiveFrame. VideoSegment::AppendFrame wants a > already_AddRefed<Image> so I think that's why. In other places in > gecko we pass image.forget() instead. > We should find out. This might be related to the problem where camera preview eventually stops working ... > > >diff --git a/gfx/layers/ipc/ImageContainerChild.h b/gfx/layers/ipc/ImageContainerChild.h > > > > >+ nsTArray<nsRefPtr<Image> > mImageQueue; > > > > I'm not sure I understand why we need this. The descriptor returned > > by GonkIOSurfaceImage is a SurfaceDescriptorGralloc, right? That > > means it's backed by a GrallocBufferActor, which keeps a strong ref to > > the underlying GraphicBuffer. > > > > Why do we need the additional references here? What bug did you run > > into? > > GonkIOSurfaceImage needs to know when to return the buffer to the > producing thread. The buffer is returned when GonkIOSurfaceImage > destructs. > I see. This needs to be documented in the patch. > > >diff --git a/gfx/layers/ipc/SharedImageUtils.h b/gfx/layers/ipc/SharedImageUtils.h > > > > > template<typename Deallocator> > > > void DeallocSharedImageData(Deallocator* protocol, const SharedImage& aImage) > > > { > > > > >- } else if (aImage.type() == SharedImage::TSurfaceDescriptor) { > > >+ } else if (aImage.type() == SharedImage::TSurfaceDescriptor && > > >+ aImage.get_SurfaceDescriptor().type() == SurfaceDescriptor::TShmem) { > > > > It seems to me that we shouldn't be calling this for gralloc buffers > > used for camera (and I guess HW-decoded video?), when GonkIOSurface > > owns the buffer. ShadowLayer*::DeallocSurface() will __delete__() the > > underlying actor, but this code doesn't do that. > > > > We'll have another problem too: we can't tell the difference here > > between gralloc buffers used for camera/HW-decoded video and those > > used for CPU-decoded video, which aren't "special" (not owned by > > GonkIOSurface). > > This indeed is a problem as I'm now implementing the gralloc buffer > backed video buffer for CPU-decoded video. Maybe we can add a > "bool managed" field to SurfaceDescriptorGralloc and depend on it? Based on the gymnastics we have to do for GonkIOSurfaceImage, that's probably the best we can do. I would call the field "external" though. Would like to see one more patch.
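[Editor's note] A sketch of how the consumer-side deallocation could honor such a field. The get_SurfaceDescriptorGralloc()/external() accessors follow the getters IPDL would generate for the struct; this is illustrative, not the landed code.

    // Sketch for DeallocSharedImageData(): skip gralloc descriptors whose
    // lifetime is managed by an external producer (camera / HW decoder).
    if (aImage.type() == SharedImage::TSurfaceDescriptor) {
      const SurfaceDescriptor& desc = aImage.get_SurfaceDescriptor();
      if (desc.type() == SurfaceDescriptor::TSurfaceDescriptorGralloc &&
          desc.get_SurfaceDescriptorGralloc().external()) {
        // Owned by the producer; it will return/free the buffer itself.
        return;
      }
      // ... otherwise deallocate as before (Shmem and non-external gralloc).
    }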
un-bitrot
Attachment #648897 - Attachment is obsolete: true
Attachment #648897 - Flags: review?
Attachment #652660 - Flags: review?(roc)
Attachment #650840 - Attachment is obsolete: true
Attachment #652661 - Flags: review?(jones.chris.g)
Comment on attachment 652661 [details] [diff] [review] Camera direct texturing with async video. v1.2 >diff --git a/dom/camera/GonkCameraPreview.cpp b/dom/camera/GonkCameraPreview.cpp >+ mVideoSegment.AppendFrame(image.forget(), 1, gfxIntSize(mWidth, mHeight)); Please document here that AppendFrame() takes over this reference (assuming it does! ;) ). This doesn't match COM semantics so it's a little strange to read. >diff --git a/gfx/layers/ipc/LayersSurfaces.ipdlh b/gfx/layers/ipc/LayersSurfaces.ipdlh >+ /** >+ * Whether this SurfaceDescriptor's life time is managed by external >+ * sources Let's say here: We can have one source producing gralloc buffers and sharing them with another source that may also produce its own gralloc buffers. This happens for camera preview buffers sent to video code. When that happens, the producer can mark the buffer as "external" to prevent its consumer from mistakenly freeing the buffer. >diff --git a/gfx/layers/ipc/SharedImageUtils.h b/gfx/layers/ipc/SharedImageUtils.h >+ } else if (aImage.type() == SharedImage::TSurfaceDescriptor && >+ aImage.get_SurfaceDescriptor().type() == SurfaceDescriptor::TShmem) { You could go ahead and add the |external| check for gralloc buffers, but doing that in the video bug is fine too. Looks much better, thanks! r=me with comment changes above.
Attachment #652661 - Flags: review?(jones.chris.g) → review+
Address nits and fix the Monitor usage.
Attachment #652661 - Attachment is obsolete: true