Closed Bug 938034 Opened 11 years ago Closed 10 years ago

[B2G] GetUserMedia can provide recording callback in media stream

Categories

(Core :: WebRTC, defect)

ARM
Gonk (Firefox OS)
defect
Not set
normal

Tracking


RESOLVED FIXED
mozilla37

People

(Reporter: rlin, Assigned: ayang)

References

(Blocks 1 open bug)

Details

Attachments

(2 files, 24 obsolete files)

10.21 KB, patch
ayang
: review+
Details | Diff | Splinter Review
19.93 KB, patch
ayang
: review+
Details | Diff | Splinter Review
gUM can provide a camera preview video frame callback now, but the frame fps/size isn't high enough. To record high-quality video, the media stream should have the capacity to pass high-fps/high-quality video frames out from the camera [recording callback].
Assignee: nobody → slee
Summary: [B2G] GetUserMedia can provide camera recording callback in media stream → [B2G] GetUserMedia can provide recording callback in media stream
Note - this issue is related to gUM specifically, not the media recording API.
No longer blocks: MediaRecording
Blocks: 969312
Assignee: slee → jolin
B2G camera video source was mixed with WebRTC video source. Create its own class to make further modifications easier.
Attachment #8381185 - Flags: feedback?(slee)
Attachment #8381185 - Flags: feedback?(cku)
For HW encoder to recognize data from GonkCameraSource.
Attachment #8381186 - Flags: feedback?(pchang)
Attachment #8381186 - Flags: feedback?(cku)
For media engine to create GonkCameraSource and read from it without blocking.
Attachment #8381187 - Flags: feedback?(cku)
Create a GonkMediaSource to read data from camera and send them through media stream to direct consumer.
Attachment #8381188 - Flags: feedback?(slee)
Attachment #8381188 - Flags: feedback?(cku)
Attachment #8381187 - Flags: feedback?(slee)
Comment on attachment 8381185 [details] [diff] [review] Part 1: [Refactor] Extract B2G camera video source to its own class. Review of attachment 8381185 [details] [diff] [review]: ----------------------------------------------------------------- This patch is cool. It moves the B2G-specific MediaEngineVideoSource implementation out of the single MediaEngineWebRTCVideoSource and into MediaEngineGonkVideoSource.
Attachment #8381185 - Flags: feedback?(cku) → feedback+
Comment on attachment 8381186 [details] [diff] [review] Part 2: [WIP] Add new image type for B2G camera metadata buffer. Review of attachment 8381186 [details] [diff] [review]: ----------------------------------------------------------------- Sorry, I can't give a better name for this type.
Attachment #8381186 - Flags: feedback?(cku) → feedback+
Comment on attachment 8381186 [details] [diff] [review] Part 2: [WIP] Add new image type for B2G camera metadata buffer. Review of attachment 8381186 [details] [diff] [review]: ----------------------------------------------------------------- ::: gfx/layers/ImageTypes.h @@ +30,5 @@ > + * which encapsulates a "metadata" buffer read from camera hardware and is > + * only used by B2G HW video encoder. > + */ > + B2G_METADATA_BUFFER_CAMERA, > + Encapsulate in MOZ_B2G_CAMERA?
Comment on attachment 8381188 [details] [diff] [review] Part 4: [WIP] Pulling data from Gonk camera in media engine. Review of attachment 8381188 [details] [diff] [review]: ----------------------------------------------------------------- Good. I should go to Bug 969312 after this final patch. Although it's just a WIP, it's a clear one! Good job, John ::: content/media/webrtc/MediaEngine.h @@ +142,5 @@ > + // Gonk camera has independent data stream for video recording. > +#ifdef MOZ_B2G_CAMERA > + virtual nsresult StartRecording() { return NS_ERROR_NOT_IMPLEMENTED; } > + virtual nsresult StopRecording() { return NS_ERROR_NOT_IMPLEMENTED; } > +#endif Why do we need these two functions in the base class? They are called only in MediaEngineGonkVideoSource. ::: content/media/webrtc/MediaEngineGonkVideo.cpp @@ +40,4 @@ > NS_IMPL_ADDREF_INHERITED(MediaEngineGonkVideoSource, CameraControlListener) > NS_IMPL_RELEASE_INHERITED(MediaEngineGonkVideoSource, CameraControlListener) > > +struct GonkCameraImage MOZ_FINAL : public layers::Image Kind of weird here: layers::Image is a class, but GonkCameraImage is a structure. @@ +46,5 @@ > + : Image(aData, ImageFormat::B2G_METADATA_BUFFER_CAMERA) > + , mSize(aWidth, aHeight) > + {} > + > + ~GonkCameraImage() virtual MOZ_OVERRIDE @@ +53,5 @@ > + android::MediaBuffer* p = static_cast<android::MediaBuffer*>(mImplData); > + p->release(); > + mImplData = nullptr; > + } > + } The client side (such as the WebRTC encoder or media encoder) will receive this image from a VideoSegment, right? How do they handle this image? static_cast<android::MediaBuffer*>(GetImplData())? @@ +143,5 @@ > + ALOGE("CameraSource buffer ptr:%p size:%d time:%lld delta:%lld duration:%lld", > + buffer->data() + buffer->range_offset(), buffer->range_length(), time, > + delta, rawSegment.GetDuration()); > + aSource->AppendToTrack(aID, &emptySegment, &rawSegment); > + } OK, so, here is the magic.
Attachment #8381188 - Flags: feedback?(cku) → feedback+
Comment on attachment 8381188 [details] [diff] [review] Part 4: [WIP] Pulling data from Gonk camera in media engine. Review of attachment 8381188 [details] [diff] [review]: ----------------------------------------------------------------- ::: content/media/webrtc/MediaEngineGonkVideo.cpp @@ +523,5 @@ > + mCameraSourceStarted = true; > + return NS_OK; > + } else { > + return NS_ERROR_FAILURE; > + } Suggested: if (mCameraSource->start() == android::OK) { return NS_OK; } else { mCameraSource.clear(); return NS_ERROR_FAILURE; } @@ +532,5 @@ > +{ > + if (mCameraSourceStarted) { > + mCameraSource->stop(); > + mCameraSourceStarted = false; > + } Suggested: if (mCameraSource) { mCameraSource->stop(); mCameraSource.clear(); } ::: content/media/webrtc/MediaEngineGonkVideo.h @@ +192,5 @@ > > void ChooseCapability(const MediaEnginePrefs &aPrefs); > + > + android::sp<android::GonkCameraSource> mCameraSource; > + bool mCameraSourceStarted; mCameraSourceStarted is not required. Set mCameraSource to null after stop or when start fails, and use the mCameraSource pointer value to tell whether it has started.
Attachment #8381187 - Flags: feedback?(cku) → feedback+
Comment on attachment 8381186 [details] [diff] [review] Part 2: [WIP] Add new image type for B2G camera metadata buffer. Review of attachment 8381186 [details] [diff] [review]: ----------------------------------------------------------------- ::: gfx/layers/ImageTypes.h @@ +30,5 @@ > + * which encapsulates a "metadata" buffer read from camera hardware and is > + * only used by B2G HW video encoder. > + */ > + B2G_METADATA_BUFFER_CAMERA, > + How about rename to METADATA_GRALLOC? Because the usage case may not limit to camera only. Please also explain its structure somewhere.
Attachment #8381186 - Flags: feedback?(pchang) → feedback+
Comment on attachment 8381185 [details] [diff] [review] Part 1: [Refactor] Extract B2G camera video source to its own class. Looks good to me.
Attachment #8381185 - Flags: feedback?(slee) → feedback+
Comment on attachment 8381187 [details] [diff] [review] Part 3: [WIP] Add functions in Gonk camera classes for media engine Review of attachment 8381187 [details] [diff] [review]: ----------------------------------------------------------------- ::: dom/camera/GonkCameraSource.cpp @@ +741,5 @@ > } > > +bool GonkCameraSource::hasData() { > + Mutex::Autolock autoLock(mLock); > + return mStarted && !mFramesReceived.empty(); You only need to check !mFramesReceived.empty(). When GonkCameraSource is stopped, mFramesReceived must be empty.
Attachment #8381187 - Flags: feedback?(slee) → feedback+
Comment on attachment 8381188 [details] [diff] [review] Part 4: [WIP] Pulling data from Gonk camera in media engine. Review of attachment 8381188 [details] [diff] [review]: ----------------------------------------------------------------- ::: content/media/webrtc/MediaEngineGonkVideo.cpp @@ +117,3 @@ > // This can fail if either a) we haven't added the track yet, or b) > // we've removed or finished the track. > + if (aSource->AppendToTrack(aID, &(segment), &emptySegment)) { Why do you need to append an empty segment here? @@ +120,4 @@ > aLastEndTime = target; > } > > + if (mCameraSourceStarted && mCameraSource->hasData()) { You only need to check whether mCameraSource is null or not. @@ +139,5 @@ > + mLastFrameTime = time; > + > + VideoSegment rawSegment; > + rawSegment.AppendFrame(frame.forget(), delta, IntSize(mWidth, mHeight)); > + ALOGE("CameraSource buffer ptr:%p size:%d time:%lld delta:%lld duration:%lld", There is no ALOGE definition here. @@ +143,5 @@ > + ALOGE("CameraSource buffer ptr:%p size:%d time:%lld delta:%lld duration:%lld", > + buffer->data() + buffer->range_offset(), buffer->range_length(), time, > + delta, rawSegment.GetDuration()); > + aSource->AppendToTrack(aID, &emptySegment, &rawSegment); > + } It seems that when starting recording, you may insert one real and one empty segment for display (line 119). You also insert one real and one empty segment for the direct listener. Do the listeners need to be modified to accept this scenario? @@ +506,5 @@ > +{ > + if (mCameraSourceStarted) { > + return NS_OK; > + } > + You only need to check whether mCameraSource is null or not once you apply CJ's suggestion.
Attachment #8381188 - Flags: feedback?(slee) → feedback+
(In reply to C.J. Ku[:CJKu] from comment #10) > Comment on attachment 8381188 [details] [diff] [review] > Part 4: [WIP] Pulling data from Gonk camera in media engine. > > Review of attachment 8381188 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: content/media/webrtc/MediaEngine.h > @@ +142,5 @@ > > + // Gonk camera has independent data stream for video recording. > > +#ifdef MOZ_B2G_CAMERA > > + virtual nsresult StartRecording() { return NS_ERROR_NOT_IMPLEMENTED; } > > + virtual nsresult StopRecording() { return NS_ERROR_NOT_IMPLEMENTED; } > > +#endif > > Why we need these two functions in base class? > These two functions are called in MediaEngineGonkVideoSource only They're in superclass because eventually I'd like start/stop recording through notifications from media stream. Will move them to subclass if that cannot be done. > > ::: content/media/webrtc/MediaEngineGonkVideo.cpp > @@ +40,4 @@ > > NS_IMPL_ADDREF_INHERITED(MediaEngineGonkVideoSource, CameraControlListener) > > NS_IMPL_RELEASE_INHERITED(MediaEngineGonkVideoSource, CameraControlListener) > > > > +struct GonkCameraImage MOZ_FINAL : public layers::Image > > Kind of wired here. layers::Image is a class and GonkCameraImage is a > structure Right. Sorry that I was just being lazy here. Will fix it. > > @@ +46,5 @@ > > + : Image(aData, ImageFormat::B2G_METADATA_BUFFER_CAMERA) > > + , mSize(aWidth, aHeight) > > + {} > > + > > + ~GonkCameraImage() > > virtual MOZ_OVERRIDE Will mark this class with MOZ_FINAL to indicate that it's not meant to be subclass-ed. As for MOZ_OVERRIDE, we don't actually override *destructor*, right? > > @@ +53,5 @@ > > + android::MediaBuffer* p = static_cast<android::MediaBuffer*>(mImplData); > > + p->release(); > > + mImplData = nullptr; > > + } > > + } > > Client side(such as WebRTC encoder or media encoder) will receive this image > from VideoSegment, right? > > How they handle this image? 
> static_cast<android::MediaBuffer*>GetImplData()? Exactly! > > @@ +143,5 @@ > > + ALOGE("CameraSource buffer ptr:%p size:%d time:%lld delta:%lld duration:%lld", > > + buffer->data() + buffer->range_offset(), buffer->range_length(), time, > > + delta, rawSegment.GetDuration()); > > + aSource->AppendToTrack(aID, &emptySegment, &rawSegment); > > + } > > OK, so, here is the magic.
(In reply to C.J. Ku[:CJKu] from comment #11) > Comment on attachment 8381188 [details] [diff] [review] > Part 4: [WIP] Pulling data from Gonk camera in media engine. > > Review of attachment 8381188 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: content/media/webrtc/MediaEngineGonkVideo.h > @@ +192,5 @@ > > > > void ChooseCapability(const MediaEnginePrefs &aPrefs); > > + > > + android::sp<android::GonkCameraSource> mCameraSource; > > + bool mCameraSourceStarted; > > mCameraSourceStarted is not required. Set mCameraSource as null after stop > or start fail, use mCameraSource pointer value to tell whether stated You're right. It's not needed in current implementation. I added this flag so that in the future, if start/stop can be controlled by media stream notification, we can reuse the source between multiple recording sessions. Will remove it if that cannot be done.
(In reply to peter chang[:pchang][:peter] from comment #12) > Comment on attachment 8381186 [details] [diff] [review] > Part 2: [WIP] Add new image type for B2G camera metadata buffer. > > Review of attachment 8381186 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: gfx/layers/ImageTypes.h > @@ +30,5 @@ > > + * which encapsulates a "metadata" buffer read from camera hardware and is > > + * only used by B2G HW video encoder. > > + */ > > + B2G_METADATA_BUFFER_CAMERA, > > + > > How about rename to METADATA_GRALLOC? > Because the usage case may not limit to camera only. Please also explain its > structure somewhere. I was counting on you to add another new type, say B2G_METADATA_BUFFER_GRALLOC. :P How about just calling it B2G_METADATA_BUFFER for both camera and gralloc-ed buffers?
(In reply to StevenLee[:slee] from comment #14) > Comment on attachment 8381187 [details] [diff] [review] > Part 3: [WIP] Add functions in Gonk camera classes for media engine > > Review of attachment 8381187 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: dom/camera/GonkCameraSource.cpp > @@ +741,5 @@ > > } > > > > +bool GonkCameraSource::hasData() { > > + Mutex::Autolock autoLock(mLock); > > + return mStarted && !mFramesReceived.empty(); > > You can only check !mFramesReceived.empty(). When GonkCameraSource is > stopped, mFramesReceived must be empty. Just my imagination that it could be quicker without peeking into mFramesReceived. Probably premature optimization here. :)
(In reply to StevenLee[:slee] from comment #15) > Comment on attachment 8381188 [details] [diff] [review] > Part 4: [WIP] Pulling data from Gonk camera in media engine. > > Review of attachment 8381188 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: content/media/webrtc/MediaEngineGonkVideo.cpp > @@ +117,3 @@ > > // This can fail if either a) we haven't added the track yet, or b) > > // we've removed or finished the track. > > + if (aSource->AppendToTrack(aID, &(segment), &emptySegment)) { > > Why do you need append empty segment here. So that direct listeners don't have to filter out data from the preview window in NotifyRealtimeData(). :) It exploits the fact that the 1st segment won't be sent to the direct consumer if the 2nd one is not nullptr (http://dxr.mozilla.org/mozilla-central/source/content/media/MediaStreamGraph.cpp?from=MediaStreamGraph.cpp#2138) > > @@ +139,5 @@ > > + mLastFrameTime = time; > > + > > + VideoSegment rawSegment; > > + rawSegment.AppendFrame(frame.forget(), delta, IntSize(mWidth, mHeight)); > > + ALOGE("CameraSource buffer ptr:%p size:%d time:%lld delta:%lld duration:%lld", > > There is no ALOGE definition here. Should've removed the debug log before uploading the patch. > > @@ +143,5 @@ > > + ALOGE("CameraSource buffer ptr:%p size:%d time:%lld delta:%lld duration:%lld", > > + buffer->data() + buffer->range_offset(), buffer->range_length(), time, > > + delta, rawSegment.GetDuration()); > > + aSource->AppendToTrack(aID, &emptySegment, &rawSegment); > > + } > > It seems that when start recording, you may insert one real and one empty > segment to display(line 119). You also insert one real and one empty segment > to direct listener. Do the listeners need to modify for accepting this > scenario? You're right. This WIP breaks the direct listener in WebRTC unless the HW video encoder is also enabled there. 
> > @@ +506,5 @@ > > +{ > > + if (mCameraSourceStarted) { > > + return NS_OK; > > + } > > + > > You can only check whether mCameraSource is null or not when you apply cj's > suggestion. Agreed.
Comment on attachment 8381185 [details] [diff] [review] Part 1: [Refactor] Extract B2G camera video source to its own class. Review of attachment 8381185 [details] [diff] [review]: ----------------------------------------------------------------- ::: content/media/webrtc/MediaEngineGonkVideo.cpp @@ +430,5 @@ > + > + if (mSensorAngle == 0) { > + mImage = aImage; > + } else { > + RotateImage(aImage); Note: the rotation code in this file should be updated to match the code that landed in bug 970183. For example, there's a gfx bug that causes crashes here if you try to use this optimization.
Attachment #8381185 - Flags: feedback+
Per talk with John Lin, take this bug from him.
Assignee: jolin → ctai
Comment on attachment 8453589 [details] [diff] [review] Part1: [Refactor] Extract B2G camera video source to its own class. v2.0 Rebased. Carry CJ Ku, Steve Lee and Randell's f+.
Attachment #8453589 - Flags: feedback+
Rebased and carry cku and pchang's f+
Attachment #8381186 - Attachment is obsolete: true
Attachment #8453591 - Flags: feedback+
Rebased and carry cku and slee's f+
Attachment #8381187 - Attachment is obsolete: true
Attachment #8453593 - Flags: feedback+
Rebased and carry cku and slee's f+.
Attachment #8381188 - Attachment is obsolete: true
Attachment #8453594 - Flags: feedback+
Rebased on top of changeset 197776.
Attachment #8453589 - Attachment is obsolete: true
Attachment #8468179 - Flags: feedback?(jolin)
Comment on attachment 8468179 [details] [diff] [review] Part1: [Refactor] Extract B2G camera video source to its own class. v2.1 Carry cku, slee, rjesup's feedback+.
Attachment #8468179 - Flags: feedback+
Rebased. Change the name to B2G_METADATA_BUFFER.
Attachment #8453591 - Attachment is obsolete: true
Attachment #8468181 - Flags: feedback?(jolin)
Attachment #8453593 - Attachment is obsolete: true
Rebased. Added a new listener, CamcorderListener, for camcorder data.
Attachment #8453594 - Attachment is obsolete: true
Attachment #8468183 - Flags: feedback?(jolin)
Comment on attachment 8468179 [details] [diff] [review] Part1: [Refactor] Extract B2G camera video source to its own class. v2.1 Review of attachment 8468179 [details] [diff] [review]: ----------------------------------------------------------------- ::: content/media/webrtc/MediaEngineGonkVideo.cpp @@ +143,5 @@ > + Intersect(cHeight, advanced[i].mHeight); > + } > + } > + } > + // Detect Mac HD cams and give them some love in the form of a dynamic default Seems unnecessary for Gonk camera? ::: content/media/webrtc/MediaEngineGonkVideo.h @@ +47,5 @@ > +class CameraAllocateRunnable; > +class GetCameraNameRunnable; > + > +/** > + * The WebRTC implementation of the MediaEngine interface. s/WebRTC/B2G/g @@ +72,5 @@ > +public: > + MediaEngineGonkVideoSource(int aIndex, > + MediaSourceType aMediaSource = MediaSourceType::Camera) > + : mCameraControl(nullptr) > + , mCallbackMonitor("WebRTCCamera.CallbackMonitor") s/WebRTC/Gonk/g @@ +77,5 @@ > + , mRotation(0) > + , mBackCamera(false) > + , mCaptureIndex(aIndex) > + , mMediaSource(aMediaSource) > + , mMonitor("WebRTCCamera.Monitor") s/WebRTC/Gonk/g @@ +96,5 @@ > + const MediaEnginePrefs &aPrefs); > + virtual nsresult Deallocate(); > + virtual nsresult Start(SourceMediaStream*, TrackID); > + virtual nsresult Stop(SourceMediaStream*, TrackID); > + virtual void SetDirectListeners(bool aHasListeners); Obsoleted by new listener type added in part 4? @@ +113,5 @@ > + return false; > + } > + > + virtual const MediaSourceType GetMediaSource() { > + return mMediaSource; Always return |MediaSourceType::Camera| and get rid of |mMediaSource| @@ +142,5 @@ > + nsCOMPtr<nsIFile> tmp; > + nsresult rv = NS_GetSpecialDirectory(NS_OS_TEMP_DIR, getter_AddRefs(tmp)); > + NS_ENSURE_SUCCESS(rv, rv); > + > + tmp->Append(NS_LITERAL_STRING("webrtc_snapshot.jpeg")); Not sure what the temp file is for... but s/webrtc/gonk/ anyway? 
@@ +184,5 @@ > + // mMonitor protects mImage access/changes, and transitions of mState > + // from kStarted to kStopped (which are combined with EndTrack() and > + // image changes). Note that mSources is not accessed from other threads > + // for video and is not protected. > + Monitor mMonitor; // Monitor for processing WebRTC frames. s/WebRTC/preview/ ? @@ +188,5 @@ > + Monitor mMonitor; // Monitor for processing WebRTC frames. > + int mWidth, mHeight; > + nsRefPtr<layers::Image> mImage; > + nsRefPtr<layers::ImageContainer> mImageContainer; > + bool mHasDirectListeners; Obsoleted by new listener type added in part 4? ::: content/media/webrtc/MediaEngineWebRTCVideo.cpp @@ +422,4 @@ > if (mState != kStopped && mState != kAllocated) { > return NS_ERROR_FAILURE; > } > +#ifdef XP_MACOSX s/ifdef/if/
Attachment #8468179 - Flags: feedback?(jolin) → feedback+
Attachment #8468181 - Flags: feedback?(jolin) → feedback+
Comment on attachment 8468182 [details] [diff] [review] Part 3: Add functions in Gonk camera classes for media engine. v2.1 Review of attachment 8468182 [details] [diff] [review]: ----------------------------------------------------------------- ::: dom/camera/GonkCameraControl.cpp @@ +1689,5 @@ > + bool aStoreMetaDataInVideoBuffers, > + GonkCameraSource** aCameraSource) > +{ > + MOZ_ASSERT(aCameraSource); > + NS_ENSURE_ARG_POINTER(aCameraSource); Use new recommended pattern? if (NS_WARN_IF(...)) { return ... } @@ +1698,5 @@ > + > + GonkCameraSource* source = GonkCameraSource::Create(mCameraHw, aSize, > + aFrameRate, > + aStoreMetaDataInVideoBuffers); > + NS_ENSURE_TRUE(source, NS_ERROR_NOT_AVAILABLE); Ditto
Attachment #8468182 - Flags: feedback?(jolin) → feedback+
Comment on attachment 8468183 [details] [diff] [review] Part 4: Pulling data from Gonk camera in media engine. v3.0 Review of attachment 8468183 [details] [diff] [review]: ----------------------------------------------------------------- ::: content/media/MediaStreamGraph.cpp @@ +2569,5 @@ > + l->NotifyEvent(GraphImpl(), MediaStreamListener::EVENT_HAS_NO_CAMCORDER_LISTENERS); > + } > + } > +} > + Will Add/RemoveCamcorderListener() always be called in the same thread? If so, please add assertion for that. If not, either lock the whole function, or use message dispatching like Add/RemoveListener(). Also consider fixing Add/RemoveDirectListener() if you don't mind. ::: content/media/webrtc/MediaEngineGonkVideo.cpp @@ +139,5 @@ > + if ((mCameraSource != nullptr) && mCameraSource->hasData()) { > + android::MediaBuffer* buffer = nullptr; > + mCameraSource->read(&buffer); > + if (buffer) { > + int32_t iWidth, iHeight; Nits: s/iWidth/width/g s/iHeight/height/g @@ +144,5 @@ > + mCameraSource->getFormat()->findInt32(android::kKeyWidth, &iWidth); > + mCameraSource->getFormat()->findInt32(android::kKeyHeight, &iHeight); > + int64_t time = 0; > + if (buffer->meta_data()->findInt64(android::kKeyTime, &time) && time > 0) { > + // TODO? Remove this? @@ +150,5 @@ > + nsRefPtr<layers::Image> frame = new GonkCameraImage(static_cast<void*>(buffer), > + iWidth, iHeight); > + if (mLastFrameTime < 0) { > + // This is the 1st frame. Use fixed delta (33ms) > + delta = 1000000 / mFps; Add local variable |duration| rather than |delta| to make the code easier to understand. Also comment that it is not current but last frame duration. @@ +394,5 @@ > + // Now we have CamcorderListener, so we need to open the camcorder. > + if (aHasCamcorderListeners) { > + StartRecording(); > + return; > + } You might want to explain why StopRecording() is not called here. 
@@ +696,5 @@ > } > > +nsresult > +MediaEngineGonkVideoSource::StartRecording() > +{ Will this (and StopRecording()) always be called in the same thread? If so, add assertion for it. If not, use a lock to protect |mCameraSource|. @@ +708,5 @@ > + nsresult result = camera->CreateSource(videoSize, > + MediaEngine::DEFAULT_VIDEO_FPS, > + true /* store meta in buffer */, > + &source); > + NS_ENSURE_SUCCESS(result, result); Nits: NS_WARN_IF ::: content/media/webrtc/MediaEngineWebRTCAudio.cpp @@ +588,5 @@ > // or Destroyed. > // Note: due to evil magic, the nsAutoPtr<AudioSegment>'s ownership transfers to > // the Runnable (AutoPtr<> = AutoPtr<>) > RUN_ON_THREAD(mThread, WrapRunnable(mSources[i], &SourceMediaStream::AppendToTrack, > + mTrackID, segment, (AudioSegment *) nullptr, (AudioSegment *) nullptr), Don't need to change this line since the 3rd parameter has default value. ::: dom/media/MediaManager.cpp @@ +543,5 @@ > + virtual bool AddCamcorderListener(MediaStreamCamcorderListener *aListener) MOZ_OVERRIDE > + { > + if (mSourceStream) { > + mSourceStream->AddCamcorderListener(aListener); > + return true; // application should ignore NotifyQueuedTrackData return mSourceStream->AddCamcorderListener(aListener)?
Attachment #8468183 - Flags: feedback?(jolin) → feedback-
Comment on attachment 8468179 [details] [diff] [review] Part1: [Refactor] Extract B2G camera video source to its own class. v2.1 Review of attachment 8468179 [details] [diff] [review]: ----------------------------------------------------------------- Thanks for your feedback. :D ::: content/media/webrtc/MediaEngineGonkVideo.cpp @@ +143,5 @@ > + Intersect(cHeight, advanced[i].mHeight); > + } > + } > + } > + // Detect Mac HD cams and give them some love in the form of a dynamic default Yes, not necessary. Will remove it. ::: content/media/webrtc/MediaEngineGonkVideo.h @@ +47,5 @@ > +class CameraAllocateRunnable; > +class GetCameraNameRunnable; > + > +/** > + * The WebRTC implementation of the MediaEngine interface. OK, Thanks. @@ +72,5 @@ > +public: > + MediaEngineGonkVideoSource(int aIndex, > + MediaSourceType aMediaSource = MediaSourceType::Camera) > + : mCameraControl(nullptr) > + , mCallbackMonitor("WebRTCCamera.CallbackMonitor") OK, Thanks. @@ +77,5 @@ > + , mRotation(0) > + , mBackCamera(false) > + , mCaptureIndex(aIndex) > + , mMediaSource(aMediaSource) > + , mMonitor("WebRTCCamera.Monitor") OK, Thanks. @@ +96,5 @@ > + const MediaEnginePrefs &aPrefs); > + virtual nsresult Deallocate(); > + virtual nsresult Start(SourceMediaStream*, TrackID); > + virtual nsresult Stop(SourceMediaStream*, TrackID); > + virtual void SetDirectListeners(bool aHasListeners); Cannot be obsoleted; Bug 1049302 needs it. @@ +113,5 @@ > + return false; > + } > + > + virtual const MediaSourceType GetMediaSource() { > + return mMediaSource; OK, Thanks. @@ +142,5 @@ > + nsCOMPtr<nsIFile> tmp; > + nsresult rv = NS_GetSpecialDirectory(NS_OS_TEMP_DIR, getter_AddRefs(tmp)); > + NS_ENSURE_SUCCESS(rv, rv); > + > + tmp->Append(NS_LITERAL_STRING("webrtc_snapshot.jpeg")); I think it is for the snapshot image. We should keep it. 
@@ +184,5 @@ > + // mMonitor protects mImage access/changes, and transitions of mState > + // from kStarted to kStopped (which are combined with EndTrack() and > + // image changes). Note that mSources is not accessed from other threads > + // for video and is not protected. > + Monitor mMonitor; // Monitor for processing WebRTC frames. OK, Thanks. @@ +188,5 @@ > + Monitor mMonitor; // Monitor for processing WebRTC frames. > + int mWidth, mHeight; > + nsRefPtr<layers::Image> mImage; > + nsRefPtr<layers::ImageContainer> mImageContainer; > + bool mHasDirectListeners; ditto. ::: content/media/webrtc/MediaEngineWebRTCVideo.cpp @@ +422,4 @@ > if (mState != kStopped && mState != kAllocated) { > return NS_ERROR_FAILURE; > } > +#ifdef XP_MACOSX OK, Thanks.
Comment on attachment 8468183 [details] [diff] [review] Part 4: Pulling data from Gonk camera in media engine. v3.0 Review of attachment 8468183 [details] [diff] [review]: ----------------------------------------------------------------- Thanks for your detailed review. ::: content/media/MediaStreamGraph.cpp @@ +2569,5 @@ > + l->NotifyEvent(GraphImpl(), MediaStreamListener::EVENT_HAS_NO_CAMCORDER_LISTENERS); > + } > + } > +} > + |Add/RemoveCamcorderListener| will be called on the main thread. I will add MOZ_ASSERT(NS_IsMainThread()); at the top of the function. ::: content/media/webrtc/MediaEngineGonkVideo.cpp @@ +139,5 @@ > + if ((mCameraSource != nullptr) && mCameraSource->hasData()) { > + android::MediaBuffer* buffer = nullptr; > + mCameraSource->read(&buffer); > + if (buffer) { > + int32_t iWidth, iHeight; OK, Thanks. @@ +144,5 @@ > + mCameraSource->getFormat()->findInt32(android::kKeyWidth, &iWidth); > + mCameraSource->getFormat()->findInt32(android::kKeyHeight, &iHeight); > + int64_t time = 0; > + if (buffer->meta_data()->findInt64(android::kKeyTime, &time) && time > 0) { > + // TODO? OK, Thanks. @@ +150,5 @@ > + nsRefPtr<layers::Image> frame = new GonkCameraImage(static_cast<void*>(buffer), > + iWidth, iHeight); > + if (mLastFrameTime < 0) { > + // This is the 1st frame. Use fixed delta (33ms) > + delta = 1000000 / mFps; OK, Thanks. @@ +394,5 @@ > + // Now we have CamcorderListener, so we need to open the camcorder. > + if (aHasCamcorderListeners) { > + StartRecording(); > + return; > + } OK, Thanks. @@ +696,5 @@ > } > > +nsresult > +MediaEngineGonkVideoSource::StartRecording() > +{ I will add a Monitor, mCameraSourceMonitor, to prevent a race condition with |StopRecording|. @@ +708,5 @@ > + nsresult result = camera->CreateSource(videoSize, > + MediaEngine::DEFAULT_VIDEO_FPS, > + true /* store meta in buffer */, > + &source); > + NS_ENSURE_SUCCESS(result, result); OK, Thanks. ::: content/media/webrtc/MediaEngineWebRTCAudio.cpp @@ +588,5 @@ > // or Destroyed. 
> // Note: due to evil magic, the nsAutoPtr<AudioSegment>'s ownership transfers to > // the Runnable (AutoPtr<> = AutoPtr<>) > RUN_ON_THREAD(mThread, WrapRunnable(mSources[i], &SourceMediaStream::AppendToTrack, > + mTrackID, segment, (AudioSegment *) nullptr, (AudioSegment *) nullptr), We need this for the build to pass, because it uses the WrapRunnable macro. ::: dom/media/MediaManager.cpp @@ +543,5 @@ > + virtual bool AddCamcorderListener(MediaStreamCamcorderListener *aListener) MOZ_OVERRIDE > + { > + if (mSourceStream) { > + mSourceStream->AddCamcorderListener(aListener); > + return true; // application should ignore NotifyQueuedTrackData AddCamcorderListener returns void, so we can't do that.
Carry John and rjesup's f+.
Attachment #8468179 - Attachment is obsolete: true
Attachment #8469786 - Flags: feedback+
Carry John and pchang's f+. Add r=roc.
Attachment #8468181 - Attachment is obsolete: true
Attachment #8469790 - Flags: feedback+
Carry John's f+. Address John's comments.
Attachment #8468182 - Attachment is obsolete: true
Attachment #8469792 - Flags: feedback+
Address John's comment.
Attachment #8468183 - Attachment is obsolete: true
Attachment #8469793 - Flags: feedback?(jolin)
Change #if XP_MACOSX to #ifdef XP_MACOSX.
Attachment #8469786 - Attachment is obsolete: true
Comment on attachment 8469800 [details] [diff] [review] Part1: [Refactor] Extract B2G camera video source to its own class. v2.3 Hi Randell, could you please review this patch? It separates the B2G camera video source from WebRTCVideoSource.
Attachment #8469800 - Flags: review?(rjesup)
Attachment #8469800 - Flags: feedback+
Comment on attachment 8469800 [details] [diff] [review] Part1: [Refactor] Extract B2G camera video source to its own class. v2.3 Review of attachment 8469800 [details] [diff] [review]: ----------------------------------------------------------------- Technically, this works. I have a design problem here, though. There is way too much duplicated code between MediaEngineWebRTCVideo and MediaEngineGonkVideo. This will be a maintenance nightmare (ok, at least a headache and loaded gun waiting to shoot us in the foot). Please identify the major base functionality and refactor that into a base class both inherit from. ::: content/media/webrtc/MediaEngineGonkVideo.cpp @@ +83,5 @@ > + > + MonitorAutoLock lock(mMonitor); > + // B2G does AddTrack, but holds kStarted until the hardware changes state. > + // So mState could be kReleased here. We really don't care about the state, > + // though. Good, I see you're merged to the latest MediaEngineWebRTCVideo.* source. Please track any future updates. ::: content/media/webrtc/MediaEngineWebRTC.h @@ -405,2 @@ > AsyncLatencyLogger::Get()->Release(); > -#endif Are you sure you wanted to do this, and not remove the Release() line? ::: content/media/webrtc/MediaEngineWebRTCVideo.cpp @@ +11,1 @@ > #include "MediaEngineWebRTC.h" MediaEngineWebRTC.h should come first; if it can't that's an indication that it is missing important includes.
Attachment #8469800 - Flags: review?(rjesup) → review-
Hi, Randell, You are right. I should add a base class. I will also fix the comments below in that patch. Thanks. (In reply to Randell Jesup [:jesup] from comment #44) > Comment on attachment 8469800 [details] [diff] [review] > Part1: [Refactor] Extract B2G camera video source to its own class. v2.3 > > Review of attachment 8469800 [details] [diff] [review]: > ----------------------------------------------------------------- > > Technically, this works. > > I have a design problem here, though. There is way too much duplicated code > between MediaEngineWebRTCVideo and MediaEngineGonkVideo. This will be a > maintenance nightmare (ok, at least a headache and loaded gun waiting to > shoot us in the foot). > > Please identify the major base functionality and refactor that into a base > class both inherit from. > > ::: content/media/webrtc/MediaEngineGonkVideo.cpp > @@ +83,5 @@ > > + > > + MonitorAutoLock lock(mMonitor); > > + // B2G does AddTrack, but holds kStarted until the hardware changes state. > > + // So mState could be kReleased here. We really don't care about the state, > > + // though. > > Good, I see you're merged to the latest MediaEngineWebRTCVideo.* source. > Please track any future updates. > > ::: content/media/webrtc/MediaEngineWebRTC.h > @@ -405,2 @@ > > AsyncLatencyLogger::Get()->Release(); > > -#endif > > Are you sure you wanted to do this, and not remove the Release() line? > > ::: content/media/webrtc/MediaEngineWebRTCVideo.cpp > @@ +11,1 @@ > > #include "MediaEngineWebRTC.h" > > MediaEngineWebRTC.h should come first; if it can't that's an indication that > it is missing important includes.
Comment on attachment 8469793 [details] [diff] [review] Part 4: Pulling data from Gonk camera in media engine. v3.1 Review of attachment 8469793 [details] [diff] [review]: ----------------------------------------------------------------- LGTM. ::: content/media/webrtc/MediaEngineGonkVideo.cpp @@ +45,5 @@ > NS_IMPL_QUERY_INTERFACE(MediaEngineGonkVideoSource, nsIRunnable) > NS_IMPL_ADDREF_INHERITED(MediaEngineGonkVideoSource, CameraControlListener) > NS_IMPL_RELEASE_INHERITED(MediaEngineGonkVideoSource, CameraControlListener) > > +class GonkCameraImage MOZ_FINAL : public layers::Image You might want to add some notes about how to use the image data here. E.g., GetImplData()/casting to MediaBuffer*, add_ref() to manage lifecycle, etc... @@ +148,5 @@ > + buffer->meta_data()->findInt64(android::kKeyTime, &time); > + nsRefPtr<layers::Image> frame = new GonkCameraImage(static_cast<void*>(buffer), > + width, height); > + if (mLastFrameTime < 0) { > + // This is the 1st frame. Use fixed delta (33ms) It won't be 33ms if mFps is not 30. How about "Infer the value using frame rate"?
Attachment #8469793 - Flags: feedback?(jolin) → feedback+
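For reference, a minimal sketch of jolin's suggestion above (the helper name is hypothetical; the real patch computes this inline in NotifyPull): infer the first-frame delta from the configured frame rate instead of hard-coding 33 ms.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helper: derive the per-frame time delta (in microseconds)
// from the configured frame rate, falling back to 30 fps when the rate is
// unknown. At 30 fps this yields ~33 ms, matching the old hard-coded value.
int64_t FirstFrameDeltaUs(int32_t aFps) {
  if (aFps <= 0) {
    aFps = 30; // fall back to a common default frame rate
  }
  return 1000000 / aFps; // microseconds per frame
}
```

At 60 fps this gives ~16 ms instead of the fixed 33 ms, which is the case the review comment is about.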
Comment on attachment 8471359 [details] [diff] [review] Part 4: Pulling data from Gonk camera in media engine. v3.2 Carry John's f+
Attachment #8471359 - Flags: feedback+
Comment on attachment 8469790 [details] [diff] [review] Part 2: Add new image type for B2G camera metadata buffer. v2.2 Review of attachment 8469790 [details] [diff] [review]: ----------------------------------------------------------------- Hi, roc, Could you please give me some feedback on this series of patches? The main idea of these patches is to provide a recording callback in the media stream on FX OS. MediaRecorder can then use this callback to encode higher-resolution data (higher than the preview data). I will modify part 1 based on Randell's comment to refactor a base class for the WebRTC and Gonk camera video sources. In the meantime, could you please check parts 2-4? Thanks.
Attachment #8469790 - Flags: feedback?(roc)
Comment on attachment 8469792 [details] [diff] [review] Part 3: Add functions in Gonk camera classes for media engine. v2.2 roc, Could you please check this patch? I added some functions to get recording data from GonkCamera.
Attachment #8469792 - Flags: feedback?(roc)
Comment on attachment 8469792 [details] [diff] [review] Part 3: Add functions in Gonk camera classes for media engine. v2.2 Hi, mikeh, Could you please check this patch? The main idea of this bug is to provide a recording callback in the media stream on FX OS. MediaRecorder can then use this callback to encode higher-resolution data (higher than the preview data). I added some functions to get recording data from GonkCamera. Thanks.
Attachment #8469792 - Flags: feedback?(mhabicher)
Comment on attachment 8471359 [details] [diff] [review] Part 4: Pulling data from Gonk camera in media engine. v3.2 Hi, roc, In this patch, I make the following modifications. 1. Add a new MediaStreamListener called MediaStreamCamcorderListener to notify listeners of recording data from the Gonk camera. 2. Modify MediaEngineGonkVideoSource::NotifyPull to append recording frames into the MediaStreamTrack. Thanks.
Attachment #8471359 - Flags: feedback?(roc)
Comment on attachment 8469792 [details] [diff] [review] Part 3: Add functions in Gonk camera classes for media engine. v2.2 Review of attachment 8469792 [details] [diff] [review]: ----------------------------------------------------------------- f+ from me (and r+ with comments below addressed). ::: dom/camera/GonkCameraControl.cpp @@ +1699,5 @@ > + GonkCameraSource** aCameraSource) > +{ > + MOZ_ASSERT(aCameraSource); > + if (NS_WARN_IF(!aCameraSource)) { > + return NS_ERROR_INVALID_POINTER; Please make this return NS_ERROR_INVALID_ARG. ::: dom/camera/GonkCameraControl.h @@ +90,5 @@ > + * data camera provides in buffers will be just opaque handles rather than > + * full size images. It's used to reduce the overhead of buffer data copying. > + * Caller should be responsible for the lifecycle of returned GonkCameraSource > + * object. > + */ Please use //-prefixed comments to follow the style used in the rest of this header. It might be simpler to change 'aFrameRate' to 'aFps' or 'aFramesPerSecond' so that it's immediately obvious what kind of parameter goes in there. If this argument can never be negative, please make it a uint32_t. Also, please document the return values. Finally, since the caller is definitely responsible for the life-cycle of the GonkCameraSource object, the description should probably read "Caller is responsible for...". ::: dom/camera/GonkCameraSource.cpp @@ +776,1 @@ > } // namespace android General: please update the header of this file to read: "Copyright (C) 2013-2014 Mozilla Foundation"
Attachment #8469792 - Flags: feedback?(mhabicher) → feedback+
Comment on attachment 8469800 [details] [diff] [review] Part1: [Refactor] Extract B2G camera video source to its own class. v2.3 Review of attachment 8469800 [details] [diff] [review]: ----------------------------------------------------------------- Drive-by review: I noticed a few nits, below. ::: content/media/webrtc/MediaEngineGonkVideo.cpp @@ +45,5 @@ > +NS_IMPL_QUERY_INTERFACE(MediaEngineGonkVideoSource, nsIRunnable) > +NS_IMPL_ADDREF_INHERITED(MediaEngineGonkVideoSource, CameraControlListener) > +NS_IMPL_RELEASE_INHERITED(MediaEngineGonkVideoSource, CameraControlListener) > + > +bool /* static */ bool @@ +50,5 @@ > +MediaEngineGonkVideoSource::IsWithin(int32_t n, const ConstrainLongRange& aRange) { > + return aRange.mMin <= n && n <= aRange.mMax; > +} > + > +int32_t /* static */ int32_t @@ +55,5 @@ > +MediaEngineGonkVideoSource::Clamp(int32_t n, const ConstrainLongRange& aRange) { > + return std::max(aRange.mMin, std::min(n, aRange.mMax)); > +} > + > +bool /* static */ bool @@ +60,5 @@ > +MediaEngineGonkVideoSource::AreIntersecting(const ConstrainLongRange& aA, const ConstrainLongRange& aB) { > + return aA.mMax >= aB.mMin && aA.mMin <= aB.mMax; > +} > + > +bool /* static */ static @@ +73,5 @@ > +// the last frame for whatever minimum period it think it needs. Note that > +// this means that no *real* frame can be inserted during this period. > +void > +MediaEngineGonkVideoSource::NotifyPull(MediaStreamGraph* aGraph, > + SourceMediaStream *aSource, nit: SourceMediaStream* @@ +203,5 @@ > +} > + > +nsresult > +MediaEngineGonkVideoSource::Allocate(const VideoTrackConstraintsN &aConstraints, > + const MediaEnginePrefs &aPrefs) nit: & goes with type. @@ +283,5 @@ > + return NS_OK; > +} > + > +nsresult > +MediaEngineGonkVideoSource::Stop(SourceMediaStream *aSource, TrackID aID) nit: * goes with type. @@ +391,5 @@ > +MediaEngineGonkVideoSource::DeallocImpl() { > +MOZ_ASSERT(NS_IsMainThread()); > + > +mCameraControl = nullptr; > +} nit: indentation. 
@@ +418,5 @@ > + > + int result; > + > + if (aBackCamera) { > + //back camera nit: //<space>comment text @@ +421,5 @@ > + if (aBackCamera) { > + //back camera > + result = (aCameraMountAngle - screenAngle + 360) % 360; > + } else { > + //front camera ditto. @@ +621,5 @@ > + > + return true; // return true because we're accepting the frame > +} > + > +} possible nit? -- } // namespace mozilla ::: content/media/webrtc/MediaEngineGonkVideo.h @@ +66,5 @@ > + */ > +class MediaEngineGonkVideoSource : public MediaEngineVideoSource > + , public nsRunnable > + , public mozilla::hal::ScreenConfigurationObserver > + , public CameraControlListener nit: line up commas with colon. @@ +83,5 @@ > + , mInitDone(false) > + , mInSnapshotMode(false) > + , mSnapshotPath(nullptr) > + { > + mState = kReleased; Could this be changed to mState(kReleased)? @@ +101,5 @@ > + bool aAgcOn, uint32_t aAGC, > + bool aNoiseOn, uint32_t aNoise, > + int32_t aPlayoutDelay) { return NS_OK; }; > + virtual void NotifyPull(MediaStreamGraph* aGraph, > + SourceMediaStream *aSource, nit: * goes with type. @@ +104,5 @@ > + virtual void NotifyPull(MediaStreamGraph* aGraph, > + SourceMediaStream *aSource, > + TrackID aId, > + StreamTime aDesiredTime, > + TrackTicks &aLastEndTime); nit: & goes with type. @@ +187,5 @@ > + nsRefPtr<layers::Image> mImage; > + nsRefPtr<layers::ImageContainer> mImageContainer; > + bool mHasDirectListeners; > + > + nsTArray<SourceMediaStream *> mSources; // When this goes empty, we shut down HW nit: * goes with type. @@ +208,5 @@ > + > + void ChooseCapability(const VideoTrackConstraintsN &aConstraints, > + const MediaEnginePrefs &aPrefs); > + void GuessCapability(const VideoTrackConstraintsN &aConstraints, > + const MediaEnginePrefs &aPrefs); nit: & goes with type.
Comment on attachment 8471359 [details] [diff] [review] Part 4: Pulling data from Gonk camera in media engine. v3.2 Review of attachment 8471359 [details] [diff] [review]: ----------------------------------------------------------------- Drive-by nit-ing. ::: content/media/DOMMediaStream.h @@ +98,5 @@ > + * data directly to MediaRecorder without going through graph queuing. > + * Returns a bool to let us know if direct data will be delivered. > + */ > + virtual bool AddCamcorderListener(MediaStreamCamcorderListener *aListener) { return false; } > + virtual void RemoveCamcorderListener(MediaStreamCamcorderListener *aListener) {} nit: * goes with type. ::: content/media/MediaStreamGraph.cpp @@ +2429,5 @@ > > bool > +SourceMediaStream::AppendToTrack(TrackID aID, MediaSegment* aSegment, > + MediaSegment *aRawSegment, > + MediaSegment *aCamcorderSegment) nit: * goes with type. @@ +2483,5 @@ > > + > +void > +SourceMediaStream::NotifyCamcorderConsumers(TrackData *aTrack, > + MediaSegment *aSegment) nit: * goes with type. @@ +2488,5 @@ > +{ > + // Call with mMutex locked > + MOZ_ASSERT(aTrack); > + > + for (uint32_t j = 0; j < mCamcorderListeners.Length(); ++j) { I think the correct style here is to use: for (nsTArray<nsRefPtr<MediaStreamCamcorderListener> >::index_type j = ... (And for all the other occurences below.) ::: content/media/MediaStreamGraph.h @@ +768,5 @@ > * or the stream was already finished. > */ > + bool AppendToTrack(TrackID aID, MediaSegment* aSegment, > + MediaSegment *aRawSegment = nullptr, > + MediaSegment *aCamcorderSegment = nullptr); nit: * goes with type. @@ +906,5 @@ > + * on the thread providing the data, and will call the Listeners on this > + * thread. > + */ > + void NotifyCamcorderConsumers(TrackData *aTrack, > + MediaSegment *aSegment); nit: * goes with type. 
::: content/media/webrtc/MediaEngineGonkVideo.cpp @@ +726,5 @@ > +MediaEngineGonkVideoSource::StopRecording() > +{ > + MOZ_ASSERT(NS_IsMainThread()); > + MonitorAutoLock enter(mCameraSourceMonitor); > + if (mCameraSource != nullptr) { nit: if (mCameraSource) { ::: content/media/webrtc/MediaEngineWebRTCAudio.cpp @@ +588,5 @@ > // or Destroyed. > // Note: due to evil magic, the nsAutoPtr<AudioSegment>'s ownership transfers to > // the Runnable (AutoPtr<> = AutoPtr<>) > RUN_ON_THREAD(mThread, WrapRunnable(mSources[i], &SourceMediaStream::AppendToTrack, > + mTrackID, segment, (AudioSegment *) nullptr, (AudioSegment *) nullptr), Use C++-style casts. ::: dom/media/MediaManager.cpp @@ +539,5 @@ > } > } > > + // Allow getUserMedia to pass camcorder data directly to MediaRecorder. > + virtual bool AddCamcorderListener(MediaStreamCamcorderListener *aListener) MOZ_OVERRIDE nit: * goes with type. I notice this file uses |Type *aValue| a lot--this should probably be cleaned up in a follow-up, style-specific bug. @@ +548,5 @@ > + } > + return false; > + } > + > + virtual void RemoveCamcorderListener(MediaStreamCamcorderListener *aListener) MOZ_OVERRIDE nit: ditto.
Comment on attachment 8469790 [details] [diff] [review] Part 2: Add new image type for B2G camera metadata buffer. v2.2 Review of attachment 8469790 [details] [diff] [review]: ----------------------------------------------------------------- ::: gfx/layers/ImageTypes.h @@ +27,5 @@ > > /** > + * The B2G_METADATA_BUFFER format creates a GonkCameraSourceImage, > + * which encapsulates a "metadata" buffer read from camera hardware and is > + * only used by B2G HW video encoder. Can you explain further what's in this buffer? Is it a high-resolution image? If so, that's not metadata... Metadata means something about the image, e.g. orientation data or GPS location or something like that.
Attachment #8469790 - Flags: feedback?(roc) → feedback-
Comment on attachment 8469792 [details] [diff] [review] Part 3: Add functions in Gonk camera classes for media engine. v2.2 Review of attachment 8469792 [details] [diff] [review]: ----------------------------------------------------------------- ::: dom/camera/GonkCameraSource.h @@ +81,5 @@ > + * buffer. > + * > + * @return true if source has buffer ready for reading. > + */ > + bool hasData(); Why doesn't GonkCameraSource follow Gecko naming conventions where the first letter of a method name is capitalized?
Comment on attachment 8471359 [details] [diff] [review] Part 4: Pulling data from Gonk camera in media engine. v3.2 Review of attachment 8471359 [details] [diff] [review]: ----------------------------------------------------------------- ::: content/media/MediaStreamGraph.h @@ +229,5 @@ > + * This will be called on any MediaStreamCamcorderListener added to a > + * a SourceMediaStream when AppendToTrack() is called. The MediaSegment > + * will be the CamcorderSegment (the metadata store in buffer) if available > + * in AppendToTrack(). Note that NotifyQueuedTrackChanges() calls will also > + * still occur. It's not clear why we need a new notification here. If we just have a parallel set of images being sent, then I think the best way to represent that would be with a separate video track. Is there a reason we can't use a separate video track?
Attachment #8471359 - Flags: feedback?(roc) → feedback-
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #57) > Why doesn't GonkCameraSource follow Gecko naming conventions where the first > letter of a method name is capitalized? Because it was pretty heavily ported from CameraSource.h in libstagefright.
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #56) > Comment on attachment 8469790 [details] [diff] [review] > Part 2: Add new image type for B2G camera metadata buffer. v2.2 > > Review of attachment 8469790 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: gfx/layers/ImageTypes.h > @@ +27,5 @@ > > > > /** > > + * The B2G_METADATA_BUFFER format creates a GonkCameraSourceImage, > > + * which encapsulates a "metadata" buffer read from camera hardware and is > > + * only used by B2G HW video encoder. > > Can you explain further what's in this buffer? Is it a high-resolution > image? If so, that's not metadata... Metadata means something about the > image, e.g. orientation data or GPS location or something like that. Please refer to the address below. http://androidxref.com/4.4.4_r1/xref/frameworks/av/services/camera/libcameraservice/device1/CameraHardwareInterface.h#254 It is not a high-resolution image. You can request that the camera HAL store metadata in the video buffers. OMX also supports this mode; see [1]. The OMX encoder will look up the stored metadata in the video buffers first and then do the encoding job. Basically it starts from GonkRecorder [2]. When we start a camera recorder, we use metadata mode to create the camera source if possible. But some camera HALs may not support storing metadata in the video buffers. After revisiting the document, I think I missed the part about handling the |storeMetaDataInBuffers| failure case. I will take it into consideration in the part 4 patch. [1]: http://androidxref.com/4.4.4_r1/xref/frameworks/av/media/libstagefright/include/OMX.h#71 [2]: http://dxr.mozilla.org/mozilla-central/source/dom/camera/GonkRecorder.cpp#1174
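To illustrate the comment above, here is a minimal sketch of the "metadata in buffer" layout that MetadataBufferType.h describes (the struct and helper are hypothetical simplifications; the actual payload is an opaque, platform-specific handle that only the camera HAL and the OMX encoder know how to interpret):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Sketch of the Android metadata-buffer convention: the first 4 bytes of
// the buffer hold a type tag, and the bytes after it are an opaque handle
// shared between the camera HAL and the encoder (not pixel data).
enum MetadataBufferType : uint32_t {
  kMetadataBufferTypeCameraSource = 0,  // payload is HAL-specific
  kMetadataBufferTypeGrallocSource = 1, // payload is a gralloc buffer handle
};

struct MetadataBuffer {
  uint32_t type;     // one of MetadataBufferType
  uintptr_t payload; // opaque handle; meaningful only to HAL + encoder
};

// A consumer (e.g. the encoder) checks the tag before touching the payload.
bool IsCameraSourceBuffer(const void* data, size_t size) {
  if (size < sizeof(uint32_t)) {
    return false;
  }
  uint32_t type;
  std::memcpy(&type, data, sizeof(type));
  return type == kMetadataBufferTypeCameraSource;
}
```

This is why the buffer is small despite representing a full-rate recording frame: it refers to hardware memory rather than carrying the image bytes.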
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #58) > Comment on attachment 8471359 [details] [diff] [review] > Part 4: Pulling data from Gonk camera in media engine. v3.2 > > Review of attachment 8471359 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: content/media/MediaStreamGraph.h > @@ +229,5 @@ > > + * This will be called on any MediaStreamCamcorderListener added to a > > + * a SourceMediaStream when AppendToTrack() is called. The MediaSegment > > + * will be the CamcorderSegment (the metadata store in buffer) if available > > + * in AppendToTrack(). Note that NotifyQueuedTrackChanges() calls will also > > + * still occur. > > It's not clear why we need a new notification here. > > If we just have a parallel set of images being sent, then I think the best > way to represent that would be with a separate video track. > > Is there a reason we can't use a separate video track? No particular reason we can't; I'm just unclear on one question. If we separate it into another video track, should we expose this kind of video track to the web API, or keep it for internal use only?
Thanks for your feedback and the drive-by reviews. I will address the comments below. (In reply to Mike Habicher [:mikeh] from comment #53) > Comment on attachment 8469792 [details] [diff] [review] > Part 3: Add functions in Gonk camera classes for media engine. v2.2 > > Review of attachment 8469792 [details] [diff] [review]: > ----------------------------------------------------------------- > > f+ from me (and r+ with comments below addressed). > > ::: dom/camera/GonkCameraControl.cpp > @@ +1699,5 @@ > > + GonkCameraSource** aCameraSource) > > +{ > > + MOZ_ASSERT(aCameraSource); > > + if (NS_WARN_IF(!aCameraSource)) { > > + return NS_ERROR_INVALID_POINTER; > > Please make this return NS_ERROR_INVALID_ARG. > > ::: dom/camera/GonkCameraControl.h > @@ +90,5 @@ > > + * data camera provides in buffers will be just opaque handles rather than > > + * full size images. It's used to reduce the overhead of buffer data copying. > > + * Caller should be responsible for the lifecycle of returned GonkCameraSource > > + * object. > > + */ > > Please use //-prefixed comments to follow the style used in the rest of this > header. > > It might be simpler to change 'aFrameRate' to 'aFps' or 'aFramesPerSecond' > so that it's immediately obvious what kind of parameter goes in there. If > this argument can never be negative, please make it a uint32_t. > > Also, please document the return values. > > Finally, since the caller is definitely responsible for the life-cycle of > the GonkCameraSource object, the description should probably read "Caller is > responsbible for...". > > ::: dom/camera/GonkCameraSource.cpp > @@ +776,1 @@ > > } // namespace android > > General: please update the header of this file to read: > "Copyright (C) 2013-2014 Mozilla Foundation"
Hi, roc, If we separate the camcorder data into another video track, should we expose this kind of video track to the web API, or keep it for internal use only? Thanks.
Flags: needinfo?(roc)
It's still not clear to me what the metadata is. Is it the image data for the video frame in some alternate (internal) format?
Flags: needinfo?(roc)
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #64) > It's still not clear to me what the metadata is. Is it the image data for > the video frame in some alternate (internal) format? Hope this helps: http://androidxref.com/4.4.4_r1/xref/frameworks/native/include/media/hardware/MetadataBufferType.h#25
Hi, roc, Yes, it is a format that the creator of the metadata buffers (in this case, the camera) and the video encoder both know how to interpret. Please see the excerpt from [1] below. "The creator of metadata buffers and video encoder share common knowledge on what is actually being stored in these metadata buffers, and how the information can be used by the video encoder component to locate the actual pixel data as the source input for video encoder, plus whatever other information that is necessary." John, Thanks for your information. [1]: http://androidxref.com/4.4.4_r1/xref/frameworks/native/include/media/hardware/MetadataBufferType.h#25 (In reply to John Lin[:jolin][:jhlin] from comment #65) > (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #64) > > It's still not clear to me what the metadata is. Is it the image data for > > the video frame in some alternate (internal) format? > > Hope this helps: > http://androidxref.com/4.4.4_r1/xref/frameworks/native/include/media/ > hardware/MetadataBufferType.h#25
Flags: needinfo?(roc)
Thanks. It seems to me that making it a separate video track is not a good idea. There's really only one video track; that track's video frames have two parts, the "metadata" part and the YCbCr part. But creating a separate system of callbacks for handling these frames doesn't seem like a good idea either. How about we use a single video track and create a new Image type that combines a "metadata buffer" with a YCbCr version of the video frame? MediaRecorder would detect this image type and use the metadata buffer part. Compositing would use the YCbCr part. Is that possible?
Flags: needinfo?(roc)
Hi, roc, Just want to clarify one part of your comments below (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #67) > Thanks. > > It seems to me that making it a separate video track is not a good idea. > There's really only one video track; that track's video frames have two > parts, the "metadata" part and the YCbCr part. Does the YCbCr part mean the preview image? Or does it mean the payload part in MetadataBufferType.h? In the B2G case, we have two kinds of buffers. One is the preview buffer for rendering on screen; the other is the metadata buffers for the video encoder. They have different resolutions and frame rates. We get preview data via the |OnNewPreviewFrame| callback, and the camcorder frames come from |GonkCameraSource::dataCallbackTimestamp|. So I think we can make it a separate video track. > > But creating a separate system of callbacks for handling these frames > doesn't seem like a good idea either. > > How about we use a single video track and create a new Image type that > combines a "metadata buffer" with a YCbCr version of the video frame? > MediaRecorder would detect this image type and use the metadata buffer part. > Compositing would use the YCbCr part. Is that possible?
Flags: needinfo?(roc)
(In reply to Chia-hung Tai [:ctai :ctai_mozilla :cht] from comment #68) > (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #67) > > Thanks. > > > > It seems to me that making it a separate video track is not a good idea. > > There's really only one video track; that track's video frames have two > > parts, the "metadata" part and the YCbCr part. > Does the YCbCr part means preview image? Or it means payload part in > MetadataBufferType.h. The preview image. > In B2G case, we will have tow kinds of buffers. One is preview buffer for > rendering on screen, another is metadata buffers for video encoder. They > have different resolutions and frame rates. > > We get preview data by the callback |OnNewPreviewFrame|. And the camcorder > frames come from |GonkCameraSource::dataCallbackTimestamp|. The metadata buffers will have higher frame rate, right? If so, then you can deliver a frame to the MediaStreamGraph whenever you receive a metadata buffer. And when you get a preview image frame, keep a reference to it and every time you generate the metadata buffer image, give it a reference to the last received preview image. > So I think we can make it a separate video track. Making it a separate video track is going to get confusing when we start dealing with MediaStreams that really have multiple video tracks.
Flags: needinfo?(roc)
Hi, roc, Thanks for your quick response. I think we can use a single video track and create a new Image type that combines a "metadata buffer" with a YCbCr version of the video frame. I will try this approach and update with a new patch. Remove the dependence on Bug 1003695. (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #69) > (In reply to Chia-hung Tai [:ctai :ctai_mozilla :cht] from comment #68) > > (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #67) > > > Thanks. > > > > > > It seems to me that making it a separate video track is not a good idea. > > > There's really only one video track; that track's video frames have two > > > parts, the "metadata" part and the YCbCr part. > > Does the YCbCr part means preview image? Or it means payload part in > > MetadataBufferType.h. > > The preview image. > > > In B2G case, we will have tow kinds of buffers. One is preview buffer for > > rendering on screen, another is metadata buffers for video encoder. They > > have different resolutions and frame rates. > > > > We get preview data by the callback |OnNewPreviewFrame|. And the camcorder > > frames come from |GonkCameraSource::dataCallbackTimestamp|. > > The metadata buffers will have higher frame rate, right? If so, then you can > deliver a frame to the MediaStreamGraph whenever you receive a metadata > buffer. And you when you get a preview image frame, keep a reference to it > and every time you generate the metadata buffer image, give it a reference > to the last received preview image. > > > So I think we can make it a separate video track. > > Making it a separate video track is going to get confusing when we start > dealing with MediaStreams that really have multiple video tracks. Indeed, how to expose this kind of track to the JS context would be a problem.
No longer depends on: 1003695
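A minimal sketch of the pattern roc describes in comment 69 (all type names here are hypothetical stand-ins): keep a reference to the most recent preview frame and attach it to every metadata-buffer image delivered to the graph, so one video track carries both parts.

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Hypothetical stand-ins for the preview image and the opaque camera handle.
struct PreviewImage { int width; int height; };
struct MetadataHandle { int id; };

// A combined frame: the YCbCr part for compositing plus the opaque part
// for the HW encoder, travelling together on a single video track.
struct CombinedFrame {
  std::shared_ptr<PreviewImage> preview;
  MetadataHandle metadata;
};

class FrameCombiner {
  std::shared_ptr<PreviewImage> mLastPreview;
public:
  // Called at the (lower) preview frame rate.
  void OnPreviewFrame(std::shared_ptr<PreviewImage> aFrame) {
    mLastPreview = std::move(aFrame);
  }
  // Called at the (higher) recording rate; reuses the last preview image.
  CombinedFrame OnMetadataBuffer(MetadataHandle aHandle) {
    return CombinedFrame{mLastPreview, aHandle};
  }
};
```

Consecutive metadata buffers between preview callbacks share the same preview reference, which is exactly the "keep a reference to the last received preview image" behavior suggested above.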
See Also: → 1069246
Blocks: 1074675
Assignee: ctai → ayang
Attached patch enable_gum_recording_callback (obsolete) — Splinter Review
Following the discussion above, this patch uses a new Image type, GumTextureImage, to store the preview image and the meta buffer (MediaBuffer). It also avoids having the MSG thread queue the meta buffer, by sending the Image via AppendToTrack() on the camera recording callback thread.
Attachment #8469790 - Attachment is obsolete: true
Attachment #8469792 - Attachment is obsolete: true
Attachment #8469800 - Attachment is obsolete: true
Attachment #8471359 - Attachment is obsolete: true
Attachment #8469792 - Flags: feedback?(roc)
Attachment #8519766 - Flags: review?(roc)
Comment on attachment 8519766 [details] [diff] [review] enable_gum_recording_callback Review of attachment 8519766 [details] [diff] [review]: ----------------------------------------------------------------- Please split this up into one patch to add the new image type and another patch to actually set the compressed image buffer. ::: dom/media/webrtc/GumTextureImage.h @@ +16,5 @@ > + > +namespace mozilla { > + > +/** > + * GumTextureImage has two kinds of image. One is preview image which will be "has two parts." @@ +18,5 @@ > + > +/** > + * GumTextureImage has two kinds of image. One is preview image which will be > + * saved in PlanarYCbCrImage, another kind is the MediaBuffer keeps in mMediaBuffer > + * which is from gonk camera recording callback. What format is the MediaBuffer in? I'm assuming it's always a compressed video frame, but is that true? If it is true, you should say so here. The name GumTextureImage doesn't capture what's special about this class. Maybe call it GonkCameraImage? ::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp @@ +1185,5 @@ > graphicBuffer->unlock(); > } else > #endif > + if (format == ImageFormat::PLANAR_YCBCR || > + format == ImageFormat::GUM_TEXTURE_PLANAR_YCBCR) { Let's add an AsPlanarYCbCrImage() method to Image so you can call it here without checking for all the possible formats.
Attachment #8519766 - Flags: review?(roc) → review-
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #72) > Comment on attachment 8519766 [details] [diff] [review] > enable_gum_recording_callback > > Review of attachment 8519766 [details] [diff] [review]: > ----------------------------------------------------------------- > @@ +18,5 @@ > > + > > +/** > > + * GumTextureImage has two kinds of image. One is preview image which will be > > + * saved in PlanarYCbCrImage, another kind is the MediaBuffer keeps in mMediaBuffer > > + * which is from gonk camera recording callback. > > What format is the MediaBuffer in? I'm assuming it's always a compressed > video frame, but is that true? If it is true, you should say so here. The format in the MediaBuffer is Gonk shared memory (IMemory) based on binder. The data in IMemory is platform dependent. I think it is a special token mapping to a specific piece of hardware memory, but I'm not sure whether or not it is a compressed video frame. Anyway, I'll add more comments about the MediaBuffer.
Attached patch add_gonk_camera_image (obsolete) — Splinter Review
Attachment #8519766 - Attachment is obsolete: true
Attachment #8521179 - Flags: review?(roc)
Attached patch enable_gum_recording_callback (obsolete) — Splinter Review
Attachment #8521180 - Flags: review?(roc)
Comment on attachment 8521179 [details] [diff] [review] add_gonk_camera_image Review of attachment 8521179 [details] [diff] [review]: ----------------------------------------------------------------- ::: dom/media/webrtc/GonkCameraImage.cpp @@ +47,5 @@ > +bool > +GonkCameraImage::HasMediaBuffer() > +{ > + ReentrantMonitorAutoEnter mon(mMonitor); > + return (mMediaBuffer ? true : false); return mMediaBuffer != nullptr; ::: dom/media/webrtc/GonkCameraImage.h @@ +19,5 @@ > +/** > + * GonkCameraImage has two parts. One is preview image which will be saved in > + * PlanarYCbCrImage, another kind is the MediaBuffer keeps in mMediaBuffer > + * which is from gonk camera recording callback. The data in MediaBuffer is Gonk > + * shared memory based on android binder (IMemory), the realy format in IMemory "the actual format" @@ +21,5 @@ > + * PlanarYCbCrImage, another kind is the MediaBuffer keeps in mMediaBuffer > + * which is from gonk camera recording callback. The data in MediaBuffer is Gonk > + * shared memory based on android binder (IMemory), the realy format in IMemory > + * is platform dependent. > + * This instance is created in MediaEngine while the preview image is arrived. "when the preview image arrives" @@ +65,5 @@ > + ReentrantMonitor mMonitor; > + android::MediaBuffer* mMediaBuffer; > + // Check if current thread is the same one which called SetBuffer(). > + // It doesn't need to hold reference count. > + nsIThread* mThread; Sounds like this should use DebugOnly<nsIThread>. ::: gfx/layers/ImageTypes.h @@ +28,5 @@ > /** > + * The GONK_CAMERA_IMAGE format creates a GonkCameraImage, which contains two > + * parts. One is PlanarYCbCr image for preview image. Another one is > + * MediaBuffer from Gonk recording image. The perview image sends to display > + * layer for display. And the MediaBuffer will be used in component like OMX "The preview image can be rendered in a layer for display."
Attachment #8521179 - Flags: review?(roc) → review+
Comment on attachment 8521180 [details] [diff] [review]
enable_gum_recording_callback

Review of attachment 8521180 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/camera/GonkCameraSource.cpp
@@ +163,5 @@
> +                                   Size videoSize,
> +                                   int32_t frameRate)
> +{
> +  mozilla::nsGonkCameraControl* control =
> +    static_cast<mozilla::nsGonkCameraControl*> (aControl);

No space before (

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp
@@ +1185,5 @@
>       graphicBuffer->unlock();
>     } else
> #endif
> +    if (img->AsPlanarYCbCrImage()) {
> +      layers::PlanarYCbCrImage* yuv = img->AsPlanarYCbCrImage();

Just call AsPlanarYCbCrImage once and keep the result in a local variable.
Attachment #8521180 - Flags: review?(roc) → review+
Thanks for the review. I'll update the patches according to the review comments and run a full test on try.
Depends on: 1103782
https://treeherder.mozilla.org/ui/#/jobs?repo=try&revision=e8dad71e6c5f
The B2G ImageCapture test is broken due to a deadlock in the camera emulator. Filed bug 1103782 for it.
Attached patch add_gonk_camera_image (obsolete) — Splinter Review
Attachment #8521179 - Attachment is obsolete: true
Attachment #8521180 - Attachment is obsolete: true
Attachment #8536423 - Flags: review+
Attached patch enable_gum_recording_callback (obsolete) — Splinter Review
Attachment #8536424 - Flags: review+
Attachment #8536423 - Attachment is obsolete: true
Attachment #8536424 - Attachment is obsolete: true
Flags: needinfo?(ayang)
Attachment #8538334 - Flags: review+
Keywords: checkin-needed
Keywords: checkin-needed
Fixed non-unified build bustage.
Attachment #8538335 - Attachment is obsolete: true
Attachment #8538361 - Flags: review+
Keywords: checkin-needed
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla37
Android L doesn't set MOZ_B2G_CAMERA right now, so I got a compilation error on the Android L build:
/home/boris/projects/build-b2g/B2G-L/gecko/gfx/layers/ImageContainer.cpp:22:29: fatal error: GonkCameraImage.h: No such file or directory
Blocks: 1113655
See Also: → 1137515