Closed Bug 925615 Opened 11 years ago Closed 8 years ago

[meta][B2G][Wifi] Support Wifi Display

Categories

(Firefox OS Graveyard :: Wifi, defect)

ARM
Gonk (Firefox OS)
defect
Not set
normal

Tracking

(tracking-b2g:backlog)

RESOLVED WONTFIX
tracking-b2g backlog

People

(Reporter: ethan, Assigned: hchang)

References

Details

Attachments

(11 files, 2 obsolete files)

This bug is the meta-bug for the Wifi Display feature.

The system requirement areas of Wifi Display are as follows:
* WFD device discovery
* WFD capability discovery
* WFD connection establishment
* WFD session establishment
* Payload formats for video and audio streams from WFD Source to WFD Sink
* Transport and multiplex protocol for video and audio payload
* Link content protection
* WFD session termination
* Persistent WFD groups
This figure shows a reference model for session management of a WFD Source and WFD Sink. This conceptual model includes a set of predefined functions such as WFD Device Discovery, presentation, session control, and transport.
This figure shows a reference model for AV payload processing for WFD Source and WFD Sink.
As discussed with bechen and rlin, we need to support the following function blocks to complete Wifi Display:

1. Support Wifi P2P.
2. Support an RTSP server (and possibly an RTSP client, too?).
3. Implement a packetization module (supporting MPEG2-TS, H.264, VP8/VP9, ...).
4. Implement a media encoder.
5. Implement remote display and local display.
6. Implement screenshot-to-media-stream capture (from the frame buffer or the layout engine?).
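As a rough illustration of item 3, the packetization module would emit fixed-size MPEG2-TS packets (188 bytes, per ISO/IEC 13818-1). The sketch below is illustrative only; `makeTsPacket` is a hypothetical helper, not part of any attached patch:

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <cstring>

// Build a single 188-byte MPEG2-TS packet carrying `payload`.
// Sync byte 0x47, 13-bit PID, 4-bit continuity counter (ISO/IEC 13818-1).
std::array<uint8_t, 188> makeTsPacket(uint16_t pid, uint8_t continuity,
                                      const uint8_t* payload, size_t len) {
    std::array<uint8_t, 188> pkt{};
    pkt.fill(0xFF);                          // stuffing bytes for unused space
    pkt[0] = 0x47;                           // sync byte
    pkt[1] = 0x40 | ((pid >> 8) & 0x1F);     // payload_unit_start + PID high bits
    pkt[2] = pid & 0xFF;                     // PID low bits
    pkt[3] = 0x10 | (continuity & 0x0F);     // payload present + continuity counter
    size_t n = len > 184 ? 184 : len;        // 184 bytes of payload per packet
    std::memcpy(pkt.data() + 4, payload, n);
    return pkt;
}
```

A real implementation would also handle PAT/PMT tables, adaptation fields, and PCR timestamps; this only shows the basic packet framing.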
Depends on: 945569
Depends on: 946110
Depends on: 949901
Attached image Overall class diagram (v1) (obsolete) —
Depends on: 811635
Have you given any thought to mirrored content (displaying the same content on both the phone display and the WFD sink) vs. non-mirrored (different content on the phone display and the WFD sink)?
Flags: needinfo?(hchang)
Three major use cases to take into consideration:
1. Mirror.
2. Non-mirror (i.e. a second display: different content on the phone display and the WFD sink).
3. Direct-transfer mode: based on the sink's capability (e.g. 1080p), the source (an app on the phone) requests the same-quality video from a web server (YouTube or Netflix) and passes the encoded video stream through (if the content is protected, follow HDCP to re-encrypt it).
We will use case 1 for the detailed design.
Summary: [B2G][Wifi] Support Wifi Display → [meta][B2G][Wifi] Support Wifi Display
Blocks: 952359
No longer blocks: 952359
Depends on: 952359, 952361
Depends on: 952363
No longer depends on: 952359
Depends on: 952359
Define misc use cases and bugs here:
https://docs.google.com/a/mozilla.com/spreadsheet/ccc?key=0Ausz8kq-KLDcdGxFRUVyRjhhRnZJa2h4ZmMtWXJ5TFE#gid=0

Focus on use case 1: mirror

Bugs related to use case 1:
811635 945569 946110 947063 949901 952359 952361 952363
Flags: needinfo?(hchang)
Attached image Overall class diagram (v2) (obsolete) —
Updated class diagram to v2.

The major updates are:

1. Adding a new interface to GonkDisplay:

void GonkDisplay::SetVirtualDisplayBuffer(IGraphicBufferProducer);

2. WifiDisplayManager will create WifiDisplaySource via binder in a remote process, which may be a native Android process or a Gecko child process.

3. When the sink connects to us, a BpGraphicBufferProducer implementing IGraphicBufferProducer will be passed back to GonkDisplay through:


                    binder
WifiDisplaySource ----------> WifiDisplayManager --> DisplayManager --> GonkDisplay
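The call chain above can be sketched as plain C++ (a sketch only: the classes below are stand-ins for the binder-side objects, and `IGraphicBufferProducer` here is just a placeholder struct, not the real Android interface):

```cpp
#include <cassert>
#include <memory>

// Placeholder for Android's binder-side buffer producer interface.
struct IGraphicBufferProducer {};

// End of the chain: GonkDisplay stores the producer for the virtual display.
class GonkDisplay {
public:
    void SetVirtualDisplayBuffer(std::shared_ptr<IGraphicBufferProducer> p) {
        mVirtualDisplayProducer = std::move(p);
    }
    bool HasVirtualDisplay() const { return mVirtualDisplayProducer != nullptr; }
private:
    std::shared_ptr<IGraphicBufferProducer> mVirtualDisplayProducer;
};

class DisplayManager {
public:
    explicit DisplayManager(GonkDisplay& d) : mDisplay(d) {}
    void OnSinkConnected(std::shared_ptr<IGraphicBufferProducer> p) {
        mDisplay.SetVirtualDisplayBuffer(std::move(p));
    }
private:
    GonkDisplay& mDisplay;
};

class WifiDisplayManager {
public:
    explicit WifiDisplayManager(DisplayManager& d) : mDisplayManager(d) {}
    // Invoked (over binder, in the real design) when WifiDisplaySource
    // reports that the sink has connected.
    void OnProducerReady(std::shared_ptr<IGraphicBufferProducer> p) {
        mDisplayManager.OnSinkConnected(std::move(p));
    }
private:
    DisplayManager& mDisplayManager;
};
```

In the actual design the first hop crosses a process boundary via binder; this sketch collapses it into a direct call to show the data flow.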
Attachment #8347062 - Attachment is obsolete: true
In AOSP, egl_window_surface_v2_t is the object that manipulates the BufferQueue via NativeWindow. We probably need to study how egl_window_surface_v2_t uses NativeWindow.
Added IPC part
Attachment #8351323 - Attachment is obsolete: true
Attachment #8376960 - Attachment description: Part 2: Gfx, virtual display, layer composition → [WIP] Part 2: Gfx, virtual display, layer composition
Miracast (wifi display) on B2G preview:

0) Get a Nexus 4 (I haven't tried other phones...) and
      a Miracast sink, usually a dongle or a smart TV with built-in Miracast support.

   If you can't find either, check https://github.com/kensuke/How-to-Miracast-on-AOSP/wiki. You may use Nexus devices as the Miracast sink; I've verified this with a Nexus 7.

1) Apply attachment 8376959 [details] [diff] [review] for the web API and wifi (wifi p2p is already force-enabled)
2) Apply attachment 8376960 [details] [diff] [review] for gfx and virtual surface
3) Apply attachment 8376963 [details] [diff] [review] for audio

(All based on changeset a62bde1d6efe2be555ce614809c0b828b73233bf)

Or simply use my development branch at https://bitbucket.org/changhenry/mozilla-central/branch/dev/wfd

4) Use my gaia: https://github.com/elefant/gaia/tree/dev/wfd

5) Boot up the phone and go to Settings --> Miracast. You will see the nearby Miracast sinks. Find your sink and click it to connect. Most likely you will also have to click "OK" on the sink's screen. After "Connected" appears on the phone, trigger some animation (go to the home screen, scroll, open an app, ...).

6) Sit back and watch a movie on TV (or nexus 7 :p )
This figure explains the idea of WIP patch 2. There are four components in this diagram: LayerManagerComposite, CompositorOGL, GLContextEGL and GonkDisplayJB. The blue text indicates the added variables/functions.

(Assuming we are on B2G JB, LayerManagerComposite will own CompositorOGL, which will own GLContextEGL. GonkDisplay will be GonkDisplayJB.)

This patch is a proof-of-concept of rendering the composited layers onto the screen buffer as well as a virtual display buffer. The origin of the virtual display buffer is an IGraphicBufferProducer from another process. The virtual display buffer will be stored in GonkDisplayJB. The stored IGraphicBufferProducer will be associated with GLContextEGL::mVirtualDisplaySurface via a couple of EGL API calls.

The general idea is that every time LayerManagerComposite::Render() is called, we check if there is a virtual display to render. If there is, we render twice on different EGLSurfaces with different transform matrices respectively.

(Please ignore all the down-casting and non-transparent parts here; they can be solved with abstractions.)
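The render-twice idea above can be sketched in simplified form (a sketch only: `Compositor`, `Surface`, and `Matrix` here are toy stand-ins, not the real Gecko classes):

```cpp
#include <cassert>
#include <vector>

// Toy stand-ins for an EGLSurface target and a transform matrix.
struct Surface { int width, height; std::vector<int> frames; };
struct Matrix  { float scaleX, scaleY; };

class Compositor {
public:
    void SetVirtualDisplay(Surface* s, Matrix m) {
        mVirtual = s;
        mVirtualTransform = m;
    }
    // Equivalent of LayerManagerComposite::Render(): draw the layer tree
    // to the primary surface, then again to the virtual display surface
    // with its own transform, if one is attached.
    void Render(Surface& primary, int frameId) {
        Draw(primary, frameId, Matrix{1.0f, 1.0f});
        if (mVirtual) {
            Draw(*mVirtual, frameId, mVirtualTransform);  // second GPU pass
        }
    }
private:
    void Draw(Surface& s, int frameId, Matrix) { s.frames.push_back(frameId); }
    Surface* mVirtual = nullptr;
    Matrix mVirtualTransform{1.0f, 1.0f};
};
```

This makes the cost concern concrete: every frame with a virtual display attached is composited twice, once per target surface.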
Comment on attachment 8376960 [details] [diff] [review]
[WIP] Part 2: Gfx, virtual display, layer composition

Review of attachment 8376960 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/layers/composite/LayerManagerComposite.cpp
@@ +508,5 @@
> +                                                               actualBounds.width,
> +                                                               actualBounds.height));
> +
> +  // Render our layers.
> +  RootLayer()->RenderLayer(clipRect);

There are a few concerns:
1. Dual GPU composition is happening: one pass for the primary display in the Render() call and this one for the virtual display. I believe it will reduce performance (fps) and increase power consumption.
2. This patch only covers the mirroring use case.
3. This patch will not work when the primary display has achieved full HWC composition.
4. Did you try porting the Virtual Display Surface solution (as in Android)?

@@ +597,5 @@
>      PROFILER_LABEL("LayerManagerComposite", "EndFrame");
>      mCompositor->EndFrame();
>    }
>  
> +  RenderVirtualDisplay();

This will only be called when the primary display is using GPU composition.
Hi Sushil, I really appreciate your feedback. Please see my inline reply below.

(In reply to Sushil from comment #18)
> Comment on attachment 8376960 [details] [diff] [review]
> [WIP] Part 2: Gfx, virtual display, layer composition
> 
> Review of attachment 8376960 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> ::: gfx/layers/composite/LayerManagerComposite.cpp
> @@ +508,5 @@
> > +                                                               actualBounds.width,
> > +                                                               actualBounds.height));
> > +
> > +  // Render our layers.
> > +  RootLayer()->RenderLayer(clipRect);
> 
> There are few concerns:
> 1. Dual GPU composition is happening, 1 for primary display in Render() call
> and this one for Virtual display. I believe it will slow down performance
> (fps) and will increase power numbers.

This is how Surface Flinger dealt with virtual displays before KitKat: every layer is
rendered multiple times onto different surfaces. Since KitKat, with HWC API 1.3 support,
each layer can be rendered to a virtual display by either GL or HWC. I've looked at that
code, and it's rather complicated, though :p

> 2. This patch will only do Mirroring use case.

Yes! But there is some flexibility to extend this to the presentation use case. For example, we could create
a special layer for virtual display rendering, or we could decide whether a layer should be rendered
to the virtual display by introducing additional information. Surface Flinger, for instance, uses a "token" (essentially the binder object address of a given display device) to identify which layer should be
rendered to which display device.

> 3. This patch will not work when primary display has achieved full HWC
> composition.

Yes. When the primary display achieves full HWC composition, we will have to move all the logic
into GonkDisplay. That will be another tough piece of work...

> 4. Did you try porting Virtual Display Surface solution (like Android) ?

Which specific solution do you refer to? Pre-KitKat or post-KitKat?

> 
> @@ +597,5 @@
> >      PROFILER_LABEL("LayerManagerComposite", "EndFrame");
> >      mCompositor->EndFrame();
> >    }
> >  
> > +  RenderVirtualDisplay();
> 
> This will be called only in case when primary display is using GPU
> Composition.

Yes, I'm aware of that. This is what point 3 says, isn't it?

By the way, regarding the fps issue, I didn't see any obvious difference while mirroring. (But I do believe there must be a performance hit; HWC with virtual display support could offload this.)

Thanks again for your feedback.
Hi Vlad,
This is a WIP of WifiDisplay.
Flags: needinfo?(vladimir)
(In reply to Henry Chang [:henry] from comment #19)
> > 4. Did you try porting Virtual Display Surface solution (like Android) ?
>
> What specific solution you referr to? Before kitkat or after kitkat?
 
After kitkat.
Milan, can you find someone to do a design review of this code?
Flags: needinfo?(vladimir) → needinfo?(msreckovic)
Comment on attachment 8377468 [details]
Illustration of WIP patch 2

Let's start by reviewing this picture; Jeff, Sotaro, Bas, can you start with some feedback, and pull in others as you see fit?  I'll schedule something during the work week as well.  There is some code above as well, but I would rather concentrate on the design for now.
Attachment #8377468 - Flags: feedback?(sotaro.ikeda.g)
Attachment #8377468 - Flags: feedback?(jmuizelaar)
Attachment #8377468 - Flags: feedback?(bas)
Flags: needinfo?(msreckovic)
I am going to check the patch.
Comment on attachment 8377468 [details]
Illustration of WIP patch 2

(In reply to Milan Sreckovic [:milan] from comment #23)
> Comment on attachment 8377468 [details]
> Illustration of WIP patch 2
> 
> Let's start by reviewing this picture; Jeff, Sotaro, Bas, can you start with
> some feedback, and pull in others as you see fit?  I'll schedule something
> during the work week as well.  There is some code above as well, but I would
> more concentrate on the design for now.

It sort of feels a little hacky in its current form. Intuitively I feel like we'd be better off having a separate, dedicated layer manager that uses a compositor that explicitly composites to the virtual display. That way we can both do other things than mirroring and probably better control what we do and don't need to minimize performance/power impact.

If we really just want mirroring, it seems like the 'cheapest' (and also hacky) way to do that is to add a single bit of code to the compositor that copies the framebuffer we composited directly onto the virtual display surface.

But I could be convinced this is a better idea :). I haven't thought about it that much yet, nor looked at the implementation details.
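The "copy the composited framebuffer" approach can be sketched as follows (a sketch under assumptions: `FrameBuffer` and `MirrorFrame` are hypothetical names, and a real implementation would use a GPU blit or HWC rather than a CPU copy):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy framebuffer: RGBA8888 pixels plus dimensions.
struct FrameBuffer {
    int width = 0, height = 0;
    std::vector<uint32_t> pixels;
};

// Mirror the already-composited primary framebuffer into the virtual
// display buffer. Assumes both buffers have the same resolution
// (the "pure mirroring" case); otherwise a scaled blit would be needed.
bool MirrorFrame(const FrameBuffer& primary, FrameBuffer& virtualDisplay) {
    if (primary.width != virtualDisplay.width ||
        primary.height != virtualDisplay.height) {
        return false;  // resolution mismatch: plain copy is not enough
    }
    virtualDisplay.pixels = primary.pixels;  // one copy, no second GPU pass
    return true;
}
```

The trade-off versus the render-twice approach: only one composition pass per frame, but the cast output is pinned to the phone's resolution and merely stretched by the sink.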
Attachment #8377468 - Flags: feedback?(bas) → feedback-
Created a diagram to help understand the architectural difference between LayerManagerComposite and KK's SurfaceFlinger.

A more detailed diagram around HwcComposer2D is available here:

https://github.com/sotaroikeda/firefox-diagrams/blob/master/widget/widget_HwcComposer2D__FirefoxOS_1_4.pdf?raw=true
It seems better to implement WiFi Display based on KK; JB support could be a subset of it. On KK, HWC composition to the VirtualDisplaySurface is supported. When full HWC composition is done for both the main display and the virtual display, we can reduce CPU usage and power consumption. As noted in comment 18, the current approach could consume more power and CPU time.
From comment 27, it seems unavoidable to modify LayerManagerComposite to support multiple displays. To support HWC composition, the composition needs to be done in a single pass.

But the implementation needs to be done in a cleaner, more abstracted way. As in attachment 8398839 [details], implementing a DisplayDevice-like class seems to be one way to make it more abstract.
Bas, Sotaro,

Thanks for the review. Before we go further, I think we need to confirm the following things:

1) Do we only need mirroring? 
2) If what we need is more than mirroring, what should be rendered to
   the main display and what should be rendered to virtual display?
3) Do we need to support other display devices like external HDMI display?

Since different requirements lead to different designs, maybe we could discuss
these first in the Wifi display session today.
(In reply to Henry Chang [:henry] from comment #29)
> Bas, Sotaro,
> 
> Thanks for the review. Before we go further, I think we need to confirm the
> following things:
> 
> 1) Do we only need mirroring? 
ni? Ravi to get an answer. I wouldn't say we only need mirroring mode, but mirroring mode is a good first step for this feature.

> 3) Do we need to support other display devices like external HDMI display?
Ravi, do you know if PM has a plan for this?
Flags: needinfo?(rdandu)
During the Taipei graphics work week, we talked about WifiDisplay usage patterns:
- [1] same output on the local display and the remote display.
- [2] same UI at a different display resolution.
- [3] different UI.

*The discussion result was the following:

[1] does not provide enough quality for a product. [2] seems reasonable for WifiDisplay. [3] seems to require bigger changes and would take longer to implement.

*More detail about the discussion:

ThebesLayer renders based on the display resolution. Even when the WifiDisplay has a larger display size, it cannot get a high enough rendering resolution; but this is the same as on Android.
In the video playback case, rescaling happens only at composition. [1] could lose resolution; [2] could get good resolution.
[3] seems to require something similar to Firefox for Windows' multi-window implementation: different documents, different compositors, and different widgets would become necessary...
About mirroring: mirroring is the first use case for WiFi Display. We are taking a stepwise approach. A later step will be the "two screen experience", where video, presentations, and gaming will use different screens, e.g. a controller on the small screen of the device and the visual experience on the large screen. Detailed use cases are at https://docs.google.com/a/mozilla.com/document/d/1w7OiTQg55EYAyK_ZMtcPe2FeDQo-K0zi1qQxBwl9xr4/edit#heading=h.n0d69g8fkl3p
In summary, the current step is mirroring; a future step will be the two screen experience.

About an external HDMI display: that is a simpler case than WiFi Display. The presentation layer should be leverageable; the underlying transport would be either WiFi Direct or HDMI.
About the market need: connecting with an HDMI cable is more popular with larger form-factor devices like laptops than with mobile devices. We're investigating this, and will get back in a few days.
Flags: needinfo?(rdandu)
blocking-b2g: --- → backlog
Per discussion with Bas and Sotaro a couple of weeks ago (please kindly correct me if I am wrong):

1) The current approach, rendering one layer tree to two surfaces, wastes GPU time and still needs to stretch the smaller phone screen to the bigger screen (except for the VideoLayer).
2) If the requirement is "pure mirroring mode", meaning the resolution we cast is exactly the same as the phone screen, simply copying the final composition buffer is by far the better way.
3) If we want not just stretching but actually rendering at a higher resolution with the same aspect ratio, we have to maintain two layer trees or simply two windows. However, this approach may use a lot of additional resources if the page is complicated.
4) For the video layer, we can use the current approach to avoid the overhead of maintaining two video layers.
(In reply to Henry Chang [:henry] from comment #34)
> Per discussion with Bas and Sotaro a couple of weeks ago, (please kindly
> correct me if I am wrong)
> 
> 2) If the requirement is the "pure mirroring mode", which means the
> resolution we cast is exactly the same as the phone screen, simply copying
> final composition buffer is the way better.

I have a different opinion about it. I don't think it can achieve good performance on low-end B2G devices.
We are using B2G to make a TV dongle.

It would be good if B2G could support acting as a WFD sink.
Assignee: nobody → hchang
No longer depends on: 946110
Try it on a Flame!

0) Get a Flame and flash base image v184. (v188 is not tested yet but should also work.)
1) Flash gecko from https://bitbucket.org/changhenry/mozilla-central/branch/dev%2Fwfd2
2) Flash gaia from https://github.com/elefant/gaia/tree/dev/wfd2-amd
3) Go to "Settings" --> "Wifi Direct", find the sink (remote display) you want to use,
   and click on it.
4) Play around with the phone and you will see the screen being cast!
blocking-b2g: backlog → ---
Attachment #8377468 - Flags: feedback?(jmuizelaar)
Status: NEW → RESOLVED
Closed: 8 years ago
Resolution: --- → WONTFIX
Attachment #8377468 - Flags: feedback?(sotaro.ikeda.g)