Closed Bug 1341235 Opened 7 years ago Closed 6 years ago

[meta] External image integration

Categories

(Core :: Graphics: WebRender, defect, P3)


Tracking


RESOLVED FIXED
Tracking Status
firefox56 --- unaffected
firefox57 --- unaffected

People

(Reporter: nical, Unassigned)

References

Details

(Keywords: meta, Whiteboard: [gfx-noted])

Attachments

(2 files, 3 obsolete files)

Sotaro and Jerry are working on two slightly different approaches for the integration of external images with WebRender (which has become trickier since the addition of the render thread). Let's continue the discussion about the two approaches here.

Some things to keep in mind:
 - As much as possible we should avoid re-building the display list.
 - Currently each texture in an ImageClient/Host has a rendered rect associated with it. This information belongs to the display item, so when the size changes we should update the display list and keep it in sync with the textures being rendered. Most of the time, the rect does not change.
 - When using an external texture buffer, if the size of the texture itself changes we need to allocate a new spot in the texture cache, which means we need to get that information through the render backend where the texture cache is managed and keep that in sync with whatever is going on in the render thread. Most of the time, the texture size does not change.
 - YUV images have a slightly different interface (the shader gets 3 textures). Still, it would be nice to handle them through the same mechanism as much as possible.
 - It is worth having a specific mechanism for video because it is more complicated and more performance sensitive than the other types of images. Also, Andrew has his own plans for imagelib images so we don't need to worry about them in this particular discussion.
 - It is best to keep it possible to build the display list on the content side (this affects id allocation in particular). Whatever we do that relies on building display lists in the compositor process will have to be changed at some point.

Sotaro and Jerry, it would be great if you could write down a high level description of what you have in mind and link/upload some code on this bug or another to continue the discussion.
Flags: needinfo?(sotaro.ikeda.g)
Flags: needinfo?(hshih)
See Also: → 1340997
Sotaro, I have a question about video in the current WRImageLayer implementation: how do we get the image format in WRImageLayer::RenderLayer()?
We should call push_image() or push_yuv_image() depending on the format, but the format information only lives in the Image. In the video case, the result of ImageContainer::GetCurrentImages() might be empty, so we don't know which push call should be used.
WRImageLayer cannot get the image format in the async ImageContainer case; it could be any format. For video it could be YUV, but in the async plugin rendering case it could be RGB. The actual format can only be decided at WebRenderBridgeParent::ProcessWebrenderCommands().
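
A minimal sketch of the branching this implies, with hypothetical stand-in types rather than the real Gecko/WebRender bindings:

  // Sketch only: hypothetical stand-ins, not the real bindings. It shows the
  // problem above: the builder must pick the push call by format, but the
  // format may be unknown when the display list is built.
  #include <cstdio>

  enum class ImageFormat { RGBA8, YUV420 };

  struct DisplayListBuilder {
    void PushImage() { std::puts("push_image"); }        // one RGBA texture
    void PushYuvImage() { std::puts("push_yuv_image"); } // three planar textures
  };

  void PushVideoItem(DisplayListBuilder& aBuilder, ImageFormat aFormat) {
    if (aFormat == ImageFormat::YUV420) {
      aBuilder.PushYuvImage();
    } else {
      aBuilder.PushImage();
    }
  }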
Flags: needinfo?(sotaro.ikeda.g)
My idea is to stop binding the external image key to the CompositableHost and instead bind it to the TextureHost. If we only do image updates in sync with the display list, we do not need to forward the CompositableHost handling. This cannot handle async video selection on the RenderThread, but that could be added later by abstracting the implementation. Even in this case, the same external image key has to keep the same size and the same format; if we want to push a different size or format, we have to use a different external image key.

The role of TextureHost is split into WebRenderTextureHost and RenderTextureHost. WebRenderTextureHost works on the compositor thread: it wraps a TextureHost and handles PTexture-related tasks. For now, it wraps only BufferTextureHost.

RenderTextureHost works on the RenderThread: it provides data or a texture to WebRender's renderer in LockExternalImage(), but it is created and deleted by WebRenderTextureHost on the compositor thread. That is not a problem for BufferTextureHost.
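
Roughly, the split could look like the sketch below. The class names follow this proposal, but the members and signatures are illustrative assumptions, not the actual Gecko code:

  #include <cstddef>
  #include <cstdint>
  #include <memory>
  #include <utility>
  #include <vector>

  struct TextureHost {
    std::vector<uint8_t> mBuffer;  // stand-in for a shared-memory buffer
  };

  // Compositor thread: wraps a TextureHost, handles PTexture-related tasks,
  // and owns the lifetime of its render-thread counterpart.
  class WebRenderTextureHost {
  public:
    explicit WebRenderTextureHost(std::shared_ptr<TextureHost> aWrapped)
      : mWrapped(std::move(aWrapped)) {}
    std::shared_ptr<TextureHost> Wrapped() const { return mWrapped; }
  private:
    std::shared_ptr<TextureHost> mWrapped;
  };

  // RenderThread: hands the data to WebRender's renderer when the external
  // image is locked. For a buffer-backed host this is just a pointer/size.
  class RenderTextureHost {
  public:
    explicit RenderTextureHost(std::shared_ptr<TextureHost> aSource)
      : mSource(std::move(aSource)) {}
    const uint8_t* LockExternalImage(std::size_t* aSize) {
      *aSize = mSource->mBuffer.size();
      return mSource->mBuffer.data();
    }
    void UnlockExternalImage() {}  // nothing to do for a plain buffer
  private:
    std::shared_ptr<TextureHost> mSource;
  };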

For releasing external images, attachment 8839843 [details] [diff] [review] does not use ReleaseExternalImage(); instead, wr_rendered_epochs is used. WebRenderCompositableHolder holds a compositable ref to the TextureHost on the compositor thread until the next epoch is rendered, so TextureHost recycling works without problems. The epochs are delivered to the compositor thread via DidComposite(). The current implementation has a problem with how DidComposite() is called; Bug 1341524 is going to address it.
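
A sketch of the epoch-based release described here, assuming a much-simplified holder (the real WebRenderCompositableHolder differs in detail):

  #include <cstdint>
  #include <deque>
  #include <memory>
  #include <utility>

  struct TextureHost {};
  using Epoch = uint64_t;

  class CompositableHolder {
  public:
    // Compositor thread: record that aHost is used by the frame for aEpoch.
    void HoldUntilRendered(Epoch aEpoch, std::shared_ptr<TextureHost> aHost) {
      mHeld.push_back({aEpoch, std::move(aHost)});
    }
    // Called from DidComposite() with the epoch the renderer just finished.
    // Dropping the refs here is what lets TextureHost recycling keep working.
    void NotifyRenderedEpoch(Epoch aRendered) {
      while (!mHeld.empty() && mHeld.front().mEpoch <= aRendered) {
        mHeld.pop_front();
      }
    }
  private:
    struct Held {
      Epoch mEpoch;
      std::shared_ptr<TextureHost> mHost;
    };
    std::deque<Held> mHeld;
  };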
I am still wrapping my head around what you wrote, but I'd like to underline that if we do change the ImageKey every frame during video playback, it means we have to recompute the frame each time (at least in the way WebRender works currently). I believe that a very common use case when watching video is that nothing else on the page changes most of the time, so avoiding rebuilding the frame when we can just swap out a texture id is worthwhile, because rebuilding the frame can involve quite a bit of processing and video playback is fairly expensive.

That said, I may not yet have entirely understood the big picture of your proposal, Sotaro, so what I'm saying could be out of context, and is specifically about video. It's probably fine to rebuild the frame in other cases (although it would be nice to avoid that for canvas too).
Thanks, Sotaro.

I'm trying to upload my patch today.

Regarding comment 2: if we can only get the format in WebRenderBridgeParent::ProcessWebrenderCommands(), how do we insert the push_image or push_yuv_image display item into the wr_display_list? When ImageBridge sends a new image from the decoder, we don't know where to insert that image (which stacking context?). We don't save the previous WebrenderCommands list right now. Maybe we still need to insert a placeholder item on the content side even though we don't know the format for video, and then call a special wr_command to replace that display item with the correct one. In this case, we need to rebuild the frame structure once.
> Regarding comment 2: if we can only get the format in WebRenderBridgeParent::ProcessWebrenderCommands(), how do we insert the push_image or push_yuv_image display item into the wr_display_list?

That seems to be a problem orthogonal to how to implement external images. We need to modify the display list in ProcessWebrenderCommands() if we keep ImageBridge.
> We need to modify the display list in ProcessWebrenderCommands() if we keep ImageBridge.

Assuming you mean modifying the display list on the parent side, I disagree. The fact that a particular stream of video frames in ImageBridge can change its format and geometry at any time has been a source of complexity in the past, and seems to be the main reason for wanting to modify the display list on the parent side. I would much rather change this constraint than design around it.

We should be able to create a stream of video frames with a fixed format/size associated with an asyncId, and create a new stream with a new asyncId when the geometry or format of the video changes. That way we can notify the content thread and modify the display list there to update the asyncId, format, and size associated with the display item. The story on the compositor side would become a lot simpler (and even without WebRender, if we had that, ImageHost would be a lot simpler too).
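
For illustration, a sketch of how such a stream could hand out ids. AsyncImageId and VideoStream are hypothetical names, not existing Gecko types:

  #include <atomic>
  #include <cstdint>

  enum class ImageFormat { RGBA8, YUV420 };

  struct IntSize {
    int32_t w;
    int32_t h;
  };

  using AsyncImageId = uint64_t;
  static std::atomic<AsyncImageId> sNextAsyncId{1};

  class VideoStream {
  public:
    // Returns the id valid for this frame. Bumping the id on a size/format
    // change is the signal for content to update its display item.
    AsyncImageId IdForFrame(IntSize aSize, ImageFormat aFormat) {
      if (aSize.w != mSize.w || aSize.h != mSize.h || aFormat != mFormat) {
        mSize = aSize;
        mFormat = aFormat;
        mId = sNextAsyncId.fetch_add(1);
      }
      return mId;
    }
  private:
    IntSize mSize{0, 0};
    ImageFormat mFormat{ImageFormat::RGBA8};
    AsyncImageId mId{0};
  };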
(In reply to Nicolas Silva [:nical] from comment #8)
> > We need to modify the display list in ProcessWebrenderCommands() if we keep ImageBridge.
> 
> Assuming you mean modifying the display list on the parent side, I disagree.

I just explained the possible ways within the current webrender API. If we want to support this correctly, the webrender API needs to be changed.
(In reply to Nicolas Silva [:nical] from comment #5)
> I am still wrapping my head around what you wrote, but I'd like to underline
> that if we do change the ImageKey every frame during video playback, it
> means we have to recompute the frame each time (at least in the way
> WebRender works currently).

It seems like a problem orthogonal to supporting external image integration. In the current webrender API, we can keep the ImageKey if the video size and format stay the same. But if we want to change the image size or format, we need to send a new display list.

In the video case, a size/format change or a video clear could happen at any time. Ideally these updates should not need a display list update; webrender seems to need a new API to support that. It would work similarly to the ScaleMode::STRETCH mode of ImageLayerComposite. The new API seems to be in a similar category to scroll_layers_with_scroll_root_id().
(In reply to Nicolas Silva [:nical] from comment #8)
> 
> We should be able to create a stream of video frames with a fixed
> format/size associated with an asyncId, and create a new stream with a new
> asyncId when the geometry or format of the video changes. That way we can
> notify the content thread and modify the display list there to update the
> asyncId, format, and size associated with the display item. The story on
> the compositor side would become a lot simpler (and even without WebRender,
> if we had that, ImageHost would be a lot simpler too).

I seem to have a different opinion on this. I prefer to add something similar to the ScaleMode::STRETCH mode of ImageLayerComposite to webrender, as in comment 10. Then we would not need to send an updated display item even when the video size/format changes.
A similar thing is already done by ImageLayerComposite, so it seems possible in webrender as well.
(In reply to Sotaro Ikeda [:sotaro] from comment #11)
> (In reply to Nicolas Silva [:nical] from comment #8)
> 
> I seem to have a different opinion on this. I prefer to add something
> similar to the ScaleMode::STRETCH mode of ImageLayerComposite to webrender,
> as in comment 10. Then we would not need to send an updated display item
> even when the video size/format changes.

It's easy to have a ScaleMode::STRETCH mode in WR, but I don't know how to handle the format setting in the video playback case. The push_yuv_image() and push_image() calls generate different display items. If we want a WR display item that handles RGB and the various YUV formats to prevent the display list from changing, let's do that later.
With the external_id bound to the CompositableHost:

1) When the CompositorParent receives a new video image from ImageBridge, we just need to schedule a new rendering task on the RendererOGL to update the screen. There is no display list change in this case. The RendererOGL will always get the latest video frame from the external-image-handler callback.

Texture size changing problem:
If we use an external image handle (we upload the content into a GL texture and pass the GL handle to WR) for video playback, it's very easy to handle the texture size change problem: we just need to return different UV coordinates in the callback. If we need to use the resource cache for external images without updating the display list, we should update the resource-cache uploading logic in WR.
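
A sketch of the callback shape this implies. WrExternalImage and the signature are assumptions for illustration, not the actual webrender_bindings API:

  #include <cstdint>

  // Hypothetical result type: the GL handle plus the normalized sub-rect
  // of the texture that actually holds the current frame.
  struct WrExternalImage {
    uint32_t gl_handle;
    float u0, v0, u1, v1;
  };

  // Renderer-side lock callback: a texture-size change only changes the
  // returned UVs, so no display list or frame rebuild is needed.
  WrExternalImage LockExternalImage(uint32_t aHandle, int aTexW, int aTexH,
                                    int aFrameW, int aFrameH) {
    return WrExternalImage{
        aHandle,
        0.0f, 0.0f,
        static_cast<float>(aFrameW) / static_cast<float>(aTexW),
        static_cast<float>(aFrameH) / static_cast<float>(aTexH),
    };
  }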

Texture format changing problem:
I asked the Taipei media team yesterday. They think the format should not change within the same video. So the remaining problem is the one in comment 1.

2) The chooseImage() call will happen on the RendererOGL side, which is more precise for getting the correct timestamp in the video case.
(In reply to Jerry Shih[:jerry] (UTC+8) from comment #14)
> Texture format changing problem:
> I asked the Taipei media team yesterday. They think the format should not
> change within the same video. So the remaining problem is the one in comment 1.

That is just from the MediaDecoder point of view. The situation is different from the HTMLMediaElement and ImageContainer point of view: HTMLMediaElement keeps using the same ImageContainer across multiple video loads, so even if each individual video keeps a single format, the ImageContainer can still receive multiple video formats over time.

HTMLMediaElement could render any video source with HTMLMediaElement::SetSrcObject().
  https://dxr.mozilla.org/mozilla-central/source/dom/html/HTMLMediaElement.cpp#1530

And WebRTC video and plugins can set any size and type of video on the ImageContainer.
(In reply to Jerry Shih[:jerry] (UTC+8) from comment #14)
> 
> Texture size changing problem:
> If we use an external image handle (we upload the content into a GL texture
> and pass the GL handle to WR) for video playback, it's very easy to handle
> the texture size change problem: we just need to return different UV
> coordinates in the callback.

I am skeptical about that. How would it handle the rendered size and scaling while keeping the aspect ratio?
(In reply to Sotaro Ikeda [:sotaro] from comment #16)
> (In reply to Jerry Shih[:jerry] (UTC+8) from comment #14)
> > 
> > Texture size changing problem:
> > If we use an external image handle (we upload the content into a GL texture
> > and pass the GL handle to WR) for video playback, it's very easy to handle
> > the texture size change problem: we just need to return different UV
> > coordinates in the callback.
> 
> I am skeptical about that. How would it handle the rendered size and scaling
> while keeping the aspect ratio?

Will we keep the aspect ratio in this transform?
https://dxr.mozilla.org/mozilla-central/rev/32dcdde1fc64fc39a9065dc4218265dbc727673f/gfx/layers/composite/ImageLayerComposite.cpp#138

If we have an HTMLMediaElement with a (300,400) video size, we will push the push_image() display item with the size (300,400). With current WR, the external GL texture will be stretched or shrunk into (300,400). There is no aspect-ratio logic here.
I think that the misunderstanding is on my side. As long as we minimize re-building the display list and the frame when the video size/format doesn't change, and as long as we can build the display list in the content process, we agree.


> In the video case, a size/format change or a video clear could happen at any
> time. Ideally these updates should not need a display list update; webrender
> seems to need a new API to support that. It would work similarly to the
> ScaleMode::STRETCH mode of ImageLayerComposite. The new API seems to be in a
> similar category to scroll_layers_with_scroll_root_id().

This sounds reasonable.


(In reply to Sotaro Ikeda [:sotaro] from comment #9)
> I just explained the possible ways within the current webrender API. If we
> want to support this correctly, the webrender API needs to be changed.

Yes, changing WebRender's API is not a problem, as long as we don't try to change WebRender's rendering model. For example, abstracting over YUV and RGB is hard to do, and I think it is just easier for us to handle the YUV/RGB distinction outside of WebRender.




As far as I can see there are several independent discussions (for video):
* how do we change some video parameters (size, format, etc.) in webrender
* should these parameters be passed through the ImageBridge, do they require updating the display list?
* how do we do the timestamp selection


The paragraphs below are not arguments for or against what either of you has said so far, just some observations from the point of view of webrender.

About the first discussion item, the kinds of things that can change in a video element seem to be:

- The destination rect/size (rect and stretch_size)

I am looking at the feasibility of deferring the resolution of the item's destination rect to the renderer. I think it should be possible to patch it up in the update_deferred_resolves function in Renderer, just like the texture UVs are patched up there, if the image is opaque. If the image has transparency it is more complicated (though not necessarily impossible) due to how the rect is used by the render backend to compute batches, but in the case of video we can reasonably assume opaque for now.
That said, I think that regenerating the frame is fine, since it happens rarely.

- The image format (RGBA/YUV)

There's no way to abstract over RGB and YUV images in webrender because they are handled/batched/shaded differently. I think it is fine for us to change the display item on the client side if that is easier. Maybe we could try to add an API in WebRender to replace a display item with another one directly in the render backend and re-build the frame, but I don't know how hard that would be (it would save us the serialization and transfer of the display list; I don't know if that is enough to motivate the added complexity). Another possible solution could be to add a new display item with no batching that is entirely resolved by the renderer. While this is possible, it's a lot of code for something that I think we can achieve with the display items we already have.

- The texture's rect (UV)

The UV itself can be resolved in the renderer in update_deferred_resolves without rebuilding the frame. If we need to update the texture cache, however, we can't get around re-building the frame. In the specific case of video, we could have some way to make sure the external image isn't uploaded to the texture cache. There would still be value in letting external images that don't change often use the texture cache, but we could add a flag in the display item to configure that.


As Jerry said, WebRender does not reason in terms of aspect ratio, but we can control the aspect ratio by specifying the right values with the item's rect, stretch_size and UV.
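
For example, a letterboxing stretch_size can be computed from just the item rect and the video size; this is a sketch of the math, not actual layout code:

  #include <algorithm>

  struct SizeF {
    float w;
    float h;
  };

  // Scale the video uniformly so it fits the item rect while preserving
  // its aspect ratio (letterbox/pillarbox), and use the result as the
  // display item's stretch_size.
  SizeF StretchSizePreservingAspect(SizeF aItemRect, SizeF aVideo) {
    const float scale =
        std::min(aItemRect.w / aVideo.w, aItemRect.h / aVideo.h);
    return SizeF{aVideo.w * scale, aVideo.h * scale};
  }
  // e.g. a 400x300 video in a 300x400 item rect yields a 300x225 stretch_size.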


- Other stuff:

* Changing the ImageKey forces us to re-build the frame (not something we want to do every frame).
* Changing the ExternalImageId (by calling update_image) currently does not have any effect until the frame is re-built. It is not clear to me how easy it would be to avoid re-building the frame.


About texture selection

It is not entirely clear to me if you two agree on the place where the timestamp selection should happen. Jerry, you clearly favor doing it in the renderer; I think that you want to do this in the compositor thread, Sotaro? I am fine with both approaches. If we do this right, the timing should not be too different as long as we don't re-build the frame (that's a wild guess, but this should be easy to measure).
Being able to change the texture from the compositor sounds like something we will need to implement at least for canvas (independently of what is best for video).
I think that there is a lot of work that can be done independently of where the texture selection takes place. That said, it's important to make sure that if we do texture selection on the renderer, we don't get a list of textures to choose from with different size/format properties (or at least none of the properties that require re-building the frame, since at that point it is too late). We can guarantee that by making sure the image lists are associated with an id (like ExternalImageId) that changes whenever one of these properties changes.
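
As a sketch, that invariant could be checked like this (types hypothetical):

  #include <vector>

  enum class ImageFormat { RGBA8, YUV420 };

  struct Candidate {
    int w;
    int h;
    ImageFormat format;
  };

  // Every texture selectable at render time must share the properties that
  // would otherwise force a frame rebuild; if one differs, the list should
  // have been published under a new id (e.g. a new ExternalImageId).
  bool HasUniformProperties(const std::vector<Candidate>& aList) {
    for (const Candidate& c : aList) {
      if (c.w != aList.front().w || c.h != aList.front().h ||
          c.format != aList.front().format) {
        return false;
      }
    }
    return true;
  }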


I totally forgot to mention during our meeting that I'll be on PTO next week, so I won't attend the meeting next Tuesday and will most likely not answer emails (sorry!).
(In reply to Nicolas Silva [:nical] from comment #18)
> > In the video case, a size/format change or a video clear could happen at any
> > time. Ideally these updates should not need a display list update; webrender
> > seems to need a new API to support that. It would work similarly to the
> > ScaleMode::STRETCH mode of ImageLayerComposite. The new API seems to be in a
> > similar category to scroll_layers_with_scroll_root_id().
> 
> This sounds reasonable.

I checked the webrender implementation. It seems better to extend and use webrender's animation framework if we want to change an Image without updating the display list.
  https://github.com/servo/webrender/pull/832

webrender's animation framework supports only opacity and transform. It seems that we could add Image animation there. If we add it, we could select a different Image for each frame. To support Image animation, we might add an ImageContainer display item, and the ImageContainer might select an Image or a YuvImage.

With Bug 1342380, VsyncScheduler was enabled again; with it, only WebRenderAPI::GenerateFrame() triggers new frame composition. We could extend WebRenderAPI::GenerateFrame() to pass animated properties. GenerateFrame() is called by WebRenderBridgeParent::CompositeToTarget(), so we could do the video frame selection in CompositeToTarget().
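
A sketch of what the timestamp selection in CompositeToTarget() could look like, assuming a timestamp-ordered frame list (types are hypothetical):

  #include <cstdint>
  #include <vector>

  struct VideoFrame {
    uint64_t mTimeStampUs;      // presentation time, microseconds
    uint64_t mExternalImageId;  // image to hand to the renderer
  };

  // Pick the newest frame due at or before the composition time; frames
  // are assumed to be in ascending timestamp order.
  const VideoFrame* SelectFrame(const std::vector<VideoFrame>& aFrames,
                                uint64_t aCompositionTimeUs) {
    if (aFrames.empty()) {
      return nullptr;
    }
    const VideoFrame* best = &aFrames.front();
    for (const VideoFrame& f : aFrames) {
      if (f.mTimeStampUs <= aCompositionTimeUs) {
        best = &f;
      }
    }
    return best;
  }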

> As far as I can see there are several independent discussions (for video):
> * how do we change some video parameters (size, format, etc.) in webrender
> * should these parameters be passed through the ImageBridge, do they require
> updating the display list?

As described above, if we add Image animation to webrender, it seems we would not need to update the display list.
Currently, webrender rebuilds the scene by calling build_scene() when there is an animation. That could be minimised by optimisation.
(In reply to Nicolas Silva [:nical] from comment #18)
> 
> About texture selection
> 
> It is not entirely clear to me if you two agree on the place where the
> timestamp selection should happen. Jerry, you clearly favor doing it in the
> renderer; I think that you want to do this in the compositor thread, Sotaro?
> I am fine with both approaches. If we do this right, the timing should not
> be too different as long as we don't re-build the frame (that's a wild
> guess, but this should be easy to measure).

With VsyncScheduler enabled, generating a new frame is controlled by WebRenderBridgeParent. So it seems better to do the timestamp selection in WebRenderBridgeParent::CompositeToTarget() on the compositor thread.
MozReview-Commit-ID: 8I7Zd3BBPLB
Flags: needinfo?(hshih)
Rebased.
Attachment #8839843 - Attachment is obsolete: true
Regarding async video frame changes, we need to handle the following:
- size change
- format change
- video frame clear
Blocks: webrender
Depends on: 1340997
Revert incorrect change.
Attachment #8841909 - Attachment is obsolete: true
:jerry and I discussed how to implement external images. The following are the points we concluded:
- Basically use the approach of attachment 8841909 [details] [diff] [review].
- Video frame selection for rendering is done in WebRenderBridgeParent::CompositeToTarget() on the
    compositor thread, just before calling WebRenderAPI::GenerateFrame().
- Handle external images more like TextureHosts, since WebRender handles them as fixed-size buffers.
     TextureCache does not handle image resizes (https://github.com/servo/webrender/pull/925).
- Do not forward the whole CompositableHost; instead, forward only the TextureHost-related things
   that are necessary for the RenderThread.
    + Create WebRenderTextureHost on the compositor thread and only forward the necessary things
      to the RenderThread. RenderTextureHost is the object that is currently forwarded.
- Minimise inter-thread message passing between the compositor thread and the RenderThread.
   + Use DidComposite to know the release timing of TextureHosts.
- We want a new WebRender API that works like ImageContainer, to update an Image
   without sending the display list again. It might be similar to webrender animation, or a different API.
   https://github.com/servo/webrender/pull/832
   Either way, we need a way to update the rendered TextureHost without a display list update.
- Implement ExternalBuffer first, then implement ExternalHandle.
Depends on: 1343457
Depends on: 1343764
Depends on: 1345054
Whiteboard: [gfx-noted]
Summary: External image integration → [meta] External image integration
All the dep bugs are closed, as are all the linked PRs/issues. I'm going to close this bug.
Status: NEW → RESOLVED
Closed: 6 years ago
Resolution: --- → FIXED