Closed Bug 1613602 Opened 5 years ago Closed 4 years ago

HTMLCanvasElement::captureStream(0) passes half-baked frames

Categories

(Core :: Audio/Video: MediaStreamGraph, defect, P3)

74 Branch
defect

Tracking

()

RESOLVED WORKSFORME

People

(Reporter: tristan.fraipont, Unassigned)

Details

Attachments

(1 file)

Attached file test-case.html

User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:72.0) Gecko/20100101 Firefox/72.0

Steps to reproduce:

From test-case, click the "start test" button
Also available as a jsfiddle 1

  • Initialize a CanvasCaptureStream[Track] with frameRequestRate set to 0 (canvas.captureStream(0)).
  • Pass this stream to a <video> and wait until it starts playing.
  • Draw a red rectangle on the canvas, then wait again.
  • Clear the canvas with a green rectangle.
  • Call CanvasCaptureStream[Track].requestFrame() to push the current frame to the stream.
  • Check the color of the current frame of the <video> by drawing it onto another canvas.
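A minimal sketch of those steps (browser-only code; the attached test-case.html is the authoritative reproduction, and the waiting strategy here is illustrative):

```javascript
// Browser-only sketch of the reproduction; run from a page where
// video.play() is allowed. The waiting strategy is illustrative.
(async () => {
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');

  // frameRequestRate = 0: frames should only flow on explicit requestFrame().
  const stream = canvas.captureStream(0);
  const [track] = stream.getVideoTracks();

  const video = document.createElement('video');
  video.muted = true;
  video.srcObject = stream;
  await video.play();

  // Draw red, then wait a couple of painting frames.
  ctx.fillStyle = 'red';
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  await new Promise(r => requestAnimationFrame(() => requestAnimationFrame(r)));

  // Overwrite with green and push the "current" frame.
  ctx.fillStyle = 'green';
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  track.requestFrame();

  // Inspect what the <video> actually shows by drawing it onto another canvas.
  const probe = document.createElement('canvas').getContext('2d');
  probe.drawImage(video, 0, 0);
  const [r, g] = probe.getImageData(0, 0, 1, 1).data;
  console.log(r > g ? 'red frame leaked (bug)' : 'green frame (expected)');
})();
```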

Actual results:

The red frame has been passed to the stream.

Expected results:

Only the green frame should ever be passed to the stream, and hence to the <video>.


Some context:
I was fiddling a bit with a bug report I made against Chromium 2 when I found out that Firefox doesn't actually respect the spec 3 either.
HTMLCanvasElement::captureStream(0) should give requestFrame() full control over when a frame is pushed to the stream-track after the initial frame.

It was hard to see at first because the "correct" frame eventually gets passed too, at the next painting frame (can be seen using the radio inputs in the test-case), even without doing anything else afterwards (can be seen using the checkbox in the test-case); nevertheless, the red frame should never get passed.

This is not a regression; I went back as far as the async/await implementation (~2016) and it still reproduced, with a change in "before-paint" behavior at rev. 0ef34a9ec4fbfccd03ee0cfb26b182c03e28133a (probably related to how microtasks were handled).

Component: WebRTC: Audio/Video → Audio/Video: MediaStreamGraph
Status: UNCONFIRMED → NEW
Ever confirmed: true
Priority: -- → P3

(In reply to Kaiido from comment #0)

  • Call CanvasCaptureStream[Track] .requestFrame() to push the current frame to the stream.

This is a bad assumption. requestFrame() does not capture what is currently drawn to the canvas.

From the spec:

Each track that captures a canvas has a [[frameCaptureRequested]] internal slot that is set to true when a new frame is requested from the canvas.
(...)
A new frame is requested from the canvas when [[frameCaptureRequested]] is true and the canvas is painted.

When a canvas is painted is not properly defined anywhere, as far as I know.

In the fiddle, in synchronous mode, an "invisible" frame is drawn synchronously just after requestFrame(). This is before the next paint, since the painting happens on the same thread and we did the drawing synchronously.

The special mode wait_before_stream_kill_input provides the "visible" frame after the "invisible" because the "invisible" frame was captured by the requestFrame() from the previous run of nextFrame() whereas the "visible" frame was captured by the requestFrame() from the current (and last) run of nextFrame().

If the timing of painting a canvas was properly defined we could discuss whether the "before-paint" and "after-paint" cases in your example were correct. Until then I don't think we can.

Status: NEW → RESOLVED
Closed: 4 years ago
Resolution: --- → WORKSFORME

If the timing of painting a canvas was properly defined we could discuss whether the "before-paint" and "after-paint" cases in your example were correct. Until then I don't think we can.

So should we move this to the specs? Because there is a clear promise in https://w3c.github.io/mediacapture-fromelement/#dom-canvascapturemediastreamtrack-requestframe that we should be able to call captureStream(0) and then requestFrame() exactly "to avoid capturing a partially rendered frame."

If implementations are not able to honor this promise, maybe it should be removed from there, and the whole captureStream(0) + requestFrame() combination removed altogether, because if applications have to hope "the frame" is what they last drew, this doesn't seem very useful.

One way I can think of is to make requestFrame() force rendering of the canvas synchronously, the same way ctx.drawImage(canvas, x, y) does. Does that sound reasonable to you?

(In reply to Kaiido from comment #2)

If the timing of painting a canvas was properly defined we could discuss whether the "before-paint" and "after-paint" cases in your example were correct. Until then I don't think we can.

So should we move this to the specs? Because there is a clear promise in https://w3c.github.io/mediacapture-fromelement/#dom-canvascapturemediastreamtrack-requestframe that we should be able to call captureStream(0) and then requestFrame() exactly "to avoid capturing a partially rendered frame."

The partially rendered frame this refers to is one that's been progressively drawn across several paints, which one can do with a 2d context. With captureStream() that frame would have been captured on each paint, partially rendered. With a non-zero framerate you run the risk of this too, though now chance is involved.

If implementations are not able to honor this promise, maybe it should be removed from there, and the whole captureStream(0) + requestFrame() combination removed altogether, because if applications have to hope "the frame" is what they last drew, this doesn't seem very useful.

If you consider that the frame rate of a canvas can be observed by requestAnimationFrame, there are guarantees given. If all drawing happens in rAF callbacks, then when doing a requestFrame() in such a callback we know that a frame is captured before the next rAF callback.
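That guarantee can be sketched as follows (browser-only; assumes a <canvas> already exists in the page, and the animation itself is illustrative):

```javascript
// If all drawing happens in requestAnimationFrame callbacks, a requestFrame()
// issued inside a callback is guaranteed to capture before the next callback.
const canvas = document.querySelector('canvas'); // assumed to exist
const ctx = canvas.getContext('2d');
const track = canvas.captureStream(0).getVideoTracks()[0];

function renderLoop(time) {
  ctx.fillStyle = `hsl(${(time / 20) % 360}, 100%, 50%)`;
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  track.requestFrame(); // captured at the paint that follows this callback
  requestAnimationFrame(renderLoop);
}
requestAnimationFrame(renderLoop);
```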

One way I can think of is to make requestFrame() force rendering of the canvas synchronously, the same way ctx.drawImage(canvas, x, y) does. Does that sound reasonable to you?

Consider this:

  • draw
  • draw
  • forceFrame
  • draw
  • paint

Now the video element and the canvas contain different images. So no I don't think that would be a good idea.
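The divergence in that sequence can be modeled with a tiny simulation (pure JS, no DOM; the policy names "captureAtPaint" and "captureNow" are illustrative, the latter standing for the proposed forced synchronous capture):

```javascript
// Model two capture policies against the event sequence above.
// "captureAtPaint": requestFrame() only sets a flag; the frame is taken at paint.
// "captureNow": a hypothetical forced synchronous capture at requestFrame() time.
function run(policy, events) {
  let canvas = null;    // what is currently drawn on the canvas
  let captured = null;  // what the stream/video last received
  let requested = false;
  for (const ev of events) {
    if (ev.startsWith('draw:')) {
      canvas = ev.slice(5);
    } else if (ev === 'requestFrame') {
      if (policy === 'captureNow') captured = canvas;
      else requested = true;
    } else if (ev === 'paint') {
      if (requested) { captured = canvas; requested = false; }
    }
  }
  return { canvas, captured };
}

const events = ['draw:A', 'draw:B', 'requestFrame', 'draw:C', 'paint'];
console.log(run('captureAtPaint', events)); // { canvas: 'C', captured: 'C' } — in sync
console.log(run('captureNow', events));     // { canvas: 'C', captured: 'B' } — video lags canvas
```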

The current approach has some extra benefits over your suggestion, too:

  • any performance throttling done to the canvas frame rate affects frame capture too. There's no benefit for an application to capture faster than the requestAnimationFrame frequency. (Doing so would be detrimental to the user experience, since it runs on the main thread.)
  • Since the canvas captures a frame at the time of paint anyway, for displaying on screen, captureStream() could internally use the same frame to avoid some copies/readbacks, for better performance.

for displaying on screen

That's the thing, captureStream() is not for displaying on screen, for that we can just use the canvas directly.

I don't see any problem having the canvas display something else than the video, that's exactly why we use captureStream(0).

Take the scenario of a frame-by-frame renderer that outputs to a MediaRecorder instead: why should this setup be limited to the rAF rate? Passing a new frame to the recorder as soon as possible (e.g. on an image load?) and then just waiting out the expected frame rate of the recorded media should be all that is needed. Having to wait for the next screen refresh only makes things slower; nothing has to be in sync with the monitor, and the canvas itself may not even be rendered to any monitor.

And syncing this to rAF also means that any throttling on rAF will affect that setup, once again for no reason.
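A sketch of that frame-by-frame recording setup (browser-only; frameUrls and loadFrame are hypothetical, and the target frame rate of 30 fps is illustrative):

```javascript
// Browser-only sketch: feed pre-rendered frames to a MediaRecorder at a fixed
// rate, independent of the screen refresh. frameUrls/loadFrame are hypothetical.
(async () => {
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');
  const stream = canvas.captureStream(0);
  const [track] = stream.getVideoTracks();

  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = e => chunks.push(e.data);
  recorder.start();

  for (const url of frameUrls) {        // hypothetical list of frame images
    const img = await loadFrame(url);   // hypothetical: resolves to an HTMLImageElement
    ctx.drawImage(img, 0, 0);
    track.requestFrame();               // push as soon as the frame is ready...
    await new Promise(r => setTimeout(r, 1000 / 30)); // ...then wait out 30 fps
  }
  recorder.stop();
})();
```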

It sounds like you want a non-realtime video source that can be piped to, at least, MediaRecorder.

Unfortunately both canvas.captureStream and MediaRecorder are realtime-only, because they both use MediaStreamTracks which are realtime-only.

Something would have to be specced in this space, but unfortunately I'm not aware of any ongoing efforts.

But that's exactly what the 0 in captureStream(0) is supposed to do.

The MediaRecorder example was just an example, we could create a similar situation with a WebRTC drawing board for instance.

If in such "applications progressively render[ing] to a canvas" we have to wait for some unknown amount of time after calling requestFrame() before we can render something else to the canvas, without that something else erasing what was supposed to be on the canvas at call time, that makes for a very broken API.
