Monitor/control capture frame rate in getUserMedia

Status: NEW
Product: Core
Component: WebRTC: Audio/Video
Priority: P4
Severity: normal
Rank: 35
Version: Trunk
Reporter: jesup
Assigned to: jesup
Depends on: 1 bug
Reported: 5 years ago
Last updated: 7 months ago

Description (Assignee, 5 years ago)

We may want to limit the captured frame rate inserted into a MediaStream to the requested rate (mFPS), especially with Constraints/settings.

We may want to indicate when the effective frame rate falls below the "mMinFPS" rate, which happens with many cameras in low light at high resolutions. Reducing the resolution may increase the frame rate.
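
For reference, a minimal sketch (TypeScript, using only the standard mediacapture-main constraint names) of how a page would ask for a capped capture rate; whether the delivered rate actually honors that cap is what this bug is about:

    async function openCameraAt10Fps(): Promise<MediaStream> {
      // Ask for a 640x480 mode and cap the frame rate at 10 fps. "max" is the
      // part that should constrain the delivered rate, not just the capability
      // selection.
      return navigator.mediaDevices.getUserMedia({
        video: {
          frameRate: { ideal: 10, max: 10 },
          width: { ideal: 640 },
          height: { ideal: 480 },
        },
      });
    }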

Updated (Assignee, 4 years ago)

Blocks: 833314
Depends on: 896229
Comment 1 (Assignee, 3 years ago)

Right now mFPS is used to select a capability from the camera, but the suggestion is that if you request 10fps you should get it, not 30fps if the camera has that.  Webrtc.org code has a framerate limiter.  What does the spec say, if anything?
backlog: --- → webRTC+
Rank: 35
Flags: needinfo?(jib)
Priority: -- → P3
QA Contact: jsmith
Whiteboard: [getUserMedia][blocking-gum-]
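
To make the distinction in comment 1 concrete, a small sketch (TypeScript, standard MediaStreamTrack APIs only): the page asks for 10fps and then reads back what the track claims to deliver. The open question is whether we decimate down to 10fps or hand over the camera's native 30fps:

    async function requestedVsDelivered(): Promise<void> {
      const stream = await navigator.mediaDevices.getUserMedia({
        video: { frameRate: 10 },
      });
      const track = stream.getVideoTracks()[0];
      // getSettings() reports the frame rate the UA considers applied to the track.
      const settings = track.getSettings();
      console.log(`requested 10 fps, track reports ${settings.frameRate ?? "unknown"} fps`);
    }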
Comment 2

Is the question whether artificially creating lower frame rates is OK per the spec? I think so, but I can't find the exact line. I remember conversations that boiled down to this: we thought transforming modes was OK "only as long as we're not making up bits". E.g. halving the frame rate would be OK, but doubling it would not be.

(In reply to Randell Jesup [:jesup] from comment #1)
> Right now mFPS is used to select a capability from the camera, but the
> suggestion is that if you request 10fps you should get it, not 30fps if the
> camera has that.

When you say "request", what constraints exactly? { frameRate: 10 } ?

Let me check my understanding: For a given camera mode/capability:

- mFPS is the target and upper limit, i.e. what the camera will deliver in optimal conditions.
- mMinFPS is a lower limit the camera guarantees it will never dip below during operation.

Do I have this right?

How these map to constraints is perhaps not entirely straightforward. See [1].

[1] https://github.com/w3c/mediacapture-main/issues/193
Flags: needinfo?(jib)
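
Purely as an illustration of the mFPS/mMinFPS discussion above — the CameraMode and fits* names are hypothetical, not Gecko code — this sketches how a requested rate relates to a mode's range with and without artificial decimation:

    interface CameraMode {
      minFps: number; // mMinFPS: rate the camera should not dip below in operation
      maxFps: number; // mFPS: target/upper rate delivered in optimal conditions
    }

    function fitsWithoutDecimation(mode: CameraMode, requested: number): boolean {
      // Without dropping frames, only rates inside the mode's own range fit.
      return requested >= mode.minFps && requested <= mode.maxFps;
    }

    function fitsWithDecimation(mode: CameraMode, requested: number): boolean {
      // If the engine is allowed to drop frames (down-sample), anything at or
      // below the mode's upper bound becomes reachable; up-sampling above it
      // would mean making up bits, so it stays out of reach.
      return requested <= mode.maxFps;
    }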
Comment 3 (jib)

According to Harald [1], down-sampling seems to be OK (and I infer that up-sampling is not).

Question for you about the GIPS stack: is the frameRate a camera will give in the present lighting conditions observable, without turning the camera light on?

[1] https://github.com/w3c/mediacapture-main/issues/193#issuecomment-113090102
Flags: needinfo?(rjesup)
Comment 4

That seems almost certainly like a hardware-dependent thing.
Comment 5 (Assignee, 3 years ago)

(In reply to Jan-Ivar Bruaroey [:jib] from comment #3)
> According to Harald [1], down-sampling seems to be OK (and I infer that
> up-sampling is not).
> 
> Question for you about the GIPS stack, is the frameRate a camera will give
> in present lighting conditions observable, without turning the camera light
> on?

I've never heard of it being observable.  At low levels, you can lock the adaptation range for a camera.  This may or may not be available to an application, depending on OS/etc.  

webrtc.org code can decimate the framerate to match whatever we want (i.e. downsampling the framerate).
Flags: needinfo?(rjesup)
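
As a sketch only (it mirrors the idea of the webrtc.org framerate limiter mentioned above, not its actual implementation), decimation amounts to dropping frames that arrive sooner than the target interval:

    class FrameRateDecimator {
      private lastDeliveredMs = -Infinity;
      private readonly minIntervalMs: number;

      constructor(targetFps: number) {
        this.minIntervalMs = 1000 / targetFps;
      }

      // Returns true if the frame captured at this timestamp should be passed
      // downstream; frames arriving faster than the target rate are dropped.
      shouldDeliver(timestampMs: number): boolean {
        if (timestampMs - this.lastDeliveredMs >= this.minIntervalMs) {
          this.lastDeliveredMs = timestampMs;
          return true;
        }
        return false;
      }
    }

E.g. a camera running at 30fps fed through new FrameRateDecimator(10) would deliver roughly every third frame.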
Mass change P3->P4 to align with new Mozilla triage process.
Priority: P3 → P4