Bug 846188 (Open) - Monitor/control capture frame rate in getUserMedia
Opened 13 years ago | Updated 3 years ago
Categories: Core :: WebRTC: Audio/Video, defect, P4
Status: NEW
Tracking: backlog | webrtc/webaudio+
People: Reporter: jesup, Unassigned
References: Depends on 1 open bug
We may want to limit captured frame rates inserted into a MediaStream to the requested rate (mFPS), especially with Constraints/settings.
We may also want to indicate when the effective frame rate falls below the "mMinFPS" rate, which many cameras do in low light at high resolution. Reducing resolution may increase the frame rate.
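For context, a minimal sketch (TypeScript against the standard Media Capture constraints surface; the function name is illustrative, and mFPS/mMinFPS are Gecko-internal names that never appear in content JS) of requesting a capped frame rate and reading back the effective rate the track reports:

// Illustrative only: the web-facing side of the behavior this bug asks about.
async function captureAtLimitedRate(): Promise<MediaStreamTrack> {
  // Ask for at most 10 fps; the question is whether capture is actually
  // limited to this, rather than the value only being used to pick a
  // camera capability.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { frameRate: { ideal: 10, max: 10 } },
  });
  const [track] = stream.getVideoTracks();

  // getSettings() reports what the track is actually delivering; in low
  // light a camera may fall below the requested rate, which is the
  // condition the description suggests surfacing to the application.
  console.log("requested <= 10 fps, effective:", track.getSettings().frameRate);
  return track;
}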
Reporter | Updated • 12 years ago

Reporter | Comment 1 • 10 years ago
Right now mFPS is used to select a capability from the camera, but the suggestion is that if you request 10fps you should get it, not 30fps if the camera has that. Webrtc.org code has a framerate limiter. What does the spec say, if anything?
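A hedged sketch (TypeScript; the helper is hypothetical) of what "request 10fps" looks like from content JS once a track exists: apply a frameRate constraint and compare against what the track then reports. Whether the capture pipeline enforces the cap, e.g. via a frame-rate limiter like the one in webrtc.org, is the open question here:

async function requestTenFps(track: MediaStreamTrack): Promise<number | undefined> {
  // applyConstraints() is the spec mechanism for adjusting an existing track.
  await track.applyConstraints({ frameRate: 10 });
  // What the track claims to deliver afterwards; enforcement is this bug's topic.
  return track.getSettings().frameRate;
}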
backlog: --- → webRTC+
Rank: 35
Flags: needinfo?(jib)
Priority: -- → P3
QA Contact: jsmith
Whiteboard: [getUserMedia][blocking-gum-]
Comment 2 • 10 years ago
Is the question whether artificially creating lower frame rates is OK with the spec? I think so, but I can't find the exact line. I remember conversations that boiled down to this: we thought transforming modes were OK "only as long as we're not making up bits". E.g. halving the frame rate would be OK, but doubling it would not be.
(In reply to Randell Jesup [:jesup] from comment #1)
> Right now mFPS is used to select a capability from the camera, but the
> suggestion is that if you request 10fps you should get it, not 30fps if the
> camera has that.
When you say "request", what constraints exactly? { frameRate: 10 } ?
Let me check my understanding: For a given camera mode/capability:
- mFPS is the target and upper limit, i.e. what the camera will deliver in optimal conditions.
- mMinFPS is a lower limit guarantee the camera will never dip below during operation.
Do I have this right?
How these map to constraints is perhaps not entirely straightforward. See [1].
[1] https://github.com/w3c/mediacapture-main/issues/193
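One possible sketch (TypeScript; this mapping is an assumption, not settled behavior) of how a range constraint might line up with the capability fields described above:

// Assumed mapping, for illustration only (see the spec issue linked as [1]):
//   max   -> at or below the capability's mFPS (target / upper limit)
//   min   -> at or above the capability's mMinFPS (lower-limit guarantee)
//   ideal -> used to rank the capabilities that satisfy min/max
const videoConstraints: MediaTrackConstraints = {
  frameRate: { min: 5, ideal: 10, max: 15 },
};
// e.g. navigator.mediaDevices.getUserMedia({ video: videoConstraints })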
Flags: needinfo?(jib)
Comment 3 • 10 years ago
According to Harald [1], down-sampling seems to be OK (and I infer that up-sampling is not).
A question for you about the GIPS stack: is the frameRate a camera will give in present lighting conditions observable, without turning the camera light on?
[1] https://github.com/w3c/mediacapture-main/issues/193#issuecomment-113090102
Flags: needinfo?(rjesup)
Comment 4 • 10 years ago
That seems almost certainly like a hardware dependent thing.
Reporter | Comment 5 • 10 years ago
(In reply to Jan-Ivar Bruaroey [:jib] from comment #3)
> According to Harald [1], down-sampling seems to be OK (and I infer that
> up-sampling is not).
>
> Question for you about the GIPS stack, is the frameRate a camera will give
> in present lighting conditions observable, without turning the camera light
> on?
I've never heard of it being observable. At low levels, you can lock the adaptation range for a camera. This may or may not be available to an application, depending on OS/etc.
webrtc.org code can decimate the framerate to match whatever we want (i.e. downsampling the framerate).
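A minimal sketch (TypeScript; illustrative names, not the actual webrtc.org limiter code) of frame-rate decimation in that spirit: drop frames whenever passing one through would exceed the target rate, and never invent frames.

class FrameRateDecimator {
  private lastKeptMs: number | undefined;

  constructor(private readonly targetFps: number) {}

  // Returns true if a frame captured at `timestampMs` should be delivered
  // downstream, false if it should be dropped to stay at or under targetFps.
  shouldKeep(timestampMs: number): boolean {
    const minIntervalMs = 1000 / this.targetFps;
    if (this.lastKeptMs === undefined || timestampMs - this.lastKeptMs >= minIntervalMs) {
      this.lastKeptMs = timestampMs;
      return true;
    }
    return false;
  }
}

// Example: a 30fps source decimated to 10fps keeps roughly every third frame.
const decimator = new FrameRateDecimator(10);
const kept = [0, 33, 66, 100, 133, 166, 200].filter((t) => decimator.shouldKeep(t));
// kept === [0, 100, 200]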
Flags: needinfo?(rjesup)
Comment 6 • 8 years ago
Mass change P3->P4 to align with new Mozilla triage process.
Priority: P3 → P4
Reporter | Updated • 4 years ago
Assignee: rjesup → nobody
Updated • 3 years ago
Severity: normal → S3