VSync timestamp handling is confused, inconsistent, and non-optimal
Categories: Core :: Graphics, defect, P3
People: Reporter: mstange, Unassigned
References: Blocks 1 open bug
I'm not sure what I want to achieve with this bug, other than having a place to write down what I found. Anyways, here are a bunch of things I realized are not good about our current vsync story:
- Vsync is global rather than per window.
- We always ignore the timestamp from the macOS vsync callback: it gives us an output time that's 25ms in the future. That's more than one frame. This means that the output timestamp of the previous vsync callback invocation is still in the future compared to TimeStamp::Now(), so we completely ignore it and overwrite it with TimeStamp::Now().
- All parts of the code are confused about what they mean by "vsync time". The time when the callback is called? The expected present time ("output time")? The time for which animations should be sampled?
- mAsyncImageManager->SetCompositionTime(...) should ideally be called with the output time, for optimal AV sync. Currently, WebRenderBridgeParent::MaybeGenerateFrame calls it with TimeStamp::Now().
- Nothing passes any "vsync timestamp" through from CompositorVsyncScheduler::Composite via WebRenderBridgeParent::CompositeToTarget to WebRenderBridgeParent::MaybeGenerateFrame. CompositeToTarget doesn't have a timestamp argument.
- However, CompositorVsyncScheduler::Composite stashes the vsync timestamp in mLastVsync and mLastCompose, and exposes it via GetLastVsyncTime() and GetLastComposeTime().
- APZ digs a tunnel to obtain a vsync time for APZ animation sampling, via GetLastComposeTime():
  TimeStamp animationTime = cbp->GetTestingTimeStamp().valueOr(
      mCompositorScheduler->GetLastComposeTime());
  TimeDuration frameInterval = cbp->GetVsyncInterval();
  // As with the non-webrender codepath in AsyncCompositionManager, we want to
  // use the timestamp for the next vsync when advancing animations.
  if (frameInterval != TimeDuration::Forever()) {
    animationTime += frameInterval;
  }
  apz->SetSampleTime(animationTime);
... and it says "I want the time of the next vsync"!!
- But then of course there's the APZ frame delay mechanism that wants to sample scroll offsets as they were at the previous vsync, in order to be consistent with what the main thread saw. So do we want to sample in the future or in the past?
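The "ignore future timestamps" behavior from the first bullet point can be sketched like this (a hypothetical helper for illustration, not the actual Gecko code; times are plain doubles in ms rather than TimeStamp):

```cpp
#include <cassert>

// If the vsync timestamp we were handed lies in the future relative to
// "now", the current code discards it and substitutes now. With a ~25ms
// output latency on macOS, the previous callback's output time is still
// in the future at the next callback, so this branch is taken every time.
double NormalizeVsyncTime(double aVsyncTime, double aNow) {
  if (aVsyncTime > aNow) {
    return aNow;  // timestamp is in the future: ignore it
  }
  return aVsyncTime;
}
```

For example, an output time of 55 seen at now = 30 is replaced with 30, while a past timestamp of 20 is kept.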
As an aside, sampling animations at the output timestamp is problematic unless you also delay the kickoff of the animation.
For example, let's say somebody presses the keyboard down arrow to kick off a smooth scroll animation. This animation is supposed to have smooth acceleration and deceleration.
So let's say the key is pressed at time 22 (this is the native event timestamp). It arrives at our parent process main thread at time 25 (3ms latency is normal). Furthermore, let's say our vsync callback is usually called around 25ms before the output time.
If we naively say "start the animation at the native event time, let it run for 100ms, and always sample it at the output time when the vsync callback is called, to be optimally responsive", this means the following:
At time 25, we receive the event. We create an animation that starts at 22 and ends at 122. We start listening for vsync.
Let's say the vsync callback comes in at 30, with an output time of 55. So we sample the animation at 55 and draw some output.
This now means that the section of the animation that was between 22 and 55 never made it to the screen! We simply chopped off the first 33ms of the animation.
Chopping off the animation start means that there's a jump. But our goal was a smooth acceleration.
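The chopped-off portion in the scenario above is just the gap between the animation start and the first sample time (all times in ms; this helper is hypothetical, not Gecko code):

```cpp
#include <algorithm>
#include <cassert>

// The animation starts at the native event time (22) but is first
// sampled at the vsync callback's output time (55), so everything in
// between never reaches the screen.
double ChoppedOffMs(double aAnimationStart, double aFirstSampleTime) {
  return std::max(0.0, aFirstSampleTime - aAnimationStart);
}
```

With the numbers above, ChoppedOffMs(22, 55) gives the 33ms that was lost.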
So I think this means two things:
First of all, starting the animation at the event timestamp was not a good idea; starting an animation in the past always chops off the beginning. We process the event at 25, so we should start the animation at 25 and not earlier.
Furthermore, sampling the animation in the future also skips the beginning, unless the animation is delayed by the "output latency".
I'm not sure which approach is better. Let's try them both:
- Sampling in the past
The key press event comes in at 25. We create an animation that runs from 25 to 125. We start listening for vsync, and we choose to use a "sampling shift" of 28ms.
The vsync callback is called at 30, with an output time of 55. We apply the sampling shift to the output time, and get a sampling time of 55 - 28 = 27.
We sample the animation at 27. We don't miss any part of the smooth acceleration. Good.
- Delaying animation start by the output latency
The key press event comes in at 25. And somehow we know that our output latency is around 25ms.
We create an animation that runs from 50 to 150.
The vsync callback is called at 30, with an output time of 55. We sample the animation at 55.
We don't miss any part of the smooth acceleration. Good.
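The two approaches can be sketched as follows (times in ms; all names are hypothetical, for illustration only):

```cpp
#include <cassert>

// Approach 1: sample in the past, by subtracting a fixed "sampling
// shift" from the output time.
double ShiftedSampleTime(double aOutputTime, double aSamplingShiftMs) {
  return aOutputTime - aSamplingShiftMs;
}

// Approach 2: delay the animation's start by the estimated output
// latency, then sample directly at the output time.
struct Animation {
  double mStart;
  double mEnd;
};

Animation MakeDelayedAnimation(double aEventTime, double aDurationMs,
                               double aOutputLatencyMs) {
  double start = aEventTime + aOutputLatencyMs;
  return Animation{start, start + aDurationMs};
}
```

With the numbers above, ShiftedSampleTime(55, 28) gives 27 (which is past the animation start of 25), and MakeDelayedAnimation(25, 100, 25) runs from 50 to 150 (so the first sample at 55 is past the start); in both cases no part of the acceleration curve is skipped.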
Also, somewhat obviously, animations without jumps start with the start value at time 0. So having a draw at time 0 doesn't produce any change on the screen. It's only on the next frame that the first change is visible.
So chopping off the first "vsync interval"-sized piece from an animation seems fine.
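For instance, any jump-free easing curve evaluates to the start value at t = 0, so a frame drawn there is visually a no-op (SmoothStep here is just one illustrative easing function, not something taken from the Gecko code):

```cpp
#include <cassert>

// A jump-free easing curve on t in [0, 1]: f(0) == 0 and f(1) == 1, so
// sampling at t = 0 reproduces the start value and draws no visible change.
double SmoothStep(double t) {
  return t * t * (3.0 - 2.0 * t);
}
```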
Comment 1 • 5 years ago
Related: bug 1563075
TL;DR: it would be nice to move away from VSync and have a more generalised abstraction that also allows for e.g. OS compositor driven synchronisation (e.g. on Wayland, modern X11, most likely other platforms as well), as well as VSync, depending on platform.
Side note: the current Wayland VSync emulation code works per window and might serve as inspiration (it's still experimental, see bug 1629140).
Edit: the GTK4 OpenGL X11 and Wayland backends (I don't know about macOS/Windows) have quite sophisticated implementations of this - and arguably Firefox, even with WebRender, is more similar to a toolkit than to a game engine, right? ;)
Comment 2 • 5 years ago
Please pick a priority and platform for triage purposes :) Thanks.
Updated • 5 years ago
The GTK3 implementation of "frame time" is here: https://gitlab.gnome.org/GNOME/gtk/-/blob/gtk-3-24/gdk/gdkframeclockidle.c. See the comments in compute_smooth_frame_time() and gdk_frame_clock_paint_idle() for some of the considerations. Since Firefox already uses GDK3, it might make sense to reuse the existing machinery. Alternatively, Firefox can implement a similar mechanism.
The relevant user-facing GDK APIs are gdk_frame_clock_begin_updating(), gdk_frame_clock_end_updating() and gdk_frame_clock_get_frame_time().
Note that GDK uses relative times with an unspecified base. As an example, an application that animates a single pixel across the screen, at a speed of 1,000 pixels/second (so 1 pixel/millisecond), would look something like the following pseudo-code:
// frame_time is in microseconds
static gint64 base_frame_time = 0;

void animation_callback (gint64 frame_time)
{
  if (!base_frame_time)
    {
      // First callback: remember the base so later times are relative to it.
      base_frame_time = frame_time;
    }
  // draw_pixel() can draw at fractional positions; at 1,000 pixels/second,
  // each millisecond (1000 us) of frame time advances the pixel by one.
  draw_pixel ((frame_time - base_frame_time) / 1000.0f, 0);
}

void start_animation (void)
{
  base_frame_time = 0;
  start_animation_callback (animation_callback);
}
As for event timestamps, I'm not sure why you need them, unless you are doing some sort of interpolation. See for example https://gitlab.gnome.org/GNOME/gtk/-/merge_requests/1117 - this MR interpolates input events to the screen refresh, in order to achieve smooth scrolling.
Comment 5 • 5 years ago
Thanks Yariv. Small note: the bug was started with a focus on macOS, and AFAIK we only use GTK/GDK on Linux and friends. So Firefox would definitely need its own implementation.
Updated • 5 years ago
Updated • 2 years ago