Closed
Bug 1273326
Opened 9 years ago
Closed 7 years ago
Investigate a way to provide programmatic timeline data
Categories
(Core :: DOM: Core & HTML, defect)
Tracking
RESOLVED
DUPLICATE
of bug 1348405
People
(Reporter: Harald, Unassigned)
References
(Blocks 1 open bug)
Details
(Whiteboard: btpp-followup-2016-07-05)
This is the outcome of discussions with several key frameworks and websites: automated performance regression testing only gets developers so far.
Performance telemetry in the wild, on users' machines, needs a new programmatic API. It should give enough information to rebuild a waterfall profile for a user. The most interesting data entries are JS/CSS parsing, execution, JIT, GC, restyle, reflow, etc., to fill the gap of what happens between frames.
Reporter
Comment 2•9 years ago
Ben to add more color and details on how data might be captured.
Flags: needinfo?(ben.maurer)
Comment 3•9 years ago
I hope we could re-use PerformanceObserver[1] to expose the new data.
The question is what data to expose in a useful way without leaking privacy-sensitive information
(for example the :visited stuff).
Some random thoughts:
- We shouldn't expose anything which hints at "GC", but use some more generic term there, perhaps "MemoryManagement" or something. Not all memory management is GC; there is CC and other stuff too.
- MemoryManagement is often a rather global thing, with various browsing contexts affecting it, so it is a bit unclear to me how a web site would use information about memory management performance when it doesn't know what else is running in the same process.
- reflow/restyle/paint etc. would probably need to avoid counting time spent in cross-origin iframes.
- What does "execution" mean here? That sounds like a low-level profiler, for which we would need a rather different API than what PerformanceObserver might give us.
- Is "JIT" about JIT compilation? If so, I assume that often happens on background threads in UAs, so what kind of information about it would be useful? Or is "JIT" about whether the JS being run has been JITed, or something else?
- JS/CSS parsing: again, some of that may happen on other threads... Should we expose only the main-thread work?
[1] https://w3c.github.io/performance-timeline/#the-performanceobserver-interface
Flags: needinfo?(bugs)
Comment 4•9 years ago
(In reply to Olli Pettay [:smaug] from comment #3)
> Some random thoughts:
> - We shouldn't expose any thing which hint about "GC", but use some more
> generic term there.
> Perhaps "MemoryManagement" or something. Not all memory management is GC.
> There is CC and other stuff too.
> - MemoryManagement is often rather global thing, various browsing contexts
> affecting to it, so it is a bit unclear to me how a web
> site would utilize the information about memory management performance
> when it doesn't know what else is running in the same process.
The goal here isn't to necessarily allow us to take specific actions. I think we'd generally use these numbers in two ways:
(1) "oh wow, there's a lot of time in memory management, let's work with browser vendors to start debugging this"
(2) "hmmm, in this A/B test we doubled the amount of time in memory management, why would this be"
> - reflow/restyle/paint etc would probably try to not count time spent in
> cross-origin-iframes.
We don't care about perf in iframes. We're fine just measuring the main frame.
> - What does "execution" mean here? That sounds like some low level profiler
> then, for which we need quite a bit different API than
What would be helpful here is to figure out how much time has been spent in actual JavaScript code (as opposed to JS-triggered style recalc, GC, parsing, etc.).
Some other thoughts:
- IMHO we can skip any data on paints right now. I think this is the biggest risk of triggering issues with :visited.
- How much of the stuff that "could happen off the main thread" is actually happening off the main thread today? I'm totally OK if the metrics here are non-standardized. This stuff is really different per UA and this is more of a diagnostic API than anything else.
I think a reasonable API for me might look something like
getDetailedPerfCounters(), which returns a map from strings to DOMHighResTimeStamp values representing counters of time spent doing specific operations since the page opened. E.g.:
{
"moz.style_recalc": 0.01,
"moz.layout": 0.04,
}
A user would poll these counters at reasonable intervals and would be able to create a graph of how time was being spent in the browser. To implement this, the browser would measure start/stop times of the relevant events and increment an internal counter for the page.
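A minimal sketch of the polling side, assuming a hypothetical window.getDetailedPerfCounters() that returns the counter object sketched above (neither the function name nor the "moz.*" keys exist in any shipping browser):
// Poll the hypothetical counters every 5 seconds and report the delta
// since the previous sample, so the data can be graphed server-side.
let previous = {};
setInterval(() => {
  const counters = window.getDetailedPerfCounters(); // assumed API, not real
  const delta = {};
  for (const [name, total] of Object.entries(counters)) {
    delta[name] = total - (previous[name] || 0);
  }
  previous = counters;
  navigator.sendBeacon('/perf-telemetry', JSON.stringify(delta));
}, 5000);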
Comment 5•9 years ago
Oh, you'd want some total number for various different operations? So, not PerformanceObserver but something simpler?
Though, I guess one could implement the former using the latter.
PerformanceObserver could have entries like
{
name: "style_recalc"
...
duration: 0.01
}
Would that work?
(I don't recall which calculations may reveal :visited. I'd assume style recalc and layout, not only paint, but I'm not sure.)
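For comparison, a rough sketch of what consuming such entries could look like, assuming a hypothetical "style_recalc" entry type; only the PerformanceObserver plumbing itself is an existing API:
// Accumulate per-name totals from hypothetical timeline entries.
const totals = {};
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    totals[entry.name] = (totals[entry.name] || 0) + entry.duration;
  }
});
observer.observe({ entryTypes: ['style_recalc'] }); // hypothetical entry type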
Comment 6•9 years ago
I wonder, would it be acceptable to disable :visited entirely if the API was used?
Comment 7•9 years ago
(In reply to Olli Pettay [:smaug] from comment #5)
> Oh, you'd want some total number for various different operations? So, not
> PerformanceObserver but something simpler?
> Though, I guess one could implement the former using the latter.
> PerformanceObserver could have entries like
>
> {
> name: "style_recalc"
> ...
> duration: 0.01
> }
> Would that work?
Yeah, that'd be fine. The only reason I suggested the counter approach is that it might be more suitable as an always-on API. For example, parsing/JITing might be extremely short-duration events; we wouldn't want an entry for each one.
Maybe a combination of the two would make sense -- create performance observer events if duration > 20 ms, but always keep an up-to-date set of counters.
We'd be OK disabling the whole thing if :visited is used, but I actually think that :visited is only an issue if you expose painting information.
Comment 8•9 years ago
I wasn't proposing to disable the API if :visited was used, but to disable :visited itself.
How does that sound to you?
Comment 9•9 years ago
Or, do we know that people won't want information about paints ever? If so, we might not need :visited limitations.
Comment 10•9 years ago
We're probably going to want to use this API on every pageview for FB, so we'll have to commit to not using :visited either way.
That said, at least for the time being I think painting is the least of our concerns. Many sites could get by with everything other than painting info.
Comment 11•9 years ago
It is just that it's better to start from a stricter API and loosen the restrictions later if it turns out the API is too strict. Also, it is not clear to me why :visited issues wouldn't apply to restyling too:
if painting exposes :visited, something needs to happen to trigger repaints, and in the case of :visited, which changes colors and such, that something is restyling.
Reporter
Comment 12•9 years ago
Benoit, could you help prioritize the most insightful task categories between frames that developers should be able to read out using an API, with a focus on web apps (not games or such)? Would paint be important, or could it be squished together or even left out, as it doesn't have a huge impact on performance (vs. layout and similar expensive operations)?
Flags: needinfo?(bgirard)
Comment 13•9 years ago
Reading the discussion I think I'm missing some context, but from what I can gather:
- The important categories are what our devtools (and similarly all major devtools) expose: 1) Reflow 2) Restyle 3) Paint/Rasterize 4) Script 5) requestAnimationFrame/Refresh Observers.
- Feels like what is being discussed here has overlap with https://www.w3.org/TR/performance-timeline/ and https://www.w3.org/TR/navigation-timing-2/. If we can swing some resources (and wait a bit longer) we could take the time to design a proper, cross-vendor spec. I'd love to drive this but I can't jump on it immediately.
If we want to get something fast, I'd say we expose our timeline via a privileged API and provide a sample add-on that can load a page, wait for some trigger, and report the results.
Flags: needinfo?(bgirard)
Comment 14•9 years ago
(In reply to Benoit Girard (:BenWa) from comment #13)
> Reading the discussion I think I'm missing context in this discussion but
> from what I can gather:
> - The important categories are what our devtools (and similarly all major
> devtools) expose: 1) Reflow 2) Restyle 3) Paint/Rasterize 4) Script 5)
> requestAnimationFrame/Refresh Observers.
> - Feels like what is being discussed here has overlap with
> https://www.w3.org/TR/performance-timeline/ and
> https://www.w3.org/TR/navigation-timing-2/. If we can swing some resources
> (and wait a bit longer) we could take the time to design a proper spec and
> build a good spec cross-vendor spec. I'd love to drive this I can't jump on
> this immediately.
Here's my worry -- if we try to make a spec out of this I think we'll end up bikeshedding over how a browser vendor could implement each one of these events. For example, "What happens if a browser runs stylesheet recalculation in parallel with JS?"
What I'd like to try to do here is to focus on creating an API that allows browser vendors to expose performance information that may not make sense in a standardized way over time. Once we create this API I could imagine that multiple vendors might discover that, for the time being, their browsers are similar enough that they might actually have similar metrics. Maybe we could figure out a way to let vendors agree on a way to formalize these shared definitions while still not forcing them to commit to metrics that constrain their internal design.
I see this API as a debugging API. As a consumer of the API I am far more interested in vendors' ability to rapidly expose information than I am in the consistency of information between vendors. I think that people are currently afraid of exposing information that might prevent them from making future optimizations. I'd prefer an API that has information that is relevant today even if it might be gone or changed tomorrow.
Comment 15•9 years ago
(In reply to Ben Maurer from comment #14)
> (In reply to Benoit Girard (:BenWa) from comment #13)
> > Reading the discussion I think I'm missing context in this discussion but
> > from what I can gather:
> > - The important categories are what our devtools (and similarly all major
> > devtools) expose: 1) Reflow 2) Restyle 3) Paint/Rasterize 4) Script 5)
> > requestAnimationFrame/Refresh Observers.
> > - Feels like what is being discussed here has overlap with
> > https://www.w3.org/TR/performance-timeline/ and
> > https://www.w3.org/TR/navigation-timing-2/. If we can swing some resources
> > (and wait a bit longer) we could take the time to design a proper spec and
> > build a good spec cross-vendor spec. I'd love to drive this I can't jump on
> > this immediately.
>
> Here's my worry -- if we try to make a spec out of this I think we'll end up
> bikeshedding on any possible implementation of how a browser vendor could
> implement each one of these events. For example "What happens if a browser
> runs stylesheet recalculation in parallel with JS".
>
From my experience it has worked out well in practice, and often the spec discussion resolves useful issues, particularly for something like this which, in my mind, is an overdue spec. All the devtools timelines are nearly identical, so that's a sign that this is very mature and all the major vendors have already converged on the same thing. But point taken.
> What I'd like to try to do here is to focus on creating an API that allows
> browser vendors to expose performance information that may not make sense in
> a standardized way over time. Once we create this API I could imagine that
> multiple vendors might discover that, for the time being, their browsers are
> similar enough that they might actually have similar metrics. Maybe we could
> figure out a way to let vendors agree on a way to formalize these shared
> definitions while still not forcing them to commit to metrics that constrain
> their internal design.
As a starting point what about exposing the devtools timeline programmatically? This is already available via privileged APIs [1]. I'd like to hear if you've used this data manually from devtools before and if it's been useful. If not I'd be interested in hearing in what way that data is lacking. Perhaps it's lacking in the GC/Memory use case still. In particular it should be useful to make sure that no frames are taking more than 16ms and to make sure that reflow/restyle are not occurring when they should not.
I'd suggest, as a starting point, considering building an add-on to expose this privileged API to your domain. You can quickly iterate and update such an add-on in under an hour. You get access to a lot of powerful privileged APIs that can't trivially be exposed to web content for various reasons. If you get enough users to opt in you get a good, but biased, sample.
It is something that we should consider exposing to unprivileged web content, but this API would make a lot of timing attacks even more trivial than they already are (and not just :visited). I'll defer to others on this.
It would be fairly easy to expose sampling information inside the timeline. This would provide very complete, low-level, Gecko-specific data. We do this for our Talos regression tests. But I think we're just looking for simple regression triggers, so that's probably not the right thing to look at yet.
> I'd prefer an API that has information that is relevant today even if it might be gone or changed tomorrow.
The data exposed by the timeline isn't something that I see really changing in the foreseeable future. With certain browser features, say off-main-thread rasterization, it's possible that on some browsers and platforms some phases won't happen on the content thread and thus will effectively disappear from the timeline.
[1] https://dxr.mozilla.org/mozilla-central/source/docshell/base/nsIDocShell.idl#720
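A rough sketch of how privileged (chrome) code in such an add-on might drive the timeline markers referenced in [1]; the attribute and method names are taken from that nsIDocShell interface as it existed at the time, and the exact shape of the returned marker objects is an assumption:
// Privileged code only: docShell is the nsIDocShell of the content window.
docShell.recordProfileTimelineMarkers = true;
// ...later, after the interesting work has happened...
const markers = docShell.popProfileTimelineMarkers();
for (const m of markers) {
  // Each marker is assumed to carry a name plus start/end timestamps.
  console.log(m.name, (m.end - m.start).toFixed(2) + ' ms');
}
docShell.recordProfileTimelineMarkers = false;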
Comment 16•9 years ago
(In reply to Benoit Girard (:BenWa) from comment #15)
> (In reply to Ben Maurer from comment #14)
> > (In reply to Benoit Girard (:BenWa) from comment #13)
> > > Reading the discussion I think I'm missing context in this discussion but
> > > from what I can gather:
> > > - The important categories are what our devtools (and similarly all major
> > > devtools) expose: 1) Reflow 2) Restyle 3) Paint/Rasterize 4) Script 5)
> > > requestAnimationFrame/Refresh Observers.
> > > - Feels like what is being discussed here has overlap with
> > > https://www.w3.org/TR/performance-timeline/ and
> > > https://www.w3.org/TR/navigation-timing-2/. If we can swing some resources
> > > (and wait a bit longer) we could take the time to design a proper spec and
> > > build a good spec cross-vendor spec. I'd love to drive this I can't jump on
> > > this immediately.
> >
> > Here's my worry -- if we try to make a spec out of this I think we'll end up
> > bikeshedding on any possible implementation of how a browser vendor could
> > implement each one of these events. For example "What happens if a browser
> > runs stylesheet recalculation in parallel with JS".
> >
>
> From my experience in practice it has worked out well and often times the
> spec discussion resolve useful issues. Particularly for something like this
> which, in mind, is an overdue spec. All the devtools timeline are nearly
> identical so that's a sign that this is very mature and all the major
> vendors have already converged on the same thing. But point taken.
Yeah, I totally agree that right now all the browser vendors would likely have similar stats. That said, I do think there are still cases of browser specific issues. As an example,
>
> > What I'd like to try to do here is to focus on creating an API that allows
> > browser vendors to expose performance information that may not make sense in
> > a standardized way over time. Once we create this API I could imagine that
> > multiple vendors might discover that, for the time being, their browsers are
> > similar enough that they might actually have similar metrics. Maybe we could
> > figure out a way to let vendors agree on a way to formalize these shared
> > definitions while still not forcing them to commit to metrics that constrain
> > their internal design.
>
> As a starting point what about exposing the devtools timeline
> programmatically? This is already available via privileged APIs [1]. I'd
> like to hear if you've used this data manually from devtools before and if
> it's been useful. If not I'd be interested in hearing in what way that data
> is lacking. Perhaps it's lacking in the GC/Memory use case still. In
> particular it should be useful to make sure that no frames are taking more
> than 16ms and to make sure that reflow/restyle are not occurring when they
> should not.
>
> I'd suggest, as a starting point, to consider building an add-on to expose
> this privilege API to your domain. You can quickly iterate and update this
> add-on in under an hour. You get access to a lot of powerful privilege API
> that can't trivial be exposed to web content for various reason. If you get
> enough users to opt-in you get a good, but biased, sample.
Yeah, we're already using programmatic access to about:tracing in Chrome, so getting the data via a privileged API extension is useful.
That said, our goal is to come up with something we could enable on 100% of hits to FB, something that has very low perf impact. Is devtools low-impact enough that we could enable it everywhere?
> > I'd prefer an API that has information that is relevant today even if it might be gone or changed tomorrow.
>
> The data expose by the timeline isn't something that I see really changing
> within the next foreseeable future. With certain browser feature(s), say
> off-main-thread-rasterization, it's possible that some browsers on some
> platforms some phases wont happen in the content thread and thus will
> effectively disappear from the timeline thread.
Fair enough. But I also want to have the freedom to let browser vendors expose things that might be more detailed. For example, different browsers might have different notions of "time spent JITting code". Maybe one browser would like to expose "we spent N ms in interpreted code, N ms JITting code with the first-level compiler and N ms JITting with the second-level compiler". I'd really like for vendors to be able to provide detailed information like this that may or may not make sense across versions or browsers.
Comment 17•9 years ago
Yes, low level JIT info is very far from something that could be standardized.
> Is devtools low impact enough that we could enable it everywhere.
It depends on the feature(s) and settings. For the timeline the perf impact is completely negligible. For profiling/sampling data there are a lot of trade-offs you can make which give you more and more data at a greater cost. I think that's pretty similar to about:tracing, which has a lot of optional features. You can capture quite a bit of info if you're willing to trade away, say, 5% performance (1 kHz sampling, timeline, JS and native stacks), and still get decent info for 1% overhead (pseudo stack, timeline).
Comment 18•9 years ago
(In reply to Benoit Girard (:BenWa) from comment #15)
> As a starting point what about exposing the devtools timeline
> programmatically? This is already available via privileged APIs [1]. I'd
> like to hear if you've used this data manually from devtools before and if
> it's been useful. If not I'd be interested in hearing in what way that data
> is lacking. Perhaps it's lacking in the GC/Memory use case still. In
> particular it should be useful to make sure that no frames are taking more
> than 16ms and to make sure that reflow/restyle are not occurring when they
> should not.
IIUC the issue is not so much the availability of specific things, but the fact that Facebook is far more interested in doing in-the-wild telemetry than in-house testing.
Comment 19•8 years ago
Would the marked entries http://mxr.mozilla.org/mozilla-central/source/devtools/client/locales/en-US/markers.properties?rev=0c6bacea2396&mark=17-26,33-33,41-46#17 and some generic entry for GC/CC/memory management be OK?
Oh, also some marker for other JS callbacks, like animationFrameCallback.
(I'm not sure about the usefulness of 'Worker' and 'MessagePort', but I guess those could be added too.)
:visited should be disabled when doing any of this. And we need to think about other privacy issues too.
Comment 20•8 years ago
Where are we with this?
Flags: needinfo?(bugs)
Whiteboard: btpp-followup-2016-07-05
Reporter
Comment 21•8 years ago
Ranking the platform tasks that came up so far:
1. JS parsing
2. Memory management (GC/CC)
3. Main thread (+addons)
4. CSS parsing
5. Style/Layout
We can start small: ship this as a trial for a limited set of sites to get feedback, and iterate.
Reporter
Comment 22•7 years ago
Greg, would the actor work solve this use case? Looks like we could provide a simple library that simplifies recording profiles while we implement CDP and match Chrome's API.
Flags: needinfo?(gtatum)
Flags: needinfo?(bugs)
Flags: needinfo?(ben.maurer)
Comment 23•7 years ago
If this is a public-facing API that any arbitrary site can turn on, then I wouldn't conflate it with the DevTools debugging protocol, especially with the way we lazily load DevTools. Our debugging protocol has a lot of complexities and opinions specifically for making it so we can shuttle information to a DevTools panel.
I don't have a large amount of context for this, but that's my initial thought.
Flags: needinfo?(gtatum)
Reporter
Comment 24•7 years ago
For now, the Long Tasks API (v2 more than v1) should probably address this.
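For reference, a minimal sketch of that Long Tasks route, using the standard 'longtask' entry type through PerformanceObserver:
// Long tasks (>50 ms of main-thread work) as a coarse stand-in for the
// per-category timeline data discussed above.
const longTaskObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.attribution gives a rough hint at the responsible frame/container.
    console.log('long task', entry.duration.toFixed(1) + ' ms', entry.attribution);
  }
});
longTaskObserver.observe({ entryTypes: ['longtask'] });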
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → DUPLICATE
Assignee
Updated•6 years ago
Component: DOM → DOM: Core & HTML