Closed Bug 1399465 Opened 4 years ago Closed 4 years ago

Document DAMP

Categories

(DevTools :: Framework, enhancement, P3)


Tracking

(firefox58 fixed)

RESOLVED FIXED
Firefox 58

People

(Reporter: ochameau, Assigned: ochameau)

References

(Blocks 1 open bug)

Details

Attachments

(1 file)

* How to run it locally?
./mach talos-test --activeTests damp
* How to run it on try?
./mach try -b do -p linux64 -u none -t g2-e10s --rebuild-talos 6
* What does it do?
Open the toolbox, reload the debugged document, and close the toolbox — and that, for each tool.
We do this for two documents: a simple one with almost no DOM ("simple") and a copy of bild.de ("complicated").
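The open/reload/close matrix described above can be sketched as a nested loop. This is an illustrative sketch only, not the actual DAMP implementation; the `time()` helper is hypothetical and stands in for the real harness:

```javascript
// Illustrative sketch of the DAMP measurement loop (not the real code).
// Pages and panels mirror the description above; `time()` is a
// hypothetical helper that times one operation and returns its label.
const pages = ["simple", "complicated"];
const panels = ["inspector", "console", "netmonitor", "debugger", "memory", "performance"];

function runDamp(time) {
  const results = [];
  for (const page of pages) {
    for (const panel of panels) {
      // Each of the three operations is timed separately.
      results.push(time(`${page}.${panel}.open`));
      results.push(time(`${page}.${panel}.reload`));
      results.push(time(`${page}.${panel}.close`));
    }
  }
  return results;
}
```

With 2 pages, 6 panels, and 3 operations each, that makes 36 timed measurements per pass.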
* How to see the results from try?
  Look for "T-e10s(+6)", click on "+6", click on "g2", then, in the bottom panel, click on "Compare result against another revision".
  You are now on PerfHerder; click on "Compare".
  Next to the "Talos" select menu, in the filter textbox, type "damp".
  Under the "damp opt e10s" item, mouse over the "linux64" line and click on the "subtests" link.
  You won! You get the results for each DAMP test.


* Where are its sources?
http://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools
Severity: normal → enhancement
Priority: -- → P3
I would also add some documentation about how to interpret the results. Like, what does each DAMP test do? How do you read the results?
* How to see evolution of a given DAMP test on perfherder:
  Get to graphs page, like this and add a test:
  https://treeherder.mozilla.org/perf.html#/graphs?series=mozilla-central,1418016,1,1
* How to understand which changeset introduced a regression on the graphs page?
  https://treeherder.mozilla.org/perf.html#/graphs?series=mozilla-central,1418016,1,1
  Click on a colored circle where you see a spike or a drop; it will show a black popup.
  Then click on the changeset hash like "cb717386aec8" and you will get a Mercurial changelog.
* How do you change code in DAMP?
  Edit the source directly for try.  Production branches require building and signing the checked in XPI file.  (Show steps.)
(In reply to J. Ryan Stinnett [:jryans] (use ni?) from comment #3)
> * How do you change code in DAMP?
>   Edit the source directly for try.  Production branches require building
> and signing the checked in XPI file.  (Show steps.)

See these pages for more info and getting the signature keys:
https://wiki.mozilla.org/EngineeringProductivity/HowTo/SignExtensions#Signing_an_Addon
https://mana.mozilla.org/wiki/display/ateam/Signing+Extensions+in+Tree
  $ cd testing/talos/talos/tests/devtools/addon/
  $ rm *.xpi
  $ zip -r devtools.xpi
  $ jpm sign --api-key xxx --api-secret yyy --xpi devtools.xpi
  $ mv damp*.xpi devtools-signed.xpi
  $ hg/git commit -a (signing the addon doesn't require review, so r=me)
  $ push

This is subject to change as it still relies on jpm, which is an SDK tool.
(In reply to Alexandre Poirot [:ochameau] from comment #4)
>   $ zip -r devtools.xpi

It should be: 
  $ zip -r devtools.xpi .

Also, before $ zip -r devtools.xpi, you should do:
  $ vi install.rdf # And bump the version number "<em:version>0.0.17</em:version>"

And after $ mv damp*.xpi devtools-signed.xpi, you can do:
  $ rm devtools.xpi
With bug 1402426 now fixed, I believe the steps in comment 4 / comment 5 are now outdated.

Is that right?
Flags: needinfo?(poirot.alex)
(In reply to J. Ryan Stinnett [:jryans] (use ni?) from comment #6)
> With bug 1402426 now fixed, I believe the steps in comment 4 / comment 5 are
> now outdated.
> 
> Is that right?

Yes, that is going to simplify working on DAMP *A LOT*!
Flags: needinfo?(poirot.alex)
Blocks: damp
Comment on attachment 8916984 [details]
Bug 1399465 - Document DAMP.

https://reviewboard.mozilla.org/r/188010/#review194894

::: devtools/docs/tests/performance-tests.md:11
(Diff revision 1)
> +## How to run it locally?
> +
> +```bash
> +./mach talos-test --activeTests damp
> +```
> +For now, there is no way to filter precisely which DAMP test to run,

I found it seems possible to toggle subtests here

http://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/damp.html#12

But I haven't had success yet, since I can't run the whole DAMP suite in yesterday's build.
(In reply to Fred Lin [:gasolin] from comment #9)
> Comment on attachment 8916984 [details]
> Bug 1399465 - Document DAMP.
> 
> https://reviewboard.mozilla.org/r/188010/#review194894
> 
> ::: devtools/docs/tests/performance-tests.md:11
> (Diff revision 1)
> > +## How to run it locally?
> > +
> > +```bash
> > +./mach talos-test --activeTests damp
> > +```
> > +For now, there is no way to filter precisely which DAMP test to run,
> 
> I found it seems possible to toggle subtests here
> 
> http://searchfox.org/mozilla-central/source/testing/talos/talos/tests/
> devtools/addon/content/damp.html#12

So yes, you can filter tests by hacking the DAMP codebase, but there is no command line argument to do that.
For example, here, you can comment out all the tests you don't want to run.
FYI I'm working on a command line argument in bug 1401207.
Suggestion from Nicolas: add a paragraph about adding a new DAMP test or modifying an existing one:
* How to track things correctly?
  Wait for all promises and all reflows/renders, i.e. try to wait for every asynchronous event to finish.
* Avoid writing tests that are too slow.
  Remember that each DAMP test is executed 25 times, so avoid tests that run for a long time.
  Maybe there is no value in running a loop for 1 million iterations?
  Maybe 10k is already enough?
  Maybe a document doesn't need 10k elements but only 1k to track inspector markup view efficiency?
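Following that advice, a subtest would await every asynchronous step and keep the workload modest. Here is a minimal sketch, assuming a hypothetical `measure` callback rather than the real DAMP harness API:

```javascript
// Minimal sketch of a well-behaved DAMP subtest (hypothetical API).
// Await every asynchronous step so the timer measures finished work,
// and keep the workload small: each test is executed 25 times.
async function markupViewSubtest(measure) {
  const ITERATIONS = 1000; // 1k elements is often enough to show a trend
  const start = Date.now();
  for (let i = 0; i < ITERATIONS; i++) {
    // Stand-in for a real async operation (a reflow, a promise from a panel, …)
    await Promise.resolve(i);
  }
  // Report a single mean/total duration under a descriptive test name.
  measure("inspector.markupview.example", Date.now() - start);
}
```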
(It would be great to land the general usage documentation first.)
Comment on attachment 8916984 [details]
Bug 1399465 - Document DAMP.

https://reviewboard.mozilla.org/r/188010/#review196702

::: devtools/docs/tests/performance-tests.md:1
(Diff revision 1)
> +# DevTools Performance Tests: DAMP

Please remove DevTools from the title. This is the DevTools documentation. No need to reiterate.

::: devtools/docs/tests/performance-tests.md:3
(Diff revision 1)
> +# DevTools Performance Tests: DAMP
> +
> +DevTools have a special test suite to track its own performances.

Please replace all "performances" in this document with "performance".

::: devtools/docs/tests/performance-tests.md:4
(Diff revision 1)
> +# DevTools Performance Tests: DAMP
> +
> +DevTools have a special test suite to track its own performances.
> +This test suite is called 'DAMP' (DevTools At Maximum Performances).

These two lines are so repetitive and reiterative.

They could be summarised down to "DAMP (DevTools At Maximum Performance) is our test suite to track performance".

::: devtools/docs/tests/performance-tests.md:15
(Diff revision 1)
> +```
> +For now, there is no way to filter precisely which DAMP test to run,
> +so you have to run the whole test suite.
> +Note that the first run is slower as it pulls a large tarball with various website copies.
> +
> +## How to run in on try?

run *it* on try

::: devtools/docs/tests/performance-tests.md:20
(Diff revision 1)
> +## How to run in on try?
> +
> +```bash
> +./mach try -b o -p linux64 -u none -t g2-e10s --rebuild-talos 6
> +```
> +* Linux appear to build and run quickly and offer quite stable results over the other OSes.

"Linux" is a third person. Therefore the verbs should be in the third person (appears, offers).

Also: add a comma after quickly.

::: devtools/docs/tests/performance-tests.md:21
(Diff revision 1)
> +
> +```bash
> +./mach try -b o -p linux64 -u none -t g2-e10s --rebuild-talos 6
> +```
> +* Linux appear to build and run quickly and offer quite stable results over the other OSes.
> +The vast majority of performances issues for DevTools are OS agnostic, so it doesn't really matter on which one you run them.

s/matter on which one you run them/matter which one you run them on/

::: devtools/docs/tests/performance-tests.md:27
(Diff revision 1)
> +* "g2-e10s" is the talos bucket in which we run DAMP.
> +* And 6 is the number of times we run DAMP tests. That's to do averages between all the 6 runs and helps filtering out the noise.
> +
> +## What does it do?
> +
> +DAMP is measures three important operations:

extraneous 'is'

::: devtools/docs/tests/performance-tests.md:31
(Diff revision 1)
> +
> +DAMP is measures three important operations:
> +* Open a toolbox
> +* Reload the web page
> +* Close the toolbox
> +It measures the time it takes to do each of these operations, that, for most panels:

add a \n

::: devtools/docs/tests/performance-tests.md:32
(Diff revision 1)
> +DAMP is measures three important operations:
> +* Open a toolbox
> +* Reload the web page
> +* Close the toolbox
> +It measures the time it takes to do each of these operations, that, for most panels:
> +inspector, console, netmonitor debugger, memory, performance.

That sentence is super awkward (what does ', that, ' add there?)

It would make more sense as "It measures the time it takes to do each of these operations for the following panels: inspector, console, netmonitor debugger, memory, performance."

::: devtools/docs/tests/performance-tests.md:34
(Diff revision 1)
> +* Reload the web page
> +* Close the toolbox
> +It measures the time it takes to do each of these operations, that, for most panels:
> +inspector, console, netmonitor debugger, memory, performance.
> +
> +It run all these three tests two times. Each time against a different web page:

It is a third person. s/run/runs/

::: devtools/docs/tests/performance-tests.md:35
(Diff revision 1)
> +* Close the toolbox
> +It measures the time it takes to do each of these operations, that, for most panels:
> +inspector, console, netmonitor debugger, memory, performance.
> +
> +It run all these three tests two times. Each time against a different web page:
> +* "simple": an empty webpage. This test highlight the performances of all tools against the simplest possible page.

"This test" is a third person. s/highlight/highlights/

::: devtools/docs/tests/performance-tests.md:38
(Diff revision 1)
> +
> +It run all these three tests two times. Each time against a different web page:
> +* "simple": an empty webpage. This test highlight the performances of all tools against the simplest possible page.
> +* "complicated": a copy of bild.de website. This is supposed to represent a typical website to debug via DevTools.
> +
> +Then, there is a couple of extra tests:

"there is" goes with a singular object: There is a cat

If you have a plural amount of things being there, like "extra tests", you need to use "there are":

*There are* a couple of extra tests

::: devtools/docs/tests/performance-tests.md:40
(Diff revision 1)
> +* "simple": an empty webpage. This test highlight the performances of all tools against the simplest possible page.
> +* "complicated": a copy of bild.de website. This is supposed to represent a typical website to debug via DevTools.
> +
> +Then, there is a couple of extra tests:
> +* "cold": we run the three operations (open toolbox, page reload and close toolbox) first with the inspector.
> +It is run first after Firefox startup. Before any other test.

This sounds so robotic. It'd sound more natural as

"This is run first after Firefox's startup, before any other test".

::: devtools/docs/tests/performance-tests.md:41
(Diff revision 1)
> +* "complicated": a copy of bild.de website. This is supposed to represent a typical website to debug via DevTools.
> +
> +Then, there is a couple of extra tests:
> +* "cold": we run the three operations (open toolbox, page reload and close toolbox) first with the inspector.
> +It is run first after Firefox startup. Before any other test.
> +This test allows to measure "cold startup". When user first interacts with DevTools, many resources are loaded and cached,

Please remember to use articles, otherwise you sound like a robot again:

s/measure/measure a/
s/When user/When a user/

::: devtools/docs/tests/performance-tests.md:43
(Diff revision 1)
> +Then, there is a couple of extra tests:
> +* "cold": we run the three operations (open toolbox, page reload and close toolbox) first with the inspector.
> +It is run first after Firefox startup. Before any other test.
> +This test allows to measure "cold startup". When user first interacts with DevTools, many resources are loaded and cached,
> +so that all next interactions will be significantly faster.
> +* "consoleBulkLogging": call `console.log()` 10 times in a row and see how long it takes to update the Web Console.

Use `console` (back ticks) instead of "console" (double quotes) to refer to code

Also, third person issue again:
s/call/calls/

For consistency: s/Web Console/Console panel/
otherwise it sounds as if you're talking about another panel

::: devtools/docs/tests/performance-tests.md:44
(Diff revision 1)
> +* "cold": we run the three operations (open toolbox, page reload and close toolbox) first with the inspector.
> +It is run first after Firefox startup. Before any other test.
> +This test allows to measure "cold startup". When user first interacts with DevTools, many resources are loaded and cached,
> +so that all next interactions will be significantly faster.
> +* "consoleBulkLogging": call `console.log()` 10 times in a row and see how long it takes to update the Web Console.
> +* "consoleStreamLogging": call `console.log()` 100 times, but wait for a requestAnimationFrame between each call,

as above, please use back ticks on methods here and also on requestAnimationFrame

::: devtools/docs/tests/performance-tests.md:45
(Diff revision 1)
> +It is run first after Firefox startup. Before any other test.
> +This test allows to measure "cold startup". When user first interacts with DevTools, many resources are loaded and cached,
> +so that all next interactions will be significantly faster.
> +* "consoleBulkLogging": call `console.log()` 10 times in a row and see how long it takes to update the Web Console.
> +* "consoleStreamLogging": call `console.log()` 100 times, but wait for a requestAnimationFrame between each call,
> +then record the meantime the Web Console takes to display all the log messages.

s/meantime/mean time/
s/Web Console/Console panel/

::: devtools/docs/tests/performance-tests.md:49
(Diff revision 1)
> +* "consoleStreamLogging": call `console.log()` 100 times, but wait for a requestAnimationFrame between each call,
> +then record the meantime the Web Console takes to display all the log messages.
> +
> +## How to see the results from try?
> +
> +Looks for "T-e10s(+6)", click on "+6", click on "g2", then on the bottomn panel, click on "Compare result against another revision".

s/looks/look

"look" = imperative, giving instructions to the reader

::: devtools/docs/tests/performance-tests.md:53
(Diff revision 1)
> +
> +Looks for "T-e10s(+6)", click on "+6", click on "g2", then on the bottomn panel, click on "Compare result against another revision".
> +You are now on PerfHerder, now click on "Compare",
> +Next to "Talos" select menu, in the filter textbox, type "damp",
> +under "damp opt e10s" item, mouse over the "linux64" line, click on "subtests" link
> +You won! You get the results for each DAMP test.

I think this 'you won' is out of place here ;)

Also how do you read the results? This section could also benefit from screenshots; could you file a follow up bug?

::: devtools/docs/tests/performance-tests.md:55
(Diff revision 1)
> +You are now on PerfHerder, now click on "Compare",
> +Next to "Talos" select menu, in the filter textbox, type "damp",
> +under "damp opt e10s" item, mouse over the "linux64" line, click on "subtests" link
> +You won! You get the results for each DAMP test.
> +
> +## More information on DAMP / how to contribute to it?

Remove 'More information on DAMP /' and use only 'How to contribute to DAMP'

::: devtools/docs/tests/performance-tests.md:57
(Diff revision 1)
> +under "damp opt e10s" item, mouse over the "linux64" line, click on "subtests" link
> +You won! You get the results for each DAMP test.
> +
> +## More information on DAMP / how to contribute to it?
> +
> +DAMP is based on top of more generic test suite called [Talos](https://wiki.mozilla.org/Buildbot/Talos).

ARTICLES!!

s/of more/of a more/

::: devtools/docs/tests/performance-tests.md:59
(Diff revision 1)
> +
> +## More information on DAMP / how to contribute to it?
> +
> +DAMP is based on top of more generic test suite called [Talos](https://wiki.mozilla.org/Buildbot/Talos).
> +Talos is a Mozilla test suite to follow all Firefox components performance.
> +It is python based and its sources live in mozilla-central, [here](https://dxr.mozilla.org/mozilla-central/source/testing/talos/).

s/python based/written in Python/
Also you can make the sentence WAY shorter:

It is written in Python, here are [the sources](https://dxr.mozilla.org/mozilla-central/source/testing/talos/).

::: devtools/docs/tests/performance-tests.md:60
(Diff revision 1)
> +## More information on DAMP / how to contribute to it?
> +
> +DAMP is based on top of more generic test suite called [Talos](https://wiki.mozilla.org/Buildbot/Talos).
> +Talos is a Mozilla test suite to follow all Firefox components performance.
> +It is python based and its sources live in mozilla-central, [here](https://dxr.mozilla.org/mozilla-central/source/testing/talos/).
> +Compared the other test suites, it isn't run on the cloud, but on dedicated hardware.

Compare needs 'to'.
s/Compare/Compared to/

::: devtools/docs/tests/performance-tests.md:61
(Diff revision 1)
> +
> +DAMP is based on top of more generic test suite called [Talos](https://wiki.mozilla.org/Buildbot/Talos).
> +Talos is a Mozilla test suite to follow all Firefox components performance.
> +It is python based and its sources live in mozilla-central, [here](https://dxr.mozilla.org/mozilla-central/source/testing/talos/).
> +Compared the other test suites, it isn't run on the cloud, but on dedicated hardware.
> +That, to ensure performance numbers are stable over time and between two runs.

"That, " sounds very awkward (it's a literal translation maybe?)

s/That, /This is/

makes it sound more English :)

::: devtools/docs/tests/performance-tests.md:62
(Diff revision 1)
> +DAMP is based on top of more generic test suite called [Talos](https://wiki.mozilla.org/Buildbot/Talos).
> +Talos is a Mozilla test suite to follow all Firefox components performance.
> +It is python based and its sources live in mozilla-central, [here](https://dxr.mozilla.org/mozilla-central/source/testing/talos/).
> +Compared the other test suites, it isn't run on the cloud, but on dedicated hardware.
> +That, to ensure performance numbers are stable over time and between two runs.
> +So DAMP is based on Talos, Talos has various type of tests.

Do not reiterate. We already know that DAMP is based on Talos. You already said that on the first paragraph.

So you can remove it: s/So DAMP is based on Talos, //

And then rewrite the section as:

Talos runs various types of tests. More specifically, DAMP is a [Page loader test](https://wiki.mozilla.org/Buildbot/Talos/Tests#Page_Load_Tests).

The [source code](link) for DAMP is also in mozilla-central. [The main script](link) contains the implementation of all the tests described in "What does it do?" section.

::: devtools/docs/tests/performance-tests.md:69
(Diff revision 1)
> +DAMP sources are also on mozilla-central, [here](http://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/).
> +The main script lives [here](http://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/damp.js) and contains the implementation of all the tests described in "What does it do?" paragraph.
> +
> +## How to see the performance trends?
> +
> +Here is a couple of links to track performances of each tool:

Do not confuse the reader. s/tool/panel/

::: devtools/docs/tests/performance-tests.md:86
(Diff revision 1)
> +  * [Simple](https://treeherder.mozilla.org/perf.html#/graphs?timerange=5184000&series=mozilla-central,1417996,1,1&series=mozilla-central,1417992,1,1&series=mozilla-central,1417994,1,1&series=mozilla-central,1470289,1,1)
> +  * [Complicated](https://treeherder.mozilla.org/perf.html#/graphs?timerange=5184000&series=mozilla-central,1418041,1,1&series=mozilla-central,1418039,1,1&series=mozilla-central,1418040,1,1&series=mozilla-central,1470290,1,1)
> +
> +On these graphs, each circle is a push on mozilla-central.
> +When you see a spike or a drop, you can try to identify the patch that relates to it by clicking the circles.
> +It will should a black popup. Then click o the changeset hash like "cb717386aec8" and you will get a mercurial changelog.

s/should/show/
s/click o/click on/

::: devtools/docs/tests/performance-tests.md:87
(Diff revision 1)
> +  * [Complicated](https://treeherder.mozilla.org/perf.html#/graphs?timerange=5184000&series=mozilla-central,1418041,1,1&series=mozilla-central,1418039,1,1&series=mozilla-central,1418040,1,1&series=mozilla-central,1470290,1,1)
> +
> +On these graphs, each circle is a push on mozilla-central.
> +When you see a spike or a drop, you can try to identify the patch that relates to it by clicking the circles.
> +It will should a black popup. Then click o the changeset hash like "cb717386aec8" and you will get a mercurial changelog.
> +Then it is up to you to read the changelog and see which changeset may have hit the performances.

It would be really useful if you provided a changeset that regressed, and point to what caused the performance regression, because it's the hardest part of this work and you just glossed over it.
Attachment #8916984 - Flags: review?(spenades) → review-
(In reply to Soledad Penades [:sole] [:spenades] from comment #12)
> Comment on attachment 8916984 [details]
> Bug 1399465 - Document DAMP.
> 
> https://reviewboard.mozilla.org/r/188010/#review196702

Thanks for this thoughtful review!

> ::: devtools/docs/tests/performance-tests.md:43
> (Diff revision 1)
> > +Then, there is a couple of extra tests:
> > +* "cold": we run the three operations (open toolbox, page reload and close toolbox) first with the inspector.
> > +It is run first after Firefox startup. Before any other test.
> > +This test allows to measure "cold startup". When user first interacts with DevTools, many resources are loaded and cached,
> > +so that all next interactions will be significantly faster.
> > +* "consoleBulkLogging": call `console.log()` 10 times in a row and see how long it takes to update the Web Console.
> 
> Use `console` (back ticks) instead of "console" (double quotes) to refer to
> code

I've kept the double quotes for "consoleBulkLogging", this is a test name, not actual code.
Otherwise we would have to also use back ticks for "cold".

> ::: devtools/docs/tests/performance-tests.md:53
> (Diff revision 1)
> > +
> > +Looks for "T-e10s(+6)", click on "+6", click on "g2", then on the bottomn panel, click on "Compare result against another revision".
> > +You are now on PerfHerder, now click on "Compare",
> > +Next to "Talos" select menu, in the filter textbox, type "damp",
> > +under "damp opt e10s" item, mouse over the "linux64" line, click on "subtests" link
> > +You won! You get the results for each DAMP test.
> 
> Also how do you read the results? This section could also benefit from
> screenshots; could you file a follow up bug?

Added screenshots to the new patch.

> ::: devtools/docs/tests/performance-tests.md:87
> (Diff revision 1)
> > +  * [Complicated](https://treeherder.mozilla.org/perf.html#/graphs?timerange=5184000&series=mozilla-central,1418041,1,1&series=mozilla-central,1418039,1,1&series=mozilla-central,1418040,1,1&series=mozilla-central,1470290,1,1)
> > +
> > +On these graphs, each circle is a push on mozilla-central.
> > +When you see a spike or a drop, you can try to identify the patch that relates to it by clicking the circles.
> > +It will should a black popup. Then click o the changeset hash like "cb717386aec8" and you will get a mercurial changelog.
> > +Then it is up to you to read the changelog and see which changeset may have hit the performances.
> 
> It would be really useful if you provided a changeset that regressed, and
> point to what caused the performance regression, because it's the hardest
> part of this work and you just glossed over it.

Also added screenshots with a new example paragraph.
Assignee: nobody → poirot.alex
Attachment #8916984 - Flags: review?(nchevobbe)
Comment on attachment 8916984 [details]
Bug 1399465 - Document DAMP.

https://reviewboard.mozilla.org/r/188010/#review198472

I have some questions and comments before r+, but overall this is really nice.
It would have helped me a lot when I started :)

::: devtools/docs/tests/performance-tests.md:10
(Diff revision 2)
> +For now, there is no way to filter precisely which DAMP test to run,
> +so you have to run the whole test suite.

I guess this is not true anymore with --subtests ?

::: devtools/docs/tests/performance-tests.md:43
(Diff revision 2)
> +* "consoleBulkLogging": calls `console.log()` 10 times in a row and see how long it takes to update the Console Panel.
> +* "consoleStreamLogging": calls `console.log()` 100 times, but wait for a `requestAnimationFrame` between each call,
> +then record the mean time the Console Panel takes to display all the log messages.

There are also the new ones (console expand, and inspector mutations).
Should we keep track of all the tests we'll add here ? Or could we say that other, panel-specific tests, can be added ?

::: devtools/docs/tests/performance-tests.md:48
(Diff revision 2)
> +
> +Look for "T-e10s(+6)", click on "+6", then click on "g2":
> +![TreeHerder jobs](perfherder-g2.png)

could we add a step 1 which says to go to treeherder? The link should be printed in the terminal when running mach try

::: devtools/docs/tests/performance-tests.md:55
(Diff revision 2)
> +![TreeHerder jobs](perfherder-g2.png)
> +
> +On the bottom panel that just opened, click on "Compare result against another revision".
> +![TreeHerder panel](perfherder-compare-link.png)
> +
> +You are now on PerfHerder, now click on "Compare",

we repeat "now" twice, we can have: 
You are now on PerfHerder, click on "Compare",

::: devtools/docs/tests/performance-tests.md:62
(Diff revision 2)
> +And here you get the results for each DAMP test:
> +![PerfHerder subtests](perfherder-subtests.png)
> +

Maybe add a comment that tests can be filtered by their name in the UI? Not sure about that.

Should we mention what "show important results" does?

Also, maybe we could add a small explanation of what each column represents (e.g. explain what the variance means, …)

::: devtools/docs/tests/performance-tests.md:78
(Diff revision 2)
> +Here is a couple of links to track performance of each panel:
> +* Inspector:
> +  * [Cold](https://treeherder.mozilla.org/perf.html#/graphs?timerange=5184000&series=mozilla-central,1556628,1,1)
> +  * [Simple](https://treeherder.mozilla.org/perf.html#/graphs?timerange=5184000&series=mozilla-central,1417971,1,1&series=mozilla-central,1417969,1,1&series=mozilla-central,1417966,1,1)
> +  * [Complicated](https://treeherder.mozilla.org/perf.html#/graphs?timerange=5184000&series=mozilla-central,1418016,1,1&series=mozilla-central,1418020,1,1&series=mozilla-central,1418018,1,1)
> +* Console:
> +  * [Simple](https://treeherder.mozilla.org/perf.html#/graphs?timerange=5184000&series=mozilla-central,1417964,1,1&series=mozilla-central,1417960,1,1&series=mozilla-central,1417962,1,1)
> +  * [Complicated](https://treeherder.mozilla.org/perf.html#/graphs?timerange=5184000&series=mozilla-central,1418014,1,1&series=mozilla-central,1418010,1,1&series=mozilla-central,1418012,1,1)
> +* Debugger:
> +  * [Simple](https://treeherder.mozilla.org/perf.html#/graphs?timerange=5184000&series=mozilla-central,1417977,1,1&series=mozilla-central,1417973,1,1&series=mozilla-central,1417975,1,1)
> +  * [Complicated](https://treeherder.mozilla.org/perf.html#/graphs?timerange=5184000&series=mozilla-central,1418026,1,1&series=mozilla-central,1418022,1,1&series=mozilla-central,1418024,1,1)
> +* Netmonitor:
> +  * [Simple](https://treeherder.mozilla.org/perf.html#/graphs?timerange=5184000&series=mozilla-central,1417996,1,1&series=mozilla-central,1417992,1,1&series=mozilla-central,1417994,1,1&series=mozilla-central,1470289,1,1)
> +  * [Complicated](https://treeherder.mozilla.org/perf.html#/graphs?timerange=5184000&series=mozilla-central,1418041,1,1&series=mozilla-central,1418039,1,1&series=mozilla-central,1418040,1,1&series=mozilla-central,1470290,1,1)

I see a timerange in the URLs — is it showing the last week (or month)?
Could we add it, like:
Here is a couple of links to track performance of each panel over the last month/week

::: devtools/docs/tests/performance-tests.md:102
(Diff revision 2)
> +
> +For example, open [this page](https://treeherder.mozilla.org/perf.html#/graphs?timerange=31536000&series=mozilla-central,1417969,1,1&series=mozilla-central,1417971,1,1&series=mozilla-central,1417966,1,1&highlightedRevisions=a06f92099a5d&zoom=1482734645161.3916,1483610598216.4773,594.756508587898,969.2883437938906).
> +This is tracking inspector opening performance against the "Simple" page.
> +![Perfherder graphs](regression-graph.png)
> +
> +See the regression on Dec 31th?

s/31th?/31th ?
Attachment #8916984 - Flags: review?(nchevobbe) → review-
(In reply to Nicolas Chevobbe [:nchevobbe] from comment #15)
> Comment on attachment 8916984 [details]
> Bug 1399465 - Document DAMP.
> 
> https://reviewboard.mozilla.org/r/188010/#review198472
> 
> I have some questions and comments before r+, but overall this is really
> nice.
> It would have helped me a lot when I started :)
> 
> ::: devtools/docs/tests/performance-tests.md:10
> (Diff revision 2)
> > +For now, there is no way to filter precisely which DAMP test to run,
> > +so you have to run the whole test suite.
> 
> I guess this is not true anymore with --subtests ?

Updated.

> ::: devtools/docs/tests/performance-tests.md:43
> (Diff revision 2)
> > +* "consoleBulkLogging": calls `console.log()` 10 times in a row and see how long it takes to update the Console Panel.
> > +* "consoleStreamLogging": calls `console.log()` 100 times, but wait for a `requestAnimationFrame` between each call,
> > +then record the mean time the Console Panel takes to display all the log messages.
> 
> There are also the new ones (console expand, and inspector mutations).
> Should we keep track of all the tests we'll add here ? Or could we say that
> other, panel-specific tests, can be added ?

No, we can't maintain details for all tests. I'm expecting the list of tests to grow drastically.
But I think we should at least document the open/reload/close ones, which should be the base,
and also the special "cold" test. Then maybe I should strip the console ones and only mention that there are others, specific to one particular feature of a given panel?

> ::: devtools/docs/tests/performance-tests.md:48
> (Diff revision 2)
> > +
> > +Look for "T-e10s(+6)", click on "+6", then click on "g2":
> > +![TreeHerder jobs](perfherder-g2.png)
> 
> could we add a step 1 which says to go to treeherder, th elink should be
> printed in terminal when running mach try

Done

> ::: devtools/docs/tests/performance-tests.md:62
> (Diff revision 2)
> > +And here you get the results for each DAMP test:
> > +![PerfHerder subtests](perfherder-subtests.png)
> > +
> 
> Maybe add a comment that tests can be filtered by their name in the UI ? Not
> sure about that.

Done

> Should we mention what "show important results" do ?

Done

> Also, maybe we could add a small explanation of what each column represent
> (e.g. explain what the variance mean, …)

Done

> i see a timerange in the urls, is it showing the last week (or month) ? 
> Could we add it like : 
> Here is a couple of links to track performance of each panel over the last
> month/week

Last 60 days. The default is 14 days, but with only 14 days you don't always see a meaningful trend.
See for example this one:
https://treeherder.mozilla.org/perf.html#/graphs?series=mozilla-central,1556628,1,1

> > +See the regression on Dec 31th?
> 
> s/31th?/31th ?

There is a space before the question mark in French, but not in English:
https://simple.wikipedia.org/wiki/Question_mark
Attachment #8916984 - Flags: review?(spenades)
Comment on attachment 8916984 [details]
Bug 1399465 - Document DAMP.

https://reviewboard.mozilla.org/r/188008/#review198822

::: devtools/docs/tests/performance-tests.md:21
(Diff revision 3)
> +
> +
> +## How to run it on try?
> +
> +```bash
> +./mach try -b o -p linux64 -u none -t g2-e10s --rebuild-talos 6

might be helpful to add "or use MozReview to push the patch to Bugzilla and use MozReview's `Trigger a Try Build` function with the following command:
`try: -b o -p linux64 -u none -t g2-e10s --rebuild-talos 6`"
> No, we can't maintain details for all tests. I'm expecting the list of tests to grow drastically.
> But I think we should at least document the open/reload/close ones, which are the base.
> And also the special "cold" test. Then maybe I should strip the console ones and only mention that there are others, each specific to one particular feature of a given panel?

Having the default ones makes sense. And yes, I think you can remove the console ones and say that there are other, panel-specific tests.
Comment on attachment 8916984 [details]
Bug 1399465 - Document DAMP.

https://reviewboard.mozilla.org/r/188010/#review198944

Only a minor comment, but this is (very) good to land. Thanks a lot for doing that Alex :)

::: devtools/docs/tests/performance-tests.md:15
(Diff revision 3)
> +Note that the first run is slower as it pulls a large tarball with various website copies.
> +This will run all DAMP tests, you can filter by test name with:
> +```bash
> +./mach talos-test --activeTests damp --subtests console
> +```
> +This command will run all tests related to the console.

s/This command will run all tests related to the console./This command will run all tests which contain "console" in their name.

This makes it more obvious how tests are filtered (i.e. there is no registry of allowed subtest keywords)
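The filtering behavior described here can be illustrated with a small sketch. This is not the actual Talos code, and the test names below are hypothetical examples, but it shows the plain substring matching that `--subtests` applies:

```python
# Illustrative sketch of how --subtests filters DAMP tests: a plain
# substring match against each test name, with no registry of keywords.
# The test names here are hypothetical examples, not real DAMP names.
damp_tests = [
    "simple.webconsole.open.DAMP",
    "simple.inspector.open.DAMP",
    "complicated.webconsole.reload.DAMP",
]

def filter_subtests(tests, keyword):
    # Keep any test whose name contains the keyword anywhere.
    return [name for name in tests if keyword in name]

print(filter_subtests(damp_tests, "console"))
# → ['simple.webconsole.open.DAMP', 'complicated.webconsole.reload.DAMP']
```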

::: devtools/docs/tests/performance-tests.md:51
(Diff revision 3)
> +First, open TreeHerder. A link is displayed in your console when executing `./mach try`.
> +You should also receive a mail with a link to it.

great

::: devtools/docs/tests/performance-tests.md:71
(Diff revision 3)
> +This table has the following columns:
> +* Base:
> +  Average time it took to run the test on the base build (by default, the last 2 days of DAMP runs on mozilla-central revisions)
> +* New:
> +  Average time it took to run the test on the new build, the one with your patches.
> +  Both "Base" and "New" have a "± x.xx%" suffix which tells you the variance of the timings.
> +  i.e. the average difference in percent between the median timing and both the slowest and the fastest.
> +* Delta:
> +  Difference in percent between the base and new runs.
> +  The color of this can be red, orange or green:
> +   * Red means "certainly regressing"
> +   * Orange means "possibly regressing"
> +   * Green means "certainly improving"
> +   * No colored background means "nothing to conclude"
> +  The difference between certainly and possibly is explained by the next column.
> +* Confidence:
> +  If there is a significant difference between the two runs, this tells you whether the result is trustworthy.
> +   * "low" either means there isn't a significant difference between the two runs, or the difference is smaller than the typical variance of the given test.
> +   If the test is known to have an execution time varying by 2% between two runs of the same build, and you get a 1% difference between your base and new builds,
> +   the confidence will be low. You really can't make any conclusion.
> +   * "med" means medium confidence and the delta is around the size of the variance. It may highlight a regression, but it can still be justified by the test noise.
> +   * "high" means that this is a high confidence difference. The delta is significantly higher than the typical test variance. A regression is most likely detected.

nice !
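The variance and delta columns described in that diff hunk can be sketched as follows. This is an illustrative interpretation of the column descriptions only, not Perfherder's actual statistics (which also run a significance test to derive the confidence label); the timing values are hypothetical:

```python
from statistics import mean, median

def variance_pct(timings):
    # Average difference in percent between the median timing and
    # both the slowest and the fastest run, as described above.
    med = median(timings)
    return mean([(med - min(timings)) / med,
                 (max(timings) - med) / med]) * 100

def delta_pct(base_timings, new_timings):
    # Difference in percent between the base and new run averages.
    return (mean(new_timings) - mean(base_timings)) / mean(base_timings) * 100

base = [100, 102, 98, 100]  # hypothetical base build timings (ms)
new = [110, 112, 108, 110]  # hypothetical new build timings (ms)
print(round(delta_pct(base, new), 1))  # → 10.0
print(round(variance_pct(base), 1))    # → 2.0
```

With a delta of 10% against a ~2% variance, this example would land in the "high confidence" bucket described in the diff.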
Attachment #8916984 - Flags: review?(nchevobbe) → review+
Attachment #8916984 - Flags: review?(spenades)
https://hg.mozilla.org/mozilla-central/rev/95f3296b1c97
Status: NEW → RESOLVED
Closed: 4 years ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 58
Comment on attachment 8916984 [details]
Bug 1399465 - Document DAMP.

https://reviewboard.mozilla.org/r/188010/#review198832

::: devtools/docs/tests/performance-tests.md:36
(Diff revision 3)
> +* Open a toolbox
> +* Reload the web page
> +* Close the toolbox
> +It measures the time it takes to do each of these operations for the following panels:
> +
> +inspector, console, netmonitor debugger, memory, performance.

nit: missing `,` between `netmonitor` and `debugger`
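The open/reload/close cycle described in that diff hunk can be sketched roughly like this. This is a hypothetical outline of the measurement loop, not the actual Talos harness code; the panel names and callbacks are illustrative:

```python
import time

# Hypothetical panel list; the real harness drives actual DevTools panels.
PANELS = ["inspector", "webconsole", "netmonitor", "jsdebugger",
          "memory", "performance"]

def timed(fn):
    # Return fn's runtime in milliseconds.
    start = time.monotonic()
    fn()
    return (time.monotonic() - start) * 1000

def run_damp(open_toolbox, reload_page, close_toolbox):
    # For each panel: open the toolbox, reload the debugged page,
    # then close the toolbox, recording how long each step takes.
    results = {}
    for panel in PANELS:
        results[panel + ".open"] = timed(lambda: open_toolbox(panel))
        results[panel + ".reload"] = timed(reload_page)
        results[panel + ".close"] = timed(close_toolbox)
    return results
```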

::: devtools/docs/tests/performance-tests.md:20
(Diff revision 4)
> +This command will run all tests which contain "console" in their name.
> +
> +## How to run it on try?
> +
> +```bash
> +./mach try -b o -p linux64 -u none -t g2-e10s --rebuild-talos 6

How about using an artifact build? In most cases we won't touch the platform part, and an artifact build can save a lot of build time.

`./mach try -b o -p linux64 -u none -t g2-e10s --rebuild-talos 6 --artifact`
Blocks: 1415472
Product: Firefox → DevTools
You need to log in before you can comment on or make changes to this bug.