Closed Bug 1415472 Opened 7 years ago Closed 7 years ago

Document how to add a new DAMP test

Categories

(DevTools :: Framework, enhancement, P3)

Tracking

(firefox59 fixed)

RESOLVED FIXED
Firefox 59

People

(Reporter: ochameau, Assigned: ochameau)

References

(Blocks 1 open bug)

Details

Attachments

(1 file)

Adding a new DAMP test is not obvious and should be documented. Here are a couple of subjects to cover:

* Where do the tests live? This is already partly covered by the existing DAMP tests, but it should be more precise: say that damp.html should be modified to add the test name and description, and say where to put the test in damp.js (see the sketch below).

* Always ensure the test regresses if you back out the related performance fix. Writing a performance test is challenging; it is quite easy to end up with an irrelevant one. So always push to try twice:
  - once with just the new DAMP test,
  - and once with the test *and* the related performance fix (or, worst case, a fake regression such as a setTimeout put in the production code we would like to cover).
  Then use PerfHerder to compare the two and ensure there is a significant difference between the two runs.

* Document the "summary bump" issue. When you add a new test, it will increase the "summary" result of DAMP and most likely spawn a performance alert that a sheriff will report.

* Make the test take less than one second, ideally much less. Running performance tests is expensive: we have to run them multiple times and on dedicated hardware, so we should avoid being unnecessarily slow to run.

* Maybe mention that only important interactions should be covered? Since the tests are expensive, maybe we should restrict perf tests to important use cases only?
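To make the damp.js part concrete, here is a minimal sketch of what a standalone subtest could look like. The `runTest()`/`done()` timing helper, `testTeardown` and the `tests` registration object come up in the review below; `testSetup`, the subtest name and the document URL are placeholders, not the actual patch.

```js
// Minimal sketch of a standalone DAMP subtest in damp.js (names are placeholders).
// The same test name must also be declared in damp.html with a short description.
async _myPanelClickTest() {
  // Load a small dedicated document for this subtest.
  await this.testSetup("data:text/html,<p>my test document</p>");

  // `runTest` starts a timer for one precisely named measurement...
  let test = this.runTest("mypanel.click");
  // ... perform (and await) the interaction being measured here ...
  // ...and `done` stops the timer and records the duration.
  test.done();

  await this.testTeardown();
},

// Later, in the `startTest` function, register the subtest under its name:
// tests["mypanel.click"] = this._myPanelClickTest;
```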
Assignee: nobody → poirot.alex
Priority: -- → P3
Comment on attachment 8936247 [details] Bug 1415472 - Document writing new DAMP test. https://reviewboard.mozilla.org/r/207012/#review212944 This is a great start! Lots of comments as you'll see. I also think this could benefit from linking to how to run the tests locally - and possibly clarify if the try runs need any special, simplified syntax. ::: devtools/docs/tests/writing-perf-tests.md:4 (Diff revision 2) > +# Writing new performance test > + > +See [Performance tests (DAMP)](performance-tests.md) for an overall description of our performance tests. > +Here, we will describe how to contribute a new test with an example: track the performance of clicking inside the inspector panel. s/contribute/write ::: devtools/docs/tests/writing-perf-tests.md:6 (Diff revision 2) > +# Writing new performance test > + > +See [Performance tests (DAMP)](performance-tests.md) for an overall description of our performance tests. > +Here, we will describe how to contribute a new test with an example: track the performance of clicking inside the inspector panel. > + > +## Where to write the test? The header would be better as "Where is the code for the test?" otherwise it might sound as if you're explaining which code editor to use :-) ::: devtools/docs/tests/writing-perf-tests.md:8 (Diff revision 2) > +See [Performance tests (DAMP)](performance-tests.md) for an overall description of our performance tests. > +Here, we will describe how to contribute a new test with an example: track the performance of clicking inside the inspector panel. > + > +## Where to write the test? > + > +For now, all the tests live in a single file [damp.js](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js). Can we link without the revision hash? (unless it's intentional and you want to link to that specific moment in time) ::: devtools/docs/tests/writing-perf-tests.md:9 (Diff revision 2) > +Here, we will describe how to contribute a new test with an example: track the performance of clicking inside the inspector panel. > + > +## Where to write the test? > + > +For now, all the tests live in a single file [damp.js](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js). > +First you have to decide which kind of test you want to write. This line is superfluous, since you're explaining immediately the two types of test. Let them decide which one they write once they learn which one is which :-) ::: devtools/docs/tests/writing-perf-tests.md:10 (Diff revision 2) > + > +## Where to write the test? > + > +For now, all the tests live in a single file [damp.js](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js). > +First you have to decide which kind of test you want to write. > +There is two kind of tests. A first set are being run against two documents: s/There is/There are/ (there is --> one object) s/kind/kinds/ (this is a plural) ::: devtools/docs/tests/writing-perf-tests.md:10 (Diff revision 2) > + > +## Where to write the test? > + > +For now, all the tests live in a single file [damp.js](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js). > +First you have to decide which kind of test you want to write. > +There is two kind of tests. 
A first set are being run against two documents: I also don't understand what you mean with "A first set are being run against two documents" - do you mean that the two types of tests are run against the two test documents, or that each type is run against the corresponding test document? ::: devtools/docs/tests/writing-perf-tests.md:11 (Diff revision 2) > +## Where to write the test? > + > +For now, all the tests live in a single file [damp.js](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js). > +First you have to decide which kind of test you want to write. > +There is two kind of tests. A first set are being run against two documents: > + * "Simple", an empty webpage (This one helps highlighting the load time of panels), s/webpage /webpage./ And remove the parenthesis, it'll make this clearer :) ::: devtools/docs/tests/writing-perf-tests.md:12 (Diff revision 2) > + > +For now, all the tests live in a single file [damp.js](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js). > +First you have to decide which kind of test you want to write. > +There is two kind of tests. A first set are being run against two documents: > + * "Simple", an empty webpage (This one helps highlighting the load time of panels), > + * "Complicated", a copy of bild.be, a german newspaper website. Which is supposed to represent a significant website to debug. This one helps seeing the performance of tools updating against significant website contents. s/german/German/ And then I'd write: This allows us to examine the performance of the tools when inspecting complicated, big websites. ::: devtools/docs/tests/writing-perf-tests.md:14 (Diff revision 2) > +First you have to decide which kind of test you want to write. > +There is two kind of tests. A first set are being run against two documents: > + * "Simple", an empty webpage (This one helps highlighting the load time of panels), > + * "Complicated", a copy of bild.be, a german newspaper website. Which is supposed to represent a significant website to debug. This one helps seeing the performance of tools updating against significant website contents. > + > +If you want to execute your test against these two documents, contribute to [this function](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js#563-673). In this big `_getToolLoadingTests` function, there is one method for each tool, so contribute to the method of the tool you want to test. s/If you want to execute your test against these two documents, contribute to/To run your test against these two documents, add it to/ Then add a new line before "In this big `_getToolLoadingTests` function, there is one method for each tool, so contribute to the method of the tool you want to test." Also replace the line with: "Look for the `_getToolLoadingTests` function. There is one method per tool. Since we want to test how long does it take to click on the inspector, we will find the `inspector` method, and add the new test code there, like this:" ::: devtools/docs/tests/writing-perf-tests.md:15 (Diff revision 2) > +There is two kind of tests. A first set are being run against two documents: > + * "Simple", an empty webpage (This one helps highlighting the load time of panels), > + * "Complicated", a copy of bild.be, a german newspaper website. 
Which is supposed to represent a significant website to debug. This one helps seeing the performance of tools updating against significant website contents. > + > +If you want to execute your test against these two documents, contribute to [this function](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js#563-673). In this big `_getToolLoadingTests` function, there is one method for each tool, so contribute to the method of the tool you want to test. > +Regarding the click example, you would contribute to the `_getToolLoadingTests` > `inspector` method: Remove this line (it's explained in the paragraph above) ::: devtools/docs/tests/writing-perf-tests.md:31 (Diff revision 2) > + > + await this.closeToolboxAndLog(label + ".inspector"); > + await this.testTeardown(); > + }, > +``` > +Otherwise, if you have to test against a specific document, you should introduce an independant test function. Like [this one](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js#330-348) or [this other one](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js#350-402). You also have to register the new test function you just introduced [right here](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js#863-864). We will see in next paragraph how to name the test, which is the string used as key in this `tests` object. s/Otherwise, if you have to test/Or if you want to test/ s/independant/independent/ Where it says "right here" can you say which function or place it refers to? This --> "We will see in next paragraph how to name the test, which is the string used as key in this `tests` object." is not good. You don't need to announce what will happen in the next paragraph, because this is not a TV series. Remove the paragraph and move the relevant content (naming) to the naming section. ::: devtools/docs/tests/writing-perf-tests.md:33 (Diff revision 2) > + await this.testTeardown(); > + }, > +``` > +Otherwise, if you have to test against a specific document, you should introduce an independant test function. Like [this one](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js#330-348) or [this other one](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js#350-402). You also have to register the new test function you just introduced [right here](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js#863-864). We will see in next paragraph how to name the test, which is the string used as key in this `tests` object. > + > +For the click example, if you prefer to test against a specific document, you would do something like: Is this another way of 'loading' documents by setting the document with a data:text/html uri? If so, remove the line and say "You could also use extremely simple documents like this:" ::: devtools/docs/tests/writing-perf-tests.md:49 (Diff revision 2) > +... > +startTest(doneCallback, config) { > + ... 
> + // And you have to register the test in `startTest` function > + tests["inspector.click"] = this._inspectorClickTest; > + ... > +} All this indentation is a tad confusing, I don't know if this is part of the same method, or the same object, or what! ::: devtools/docs/tests/writing-perf-tests.md:63 (Diff revision 2) > + > +## How to name your test and register it? > + > +If your are writing a test executing against Simple and Complicated documents, your test name will look like: `(simple|complicated).${tool-name}.${test-name}`. > +So for our example, it would be `simple.inspector.click` and `complicated.inspector.click`. > +Otherwise, for independant test, the name doesn't have any convention other than it should start with tool name if the test is specific to a given tool. s/independant/independent The line would read nicer as "For independent tests that don't use the Simple or Complicated documents, the test name only needs to start with the tool name, if the test is specific to that tool". ::: devtools/docs/tests/writing-perf-tests.md:70 (Diff revision 2) > + > +Once you come up with a name, you will have to register your test [here](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.html#12-42) and [here](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.html#44-71) with a short description of it. > + > +## How to write a performance test? > + > +Now, we saw how to register and where to write a test script, now let's see how to write a performance test. What is "a test script" here? Is it the skeleton we wrote above? ::: devtools/docs/tests/writing-perf-tests.md:71 (Diff revision 2) > +Once you come up with a name, you will have to register your test [here](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.html#12-42) and [here](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.html#44-71) with a short description of it. > + > +## How to write a performance test? > + > +Now, we saw how to register and where to write a test script, now let's see how to write a performance test. > +There is a `runTest` method that helps recording a precise action duration. Remove, this is redundant (the same content is on the next line), and also anticipates things in the wrong order. ::: devtools/docs/tests/writing-perf-tests.md:79 (Diff revision 2) > +``` > +let test = this.runTest("my.test.name"); // `runTest` expects the test name as argument > + > +// Do an action you want to record here > + > +// Once your action is completed, call `runTest` retuned object's `done` method. s/retuned/returned/ ::: devtools/docs/tests/writing-perf-tests.md:83 (Diff revision 2) > +So for our click example it would be: > +``` > +let inspector = toolbox.getPanel("inspector"); > +let window = inspector.panelWin; // Get inspector's panel window object > +let body = window.document.body; > + > +let test = this.runTest("inspector.click"); > +body.addEventListener("click", function () { > + test.done(); > +}, { once: true }); > +body.click(); > +``` Where does this code go? If it is on the "space" we left before, then showing the complete example would help understanding better. 
::: devtools/docs/tests/writing-perf-tests.md:98 (Diff revision 2) > +body.click(); > +``` > + > +## How to write a good performance test? > + > +* Ensure that its results change when regressing/fixing the code or feature you want to watch. You should replace these `*` with level three headers `###` so it is formatted nicely ::: devtools/docs/tests/writing-perf-tests.md:101 (Diff revision 2) > +## How to write a good performance test? > + > +* Ensure that its results change when regressing/fixing the code or feature you want to watch. > + > +If you are writing the new test to cover a recent regression and you have a patch to fix it, > +push your test to try without _and_ with the regression fix. That, to ensure the test reports the fix as an improvement. s/That, /This is to/ (it sounds so not English at all!) ::: devtools/docs/tests/writing-perf-tests.md:102 (Diff revision 2) > + > +* Ensure that its results change when regressing/fixing the code or feature you want to watch. > + > +If you are writing the new test to cover a recent regression and you have a patch to fix it, > +push your test to try without _and_ with the regression fix. That, to ensure the test reports the fix as an improvement. > +If you are introducing the test without any patch to improve the performance, try slowing down the code your are trying to cover with fake slowness by using setTimeout, or very slow for loops iteration over one million elements. Again, to ensure your test would catch a significant regression s/introducing the test/introducing a test/ Also this line is a bit confusing; I can't understand why making the test code slow would help catch regressions. ::: devtools/docs/tests/writing-perf-tests.md:111 (Diff revision 2) > +Running performance test is expensive. We are currently running them 25 times for each changeset landed in Firefox. > +So try to avoid having your new test run in more than 1 second. > + > +* Cover important user interactions > + > +For the same reason, that the performance test are expensive. Avoid testing edge cases, or bundle them into an existing test to prevent slowing overall DAMP tests execution. What about this: s/For the same reason, that the performance test are expensive.../Since running performance tests is expensive, avoid testing edge cases, or bundle them into a more generic test to avoid increasing too much the total DAMP suite running time. Also, I'd really love an example here of edge case that doesn't need to be tested independently, and can be included in a more generic one.
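For what it's worth, the "complete example" the last comment asks for could look roughly like the sketch below, combining the two snippets quoted above inside the `inspector` method of `_getToolLoadingTests`. The `testSetup`/`openToolboxAndLog` calls, the `url`/`label` variables and the exact test name are assumptions based on the surrounding quoted code, not the final patch.

```js
// Sketch of the full "inspector click" subtest inside _getToolLoadingTests
// (helper names and variables are assumed from the quoted diff context).
async inspector() {
  await this.testSetup(url);                            // Simple or Complicated document
  let toolbox = await this.openToolboxAndLog(label + ".inspector", "inspector");

  // Measure a single click inside the inspector panel.
  let inspector = toolbox.getPanel("inspector");
  let window = inspector.panelWin;                      // the inspector panel's window object
  let body = window.document.body;

  let test = this.runTest(label + ".inspector.click");  // e.g. "simple.inspector.click"
  body.addEventListener("click", function () {
    test.done();                                        // stop the timer once the click is handled
  }, { once: true });
  body.click();

  await this.closeToolboxAndLog(label + ".inspector");
  await this.testTeardown();
},
```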
Attachment #8936247 - Flags: review?(spenades) → review-
(In reply to Soledad Penades [:sole] [:spenades] from comment #3) > ::: devtools/docs/tests/writing-perf-tests.md:8 > (Diff revision 2) > > +See [Performance tests (DAMP)](performance-tests.md) for an overall description of our performance tests. > > +Here, we will describe how to contribute a new test with an example: track the performance of clicking inside the inspector panel. > > + > > +## Where to write the test? > > + > > +For now, all the tests live in a single file [damp.js](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js). > > Can we link without the revision hash? (unless it's intentional and you want > to link to that specific moment in time) It is intentional for all other links that refer to specific line, but yes, it makes sense to show whatever is the latest for this link. > ::: devtools/docs/tests/writing-perf-tests.md:10 > (Diff revision 2) > > + > > +## Where to write the test? > > + > > +For now, all the tests live in a single file [damp.js](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js). > > +First you have to decide which kind of test you want to write. > > +There is two kind of tests. A first set are being run against two documents: > > I also don't understand what you mean with "A first set are being run > against two documents" - do you mean that the two types of tests are run > against the two test documents, or that each type is run against the > corresponding test document? There is two kinds, the "first set" is the first kind. The two kinds are: * generic simple and complicated documents. * specific document, or no document at all, nor any toolbox. > ::: devtools/docs/tests/writing-perf-tests.md:33 > (Diff revision 2) > > + await this.testTeardown(); > > + }, > > +``` > > +Otherwise, if you have to test against a specific document, you should introduce an independant test function. Like [this one](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js#330-348) or [this other one](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js#350-402). You also have to register the new test function you just introduced [right here](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js#863-864). We will see in next paragraph how to name the test, which is the string used as key in this `tests` object. > > + > > +For the click example, if you prefer to test against a specific document, you would do something like: > > Is this another way of 'loading' documents by setting the document with a > data:text/html uri? If so, remove the line and say "You could also use > extremely simple documents like this:" Yes. I took your proposal, but added "specific" in this sentence as that's the important difference compared the simple/complicated. 
> ::: devtools/docs/tests/writing-perf-tests.md:70 > (Diff revision 2) > > + > > +Once you come up with a name, you will have to register your test [here](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.html#12-42) and [here](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.html#44-71) with a short description of it. > > + > > +## How to write a performance test? > > + > > +Now, we saw how to register and where to write a test script, now let's see how to write a performance test. > > What is "a test script" here? Is it the skeleton we wrote above? Yes. > ::: devtools/docs/tests/writing-perf-tests.md:83 > (Diff revision 2) > > +So for our click example it would be: > > +``` > > +let inspector = toolbox.getPanel("inspector"); > > +let window = inspector.panelWin; // Get inspector's panel window object > > +let body = window.document.body; > > + > > +let test = this.runTest("inspector.click"); > > +body.addEventListener("click", function () { > > + test.done(); > > +}, { once: true }); > > +body.click(); > > +``` > > Where does this code go? If it is on the "space" we left before, then > showing the complete example would help understanding better. Yes, I made a complete example based on simple/complicated kind of test. > ::: devtools/docs/tests/writing-perf-tests.md:102 > (Diff revision 2) > > + > > +* Ensure that its results change when regressing/fixing the code or feature you want to watch. > > + > > +If you are writing the new test to cover a recent regression and you have a patch to fix it, > > +push your test to try without _and_ with the regression fix. That, to ensure the test reports the fix as an improvement. > > +If you are introducing the test without any patch to improve the performance, try slowing down the code your are trying to cover with fake slowness by using setTimeout, or very slow for loops iteration over one million elements. Again, to ensure your test would catch a significant regression > > s/introducing the test/introducing a test/ > > Also this line is a bit confusing; I can't understand why making the test > code slow would help catch regressions. You have to ensure that the test reports a regression if you make the code you are trying to profile be slow. If you don't verify, by making the profiled code slow on purpose, you may very easily have a test script that do not report any regression. > Also, I'd really love an example here of edge case that doesn't need to be > tested independently, and can be included in a more generic one. Added preffed-off features as a good example. Also, I introduced another very important paragraph: ### Verify that you wait for all asynchronous code
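Since the patch now has a "Verify that you wait for all asynchronous code" section, a sketch of what that guidance means in practice might help. The event names below are invented placeholders for whatever updates the measured interaction really triggers, not real DevTools events.

```js
// Sketch only: event names are invented placeholders, not real DevTools events.
let onMainUpdate = inspector.once("example-main-update");
let onSidebarUpdate = inspector.once("example-sidebar-update");

let test = this.runTest("inspector.click");
body.click();

await onMainUpdate;   // the update we are actually timing
test.done();

// Also wait for every other update the click kicked off, so no pending async
// work runs in parallel with the next subtest and inflates its variance.
await onSidebarUpdate;
```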
Comment on attachment 8936247 [details] Bug 1415472 - Document writing new DAMP test. https://reviewboard.mozilla.org/r/207012/#review213108 ::: devtools/docs/tests/writing-perf-tests.md:76 (Diff revision 3) > + > +Once you come up with a name, you will have to register your test [here](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.html#12-42) and [here](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.html#44-71) with a short description of it. > + > +## How to write a performance test? > + > +Now, we saw how to register and where to write a test script, now let's see how to write a performance test. This line is redundant. Delete it. ::: devtools/docs/tests/writing-perf-tests.md:78 (Diff revision 3) > + > +## How to write a performance test? > + > +Now, we saw how to register and where to write a test script, now let's see how to write a performance test. > +When you write a performance test, in most cases, you only care about the time it takes to complete a very precise action. > +This `runtTest` helpers allows you to do that like this: s/runtTest/runTest s/helpers/helper But I don't understand what it allows me to do. "That like this" - what is "that"? What does runTest return that gives me the time it took to complete the action? This is what I expect to see in this section but I don't. ::: devtools/docs/tests/writing-perf-tests.md:115 (Diff revision 3) > +## How to write a good performance test? > + > +### Verify that you wait for all asynchronous code > + > +If your test involves asynchronous code, which is very likely given the DevTools codebase, please review carefully your test script. > +You should ensure that your test function resolves only once every asynchronous code dispatched by your script ends. s/ends/has finished executing ::: devtools/docs/tests/writing-perf-tests.md:116 (Diff revision 3) > +Not only the functions related to the very precise feature your are trying to measure. > +You should ensure that _any_ code ran directly or indirectly by your code is completed. Switch these two lines around - they are in the wrong order (and also the first one is grammatically incomplete, it is missing the verb). s/your are/you are ::: devtools/docs/tests/writing-perf-tests.md:120 (Diff revision 3) > +You should ensure that your test function resolves only once every asynchronous code dispatched by your script ends. > +Not only the functions related to the very precise feature your are trying to measure. > +You should ensure that _any_ code ran directly or indirectly by your code is completed. > + > +This is to prevent introducing noise in the test run after yours. If any asynchronous code is pending, > +it is likely to run in parallel of the next test and increase it's variance. s/parallel of/parallel with/ s/it's/its Please pay attention to possessives! ::: devtools/docs/tests/writing-perf-tests.md:121 (Diff revision 3) > +Not only the functions related to the very precise feature your are trying to measure. > +You should ensure that _any_ code ran directly or indirectly by your code is completed. > + > +This is to prevent introducing noise in the test run after yours. If any asynchronous code is pending, > +it is likely to run in parallel of the next test and increase it's variance. > +Having noise prevents detecting smaller regression. 
This sentence is a bit mechanical (plus "regression" should be plural). It would read better as "Noise in the tests makes it hard to detect small regressions". ::: devtools/docs/tests/writing-perf-tests.md:127 (Diff revision 3) > + > +### Ensure that its results change when regressing/fixing the code or feature you want to watch. > + > +If you are writing the new test to cover a recent regression and you have a patch to fix it, > +push your test to try without _and_ with the regression fix. This is to ensure the test reports the fix as an improvement. > +If you are introducing a test without any patch to improve the performance, try slowing down the code your are trying to cover with fake slowness by using setTimeout, or very slow for loops iteration over one million elements. Again, to ensure your test would catch a significant regression s/your are/you are (second time in the same doc! slow down and review!) I insist that you're not explaining this concept of 'fake slowness' and this section is missing an example. Add `backticks` to `setTimeout` and `for`. Is the sentence "Again, to ensure your test would catch a significant regression" incomplete? Or did you mean "This is to ensure your test would catch a significant regression"? ::: devtools/docs/tests/writing-perf-tests.md:131 (Diff revision 3) > +push your test to try without _and_ with the regression fix. This is to ensure the test reports the fix as an improvement. > +If you are introducing a test without any patch to improve the performance, try slowing down the code your are trying to cover with fake slowness by using setTimeout, or very slow for loops iteration over one million elements. Again, to ensure your test would catch a significant regression > + > +### Keep your test execution short. > + > +Running performance test is expensive. We are currently running them 25 times for each changeset landed in Firefox. s/test/tests ::: devtools/docs/tests/writing-perf-tests.md:132 (Diff revision 3) > +If you are introducing a test without any patch to improve the performance, try slowing down the code your are trying to cover with fake slowness by using setTimeout, or very slow for loops iteration over one million elements. Again, to ensure your test would catch a significant regression > + > +### Keep your test execution short. > + > +Running performance test is expensive. We are currently running them 25 times for each changeset landed in Firefox. > +So try to avoid having your new test run in more than 1 second. There are way too many negatives and verbs here. Simplify the sentence: Aim to run tests in less than a second. ::: devtools/docs/tests/writing-perf-tests.md:136 (Diff revision 3) > +Running performance test is expensive. We are currently running them 25 times for each changeset landed in Firefox. > +So try to avoid having your new test run in more than 1 second. > + > +### Cover important user interactions > + > +Since running performance tests is expensive, avoid testing edge cases, or bundle them into a more generic test to avoid increasing too much the total DAMP suite running time. "Since running performance tests is expensive," we already know it, it's in the previous section. Don't repeat yourself! remove this and go straight to "Avoid testing edge cases..." Also, examples. This section desperately calls for an example of "edge cases that can be bundled into a more generic test". ::: devtools/docs/tests/writing-perf-tests.md:137 (Diff revision 3) > +So try to avoid having your new test run in more than 1 second. 
> + > +### Cover important user interactions > + > +Since running performance tests is expensive, avoid testing edge cases, or bundle them into a more generic test to avoid increasing too much the total DAMP suite running time. > +For example, testing preffed-off features or rarely used panels isn't worth testing. I don't think this is "an example". This is another fact. It'd read better as "Additionally, testing preffed-off features or rarely used panels is not worth it".
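On the "fake slowness" example the review keeps asking for: the idea is to temporarily make the production code path covered by the new subtest artificially slow, push that to try together with the test, and check on PerfHerder that the subtest reports it as a clear regression. A rough sketch, with made-up function names:

```js
// TEMPORARY fake regressions, for a try push only, inside the production code
// path the new subtest measures (function names are made up for illustration).

// Either delay an asynchronous code path with `setTimeout`...
async function updateMarkupView() {
  await new Promise(resolve => setTimeout(resolve, 500)); // artificial 500ms delay
  // ... original update code ...
}

// ...or burn CPU in a synchronous path with a very slow `for` loop.
function onInspectorClick(event) {
  let sink = 0;
  for (let i = 0; i < 10000000; i++) {
    sink += i; // pointless work, just to make the handler measurably slower
  }
  // ... original click handling ...
}
```

If the new DAMP test does not report this as a significant regression on PerfHerder, it is probably not measuring the right thing.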
Attachment #8936247 - Flags: review?(spenades) → review-
(In reply to Soledad Penades [:sole] [:spenades] from comment #6) > ::: devtools/docs/tests/writing-perf-tests.md:78 > (Diff revision 3) > > + > > +## How to write a performance test? > > + > > +Now, we saw how to register and where to write a test script, now let's see how to write a performance test. > > +When you write a performance test, in most cases, you only care about the time it takes to complete a very precise action. > > +This `runtTest` helpers allows you to do that like this: > > But I don't understand what it allows me to do. "That like this" - what is > "that"? > > What does runTest return that gives me the time it took to complete the > action? This is what I expect to see in this section but I don't. I used to have a sentence to say that in my previous patch but you told me: > +There is a `runTest` method that helps recording a precise action duration. Remove, this is redundant (the same content is on the next line), and also anticipates things in the wrong order. So it looks like you prefer having the old sentence. So I reverted it back and rephrased it a bit. > ::: devtools/docs/tests/writing-perf-tests.md:116 > (Diff revision 3) > > +Not only the functions related to the very precise feature your are trying to measure. > > +You should ensure that _any_ code ran directly or indirectly by your code is completed. > > Switch these two lines around - they are in the wrong order (and also the > first one is grammatically incomplete, it is missing the verb). The order is hard to define here. But I rephrased that a bit. The two "you should ensure" sentences were redundant. Also, I added a list of examples of things you should wait for. > ::: devtools/docs/tests/writing-perf-tests.md:127 > (Diff revision 3) > > + > > +### Ensure that its results change when regressing/fixing the code or feature you want to watch. > > + > > +If you are writing the new test to cover a recent regression and you have a patch to fix it, > > +push your test to try without _and_ with the regression fix. This is to ensure the test reports the fix as an improvement. > > +If you are introducing a test without any patch to improve the performance, try slowing down the code your are trying to cover with fake slowness by using setTimeout, or very slow for loops iteration over one million elements. Again, to ensure your test would catch a significant regression > > I insist that you're not explaining this concept of 'fake slowness' and this > section is missing an example. Added an example for the click test. > Is the sentence "Again, to ensure your test would catch a significant > regression" incomplete? Or did you mean "This is to ensure your test would > catch a significant regression"? I used your sentence, but also rephrased "This is to ensure the test reports the fix as an improvement.", which would then be redundant. > ::: devtools/docs/tests/writing-perf-tests.md:136 > (Diff revision 3) > > +Running performance test is expensive. We are currently running them 25 times for each changeset landed in Firefox. > > +So try to avoid having your new test run in more than 1 second. > > + > > +### Cover important user interactions > > + > > +Since running performance tests is expensive, avoid testing edge cases, or bundle them into a more generic test to avoid increasing too much the total DAMP suite running time. > > "Since running performance tests is expensive," we already know it, it's in > the previous section. Don't repeat yourself! remove this and go straight to > "Avoid testing edge cases..." 
> > Also, examples. This section desperately calls for an example of "edge cases
> that can be bundled into a more generic test".

I dropped this paragraph for now. I can't put any example right now, as you can't modify the simple or complicated documents. I'll try to document that later, once we have custom documents for all tools. For now, only the inspector has one.
Comment on attachment 8936247 [details] Bug 1415472 - Document writing new DAMP test. https://reviewboard.mozilla.org/r/207010/#review213176 Thank you, this is great!
Comment on attachment 8936247 [details] Bug 1415472 - Document writing new DAMP test. https://reviewboard.mozilla.org/r/207012/#review213182 Only the gods of MozReview know why the r+ drop down didn't show up before. Anyway, great stuff! Thank you.
Attachment #8936247 - Flags: review?(spenades) → review+
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 59
Product: Firefox → DevTools