Closed Bug 1154874 Opened 10 years ago Closed 7 years ago

Add a debugger-specific DAMP test for Talos

Categories

(DevTools :: Framework, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

(firefox60 fixed)

RESOLVED FIXED
Firefox 60

People

(Reporter: bgrins, Assigned: yulia)

References

(Blocks 1 open bug)

Details

Attachments

(1 file, 13 obsolete files)

59 bytes, text/x-review-board-request
ochameau: review+
Details
As suggested in https://bugzilla.mozilla.org/show_bug.cgi?id=1150215#c4 and https://bugzilla.mozilla.org/show_bug.cgi?id=1150215#c13, it would be great to measure the performance of specific tasks within the tools. My thinking is that those tests would be part of a second suite done in a follow-up to Bug 1150215 (damp2?), for two reasons: (a) they are probably much longer-running, so their results would mask the results for basic toolbox open / close, and (b) toolbox open / close time is important to get in ASAP so we can start tracking performance in the short term. Point (a) may not be as important if tooling could give alerts on individual tests within a suite. If that's the case we could fold these tests into damp.
Attached patch metaperf.patch (obsolete) — Splinter Review
This integrates metaperf from https://github.com/jsantell/metaperf/ into the damp addon, and also creates a new manifest / test class. The damp overlay just loads both tests but only instantiates the relevant one based on the page URL. Note that this doesn't bundle the Octane page, since that patch was too big for Bugzilla.
Attached file browser_dbg_stepinto_performance.js (obsolete) —
This is a work-in-progress test for stepping through deeply nested (recursing) code. It depends on the document added for the test in bug 1183325, which should be in the tree by now. It takes 862 "step into" commands to go through that script, and at last run this test took about 11 minutes (!) to run (some of that in the background while I was doing other stuff, though). So the question is: how do I get something like this into Talos to make sure it will be run regularly and stats recorded? (PS: please don't tell me this test is excessive - I have had real debug sessions where stepping through code was very slow..)
Attachment #8673099 - Flags: feedback?(bgrinstead)
TODO: I also want to write a test for a script with many top-level functions and little recursion, like the ones generated by http://hallvord.com/temp/moz/scriptgen.htm with a high number for "width" - depth: 3, width: 12 would do nicely, I think. And of course it would be good to have the tests mentioned in the comments linked to in the description.
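For illustration, a generator like the one linked above could be sketched roughly as follows. This is a hypothetical reconstruction in the spirit of scriptgen.htm, not its actual code: `generateScript` and the `fLEVEL_INDEX` naming scheme are my own inventions. It emits `width` functions at each of `depth` levels, where every function at one level calls all functions at the next, so stepping into it builds up a stack whose depth tracks `depth`.

```javascript
// Hypothetical sketch of a depth/width test-script generator.
// Each function at level N calls every function at level N+1;
// the bottom level contains empty function bodies.
function generateScript(depth, width) {
  const lines = [];
  for (let level = 0; level < depth; level++) {
    for (let i = 0; i < width; i++) {
      const callees = [];
      if (level + 1 < depth) {
        for (let j = 0; j < width; j++) {
          callees.push(`f${level + 1}_${j}();`);
        }
      }
      lines.push(`function f${level}_${i}() { ${callees.join(" ")} }`);
    }
  }
  lines.push("f0_0();"); // kick off the call chain
  return lines.join("\n");
}
```

With depth: 3, width: 12 this yields 36 functions; a "wide" variant (small depth, large width) exercises step-over through many shallow calls rather than a deep stack.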
See Also: → 1210090
Comment on attachment 8673099 [details] browser_dbg_stepinto_performance.js I like the idea. We will need to migrate this into the damp talos suite if we want to get this into automation. It's a bit of work to get set up, although we are working through that right now in Bug 1210090. Here are the steps to get the environment set up: https://bugzilla.mozilla.org/show_bug.cgi?id=1210090#c19. I can send you the page set and further instructions to get it running locally if you are interested in pursuing this.
Attachment #8673099 - Flags: feedback?(bgrinstead) → feedback+
Summary: Add a more advanced performance tests for devtools in Talos → Add a debugger-specific DAMP test for Talos
I'm very interested in this happening, although I'd be happy if somebody else could do the leg work within a reasonable time.. It's not really what I'm supposed to work on, and judging by how much time it takes to figure out how to write tests it might cost me quite some time :-p However, if you don't find anybody else who can do this in the near future I will. It's really, really important to me that it gets done, and that debugger performance improves.
Flags: needinfo?(bgrinstead)
(In reply to Hallvord R. M. Steen [:hallvors] from comment #5)
> I'm very interested in this happening, although I'd be happy if somebody
> else could do the leg work within a reasonable time.. It's not really what
> I'm supposed to work on and judging by how much time it takes to figure out
> how to write tests it might cost me quite some time :-p
>
> However, if you don't find anybody else who can do this in the near future I
> will. It's really, really important to me that it gets done, and that
> debugger performance improves.

Agreed - having some talos coverage for debugger stepping / resuming would be great. And thanks for the heads up / prototype of the test. I don't have time right now to tackle this but it's on my radar. Just cc'ing some debugger and DAMP folks to make sure they are aware of it also.

The idea here is to add a probe for debugger step time. I'm guessing it would step N times in the test case and average out that number, and the test case would be repeated M times. I'm sure there are other specific things we might be interested in instrumenting within the debugger - some of it is encapsulated by the existing DAMP test with jsdebugger.open.DAMP (things like source listing, I believe, or anything that happens before the debugger fires its open Promise). A couple other things I can think of that we might want to add:

* Pause / resume time
* Setting / removing a breakpoint
Flags: needinfo?(bgrinstead)
I'll give you good folks around three weeks to do something, or I'll go ahead and spend lots of time figuring it out ;)
Attached patch damp-1-refactor.patch (obsolete) — Splinter Review
A refactor to make adding new tests easier
Attached patch damp-2-debugger-template.patch (obsolete) — Splinter Review
A setup for running a separate debugger step test. With the two patches applied then running `./mach talos-test -a damp` should make it easier to work on the implementation.
Attached patch damp-2-debugger-template.patch (obsolete) — Splinter Review
Forgot to hg add the stepping file in the last patch
Attachment #8698709 - Attachment is obsolete: true
So, I'm trying to puzzle this together from cutting and pasting and looking at other tests. But I'm not familiar enough with the debugger API. Right now I got this: https://pastebin.mozilla.org/8855822 but the error quoted in the comment on line 9 got me stuck. I don't know if I'm using the promise stuff correctly either.
Flags: needinfo?(bgrinstead)
(In reply to Hallvord R. M. Steen [:hallvors] from comment #11)
> So, I'm trying to puzzle this together from cutting and pasting and looking
> at other tests. But I'm not familiar enough with the debugger API. Right now
> I got this:
> https://pastebin.mozilla.org/8855822
> but the error quoted in the comment on line 9 got me stuck. I don't know if
> I'm using the promise stuff correctly either.

I believe on line 4 you'll need to replace "debugger" with "jsdebugger". Sorry about that.
Flags: needinfo?(bgrinstead)
I've gotten a step further today - https://pastebin.mozilla.org/8855822 almost works now, BUT calling button.click() does NOT work for some reason. I've tried dispatching a synthetic click event, I've tried window.eval() - nothing works to start that script so we can stop at the debugger statement and get the test running. :-/ If I could send a string to be evaluated like the console does, it would likely work. (Test starts running pretty well if I manually click the button or type the command into the console when mach talos-test has started Firefox). Is there a convenient way to do something like that?
Flags: needinfo?(bgrinstead)
(Just so it's absolutely clear: it's the button inside the page that doesn't respond as expected to click() - the "Step into" button works fine)
Attached patch damp-2-debugger-template.patch (obsolete) — Splinter Review
Does this fix the issue for you locally? For me with this and part 1 applied I can run `./mach talos-test -a damp` and get results for the stepping test
Attachment #8698711 - Attachment is obsolete: true
Flags: needinfo?(bgrinstead)
This provides some helpers for testSetup and testTeardown, gets rid of the method which built the tests with arrays of functions that returned promises, and instead converts each one into a single function with Task.async.

Review commit: https://reviewboard.mozilla.org/r/30605/diff/#index_header
See other reviews: https://reviewboard.mozilla.org/r/30605/
Attachment #8707203 - Flags: review?(nfitzgerald)
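For readers unfamiliar with the pattern this refactor moves to: Task.async (from Firefox's internal Task.jsm) wraps a generator so that each yielded promise suspends the task until it resolves, turning an array of chained promise-returning functions into one linear function. A minimal sketch of that core mechanism, assuming nothing from the actual patch - the `spawn` helper and `runTest` example below are my own illustrations of the idea, not Task.jsm itself:

```javascript
// Tiny stand-in for Task.async: drive a generator, resuming it with each
// resolved promise value, and surface the generator's return value.
function spawn(genFn) {
  return function (...args) {
    const gen = genFn.apply(this, args);
    return new Promise((resolve, reject) => {
      function step(next) {
        let result;
        try {
          result = next(); // advance the generator
        } catch (e) {
          return reject(e); // uncaught throw inside the generator
        }
        if (result.done) return resolve(result.value);
        Promise.resolve(result.value).then(
          value => step(() => gen.next(value)),
          err => step(() => gen.throw(err))
        );
      }
      step(() => gen.next());
    });
  };
}

// Instead of an array of functions returning promises, the whole test
// reads as one sequential generator:
const runTest = spawn(function* () {
  const a = yield Promise.resolve(1);      // e.g. testSetup
  const b = yield Promise.resolve(a + 1);  // the measured body
  return b;                                // e.g. handed to testTeardown
});
```

In today's JavaScript the same shape is just an `async function` with `await`, which is why the generator form reads so naturally.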
Comment on attachment 8707203 [details] MozReview Request: Bug 1154874 - Make the damp test easier to read and extend;r=fitzgen https://reviewboard.mozilla.org/r/30605/#review27411 LGTM
Attachment #8707203 - Flags: review?(nfitzgerald) → review+
Yay - with Brian's changes from above and one more little tweak (opening the split console and using it to run the button .click() code) this test now works. However, only 10 steps is insufficient to really get at the performance issues I'm seeing when using the debugger for complex work. The time it takes to step seems to increase with the depth of the stack. I've run this test a few times (it runs again and again - expected?) and here are some values:

20 steps: approx 80ms/step
150 or 200 steps: approx 420ms/step

When set to 500 steps the value decreases again, presumably because one then goes through the code with the most recursion and starts winding back up to code where the stack isn't as deep.

(Note: these values are not from a "clean" test run - there's lots of stuff going on on this computer, so it's not a "scientific" perf test result - but the DAMP test ran repeatedly giving quite consistent results, and they agree with my experience using the debugger on complex problems.)

(I want another test with less depth but lots and lots of global functions that call each other linearly - just to see how that affects debugger performance.)

Is it OK to push this test with stepCounter set to 150? Should I report a bug on fixing performance when stepping with a big stack?
Flags: needinfo?(bgrinstead)
When I did "hg push review" it created this: https://reviewboard.mozilla.org/r/21481/ - so I guess that's a place you can have a look at the latest state of this, Brian :)
(In reply to Hallvord R. M. Steen [:hallvors] from comment #18)
> Yay - with Brian's changes from above and one more little tweak (opening
> split console and using it to run the button .click() code) this test now
> works. However, only 10 steps is insufficient to really get at the
> performance issues I'm seeing when using the debugger for complex work.
>
> 20 steps: approx 80ms/step
> 150 or 200 steps: approx 420ms/step

So that's somewhere like 75 seconds per test run (which is repeated 25 times IIRC). If those numbers are right, that would more than double the runtime of DAMP (currently ~20 minutes). That's just adding too much to the runtime IMO. 20 steps would be less than 2 seconds per run, which would add around a minute to the total test runtime - that seems much more reasonable.

> (I want another test with less depth but lots and lots of global functions
> that call each other linearly - just to see how that affects debugger
> performance).

Sounds like a test for 'step over' performance?

> Is it OK to push this test with stepCounter set to 150?
> Should I report a bug on fixing performance when stepping with a big stack?

Yes, could you please file a bug about perf getting worse when stepping through a deep stack?
Flags: needinfo?(bgrinstead)
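The runtime estimate in the reply above is simple arithmetic - steps per run times milliseconds per step times number of repetitions. A tiny sketch using the figures from this thread (the helper name is illustrative, not from any patch):

```javascript
// Back-of-the-envelope check of the runtime impact discussed above:
// added time = steps per run * ms per step * number of test cycles.
function addedRuntimeMinutes(steps, msPerStep, cycles) {
  return (steps * msPerStep * cycles) / 1000 / 60;
}

// 150 steps at ~420 ms, repeated 25 times: roughly 26 minutes extra,
// i.e. more than doubling a ~20 minute suite.
// 20 steps at ~80 ms, repeated 25 times: well under a minute extra.
```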
(In reply to Hallvord R. M. Steen [:hallvors] from comment #19)
> When I did "hg push review" it created this:
> https://reviewboard.mozilla.org/r/21481/ - so I guess that's a place you can
> have a look at the latest state of this, Brian :)

Looks like you have patches from multiple bugs applied in that review push. Can you remove any others and then also fold the "Scaffold for debugger stepping test" and "Add DAMP test for stepping in the deb..." patches into a single one? Then if you re-push to review it should pick up the changes.
Flags: needinfo?(hsteen)
Depends on: 1239422
Landed the refactor in Bug 1239422, so if you pull fxteam you could update the mozreview request with just your changes.
See Also: → 1239473
Thanks Brian, I reported the bug about slow stepping - https://bugzilla.mozilla.org/show_bug.cgi?id=1239473
Think I got everything right for that review .. finally. (Had some issues with hg after a crash or restart this morning.)

Note: I changed the test to measure the performance of just 50 "step" commands, but I also changed the helper page to not stop until near the deepest part of the recursion, so we'll measure the performance of the step commands with a pretty long stack. If 50 are too many we can probably go down to 10 - but it would be nicer if stepping were optimised instead, so that 50 step commands didn't have much of an impact on the total DAMP running time ;) I'm guessing the debugger perhaps adds all those stack functions to the DOM when it's placed at the bottom of the window and you have the long stack history strip?
Flags: needinfo?(hsteen)
Attachment #8698705 - Attachment is obsolete: true
Attachment #8673099 - Attachment is obsolete: true
Attachment #8707119 - Attachment is obsolete: true
FYI I'm trying to get Bug 1239750 landed before reviewing this. Also, this will need some minor rebasing on top of Bug 1239422 (basically getting rid of the 'null' argument to openToolbox and closeToolbox).
Sounds good, let me know when/if you need me to do that.
Attachment #8707203 - Attachment is obsolete: true
Attachment #8601771 - Attachment is obsolete: true
Attachment #8708711 - Flags: review?(bgrinstead)
Comment on attachment 8708711 [details]
MozReview Request: Bug 1154874, add another performance test for the step over command, r?bgrins

https://reviewboard.mozilla.org/r/31145/#review29265

I'm going to attach rebased versions of this and the other patch

::: testing/talos/talos/tests/devtools/addon/content/pages/step-over.html:18
(Diff revision 1)
> +function f0_0_0_0(){

Does this file need to be so big? 200K seems a bit excessive to check into the tree for the step over test
Attachment #8708627 - Flags: review?(bgrinstead)
Attached patch damp-debugger-step-in.patch (obsolete) — Splinter Review
James, can you take a look at the sequence of steps that are being taken here in _getDebuggerSteppingTest and pages/stepping.html? Is this collecting useful and relevant data for debugger performance?
Attachment #8712407 - Flags: feedback?(jlong)
Attached patch damp-debugger-step-over.patch (obsolete) — Splinter Review
James, can you take a look at the sequence of steps that are being taken here in _getDebuggerStepOverTest and pages/step-over.html? Is this collecting useful and relevant data for debugger performance?

I've shrunk pages/step-over.html a lot (it was around 200KB before), and I'm not seeing much of a difference in step time (most of the functions obviously weren't being stepped into while stepping over). While it's possible that we could regress step time for large scripts but not small ones, I'm not sure if we should be covering that in this test. If we want to cover large-script perf then maybe both stepping tests should happen inside a single large script - possibly using a real-world script like jQuery, although it might be hard to find a case that allows us to step into a really deep stack there.
Attachment #8712412 - Flags: feedback?(jlong)
(In reply to Brian Grinstead [:bgrins] from comment #32)
> I've shrunk pages/step-over.html a lot (it was around 200KB before), and I'm
> not seeing much of a difference in step time (most of the functions obv
> weren't being stepped into while stepping over). While it's possible that
> we could regress step time for large scripts but not small, I'm not sure if
> we should be covering that in this test. If we want to cover large script
> perf then maybe both stepping tests should happen inside a single large
> script. Possibly using a real-world script like jQuery, although it might
> be hard to find a case that allows us to step into a really deep stack there.

To be clear, I'm fine with using the current files if you don't have an opinion about it. But you might know what sort of shape/size of file would be best to be testing.
Assignee: nobody → hsteen
Status: NEW → ASSIGNED
Comment on attachment 8712407 [details] [diff] [review]
damp-debugger-step-in.patch

Review of attachment 8712407 [details] [diff] [review]:
-----------------------------------------------------------------

Looks great to me! Definitely would be good to measure. Not sure how different performance is on different kinds of scripts, but this seems like a good thing to test.

::: testing/talos/talos/tests/devtools/addon/content/damp.js
@@ +267,5 @@
> +      }`
> +    ) + ")()", true);
> +
> +    // now we need to wait until the debugger stopped
> +    yield new Promise(resolve => {

While unlikely, isn't this a race condition? If the debugger stopped on `debugger`, it could have already happened because the JS is being executed before this. Might need to create this promise above the `loadFrameScript` call and yield on it here.
Attachment #8712407 - Flags: feedback?(jlong) → feedback+
Comment on attachment 8712412 [details] [diff] [review]
damp-debugger-step-over.patch

Review of attachment 8712412 [details] [diff] [review]:
-----------------------------------------------------------------

I don't know how different stepping in and stepping over is debugger-wise, but seems fine to have separate tests.
Attachment #8712412 - Flags: feedback?(jlong) → feedback+
Comment on attachment 8712407 [details] [diff] [review]
damp-debugger-step-in.patch

Review of attachment 8712407 [details] [diff] [review]:
-----------------------------------------------------------------

::: testing/talos/talos/tests/devtools/addon/content/damp.js
@@ +267,5 @@
> +      }`
> +    ) + ")()", true);
> +
> +    // now we need to wait until the debugger stopped
> +    yield new Promise(resolve => {

The js that causes the pause is wrapped in a timeout so this should always execute before it. Anyway, it wouldn't hurt to store the promise above.
Oh, if you're sure it's not racy, it's fine then. Isn't it running in a separate process though, so even async conditions aren't as reliable? I'm fine if you don't want to change it.
See Also: → 1244348
Comment on attachment 8712407 [details] [diff] [review]
damp-debugger-step-in.patch

Review of attachment 8712407 [details] [diff] [review]:
-----------------------------------------------------------------

::: testing/talos/talos/tests/devtools/addon/content/damp.js
@@ +284,5 @@
> +    let end = performance.now();
> +
> +    // Return "average milliseconds taken per 'step into' command"
> +    this._results.push({
> +      name: "debuggerstep",

We should rename this to "debuggerstepin"
Comment on attachment 8712412 [details] [diff] [review]
damp-debugger-step-over.patch

Review of attachment 8712412 [details] [diff] [review]:
-----------------------------------------------------------------

::: testing/talos/talos/tests/devtools/addon/content/damp.js
@@ +326,5 @@
> +    }
> +
> +    let end = performance.now();
> +
> +    // Return "average milliseconds taken per 'step into' command"

This comment should be updated as 'step over'
Joel, can you please help us get two new probes added for DAMP for these patches?

"debuggerstepin" - the "average milliseconds taken per 'step in' command"
"debuggerstepover" - the "average milliseconds taken per 'step over' command"
Flags: needinfo?(jmaher)
on it, I will work to get these graph server additions done today in bug 1251209. I am determined to make this the last time we need to update graph server, as we are using perfherder for everything now, but we're leaving graph server running for a few more weeks in case there is a big issue.
Flags: needinfo?(jmaher)
Do I need to fix Brian's last review requests or will you, Joel? Great if you can :)
Flags: needinfo?(jmaher)
I had no intention of fixing the nits in the patch, but I could. It looks as if there are small text-based changes needed in comment 38 and comment 39. :hallvors, if you have the patch loaded locally, it would be faster for you to handle, right? I am pretty sure :bgrins pinged me to ensure the graph server work was handled ahead of time.
Flags: needinfo?(jmaher)
I don't, I removed those changes to work on bug 967853, and I'm not familiar enough with hg to feel like I know what I'm doing when switching between working on different things. (At this point I'm more used to Git and use it for anything but mozilla-central - and most of the stuff I touch daily is not in moz-central.)
I don't mind doing this, I can hack around hg workflows. I just need to know which specific patches/reviews I need to do- there are 4 of them, do all need attention?
No, just fixing the last one should do - thanks! :)
pushing the two patches to try seemed to fail miserably: https://treeherder.mozilla.org/#/jobs?repo=try&revision=ff1178eb75fb
I was just asking Joel to help with the graph server changes. I can wrap up the code changes
Flags: needinfo?(bgrinstead)
(In reply to Joel Maher (:jmaher) from comment #41)
> I am determined to make this the last time we need to update graph server

Nice :)
btw, graph server isn't used anymore (Firefox 47+), but the new hitch is addon signing. All the Talos addons are signed, so even when developing locally or pushing to try you need to bump the version and sign it (I think you need to bump it twice, as we did trunk first, then bumped it and did aurora). Here is how we are signing them: https://wiki.mozilla.org/EngineeringProductivity/HowTo/SignExtensions - and I have filed a bug, which I might get to work on this week, to make developing easier and ideally try server easier: https://bugzilla.mozilla.org/show_bug.cgi?id=1253736
Thanks Brian :) I'm in the middle of setting up git-for-mozilla-central per https://glandium.org/blog/?page_id=3438 because I don't trust my understanding of hg - but not sure when that setup will start working..
It's probably going to take me a while to get to this as I figure out the new development workflow with addon signing as in https://bugzilla.mozilla.org/show_bug.cgi?id=1253736#c2
Huh, applied the latest patches and I'm getting an error now on the step-over test. I'm not going to have time right now to look into this and get it landed. It can be run with `./mach talos-test -a damp`. James, does anything here jump out as an obvious problem or thing that needs to be updated based on changes lately in the debugger frontend? I see something in the trace about Editor.appendTo, so maybe it has something to do with Bug 1233927 (although I haven't reverted to before that changeset to confirm that theory).

Full Message: Error: fillFrames command sent while not paused. Currently resuming

Full Stack:
ThreadClient.prototype._assertPaused@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/client/main.js:1711:13
ThreadClient.prototype.fillFrames@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/client/main.js:2075:5
StackFrames.prototype._onPaused@chrome://devtools/content/debugger/debugger-controller.js:723:5
DebuggerController.connectThread/<@chrome://devtools/content/debugger/debugger-controller.js:328:11
Handler.prototype.process@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:937:23
this.PromiseWalker.walkerLoop@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:816:7
Promise*this.PromiseWalker.scheduleWalkerLoop@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:747:11
this.PromiseWalker.schedulePromise@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:779:7
this.PromiseWalker.completePromise@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:714:7
promiseMiddleware/</</</<@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/client/shared/redux/middleware/promise.js:39:9
DevToolsUtils.executeSoon*executeSoon@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/DevToolsUtils.js:34:11
promiseMiddleware/</</<@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/client/shared/redux/middleware/promise.js:34:7
Handler.prototype.process@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:937:23
this.PromiseWalker.walkerLoop@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:816:7
Promise*this.PromiseWalker.scheduleWalkerLoop@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:747:11
this.PromiseWalker.schedulePromise@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:779:7
this.PromiseWalker.completePromise@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:714:7
TaskImpl_run@resource://gre/modules/Task.jsm:324:13
resolve@resource://devtools/shared/deprecated-sync-thenables.js:40:40
then@resource://devtools/shared/deprecated-sync-thenables.js:20:43
resolve@resource://devtools/shared/deprecated-sync-thenables.js:72:11
listenerJson@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/client/main.js:740:9
emitOnObject@resource://gre/modules/commonjs/toolkit/loader.js -> resource://gre/modules/commonjs/sdk/event/core.js:112:9
emit@resource://gre/modules/commonjs/toolkit/loader.js -> resource://gre/modules/commonjs/sdk/event/core.js:89:38
Request.prototype.emit@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/client/main.js:1234:29
DebuggerClient.prototype.onPacket/emitReply@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/client/main.js:1018:29
DevTools RDP*DebuggerClient.prototype.request@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/client/main.js:724:5
DebuggerClient.requester/<@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/client/main.js:284:12
exports.makeInfallible/<@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/ThreadSafeDevToolsUtils.js:101:14
loadSources/<@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/client/debugger/content/actions/sources.js:65:30
TaskImpl_run@resource://gre/modules/Task.jsm:319:40
TaskImpl@resource://gre/modules/Task.jsm:280:3
createAsyncFunction/asyncFunction@resource://gre/modules/Task.jsm:254:14
Task_spawn@resource://gre/modules/Task.jsm:168:12
loadSources@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/client/debugger/content/actions/sources.js:64:16
DebuggerController.connectThread@chrome://devtools/content/debugger/debugger-controller.js:318:19
DebuggerController.connect<@chrome://devtools/content/debugger/debugger-controller.js:283:11
TaskImpl_run@resource://gre/modules/Task.jsm:319:40
resolve@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/deprecated-sync-thenables.js:40:40
then@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/deprecated-sync-thenables.js:20:43
resolve@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/deprecated-sync-thenables.js:72:11
DebuggerController.reconfigureThread/<@chrome://devtools/content/debugger/debugger-controller.js:430:9
DebuggerClient.requester/</<@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/client/main.js:296:9
exports.makeInfallible/<@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/ThreadSafeDevToolsUtils.js:101:14
emitOnObject@resource://gre/modules/commonjs/toolkit/loader.js -> resource://gre/modules/commonjs/sdk/event/core.js:112:9
emit@resource://gre/modules/commonjs/toolkit/loader.js -> resource://gre/modules/commonjs/sdk/event/core.js:89:38
Request.prototype.emit@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/client/main.js:1234:29
DebuggerClient.prototype.onPacket/emitReply@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/client/main.js:1018:29
DevTools RDP*DebuggerClient.prototype.request@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/client/main.js:724:5
DebuggerClient.requester/<@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/client/main.js:284:12
exports.makeInfallible/<@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/ThreadSafeDevToolsUtils.js:101:14
DebuggerController.reconfigureThread@chrome://devtools/content/debugger/debugger-controller.js:409:5
DebuggerController.connect<@chrome://devtools/content/debugger/debugger-controller.js:282:11
TaskImpl_run@resource://gre/modules/Task.jsm:319:40
TaskImpl@resource://gre/modules/Task.jsm:280:3
createAsyncFunction/asyncFunction@resource://gre/modules/Task.jsm:254:14
DebuggerPanel.prototype.open/<@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/client/debugger/panel.js:52:19
Handler.prototype.process@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:937:23
this.PromiseWalker.walkerLoop@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:816:7
Promise*this.PromiseWalker.scheduleWalkerLoop@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:747:11
this.PromiseWalker.schedulePromise@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:779:7
this.PromiseWalker.completePromise@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:714:7
Handler.prototype.process@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:937:23
this.PromiseWalker.walkerLoop@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:816:7
Promise*this.PromiseWalker.scheduleWalkerLoop@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:747:11
this.PromiseWalker.schedulePromise@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:779:7
this.PromiseWalker.completePromise@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:714:7
TaskImpl_run@resource://gre/modules/Task.jsm:324:13
resolve@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/deprecated-sync-thenables.js:40:40
then@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/deprecated-sync-thenables.js:20:43
resolve@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/shared/deprecated-sync-thenables.js:72:11
DebuggerView._initializeEditor/<@chrome://devtools/content/debugger/debugger-view.js:320:7
Handler.prototype.process@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:937:23
this.PromiseWalker.walkerLoop@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:816:7
Promise*this.PromiseWalker.scheduleWalkerLoop@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:747:11
this.PromiseWalker.schedulePromise@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:779:7
this.PromiseWalker.completePromise@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:714:7
Editor.prototype.appendTo/onLoad@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/client/sourceeditor/editor.js:455:7
EventListener.handleEvent*Editor.prototype.appendTo@resource://gre/modules/commonjs/toolkit/loader.js -> resource://devtools/client/sourceeditor/editor.js:458:5
DebuggerView._initializeEditor@chrome://devtools/content/debugger/debugger-view.js:317:5
DebuggerView.initialize@chrome://devtools/content/debugger/debugger-view.js:70:5
DebuggerController.startupDebugger<@chrome://devtools/content/debugger/debugger-controller.js:236:11
TaskImpl_run@resource://gre/modules/Task.jsm:319:40
TaskImpl@resource://gre/modules/Task.jsm:280:3
Flags: needinfo?(bgrinstead) → needinfo?(jlong)
Doesn't look related to appendTo; that's just one of the previous stacks in the promise chain (I wish our async stacks were more clear). The editor is initialized previously but the real error comes from here: https://dxr.mozilla.org/mozilla-central/source/devtools/client/debugger/debugger-controller.js#328 Basically, when the debugger initializes it sees that the engine is paused so it puts the UI in the paused state and fetches frames from the engine. When the frames are fetched, the engine isn't paused anymore for some reason. I don't really have time to deeply look into this right now either. It would help to get some context about this and maybe pair code over vidyo with Brian or somebody in the next week.
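The race James describes — frames arriving from the engine after it has already resumed — can be guarded against by re-checking the thread state once the async fetch settles. A minimal self-contained sketch of that guard; `thread` and `fetchFrames` are illustrative mocks, not the real debugger-controller API:

```javascript
// Mock of the thread state; in the real controller this would come
// from the thread client.
const thread = { state: "paused" };

// Simulated async frame fetch from the engine.
function fetchFrames() {
  return new Promise(resolve =>
    setTimeout(() => resolve([{ name: "frame0" }]), 10));
}

async function onPaused(updateUI) {
  const frames = await fetchFrames();
  // Guard: the engine may have resumed while the fetch was in flight,
  // in which case the frames are stale and must not reach the UI.
  if (thread.state !== "paused") {
    return null;
  }
  updateUI(frames);
  return frames;
}
```

If the engine resumes mid-fetch, `onPaused` resolves to `null` instead of pushing stale frames into the paused UI.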
Flags: needinfo?(jlong)
I get this:

21:01:11 ERROR - PROCESS | 14560 | Full message: TypeError: toolbox.getPanel is not a function
21:01:11 INFO - PROCESS | 14560 | Full stack: Damp.prototype._getDebuggerSteppingTest<@chrome://damp/content/damp.js:256:20

which seems related to this code in damp.js:

let toolbox = yield this.openToolbox(null, "jsdebugger");
let stepCounter = 50;
let panelWin = toolbox.getPanel("jsdebugger").panelWin;

Is this a red herring or an actual error - and why? (I'll attach a new patch with all the new test stuff, since the existing one(s) don't merge cleanly anymore)
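One way to sidestep both the missing accessor and any race with panel initialization is to prefer the async `toolbox.getPanelWhenReady` accessor (which is what later patches in this bug use). A defensive sketch against a mock toolbox — the mock bodies are illustrative, only the two method names come from this bug:

```javascript
// Mock toolbox approximating the accessor surface discussed here.
const toolbox = {
  _panels: new Map(),
  getPanel(id) { return this._panels.get(id) || null; },
  async getPanelWhenReady(id) {
    if (!this._panels.has(id)) {
      // Simulate asynchronous panel construction.
      await new Promise(r => setTimeout(r, 10));
      this._panels.set(id, { panelWin: {} });
    }
    return this._panels.get(id);
  },
};

async function getDebuggerPanelWin() {
  // Prefer the async accessor; the synchronous getPanel() returns null
  // (or may not exist at all on some branches) before the panel loads.
  const panel = typeof toolbox.getPanelWhenReady === "function"
    ? await toolbox.getPanelWhenReady("jsdebugger")
    : toolbox.getPanel("jsdebugger");
  return panel.panelWin;
}
```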
Attached patch b1154874.patch (obsolete) — Splinter Review
Tests in this updated patch run fine for me with a current checkout of moz-central.
Attachment #8708627 - Attachment is obsolete: true
Attachment #8708711 - Attachment is obsolete: true
Attachment #8712407 - Attachment is obsolete: true
Attachment #8712412 - Attachment is obsolete: true
Attachment #8759314 - Flags: review?(bgrinstead)
Comment on attachment 8708627 [details] Bug 1154874, Add DAMP test for stepping in the debugger, Review request updated; see interdiff: https://reviewboard.mozilla.org/r/31131/diff/1-2/
Attachment #8708627 - Attachment description: MozReview Request: Bug 1154874, Add DAMP test for stepping in the debugger, r?bgrins → Bug 1154874, Add DAMP test for stepping in the debugger,
Attachment #8708627 - Attachment is obsolete: false
Attachment #8708627 - Flags: review?(bgrinstead)
James, will it be possible at some point to make the debugger.html project compatible with this test? I want to make sure that it's worth the effort to land it right now, since the current debugger doesn't have many active changes that need to be tracked. It'd be an interesting comparison between the old and new frontends, but only if it's a relatively easy conversion (which I think it would be based on what the test is doing, but I want to double check before we move forward here).
Flags: needinfo?(jlong)
Talked with bgrins on IRC. Summary: if it's not too much effort, I'd say go ahead and land it. The debugger realistically won't land for a while, and it should be easy to update this to work with it. It will still be a tool panel that the toolbox can control.
Flags: needinfo?(jlong)
It will be very interesting to see the stats from the old and the new debugger (if you land it now and adapt if necessary later) - hopefully the new one will be faster..
(In reply to Hallvord R. M. Steen [:hallvors] from comment #60) > It will be very interesting to see the stats from the old and the new > debugger (if you land it now and adapt if necessary later) - hopefully the > new one will be faster.. Agreed - I will review the patch this week and hopefully we can get it landed soon
I'm confused about the state of these patches - which one should I be reviewing?
Flags: needinfo?(hsteen)
Comment on attachment 8708627 [details] Bug 1154874, Add DAMP test for stepping in the debugger, Review request updated; see interdiff: https://reviewboard.mozilla.org/r/31131/diff/2-3/
I realised I have not fixed the little nits in comment 38 and comment 39, and tried to push an update..
I have however kept the 200K test file, since I think it's good to have complex files available for performance testing..
Comment on attachment 8759314 [details] [diff] [review] b1154874.patch Marking obsolete as per comment 63
Attachment #8759314 - Attachment is obsolete: true
Attachment #8759314 - Flags: review?(bgrinstead)
Attachment #8708627 - Flags: review?(bgrinstead) → review+
Joel, do we need to do anything special to land this DAMP change?
Flags: needinfo?(jmaher)
there is some magic needed as we require signed addons on all our integration and release branches. Locally and on try we don't require signed addons, so development is easier. If you want to land the changes as they are now and do a followup bug with the .xpi changed, we can do that, or you can modify your mozreview push to include the .xpi file. Here is a simple wiki page with instructions for signing the addon: https://wiki.mozilla.org/EngineeringProductivity/HowTo/SignExtensions remember to bump the version so it signs! Ask for help if you have trouble.
Flags: needinfo?(jmaher)
Hallvord, can you please sign the addon as in Comment 70 and include it in the patch?
Flags: needinfo?(hsteen)
Brian - sorry, Mozilla didn't renew my contract so I no longer have LDAP access and apparently can't do the signing stuff.
Flags: needinfo?(hsteen) → needinfo?(bgrinstead)
Looks like this just needs signing to land, I'll see what I can do.
Flags: needinfo?(bgrinstead)
I noticed this patch adds more chrome:// URLs, so I'll change those to be http:// so we get remote browsers for them on e10s.
Wow, that's way off on the subtests - I wouldn't have expected that. Can you try again with a new base and see if it's the same?
Flags: needinfo?(bgrinstead)
Let's wait on landing this until we get the initial enabling of the new frontend so this doesn't end up blocking that
Depends on: 1300861
Attachment #8951542 - Flags: review?(poirot.alex)
Attachment #8951542 - Flags: review?(bgrinstead)
I've updated this so that the test is applied to the frontend. Brian, I added you as a reviewer here because you were so involved in the process last time. I cleaned up the step-in file, as there was a lot of repetition (a lot of the functions were copied). I am not sure if that was intentional; I can bring it back. Also, it wasn't clear what a good repeat count would be. I used 20 for the step-over and step-in tests, and stepped out just 3 times, because it exits right away.
Assignee: hsteen → ystartsev
Attachment #8708627 - Attachment is obsolete: true
Assignee: ystartsev → hsteen
Assignee: hsteen → ystartsev
Comment on attachment 8951542 [details] Bug 1154874 - Add a debugger-specific DAMP test for Talos [New Frontend] https://reviewboard.mozilla.org/r/220850/#review227472

Sorry for the delay, I wanted to see the results against a try build with a custom revision[1], here it is: https://treeherder.mozilla.org/perf.html#/comparesubtest?originalProject=try&originalRevision=81b1181fbad91eab16ec8e4b63b0f8029de63858&newProject=try&newRevision=e376a601706d729ea853c4ad138810577f3f76cc&originalSignature=f79b6a4f1f8f53326d3056f6c8008c0ff4de0a94&newSignature=f79b6a4f1f8f53326d3056f6c8008c0ff4de0a94&showOnlyConfident=1&framework=1

I couldn't explain the impact on the memory panel, especially given that it's not executed after the debugger test. I would easily ignore that, as we are not tracking memory performance. The open test is slightly slower while reload is slightly faster; I imagine it is because the reference test changed?

Overall the patch looks good, but I would like to see a couple of points addressed or discussed. Also, it would be great to push to try with different thresholds:
* stepCount is set at 20, what would it do with 200? 1000?
* we only test step out 2 times, is it possible to do it more?

[1] A custom revision allows you to ensure you are comparing against the same m-c changeset you were working with. To do that, in addition to the try push you are used to doing, you need to push another one without your patches — i.e. with the HEAD/tip changeset you were working with when you had your patches.
::: testing/talos/talos/tests/devtools/addon/content/damp.js:205 (Diff revision 4)
> +      sourceId: source.get("id"),
> +      line,
> +      column: 0
> +    };
> +    dbg.actions.addBreakpoint(location);
> +    return debuggerHelper.waitForDispatch(dbg, "ADD_BREAKPOINT");

Would it be safer against races by doing:

let onDispatched = debuggerHelper.waitForDispatch(dbg, "ADD_BREAKPOINT");
dbg.actions.addBreakpoint(location);
return onDispatched;

?

::: testing/talos/talos/tests/devtools/addon/content/damp.js:232 (Diff revision 4)
> +    return Promise.all([onLoadedScope, onStateChange]);
> +  },
> +
> +  async resume(dbg) {
> +    dbg.actions.resume();
> +    return this.waitForResumed(dbg);

Would it be safer against races by doing:

let onResumed = this.waitForResumed(dbg);
dbg.actions.resume();
return onResumed;

?

::: testing/talos/talos/tests/devtools/addon/content/damp.js:876 (Diff revision 4)
> +    await triggerPause();
> +    test.done();
> +
> +    test = this.runTest(name + ".pause.settle.DAMP");
> +    await this.waitForPendingPaints(toolbox);
> +    test.done();

I'm not sure this waitForPendingPaints is necessary; from what I can see with the profiler it resolves immediately.

::: testing/talos/talos/tests/devtools/addon/content/damp.js:909 (Diff revision 4)
> +    const stepCount = 20;
> +
> +    const pauseDebugger = async (line, file) => {
> +      await debuggerHelper.addBreakpoint(dbg, line, file);
> +      await debuggerHelper.evalInContent(dbg, tab, testFunction);
> +      await debuggerHelper.waitForPaused(dbg);

Would it be safer against races to do:

let onPaused = debuggerHelper.waitForPaused(dbg);
await debuggerHelper.evalInContent(dbg, tab, testFunction);
await onPaused;

?
::: testing/talos/talos/tests/devtools/addon/content/damp.js:913 (Diff revision 4)
> +    await debuggerHelper.evalInContent(dbg, tab, testFunction);
> +    await debuggerHelper.waitForPaused(dbg);
> +  };
> +
> +  // Initial Pause
> +  await this.pauseAndLog(`${label}.jsdebugger`, toolbox, () => pauseDebugger(21, "App.js"));

Could you inline `pauseDebugger` into `pauseAndLog`, or make it part of DebuggerHelper?

::: testing/talos/talos/tests/devtools/addon/content/damp.js:925 (Diff revision 4)
> +    // step-in-test.js has a function nesting depth of 3, with a max step count of 150;
> +    // see https://github.com/codehag/debugger-talos-example/blob/master/src/step-over-test.js for
> +    // more information
> +    await pauseDebugger(10194, "step-in-test.js");
> +
> +    await this.avgCustomTestAndLog("stepin", toolbox, () => debuggerHelper.stepIn(dbg), stepCount);

I'm wondering what the value of such a helper is. We already repeat actions in other tests, but we just record the time it takes to execute all the actions; we do not try to extract the average. Your helper allows changing the number of repetitions without it changing the test value. Maybe it is easier to do it like the other tests, like this:

let test = this.runTest(`${label}.jsdebugger.stepin`);
for (let i = 0; i <= stepCount; i++) {
  await debuggerHelper.stepIn(dbg);
}
test.done();

Also, compared to avgCustomTestAndLog, we are using runTest, which will automatically add markers to better track this subtest in perf-html. One possible drawback is that it won't wait for pending reflows. But note that waitForPendingPaints was introduced as a stopgap to highlight the tests that weren't waiting correctly for all async operations; it shouldn't be used as a generic way to wait for render/async operations. Note that with the current patch, most calls to waitForPendingPaints did not wait for any repaint.
::: testing/talos/talos/tests/devtools/addon/content/damp.js:936 (Diff revision 4)
> +    // see https://github.com/codehag/debugger-talos-example/blob/master/src/step-over-test.js for
> +    // more information
> +    await pauseDebugger(16, "step-over-test.js");
> +    await this.avgCustomTestAndLog("stepover", toolbox, () => debuggerHelper.stepOver(dbg), stepCount);
> +    await debuggerHelper.resume(dbg);
> +    await debuggerHelper.removeBreakpoints(dbg);

Here and before, isn't it safer as a general practice to remove the breakpoints before resuming? (to avoid hitting them again)
Attachment #8951542 - Flags: review?(poirot.alex)
Comment on attachment 8951542 [details] Bug 1154874 - Add a debugger-specific DAMP test for Talos [New Frontend] https://reviewboard.mozilla.org/r/220850/#review227876

Thanks Yulia for working on this! The overall approach looks good to me, so r+ from me and I'll leave the final review to Alex.

::: testing/talos/talos/tests/devtools/addon/content/damp.js:262 (Diff revision 4)
> +    return dbg.win.document.querySelector(name);
> +  },
> +
> +  async waitForLoadedScopes(dbg) {
> +    const scopes = await this.waitForElement(dbg, ".scopes-list");
> +    await this.waitUntil(() => scopes.querySelector(".tree-node[aria-level=\"1\"]"));

Nit: Could this function be collapsed to a single call to waitForElement?

```
await this.waitForElement(".scopes-list .tree-node[aria-level=\"1\"]");
```

Or, remove the waitForElement helper and make it:

```
await this.waitUntil(() => dbg.win.document.querySelector(".scopes-list .tree-node[aria-level=\"1\"]"));
```

::: testing/talos/talos/tests/devtools/addon/content/damp.js:925 (Diff revision 4)
> +    // step-in-test.js has a function nesting depth of 3, with a max step count of 150;
> +    // see https://github.com/codehag/debugger-talos-example/blob/master/src/step-over-test.js for
> +    // more information
> +    await pauseDebugger(10194, "step-in-test.js");
> +
> +    await this.avgCustomTestAndLog("stepin", toolbox, () => debuggerHelper.stepIn(dbg), stepCount);

There is one other case where we measure the average time, which is console stream logging. IIRC the idea with that was:

1) to not pollute the overall DAMP total with a really large value that would have too much sway on the overall DAMP score
2) to give a human-readable number (i.e. "how long does a single rAF take with a console.log injected in the middle of it"), so we can come back and look at the answer to that question from the data: https://treeherder.mozilla.org/perf.html#/graphs?series=mozilla-central,1418049,1,1.
As I later understood it, (1) was actually unnecessary, in that the regression/improvement tracking weights the subtests based on their typical values (but Alex, please confirm this is the case). But depending on how we want to view the data here, maybe (2) is still relevant in this case to answer the question "how long does a single step take?". I suppose we could still answer that by looking at the number and dividing by 20.
Attachment #8951542 - Flags: review?(bgrinstead) → review+
I made the changes, thanks you two! Alex, if you can take a look again that would be great!
Flags: needinfo?(poirot.alex)
Comment on attachment 8951542 [details] Bug 1154874 - Add a debugger-specific DAMP test for Talos [New Frontend] https://reviewboard.mozilla.org/r/220850/#review229242

Almost there; I think we are missing a couple of forced GCs and need to find the right balance in step counts. Otherwise, regarding recording total time vs average time: if you think it's helpful to record the average, you may tweak runTest to accept a divisor (i.e. the stepCount) and divide the total duration before recording it into the DAMP results.

::: testing/talos/talos/tests/devtools/addon/content/damp.js:873 (Diff revision 5)
> +    const dbg = await debuggerHelper.createContext(panel);
> +    const pauseLocation = { line: 22, file: "App.js" };
> +
> +    dump("Pausing debugger\n");
> +    let test = this.runTest("custom.jsdebugger.pause.DAMP");
> +    dump(`TAB ${!!tab}`);

Looks like a debugging statement landed in your final patch.

::: testing/talos/talos/tests/devtools/addon/content/damp.js:880 (Diff revision 5)
> +    test.done();
> +
> +    await debuggerHelper.removeBreakpoints(dbg);
> +    await debuggerHelper.resume(dbg);
> +    test = this.runTest(`custom.jsdebugger.pause.settle.DAMP`);
> +    test.done();

Sorry if my previous comment was unclear, but if you remove waitForPendingPaints, you should also remove the settle subtest. As-is, the settle subtest will resolve immediately and always be 0. To know whether waitForPendingPaints is of any use, you can log utils.isMozAfterPaintPending in it and see if it is true at least once.

Otherwise, here is a profile where I added some additional markers: https://perfht.ml/2FzCtAt (and reduced stepCount to 5, just to make DAMP faster to run). It helps seeing (in the marker chart) that long GCs are being done during the step subtests. That is bad, as it will introduce noise into them. I would suggest adding calls to `garbageCollect` at the end of `pauseDebuggerAndLog` and at the end of the `for (const stepTest of stepTests)` loop within `stepDebuggerAndLog`.
::: testing/talos/talos/tests/devtools/addon/content/damp.js:883 (Diff revision 5)
> +    await debuggerHelper.resume(dbg);
> +    test = this.runTest(`custom.jsdebugger.pause.settle.DAMP`);
> +    test.done();
> +  },
> +
> +  async stepDebuggerAndLog(tab, toolbox, { selectedFile, testFunction }) {

`selectedFile` looks unused.

::: testing/talos/talos/tests/devtools/addon/content/damp.js:914 (Diff revision 5)
> +    for (const stepTest of stepTests) {
> +      await debuggerHelper.pauseDebugger(dbg, tab, testFunction, stepTest.location);
> +      const test = this.runTest(`custom.jsdebugger.${stepTest.key}.DAMP`);
> +      for (let i = 0; i <= stepCount; i++) {
> +        await debuggerHelper.step(dbg, stepTest.key);
> +      }

It looks like we have to tweak stepCount to lower values. In the current state of the debugger, it takes about one minute to run each:

custom.jsdebugger.stepIn.DAMP 66,258.17
custom.jsdebugger.stepOut.DAMP 53,454.50
custom.jsdebugger.stepOver.DAMP 45,097.17

It would be better to be within a couple of seconds, in order to not slow down the overall DAMP execution time. Do not hesitate to push to try with various values.
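The suggested GC placement can be sketched against the subtest loop under review. `debuggerHelper`, `runTest` and `garbageCollect` are the real names in damp.js, but the bodies below are mocks that just record call order, so the control flow can be read (and checked) in isolation:

```javascript
// Minimal stand-ins for the DAMP harness pieces named in the review;
// each mock records its call so the ordering can be verified.
const calls = [];
const debuggerHelper = {
  async pauseDebugger() { calls.push("pause"); },
  async step() { calls.push("step"); },
  async removeBreakpoints() { calls.push("removeBreakpoints"); },
  async resume() { calls.push("resume"); },
};
function runTest(label) {
  calls.push(`start:${label}`);
  return { done() { calls.push(`done:${label}`); } };
}
async function garbageCollect() { calls.push("gc"); }

async function stepDebuggerAndLog(dbg, tab, testFunction, stepTests, stepCount) {
  for (const stepTest of stepTests) {
    await debuggerHelper.pauseDebugger(dbg, tab, testFunction, stepTest.location);
    const test = runTest(`custom.jsdebugger.${stepTest.key}.DAMP`);
    for (let i = 0; i < stepCount; i++) {
      await debuggerHelper.step(dbg, stepTest.key);
    }
    test.done();
    await debuggerHelper.removeBreakpoints(dbg);
    await debuggerHelper.resume(dbg);
    // Force a GC between subtests so collector pauses triggered by one
    // stepping test do not show up as noise in the next one's timing.
    await garbageCollect();
  }
}
```

The key point is that the GC runs after `test.done()`, so the collection cost is paid between measurements rather than inside one.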
Attachment #8951542 - Flags: review?(poirot.alex)
> > + await this.avgCustomTestAndLog("stepin", toolbox, () => debuggerHelper.stepIn(dbg), stepCount); > > There is one other case where we measure the average time which is for > console stream logging. IIRC the idea with that was: > > 1) to not pollute the overall DAMP total with a really large value that > would have too much sway on the overall DAMP score > 2) to give a human-readable number (i.e. "how long does a single rAF take > with a console.log injected in the middle of it"), so we can come back and > look at the answer to that question from the data: > https://treeherder.mozilla.org/perf.html#/graphs?series=mozilla-central, > 1418049,1,1. > > As I later understood it, (1) was actually unnecessary in that the > regression/improvement tracking weights the subtests based on their typical > values (but Alex, please confirm this is the case). Yes, there is a special median algorithm that should work even if the numbers have different orders of magnitude. But that's more theoretical than practical. The "summary" result of DAMP is just broken, no matter whether we keep subtest results in the same order of magnitude or not: it very rarely reports regressions/improvements. For now, I watch all subtests individually to work around that. My main concern about avgCustomTestAndLog was that it wasn't using runTest and so wasn't adding markers. I don't have any particular concern about recording average instead of total time, so if you prefer a human-readable value, you can keep recording the average.
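The divisor tweak Alex suggests (record the per-step average while still going through runTest, so perf-html markers are kept) could look like this. `runTest` and `results` here are minimal stand-ins for the DAMP helpers, and the `divisor` parameter is the proposed addition, not an existing API:

```javascript
// Stand-in for the DAMP results sink.
const results = [];

function runTest(label, divisor = 1) {
  const start = Date.now();
  return {
    done() {
      const total = Date.now() - start;
      // Record the per-iteration average rather than the raw total, so
      // the reported number stays human-readable ("one step costs X ms")
      // regardless of how many iterations the subtest runs.
      const value = total / divisor;
      results.push({ label, value });
      return value;
    },
  };
}

// Usage: divide the total stepping time by the step count.
const stepCount = 20;
const test = runTest("custom.jsdebugger.stepIn.DAMP", stepCount);
// ... perform stepCount step-in operations here ...
test.done();
```

This keeps the measured quantity stable when stepCount is later tuned up or down, which is exactly the property the review discussion is after.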
Flags: needinfo?(poirot.alex)
Comment on attachment 8951542 [details] Bug 1154874 - Add a debugger-specific DAMP test for Talos [New Frontend] https://reviewboard.mozilla.org/r/220850/#review229812

::: testing/talos/talos/tests/devtools/addon/content/damp.js:880 (Diff revision 5)
> +    test.done();
> +
> +    await debuggerHelper.removeBreakpoints(dbg);
> +    await debuggerHelper.resume(dbg);
> +    test = this.runTest(`custom.jsdebugger.pause.settle.DAMP`);
> +    test.done();

Thanks for the clarification re: waitForPendingPaints and the settle subtest. From your last comment it looks like those waits were unnecessary. I've added the garbageCollect call, however.
Attachment #8951542 - Attachment is obsolete: true
Ok, we have the try timing for the stepping test down to ~2.2 seconds: https://treeherder.mozilla.org/#/jobs?repo=try&revision=332d5c61bdc5cc0bf2f1123a433db9312124fb5f Other comments are also addressed. This should be ready to go!
Attachment #8955446 - Flags: review?(poirot.alex)
Comment on attachment 8955446 [details] Bug 1154874 - Add a debugger-specific DAMP test for Talos [New Frontend] https://reviewboard.mozilla.org/r/224606/#review230986

Looks good now, thanks a lot Yulia! I already asked you about main.js, but I can't see any reply about this. Do you think you can avoid its rename?

::: testing/talos/talos/tests/devtools/addon/content/damp.js:919 (Diff revision 1)
> +  },
> +
> +  async stepDebuggerAndLog(tab, toolbox, { testFunction }) {
> +    const panel = await toolbox.getPanelWhenReady("jsdebugger");
> +    const dbg = await debuggerHelper.createContext(panel);
> +    const stepCount = 5;

5 still seems too much. It impacts the other tests a lot: https://treeherder.mozilla.org/perf.html#/comparesubtest?originalProject=try&originalRevision=73bfcd923dc4&newProject=try&newRevision=8180570616876e53d3b8b9fae3b06c98cf9d894e&originalSignature=f79b6a4f1f8f53326d3056f6c8008c0ff4de0a94&newSignature=f79b6a4f1f8f53326d3056f6c8008c0ff4de0a94&showOnlyConfident=1&framework=1

And the step tests are around 2s. With only 2 iterations: https://treeherder.mozilla.org/perf.html#/comparesubtest?originalProject=try&originalRevision=73bfcd923dc43b4b46ad4c95edb2160f2664a65b&newProject=try&newRevision=b8e2790c2d9c3ffb7a374a885e97b34d7ac8ca67&originalSignature=f79b6a4f1f8f53326d3056f6c8008c0ff4de0a94&newSignature=f79b6a4f1f8f53326d3056f6c8008c0ff4de0a94&showOnlyConfident=1&framework=1

The other tests are less impacted and the step tests are around 1s.
Attachment #8955446 - Flags: review?(poirot.alex) → review+
Added 2 iterations and cleaned up the name of the custom test.
Pushed by ystartsev@mozilla.com: https://hg.mozilla.org/integration/autoland/rev/a2363b01e4ed Add a debugger-specific DAMP test for Talos [New Frontend] r=ochameau
Status: ASSIGNED → RESOLVED
Closed: 7 years ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 60
Depends on: 1444113
Product: Firefox → DevTools