Closed Bug 492569 Opened 15 years ago Closed 12 years ago

implement a way for Talos to run perf tests that developers can add in mozilla-central

Categories

(Testing :: Talos, enhancement, P5)

Tracking

(Not tracked)

RESOLVED WONTFIX

People

(Reporter: ted, Unassigned)

References

Details

(Keywords: student-project, Whiteboard: [talos])

There are lots of things that it would be nice to measure performance on, such as "how long does the awesomebar take to display results if you have N items in history" or "how long does it take to scroll down some really long page". At the moment, AFAICT, adding new perf tests involves creating the test and then modifying the Talos config to execute it. It would be much easier for developers if there were a simple way to add tests (in some format) to mozilla-central, which Talos could then download and run on the build to gather perf numbers. This way, adding new perf tests would simply involve developers checking in test files, which would be pretty low-maintenance.

On an implementation level, we could do something similar to the package-tests target: package the perf tests and upload them alongside the build, so Talos could download and execute them. I don't know what kind of test format we'd want; maybe something similar to Mochitest, where the test files are .html/.xul?

See also bug 387174, although I think that was purely on the Talos side.
Alice says this is "probably a lot of work". I'm going to put it in Future given that no one has picked it up, but don't let that stop anyone from doing so.
Component: Release Engineering: Talos → Release Engineering: Future
Nobody has picked it up in the 2 hours the bug has been open, you mean? :)

It would be great for Alice to sketch the work here, to help whoever does pick it up in the future.  I had thought that it would be relatively straightforward (adding one test that's a meta test, and teaching it some tricks about finding subtests in the tree), so I would clearly be a bad guide to a future implementor!
I've been thinking this over.  Talos understands tests to be URLs.  New tests would be URLs that Talos can load; the pages dump information in an understood format (using JavaScript dump()), which Talos watches for, and then close the browser.  Talos handles monitoring for browser crashes/freezes, watching memory counters, and sending information to the graph server.
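
To make that concrete, here is a rough sketch of what a checked-in test page could look like. The __start_report/__end_report markers, the DOM-append workload, and the way the page shuts the browser down are illustrative guesses, not a confirmed Talos contract:

<!-- Hypothetical perf test page: times appending 10,000 rows to the DOM.
     The report markers and window.close() are placeholders for whatever
     the harness actually scans for and however it actually quits. -->
<!DOCTYPE html>
<html>
<head><title>Perf test: DOM append</title></head>
<body onload="runTest()">
<div id="target"></div>
<script>
function runTest() {
  var start = Date.now();
  var target = document.getElementById("target");
  for (var i = 0; i < 10000; i++) {
    var row = document.createElement("div");
    row.textContent = "row " + i;
    target.appendChild(row);
  }
  var elapsed = Date.now() - start;
  // dump() writes to stdout when the harness enables it;
  // the harness is assumed to watch for these markers.
  dump("__start_report" + elapsed + "__end_report\n");
  // Stand-in for however the harness closes the browser when a test is done.
  window.close();
}
</script>
</body>
</html>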

Would we be more interested in:
1 - batching new test pages together in a set that is then run sequentially (this would depend on tests not having side effects on each other), or
2 - having each new test be a standalone unit that is run and reported separately?

Option 1 would probably be easier to set up initially (Talos could pull down a set of these test pages and run them as DeveloperTests or some such).  Option 2 is more complicated to set up, and it also means the graph server has to understand how to crunch the per-test data.  We could get a lot of test-name bloat depending on how many tests we are talking about creating.
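
For option 1, the batching could be as simple as a driver page that loads each checked-in subtest in turn and reports its number. The subtest file names and the postMessage handshake below are invented for illustration; they are not an existing Talos format:

<!-- Hypothetical "DeveloperTests" driver: runs a batch of subtest pages
     sequentially in an iframe. Each subtest is assumed to postMessage its
     elapsed time (ms) back to this driver when it finishes. -->
<!DOCTYPE html>
<html>
<body>
<iframe id="frame"></iframe>
<script>
var subtests = ["awesomebar_history.html", "long_page_scroll.html"];
var index = 0;

window.addEventListener("message", function (event) {
  // Report the subtest's result, then move on to the next one.
  dump(subtests[index] + ": " + event.data + "ms\n");
  index++;
  next();
}, false);

function next() {
  if (index < subtests.length) {
    document.getElementById("frame").src = subtests[index];
  } else {
    // All subtests done; hand control back to the harness.
    window.close();
  }
}

next();
</script>
</body>
</html>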

Also, the graph server requires db updates to incorporate new test information.  So we'd need an intermediate step where the test is correctly added to the graph server - meaning a developer couldn't just check in a new test and expect the output to appear.

Basically, I think this is more difficult than unit test work because of reporting complexity.  For unit tests there is just a log with pass/fail.  Performance tests have no meaning without past history; a single performance result is effectively useless.

I'd be interested to see a new test developed that could then be used as an example case for integration with Talos.  There's nothing stopping us from incorporating new tests as it is; I've pulled in jresig's Dromaeo tests without too much pain.  Maybe we are coming at this from the wrong angle - the effort should go into designing these new tests and less into getting them into Talos.  As it is, it seems like we are looking at Talos changes when this isn't actually blocking anything.
Whiteboard: [tsnap]
Mass move of bugs from Release Engineering:Future -> Release Engineering. See
http://coop.deadsquid.com/2010/02/kiss-the-future-goodbye/ for more details.
Component: Release Engineering: Future → Release Engineering
Priority: -- → P5
Whiteboard: [talos]
Moving this bug to Testing:General for suite development, as requested by
bmoss, ctalbert. 

Once this new suite is developed, please file a separate bug in
mozilla.org/ReleaseEngineering to enable running the new suite on builds
automatically.
Component: Release Engineering → General
Product: mozilla.org → Testing
QA Contact: release → general
Version: other → unspecified
Moving to Talos component.
Component: General → Talos
Currently we have peptests and Talos as ways to measure performance.  For Talos, we are looking for benchmarks that have all resources available.  Now that Talos is easier to run locally, this doesn't seem like a valid route for us to take.
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → WONTFIX