Closed Bug 744902 Opened 12 years ago Closed 12 years ago

Add performance testing to Marionette

Categories

(Remote Protocol :: Marionette, defect)

Platform: x86 All
Type: defect
Priority: Not set
Severity: normal

Tracking

(Not tracked)

Status: RESOLVED FIXED
Target Milestone: mozilla16

People

(Reporter: mdas, Assigned: mdas)

References

Details

Attachments

(2 files, 5 obsolete files)

We'd like to gather performance data with Marionette. Performance data can be gathered from content or chrome, as well as within JS scripts.
This patch adds support for the following Marionette calls:

addPerfData(testSuite, testName, data) 
getPerfData()

The patch also enables the '--perf' and '--repeat=' flags for runtests.py. The --perf flag indicates that the tests being run are performance tests and that the results should be posted to datazilla. The '--repeat' flag takes the number of times each test should be repeated; it works with non-perf tests too, so it can be useful for hunting intermittent failures.
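
For illustration, here's a rough sketch of what gathering a sample might look like from the Python client side. The snake_case method names add_perf_data/get_perf_data are assumptions derived from the calls above, and the app URL is made up:

    import time
    from marionette import Marionette

    client = Marionette(host='localhost', port=2828)
    client.start_session()

    # Time a navigation and record it as one sample in the 'appStartup' suite.
    start = time.time()
    client.navigate('app://browser.gaiamobile.org/index.html')  # hypothetical URL
    elapsed_ms = (time.time() - start) * 1000

    # addPerfData(testSuite, testName, data) appends one sample; getPerfData()
    # returns everything collected so far, grouped by suite and test name.
    client.add_perf_data('appStartup', 'browserStartup', elapsed_ms)
    print client.get_perf_data()

Running such a suite with --perf --repeat=4 would then collect five samples per test and post the results.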

Performance data is gathered per suite by runtests.py. Each suite contains a number of tests, and each test can have a number of repetitions.

For example, say we want to test application startup time in Gaia. We can have an 'appStartup' suite containing tests named 'browserStartup', 'dialerStartup', etc., where each test holds a list of numbers representing startup times. If we run this suite with --repeat=4, we get 5 startup times for each test (the original run plus 4 repeats), all filed under the 'appStartup' suite.

Once submitted to datazilla, we get one mean for the entire suite (appStartup), and we can drill down from there to see the mean for a particular test (e.g. browserStartup), or each individual value in each test. We can also see outliers, get the standard deviation, and all that cool stuff from the data structure we send over.
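
Concretely, the collected structure is shaped roughly like this (values are illustrative only):

    # suite name -> test name -> list of samples, one per run
    perf_data = {
        'appStartup': {
            'browserStartup': [512, 498, 505, 501, 510],  # ms; 5 runs with --repeat=4
            'dialerStartup':  [350, 362, 349, 355, 358],
        }
    }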

The patch is missing the code to submit to datazilla. This will be updated once I can test it out.
Attachment #616684 - Flags: review?(jgriffin)
Comment on attachment 616684 [details] [diff] [review]
Performance patch and test. Missing submission code

Review of attachment 616684 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good!

::: testing/marionette/client/marionette/runtests.py
@@ +1,4 @@
>  from datetime import datetime
> +from collections import defaultdict
> +from copy import deepcopy
> +import urllib2

We don't seem to be using this import.
Attachment #616684 - Flags: review?(jgriffin) → review+
(In reply to Jonathan Griffin (:jgriffin) from comment #2)
> Comment on attachment 616684 [details] [diff] [review]
> Performance patch and test. Missing submission code
> 
> Review of attachment 616684 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> Looks good!
> 
> ::: testing/marionette/client/marionette/runtests.py
> @@ +1,4 @@
> >  from datetime import datetime
> > +from collections import defaultdict
> > +from copy import deepcopy
> > +import urllib2
> 
> We don't seem to be using this import.

Yeah, I'll be using that when we submit to datazilla.

I should also add support for taking in a conf file that tells the runner where to post the data. If no file is given, it should fall back to a perf.conf file in the runner's directory, which points to datazilla.
Also, data submission to datazilla should live in a proper Python package, so you can just hand it the data you want to submit without building the request string each time. That way the library can be updated whenever the JSON or data formats change.
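
Something along these lines, say. This is a hypothetical interface, not the actual datazilla_client API that was eventually written:

    import json
    import urllib
    import urllib2

    class PerfDataSubmitter(object):
        """Hypothetical wrapper that hides the request-building details."""

        def __init__(self, server_url):
            self.server_url = server_url

        def submit(self, dataset):
            # Build and POST the JSON payload in one place, so callers never
            # have to assemble the request string themselves.
            data = {'data': json.dumps(dataset)}
            req = urllib2.Request(self.server_url, urllib.urlencode(data))
            return urllib2.urlopen(req)
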
Attached patch full patch v0.1 (obsolete) — Splinter Review
This patch includes the submission code. I also added code to allow the performance data server to be set in an .ini file instead of being passed as a command line option.
Attachment #616684 - Attachment is obsolete: true
Attachment #630709 - Flags: review?(jgriffin)
Comment on attachment 630709 [details] [diff] [review]
full patch v0.1

Review of attachment 630709 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good!  Just a few nits:

::: testing/marionette/client/marionette/runtests.py
@@ +288,5 @@
> +                'platform': 'emulator'
> +            },
> +            'test_build' : {
> +                'name': 'B2G',
> +                'version': 'prerelease',

It kind of sucks to hardcode 'prerelease', but there's no way to determine a version of B2G right now.  We should add a method to Marionette to retrieve the version from application.ini using adb, _or_ we should let Marionette return the version along with the capabilities data returned on initial connection (probably the latter).  Then we can use that version here.  That will be consistent with talos.  Let's file a follow-up bug for this.
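
For the follow-up, the application.ini approach could be as simple as this sketch, assuming the file has already been pulled off the device with adb:

    from ConfigParser import ConfigParser  # Python 2 stdlib

    def get_b2g_version(application_ini_path):
        # application.ini records the build's version in its [App] section.
        config = ConfigParser()
        config.read(application_ini_path)
        return config.get('App', 'Version')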

@@ +310,5 @@
> +
> +        for dataset in datasets:
> +            data = {"data": json.dumps(dataset)}
> +            req = urllib2.Request(options.perfserv, urllib.urlencode(data))
> +            response = urllib2.urlopen(req)

Should probably wrap this in try/except, so that we can catch errors and print them but still allow the rest of the cleanup to occur.
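
Something like this, adapting the hunk above (names come from the surrounding patch; the exact error message is just a suggestion):

    for dataset in datasets:
        data = {"data": json.dumps(dataset)}
        req = urllib2.Request(options.perfserv, urllib.urlencode(data))
        try:
            response = urllib2.urlopen(req)
        except urllib2.URLError, e:
            # Report the failure, but let the rest of the cleanup proceed.
            print 'Error submitting perf data: %s' % e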

@@ +473,5 @@
> +                      default=None,
> +                      help='dataserver for perf data submission. Entering this value '
> +                      'will overwrite the perfserv value in any passed .ini files.')
> +    parser.add_option('--repeat', dest='repeat', action='store', type=int,
> +                      default=0, help='number of times to repeat perf test.')

I'd say remove the word 'perf' from this help text, since this will actually cause any test to be repeated, perf or not.

::: testing/marionette/client/marionette/tests/unit/test_perf.py
@@ +30,5 @@
> +# and other provisions required by the GPL or the LGPL. If you do not delete 
> +# the provisions above, a recipient may use your version of this file under 
> +# the terms of any one of the MPL, the GPL or the LGPL. 
> +# 
> +# ***** END LICENSE BLOCK ***** 

Should use MPL 2.0 license here.

::: testing/marionette/marionette-simpletest.js
@@ +39,5 @@
>      this.ok(pass, name, diag);
>    },
>  
> +  addPerfData: function Marionette__addPerfData(testGroup, testName, data) {
> +    this.perfData.addPerfData(testGroup, testName, data);

The testGroup parameter should probably be testSuite, to be consistent with marionette-perf.js.
Attachment #630709 - Flags: review?(jgriffin) → review+
Attached patch full patch v0.2 (obsolete) — Splinter Review
I addressed your comments and created the follow-up bug, Bug 763089.

I also added the datazilla client library here:
https://github.com/mozilla/datazilla_client

And updated the virtualenv code to install it.
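
For reference, submission through the client library looks roughly like this. This is a sketch only; the class and method names are assumptions and may not match the current datazilla_client API:

    from dzclient import DatazillaRequest, DatazillaResult

    result = DatazillaResult()
    result.add_testsuite('appStartup')
    result.add_test_results('appStartup', 'browserStartup', [512, 498, 505])

    # Host, project, and auth arguments here are placeholders.
    req = DatazillaRequest('https', 'datazilla.mozilla.org', 'b2g')
    req.add_datazilla_result(result)
    req.submit()
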
Attachment #630709 - Attachment is obsolete: true
Attached patch full patch v0.3 (obsolete) — Splinter Review
The previous patch had unnecessary imports.
Attachment #631554 - Attachment is obsolete: true
Comment on attachment 631555 [details] [diff] [review]
full patch v0.3

Carrying r+ forward.
Attachment #631555 - Flags: review+
https://hg.mozilla.org/mozilla-central/rev/301213b90a98
Status: NEW → RESOLVED
Closed: 12 years ago
Flags: in-testsuite+
Resolution: --- → FIXED
Attached patch full patch v0.4Splinter Review
Needed to update the datazilla calls to reflect name changes.
Attachment #631555 - Attachment is obsolete: true
I removed the datazilla dependency from the virtualenv code and added it to setup.py instead.
Attachment #632349 - Flags: review?(jgriffin)
This also adds mozrunner as a dependency, since it is needed and will install mozprocess as its own dependency.
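
The relevant part of setup.py would then read something like this (package names are illustrative; the actual requirement strings may differ):

    from setuptools import setup

    setup(
        name='marionette_client',
        # ...other metadata unchanged...
        install_requires=['mozrunner', 'datazilla'],
    )
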
Attachment #632349 - Attachment is obsolete: true
Attachment #632349 - Flags: review?(jgriffin)
Attachment #632356 - Flags: review?(jgriffin)
Comment on attachment 632356 [details] [diff] [review]
datazilla is now a dependency in setup.py v0.2

Thanks!
Attachment #632356 - Flags: review?(jgriffin) → review+
Depends on: 830169
Product: Testing → Remote Protocol