mozmill should use manifests to keep track of and store metadata with respect to tests

Status: VERIFIED FIXED (defect)

People

(Reporter: k0scist, Assigned: k0scist)

Tracking


Firefox Tracking Flags

(Not tracked)

Details

(Whiteboard: [mozmill-2.0+][mozmill-1.5.2+][mozmill-doc-needed])

Attachments

(2 attachments, 1 obsolete attachment)

* Manifest should help designate what is and is not a test file
 * Manifest should provide canonical names for tests: bug 580057
 * Manifest should indicate which tests are restart tests and which are not, and can tie restart tests together so they don't have to be named test1, test2, etc.
 * Use the same manifest format as the universal manifest (including parsing code).

See https://wiki.mozilla.org/Auto-tools/Projects/Mozmill/Mozmill_2.0#Manifest_support
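As an illustration of the points above, a hypothetical manifest might look like the following (file names and keys here are made up for illustration; the actual format is whatever ManifestDestiny defines):

```ini
; hypothetical manifest.ini -- paths are illustrative
[testToolbar/testBackButton.js]

; restart tests carry a type key so the harness knows how to invoke them
[restartTests/testExtensionInstall/test1.js]
type = restart

; disabled tests stay listed, with a reason, instead of being deleted
[testPrivateBrowsing/testWindowTitle.js]
disabled = intermittent failure, see bug NNNNNN
```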
Blocks: 580057
Blocks: 585112
Manifests should also indicate which version of (e.g.) Firefox should be used.  It should be hard to run tests with the wrong Firefox version.
Generally agree!  Re: versions, let's make sure to think out maintenance over time when we implement that.  I agree it should be hard to do the wrong thing, but I also don't want to sign up for huge housekeeping on every release.
(In reply to comment #2)
> Generally agree!  Re: versions, let's make sure to think out maintenance over
> time when we implement that.  I agree it should be hard to do the wrong thing,
> but I also don't want to sign up for huge housekeeping on every release.

An easy way (not necessarily correct, but a way) to implement this, which would require minimal maintenance, is just to have the manifest note the (e.g.) Firefox version once, so each of the branches in the mozmill-tests would note e.g. 3.6, 4.0, etc.
Yeah, I'd definitely want a default/override system there, where I can mark a whole branch and then override specific tests as necessary.  We'll have to take a look at how the other testsuites do it, but I suspect as long as we're only tracking a minimum version (e.g. >= 4.0b5) then things work out ok.
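For instance, a minimum-version constraint like `>= 4.0b5` could be checked with something as small as the following sketch (the parsing is deliberately crude, real Mozilla version comparison has more cases, and none of this code exists in mozmill; it is only here to show the shape of the idea):

```python
import re

def parse_version(v):
    """Crude version parser: '4.0b5' -> (4, 0, 'b', 5).

    Real Mozilla version comparison handles more cases (pre, rc, +, etc.);
    this sketch only covers simple numeric/alpha segments.
    """
    parts = re.findall(r'\d+|[a-z]+', v.lower())
    return tuple(int(p) if p.isdigit() else p for p in parts)

def satisfies_minimum(app_version, constraint):
    """Check a '>= X' style minimum-version constraint from a manifest."""
    assert constraint.startswith('>='), "only minimum constraints in this sketch"
    minimum = constraint[2:].strip()
    return parse_version(app_version) >= parse_version(minimum)
```

Tracking only a minimum, as suggested above, keeps the housekeeping small: a branch-level default plus per-test overrides.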
(In reply to comment #1)
> Manifests should also indicate which version of (e.g.) Firefox should be used. 
> It should be hard to run tests with the wrong FF version

Why? We have branches for it. I'm not sure why we should make it even more complicated to setup new folders with tests or even include new tests. Can you be a bit clearer about which versions you are talking about? Different branches or by specifying a build id?
(In reply to comment #5)
> (In reply to comment #1)
> > Manifests should also indicate which version of (e.g.) Firefox should be used. 
> > It should be hard to run tests with the wrong FF version
> 
> Why? We have branches for it. I'm not sure why we should make it even more
> complicated to setup new folders with tests or even include new tests. Can you
> be a bit clearer about which versions you are talking about? Different branches
> or by specifying a build id?

It's really an optional feature.  It is also independent of the existing branching structure: in that case, each branch would have a top-level manifest that would specify the version it is for.  If a test is intended for a particular version of Firefox, it should be marked as such so that user error does not run it accidentally.
Depends on: 615144
So, getting closer to a working spec for manifests, this bug comes into focus.  To summarize: a manifest (or several) consists of a list of tests, which are really dictionaries containing data about the test.  Since each harness will be different, it is (mostly) up to the harness to determine what metadata is appropriate, though things like 'path', 'name', and 'disabled' are pretty generic and should be used in mozmill.

(This isn't really the appropriate place to discuss the wider manifest format, though join #ateam and ping either jmaher or myself(jhammel) for more details or send to my bugmail.  I'm just summarizing.)

As a mozmill-specific thing, one key will be 'type' with the value 'restart' (i.e. `type = restart`).  For manifest runs, this makes it known whether to invoke MozMill or MozmillRestart.

... However, currently MozMill and MozmillRestart are separate classes.  They have almost completely different running logic, and the tests run, along with the passes and failures, are stored with the instance.  This means that, for instance, you could do

MozMill(<args>).run(tests) # yes, I know I'm forgetting .start()...this is a cartoon
MozmillRestart(<args>).run(restart_tests)

The results will be accumulated and reported separately.  In other words, if you run 2 restart tests and 2 normal tests, there is no good way of saying "I ran four tests".

There are other reasons to reconsider this class structure too.  Essentially, Mozmill and MozmillRestart should only be in the test running and cleanup business and do the rest by event-dispatching.

This isn't meant to warn, frighten, or otherwise beguile.  I'll play with the code and see if I can get a better handle on things.  It's just a heads-up progress report on this bug.

Another note: for running non-manifest tests, you'll still specify --restart to run the restart tests and omit it to run the normal tests.  It is not worth the time to add complicated implicit logic there.

Note also that I'm talking about manifests on master.  For 1.5.X (2, likely) manifests may be introduced, but they will be of a much more limited form, likely only noting the path of the tests, whether the test is disabled (so that they may quickly be turned on or off to fix e.g. an m-c bustage), and perhaps what os(es) it runs/doesn't run on.  For this version of mozmill, the normal tests will be run with `mozmill` and restart tests will be run with `mozmill-restart`
A current version of manifests for mozmill 1.5 is available at https://github.com/k0s/mozmill/tree/manifests ; it requires http://k0s.org/mozilla/hg/ManifestDestiny to be installed (not yet included in setup.py as it's not on PyPI).
Do we already have a spec for how the manifest will look, regarding m-c logic and usage from our own repository?
So this is still a work in progress.  It probably makes sense to commit in a few pieces here.  For those playing along at home, there is a version at https://github.com/k0s/mozmill/tree/manifests-master that does work successfully

The very brief rundown:
 - uses manifests
 - discriminates between normal and restart tests; they can be run together
 - improves stopping logic, making the cleanup calls idempotent and moving them close to what is being cleaned up
 - introduces a new results class that accumulates information and dispatches to the handlers; this is necessary because if you're using both MozMill and MozMillRestart you need a way to aggregate the results

What this doesn't do (again, the very brief):
 - pass more test information around; to MozMill + MozMillRestart, tests are still paths
 - clean up the mess of restart tests -- again, these should be handled with manifests instead of `test1.js, test2.js`, etc.
 - add a little polish

Also, a minor item: I can't figure out how to discriminate on reports now.  Previously we had report_type based on MozMill or MozMillRestart; do we want to send two different reports, or one report?

This patch isn't quite ready, but when it's done (probably Monday), I'm inclined to land it and ticket the other issues separately.

Also, this is all for master;  see comment 8 for the much less pervasive 1.5 version
Whiteboard: [mozmill-2.0?] → [mozmill-2.0?][mozmill-1.5.2?]
I've created sample manifests for what is now on m-c.  You can find them here: http://k0s.org/mozilla/hg/mozilla-central-patches/file/844fad38b068/mozmill-manifests

These were created with the ManifestDestiny python package.  After this package is installed, the basic manifest structure can be created by:

create-manifest . --ignore 'test-files' --pattern '*.js' --in-place manifest.ini

This will not fill in the `type = restart` for the restartTests; that has to be added by hand.  It just gives you a skeleton to start from.
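For concreteness, the generated skeleton is just bare sections, one per matched file (paths here are illustrative, not the actual m-c layout):

```ini
; skeleton as generated by create-manifest (illustrative paths)
[testAwesomeBar/testAccessLocationBar.js]

[restartTests/testMasterPassword/test1.js]
```

after which `type = restart` is added by hand under each restartTests section.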
Jeff, can you give an example of how such a manifest will look?
(In reply to comment #9)
> Do we already have a spec how the manifest will look like re m-c logic and
> usage from our own repository?

See http://k0s.org/mozilla/hg/mozilla-central-patches/file/844fad38b068/mozmill-manifests for m-c
(In reply to comment #12)
> Jeff, can you give an example how such a manifest will look like?

See above link
Posted patch manifests on master (obsolete) — Splinter Review
Add manifests to master.  Currently restart tests are all run after normal tests.  Not sure if that's okay.  Requires a new package, ManifestDestiny.  Doesn't alter the restart test logic.  The restart tests should be in a manifest but aren't.  This can be tackled in a follow-up bug
Attachment #495626 - Flags: review?(ctalbert)
If we're generally happy with the 1.5.2 solution, I should put that up for review as well.  Any yays or nays?  This is mostly asking: 

 - is the (.ini) format adequate?
 - currently the manifests just run through all tests not marked as disabled.  Is this okay for 1.5.2?
Is it ok to use a different format of manifests, especially for m-c? The output produced by ManifestDestiny is different from what we currently have in other test suites.

Further, is it possible to add more meta info? IMO it would make sense to have an extra boolean flag (similar to existing manifests for reftests etc.) to mark a test as enabled/disabled in the manifest itself. Removing a file can cause a situation where the test gets forgotten.

Also, how would we be able to differentiate between disabled tests on buildbot and in our repository? It would be really helpful to maintain the state of a test in a single location.
(In reply to comment #17)
> Is it ok to use a different format of manifests, especially for m-c? The output
> which gets produced by ManifestDestiny is different to what we currently have
> in other test suites.

I'm not really sure what this is asking.  Please stop by #ateam if you want to discuss the manifest format, or I guess we could do it in #mozmill too.  This bug is not about the universal manifest format.

> Further, is it possible to add more meta info? 

Yes, you can add arbitrary key, value pairs to any test.

> IMO it would make sense to have
> an extra boolean flag (similar to existing manifests for reftests etc.) to mark
> a test as enabled/disabled in the manifest itself. Removing a file can cause a
> situation that the test will get forgotten.

This already exists:  use the `disabled` key in a test.

> Also, how would we be able to differentiate between disabled tests on buildbot
> and in our repository? It would be really helpful to maintain the state of a
> test in a single location.

I assume this means: 'disabled tests on m-c' vs buildbot.  

This isn't a question I can particularly answer, as there are multiple ways of solving the problem.  I would probably keep a branch/new repo of m-c tests that would be the state of things on m-c.  Alternatively, we can have different manifests in the same tree.  I don't know.  There are several solutions.  This is mostly a policy decision, I think
(In reply to comment #18)
> > Is it ok to use a different format of manifests, especially for m-c? The output
> > which gets produced by ManifestDestiny is different to what we currently have
> > in other test suites.
> 
> I'm not really sure what this is asking.  Please stop by #ateam if you want to
> discuss the manifest format, or I guess we could do it in #mozmill too.  This
> bug is not about the universal manifest format.

There is nothing to discuss. I just wanted to know if it is ok to introduce a new format of manifests. I haven't read about it on this bug yet.

> > Further, is it possible to add more meta info? 
> 
> Yes, you can add arbitrary key, value pairs to any test.

Do you mean test or manifest? My question was more targeted at manifests, which is what this bug is about.

> This already exists:  use the `disabled` key in a test.

So the disabled state exists on the test, such that an execution via "-t" can also take advantage of it? The example you pointed out above doesn't show how such a test would look; it would be nice to get a short snippet. If we don't handle this flag in the manifest file itself, we really have to take care when copying files over to m-c.

> I assume this means: 'disabled tests on m-c' vs buildbot.  

Nope. mozmill-tests vs. buildbot.

> This isn't a question I can particularly answer, as there are multiple ways of
> solving the problem.  I would probably keep a branch/new repo of m-c tests that
> would be the state of things on m-c.  Alternatively, we can have different
> manifests in the same tree.  I don't know.  There are several solutions.  This
> is mostly a policy decision, I think

A second manifest sounds good, as long as the disabled state could be handled in the manifest and not the test.
> Do you mean test or manifest? My question was more targeted to manifests, as
> what this bug is about.

To any test section in the manifest.  The README and tests in http://k0s.org/mozilla/hg/ManifestDestiny illustrate how this is done.
 
> So it exists in the test that an execution via "-t" can also take advantage of
> the disabled state? 

There are two ways to run tests in the patch: -t, for a single test file/directory, which adds a very minimal set of metadata for the test (for internal consumption); and -m, which runs tests from a manifest.

> The example, you have pointed out above, doesn't show how
> such a test would look like. Would be nice to get a short snippet. If we don't
> handle this flag in the manifest file itself, we really have to take care when
> copying files over to m-c.

To disable the `testSomething.js` test in the manifest, do

[testSomething.js]
disabled = 

> Nope. mozmill-tests vs. buildbot.

Then I am not sure what the question is asking.
 
> A second manifest sounds good, as long as the disabled state could be handled
> in the manifest and not the test.

It can.  If useful an e.g. `--run-disabled` could be added to mozmill.
(In reply to comment #17)
> Is it ok to use a different format of manifests, especially for m-c? The output
> which gets produced by ManifestDestiny is different to what we currently have
> in other test suites.
> 
The format is experimental.  It can be easily changed due to the way that we wrote the parser and is also being used as the auto-generated manifest for xpcshell.  The only other test suite we have that uses manifests are reftests which use a very specific and highly customized manifest format.

> Further, is it possible to add more meta info? IMO it would make sense to have
> an extra boolean flag (similar to existing manifests for reftests etc.) to mark
> a test as enabled/disabled in the manifest itself. Removing a file can cause a
> situation that the test will get forgotten.
I don't understand what you're asking or what problem you're trying to solve.
> 
> Also, how would we be able to differentiate between disabled tests on buildbot
> and in our repository? It would be really helpful to maintain the state of a
> test in a single location.
You don't. That kind of distinction is wildly out of scope for a manifest. This is a manifest for files on disk.  The manifest simply should not ever care whether a file is a buildbot file or a mozmill-test file.  They are just locations on disk.
Apologies for the bug spam.  And yes, this is something we are certainly taking for 1.5.2.
Whiteboard: [mozmill-2.0?][mozmill-1.5.2?] → [mozmill-2.0?][mozmill-1.5.2+]
The much more limited patch for mozmill 1.5
Attachment #495719 - Flags: review?(ctalbert)
Assignee: nobody → jhammel
Comment on attachment 495626 [details] [diff] [review]
manifests on master

The Review for the master changes...
I like it quite a bit, I made some nits, r+ with those.
>+class TestResults(object):
I like this class a lot.  
>+
>+    def events(self):
>+        """events the MozMill class will dispatch to"""
>+        return {'mozmill.endTest': self.endTest_listener}
>+
>+    def stop(self, handlers, fatal=False):
>+        """do final reporting and such"""
>+        self.endtime = datetime.utcnow()
>+
>+        # handle stop events
>+        for handler in handlers:
>+            if hasattr(handler, 'stop'):
>+                handler.stop(self, fatal)
>+
>+    ### event listener
>+
>+    def endTest_listener(self, test):
>+        self.alltests.append(test)
>+        if test.get('skipped', False):
>+            self.skipped.append(test)
>+        elif test['failed'] > 0:
>+            self.fails.append(test)
>+        else:
>+            self.passes.append(test)
Why don't we have a "startTest" listener? Is this something we need to add to jsbridge?  Can you file a follow-on bug for it?

>-        # test parameters: filled in from event system
>-        self.passes = [] ; self.fails = [] ; self.skipped = []
>-        self.alltests = []
>-
>+        #
No blank comments ^, please remove

>@@ -269,8 +294,11 @@ class MozMill(object):
>         }]
>         test['passed'] = 0
>         test['failed'] = 1
>-        self.alltests.append(test)
>-        self.fails.append(test)
>+
>+        # send to self.results
>+        # XXX bad touch
>+        self.results.alltests.append(test)
>+        self.results.fails.append(test)
Why can't we do this in callbacks registered in self.results?  We don't have to fix it here but let's be sure we've got bugs filed for that. And make a note that the bug to add in this ability fixes this case.

>         # Reset the profile.
>+        # XXX this is awful logic; profile should have a method just to
>+        # clone :(
Isn't this mozprofile? Make a clone method or file a follow-on bug.  This patch is pretty long, I'd recommend a new bug and separate patch. I don't think these comments are really worthwhile other than placeholders and I think a bug is a far better placeholder, so let's do that instead.

>-        # create a mozmill
>-        self.mozmill = self.mozmill_class(jsbridge_port=int(self.options.port),                                  
>-                                          jsbridge_timeout=self.options.timeout,
>-                                          handlers=event_handlers
>-                                          )
>+        # read tests from manifests (if any)
>+        self.manifest = manifests.TestManifest(manifests=self.options.manifests)
What does this return if you don't have a manifest?
And how does self.manifest.tests get created? (see next comment)
I've also seen self.manifest.active_tests...how are those created?  I only see it being used in run, but it is never initialized and no tests are ever added to it.  How does that work?
> 
>         # expand user directory and check existence for the test
>-        self.tests = []
>         for test in self.options.tests:
>-            test = os.path.abspath(os.path.expanduser(test))
>+            test = os.path.expanduser(test)
>             if not os.path.exists(test):
>                 raise Exception("Not a valid test file/directory")
>-            self.tests.append(test)
>+
>+            # construct metadata for test
>+            test_dict = { 'test': test, 'path': os.path.abspath(test) }
>+            if self.options.restart:
>+                test_dict['type'] = 'restart'
>+            
>+            self.manifest.tests.append(test_dict)
Because if self.manifest is None, then this ^ is going to throw.

>     def run(self):
>+
>+        # find the tests to run
>+        normal_tests = []
>+        restart_tests = []
>+        for test in self.manifest.active_tests():
where is active_tests being set?  How does it know what to return?

>+            if test.get('type') == 'restart':
>+                restart_tests.append(test['path'])
>+            else:
>+                normal_tests.append(test['path'])
>+            # TODO: should probably pass the whole test, maybe?
>+
>+        # create a place to put results
>+        results = TestResults()
There is a TestResults object on the mozmill class isn't there?  Why aren't we using that?  Why create a new one?
   
>+            try:
>+                if normal_tests:
>+                    self.run_tests(MozMill, normal_tests, runner, results)
>+                
>+                if restart_tests:
>+                    self.run_tests(MozMillRestart, restart_tests, runner)
>+
>+            except Exception, e:
>+                runner.cleanup() # cleanly shutdown
>+                raise
>+
>+            # do whatever reporting you're going to do
>+            results.stop(self.event_handlers)
>+
>+            # exit
>+            if e:
>+                raise
>+            if results.fails:
>+                sys.exit(1)
>+
>+            # TODO: could return results
What are you referring to?  Do you mean that this function should return the results object?  That would make sense.  Why not do it?
> 
    
>+            raise Exception("no friggin tests")
Cute, but no.  Raise an exception with a real statement.
>+
>+#         else:
>+#             # TODO: document use case
>+#             # probably take out of this function entirely
>+#             if self.options.shell:
>+#                 self.start_shell(runner)
>+#             else:
>+#                 try:
>+#                     if not hasattr(runner, 'process_handler'):
>+#                         runner.start()
>+#                     runner.wait()
>+#                 except KeyboardInterrupt:
>+#                     runner.stop()
>+
Just do it, remove this code.

> def enum(*sequential, **named):
>     # XXX to deprecate
>     enums = dict(zip(sequential, range(len(sequential))), **named)
>diff --git a/mozmill/mozmill/logger.py b/mozmill/mozmill/logger.py
>index ddbb3ba..20d51ca 100644
>--- a/mozmill/mozmill/logger.py
>+++ b/mozmill/mozmill/logger.py

>   def report_type(self):
>+    return 'NotImplementedError'
>+    # XXX NOT SURE WHAT TO DO HERE
>+    # if you're reporting across mozmill and mozmill-restart tests ...
>+    # maybe this should live with the test metadata? ::shrug::
I think that where we want to go is that there will not be a difference between a mozmill test type (as far as we are concerned for reporting). We will no longer have a separate entry point for calling a mozmill-restart test and therefore we can simply run all the tests in one fell pass that includes both types.  The results will still be accumulated and will all be reported as one. I think that there is no real use case for reporting on test type.  If there is, then the caller can use two different manifests or can implement their own results handler and split the data themselves.  Perhaps leave the method on this interface and have this method always return NotImplementedError?  (This way it's there and available should anyone extending the code want to use it in the future).  I wonder if that's even necessary for python though...
Attachment #495626 - Flags: review?(ctalbert) → review+
Comment on attachment 495719 [details] [diff] [review]
manifests on hotfix-1.5.2

Review for 1.5.2...

>diff --git a/mozmill/mozmill/__init__.py b/mozmill/mozmill/__init__.py
>index e00be65..a4d197a 100644
>--- a/mozmill/mozmill/__init__.py
>+++ b/mozmill/mozmill/__init__.py
>@@ -49,6 +49,8 @@ import traceback
> import urllib
> import urlparse
> 
> 
>+    def find_tests(self, tests, files=None):
>+        if files is None:
>+            files = []
>+        for test in tests:
>+
>+            # tests have to be absolute paths, for some reason
>+            test = os.path.abspath(test)
I believe this is due to the way that the tests are loaded by javascript - js requires us to use an absolute path.  Please update the comment and the other one along the same lines.
>     def run_tests(self, tests, sleeptime=4):
>         """
>         run test files or directories
>@@ -253,9 +278,7 @@ class MozMill(object):
>         - sleeptime : initial time to sleep [s] (not sure why the default is 4)
What happens if we remove the 4?

r+ once we understand the impact of the 4.  If it has no effect we should eliminate it.  If it does have some effect then we can leave it and fix it with the refactoring in 2.0 (indeed, I think it is already fixed).
Attachment #495719 - Flags: review?(ctalbert) → review+
(In reply to comment #24)
> Comment on attachment 495626 [details] [diff] [review]

> Why don't we have a "startTest listener? Is this something we need to add to
> jsbridge?  Can you file a follow-on bug for it?

Do you mean for the TestResults class or for mozmill in general?  We do have a setTest event, which is in effect the startTest event (see https://github.com/mozautomation/mozmill/blob/master/mozmill/mozmill/logger.py#L56 ).  But currently nothing consumes it besides the reporter, so there are no other listeners.

> >+        #
> No blank comments ^, please remove

I've added additional comments noting the persistent data and the shutdown conditions:

        # persisted data
        self.persisted = {}

        # shutdown parameters
        self.shutdownModes = enum('default', 'user_shutdown', 'user_restart')
        self.currentShutdownMode = self.shutdownModes.default
        self.userShutdownEnabled = False

Acceptable?

> Why can't we do this in callbacks registered in self.results?  We don't have to
> fix it here but let's be sure we've got bugs filed for that. And make a note
> that the bug to add in this ability fixes this case.

Currently, events are only sent from JS bridge -> python.  The report_disconnect function is called by python when it receives a JSBridgeDisconnect exception.  I'm not sure what a good fix is.

Do we want to file a bug on sending events from python?  I don't know how this would work, but would be glad to file it if that's something worth considering.
 
> >         # Reset the profile.
> >+        # XXX this is awful logic; profile should have a method just to
> >+        # clone :(
> Isn't this mozprofile? Make a clone method or file a follow-on bug.  This patch
> is pretty long, I'd recommend a new bug and separate patch. I don't think these
> comments are really worthwhile other than placeholders and I think a bug is a
> far better placeholder, so let's do that instead.

Filed: https://bugzilla.mozilla.org/show_bug.cgi?id=617790

> >+        self.manifest = manifests.TestManifest(manifests=self.options.manifests)
> What does this return if you don't have a manifest?

It will still return a manifest, just an empty one: http://k0s.org/mozilla/hg/ManifestDestiny/file/2c81af75ebc5/manifestdestiny/manifests.py#l177

> And how does self.manifest.tests get created? (see next comment)

See above link.  It is initially an empty list.  

> I've also seen self.manifest.active_tests...how are those created?  I only see
> it being used in run, but it is never initialized and no tests are ever added
> to it.  How does that work?

Currently, active_tests only filters out tests with a `disabled` key: http://k0s.org/mozilla/hg/ManifestDestiny/file/2c81af75ebc5/manifestdestiny/manifests.py#l291  In the future, as noted in the comment, it can/should also filter down to e.g. only the tests that will run on your machine.
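In spirit, that filtering is just the following (a simplified sketch; the linked ManifestDestiny code is authoritative):

```python
def active_tests(tests):
    """Return only the tests without a 'disabled' key, per the
    behavior described above.  Each test is a metadata dict."""
    return [t for t in tests if 'disabled' not in t]
```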
     
> >+            self.manifest.tests.append(test_dict)
> Because if self.manifest is None, then this ^ is going to throw.
Nope, it is not None, see above (I've also tested this and it works fine).

> where is active_tests being set?  How does it know what to return?

See above and http://k0s.org/mozilla/hg/ManifestDestiny/file/2c81af75ebc5/manifestdestiny/manifests.py#l291


> >+        # create a place to put results
> >+        results = TestResults()
> There is a TestResults object on the mozmill class isn't there?  Why aren't we
> using that?  Why create a new one?

No, TestResults doesn't live on the mozmill class.   The purpose of TestResults, as we see here, is to gather data from multiple runs.  With this patch, you can run restart and non-restart tests, as denoted by `type=restart` in the manifest, but since previously the results were per class (e.g. lived on MozMill or MozMillRestart instances), there was no good way to aggregate them.  With TestResults, you can run however many MozMills you want and have all of your results in the same place.
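A minimal sketch of that aggregation idea, modeled on the `endTest_listener` quoted in the review above (simplified; the class name and listener follow the patch, the rest is trimmed):

```python
class TestResults(object):
    """Accumulate results across any number of MozMill/MozMillRestart runs."""

    def __init__(self):
        self.passes = []
        self.fails = []
        self.skipped = []
        self.alltests = []

    def endTest_listener(self, test):
        # mirrors the listener in the patch: every test is recorded,
        # then bucketed as skipped, failed, or passed
        self.alltests.append(test)
        if test.get('skipped', False):
            self.skipped.append(test)
        elif test['failed'] > 0:
            self.fails.append(test)
        else:
            self.passes.append(test)
```

With a single `TestResults` instance shared by both runs, "I ran four tests" is just `len(results.alltests)`.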
 
> >+            # TODO: could return results
> What are you referring to?  Do you mean that this function should return the
> results object?  That would make sense.  Why not do it?

Done.

> 
> >+            raise Exception("no friggin tests")
> Cute, but no.  Raise an exception with a real statement.

Done.  If we're happy with deleting the case below, which I'd be happy to implement as a different console_script (just couldn't get consensus around this...also, I never use mozmill without tests), I'll put the test checking above with a better message.

> Just do it, remove this code.

Done (gladly).


> >   def report_type(self):
> >+    return 'NotImplementedError'
> >+    # XXX NOT SURE WHAT TO DO HERE
> >+    # if you're reporting across mozmill and mozmill-restart tests ...
> >+    # maybe this should live with the test metadata? ::shrug::
> I think that where we want to go is that there will not be a difference between
> a mozmill test type (as far as we are concerned for reporting). We will no
> longer have a separate entry point for calling a mozmill-restart test and
> therefore we can simply run all the tests in one fell pass that includes both
> types.  The results will still be accumulated and will all be reported as one.
> I think that there is no real use case for reporting on test type.  If there
> is, then the caller can use two different manifests or can implement their own
> results handler and split the data themselves.  Perhaps leave the method on
> this interface and have this method always return NotImplementedError?  (This
> way it's there and available should anyone extending the code want to use it in
> the future).  I wonder if that's even necessary for python though...

K. I'll leave it.  IIRC, :whimboo says it's not used.

I'll upload another patch
cleanup from comment 24 ; carrying forward r+
Attachment #495626 - Attachment is obsolete: true
Attachment #496344 - Flags: review+
(In reply to comment #26)
> (In reply to comment #24)
> > Comment on attachment 495626 [details] [diff] [review] [details]
> 
> > Why don't we have a "startTest listener? Is this something we need to add to
> > jsbridge?  Can you file a follow-on bug for it?
> 
> Do you mean for the TestResults class or for mozmill in general?  We do have a
> setTest event, which is in effect the startTest event (see
> https://github.com/mozautomation/mozmill/blob/master/mozmill/mozmill/logger.py#L56
> ).  But currently nothing consumes it besides the reporter, so there are no
> other listeners.
I was thinking that if TestResults is a true result handler, then it would have a "startTest" event or a current test so that it could gather test timing.  It sort of has that with the two datetime.utcnow() bits you put in there, which is where I got the idea from.
>         # shutdown parameters
>         self.shutdownModes = enum('default', 'user_shutdown', 'user_restart')
>         self.currentShutdownMode = self.shutdownModes.default
>         self.userShutdownEnabled = False
> 
> Acceptable?
looks good

> 
> > Why can't we do this in callbacks registered in self.results?  We don't have to
> > fix it here but let's be sure we've got bugs filed for that. And make a note
> > that the bug to add in this ability fixes this case.
> 
> Currently, events are only sent from JS bridge -> python.  The
> report_disconnect function is called by python when it receives a
> JSBridgeDisconnect exception.  I'm not sure what a good fix is.
I don't think that's true in this case.  Python knows exactly when the test starts because python sends the call to jsbridge to start a test.  When Python does that, it could also register something with testresults so test results could log the time the test started and then it could also log the time the test stopped.  I think being able to track timings like that is going to need to be a part of a results class, and maybe even a generic handler class.
> 
> Do we want to file a bug on sending events from python?  I don't know how this
> would work, but would be glad to file it if that's something worth considering.

We already send commands from Python to JS.  But in this case I don't think we really need to.
> 
> > >         # Reset the profile.
> > >+        # XXX this is awful logic; profile should have a method just to
> > >+        # clone :(
> > Isn't this mozprofile? Make a clone method or file a follow-on bug.  This patch
> > is pretty long, I'd recommend a new bug and separate patch. I don't think these
> > comments are really worthwhile other than placeholders and I think a bug is a
> > far better placeholder, so let's do that instead.
> 
> Filed: https://bugzilla.mozilla.org/show_bug.cgi?id=617790
> 
Thanks, (nit) now remove the comment.

> > >+        self.manifest = manifests.TestManifest(manifests=self.options.manifests)
> > What does this return if you don't have a manifest?
> 
> It will still return a manifest, just an empty one:
> http://k0s.org/mozilla/hg/ManifestDestiny/file/2c81af75ebc5/manifestdestiny/manifests.py#l177
> 
Ah, I haven't looked at all that code yet.
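For my own notes, the contract as I understand it boils down to something like this toy stand-in (the real TestManifest lives in ManifestDestiny at the link above; only the `manifests=` kwarg is taken from the patch, the rest is illustrative):

```python
class TestManifest:
    """Toy stand-in for manifestdestiny's TestManifest, illustrating
    the contract discussed above: constructing with no manifests still
    yields a usable (empty) manifest rather than None or an error."""

    def __init__(self, manifests=None):
        self.tests = []
        for path in (manifests or []):
            self.read(path)

    def read(self, path):
        # the real implementation parses the .ini file; omitted here
        pass

manifest = TestManifest(manifests=None)
assert manifest.tests == []  # an empty manifest, not an error
```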

> > >+        # create a place to put results
> > >+        results = TestResults()
> > There is a TestResults object on the mozmill class, isn't there?  Why aren't we
> > using that?  Why create a new one?
> 
> No, TestResults doesn't live on the mozmill class.   The purpose of
> TestResults, as we see here, is to gather data from multiple runs.  With this
> patch, you can run restart and non-restart tests, as denoted by `type=restart`
> in the manifest, but since previously the results were per class (e.g. lived on
> MozMill or MozMillRestart instances), there was no good way to aggregate them. 
> With TestResults, you can run however many MozMills you want and have all of
> your results in the same place.
Ah ok, that makes sense then.  For some reason I thought that mozmill had a self.results object that it was using to hold results.

Looks good.
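For anyone following along, here's a rough sketch of what a manifest entry marking a restart test could look like (ManifestDestiny .ini format; the paths are made up, but `type = restart` is the key the patch checks):

```ini
; plain tests need no extra metadata
[test_foo.js]

; restart tests are tied together via metadata instead of naming
; conventions like test1, test2, ...
[restartTests/testSomething/test1.js]
type = restart
```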
(In reply to comment #28)

> I was thinking that if TestResults is a true result handler, then it would have
> a "start" test or a current test so that it could gather test timing.  It sort
> of has that with the two date.now() bits you put in there, which was where I
> got the idea from.

I just did a functional port of existing results functionality.  I've ticketed the start/stop per test in bug 617824 (not sure if this is a dupe).  In general we can extend the TestResults class to gather more data, I just took what was already there.

> I don't think that's true in this case.  Python knows exactly when the test
> starts because python sends the call to jsbridge to start a test.  When Python
> does that, it could also register something with testresults so test results
> could log the time the test started and then it could also log the time the
> test stopped.  I think being able to track timings like that is going to need
> to be a part of a results class, and maybe even a generic handler class.

Again, see bug 617824. If we're talking about Python talking to Python, though, this can of course be done.  Currently MozMill is just an event dispatcher for JSBridge events (with results handling the special .stop event and passing itself).  We could do more.  I'm pretty sure it's out of scope here, but I'd encourage filing a bug for this.

> > Filed: https://bugzilla.mozilla.org/show_bug.cgi?id=617790
> > 
> Thanks, (nit) now remove the comment.

Done.  I reference the bug in the comment.
 
Will push shortly.
Whiteboard: [mozmill-2.0?][mozmill-1.5.2+] → [mozmill-2.0+][mozmill-1.5.2+]
filed follow-up bug 618113 about restoring the functionality of running mozmill with no tests
(In reply to comment #25)
> >@@ -253,9 +278,7 @@ class MozMill(object):
> >         - sleeptime : initial time to sleep [s] (not sure why the default is 4)
> What happens if we remove the 4?
> 
> r+ once we understand the impact of the 4.  If it has no effect we should
> eliminate it.  If it does have some effect then we can leave it and fix it with
> the refactoring in 2.0 (indeed, I think it is already fixed).

I've run it on my computer with sleeptime=0 for restart + non-restart tests with no issue (just using the m-c tests, but the tests themselves should be incidental to this issue). Then again, I have a fast computer, so I'm not sure if there would be any difference on a slower computer (even if anything*0 = 0).

I don't know if I understand the issue, or why it was 4 to begin with.  Neither the 4 nor the comment were part of this patch.  Should I push with sleep=0? Or should I push as-is and file a follow-up bug?
Pushed the 1.5.2 patch as-is; filed follow-up bug 618920 for the sleep=4 issue
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
(In reply to comment #33)
> Pushed the 1.5.2 patch as-is; filed follow-up bug 618920 for the sleep=4 issue

Jeff, can you please always mention the correct changeset id? thanks.
Sorry, I forgot to rebase when I merged, so the fix ended up in several changesets.  I didn't realize this until I pushed, otherwise I would have fixed it. If we could add a push hook on GitHub (not sure if this is possible), it might be a good idea to collapse multiple changesets if we want 1 bug == 1 push.  It is hard to remember otherwise.
(In reply to comment #35)
> good idea to collapse multiple changesets if we want 1 bug == 1 push.  It is
> hard to remember otherwise.

Don't worry about it this time, but please check the documentation we have on the wiki for pushing patches. I normally have to look it up each time. :)
Oh, and we definitely need documentation for this new feature, which will explain all the nitty-gritty details.
Whiteboard: [mozmill-2.0+][mozmill-1.5.2+] → [mozmill-2.0+][mozmill-1.5.2+][mozmill-doc-needed]
Marking as verified fixed. Tested and works great. I have filed some follow-ups to better integrate the data into our reports.

Jeff, can you also please move the documentation over to developer.mozilla.org? Thanks.
Status: RESOLVED → VERIFIED
Product: Testing → Testing Graveyard