Closed Bug 1121139 Opened 9 years ago Closed 9 years ago

Allow data transfer between runner and MarionetteTestCase

Categories

(Remote Protocol :: Marionette, defect, P1)


Tracking

(Not tracked)

RESOLVED WONTFIX

People

(Reporter: erahm, Unassigned)

References

Details

(Keywords: pi-marionette-runner)

Attachments

(1 file, 1 obsolete file)

There are two use cases I have in mind. I've included some pseudo-code of what I'd like to accomplish which hopefully makes things clearer.

#1 - Pass configuration data to the test case

In my test harness:
> test_config = { settleTime: 10, iterations: 2 }
> runner.run_tests("my_test", test_config)

# In my_test:
> def setUp(self, options):
>    self._settleTime = options.settleTime
>    self._iterations = options.iterations

#2 - Retrieve data generated by the test

One option:

In my test harness:
> runner.run_tests("my_test")
> test_data = runner.test_data

In my_test:
> self.marionette.set_data( { foo: "bar" } )

Or possibly:

In my test harness:
> def test_data_handler(self, obj):
>   self.results += obj
>
> runner.add_test_data_handler(test_data_handler)
> runner.run_tests("my_test")

In my_test:
> self.marionette.add_test_data( { foo: "bar" } )
The feature as requested here is also blocking me for the update tests, so I'm putting it up as a blocker for bug 1080766. One thing we have to discuss is the naming of those properties, but maybe a real POC would be good to have first.
Blocks: m21s
Priority: -- → P1
I forgot to note: this is blocking the transition of the AWSY suite [1] to marionette.

[1] https://github.com/mozilla/areweslimyet
The basic facility has an implementation in the current runner as "testvars": a command line argument to the runner naming a file that contains a JSON object. The object is read in and becomes a dictionary available as self.testvars within a test case; it can be updated during a test run and read out of runner.testvars afterward. This looks like it's meant for general test configuration (not necessarily output), but it seems pretty close to what we need.
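To make the described flow concrete, here is a minimal sketch of how a testvars-style mechanism can work: the runner reads a JSON file into a dict and shares that same dict with each test case. The class names here are illustrative, not the actual Marionette runner internals.

```python
import json

class SketchRunner:
    def __init__(self, testvars_path):
        # Read the JSON file named on the command line into a dict.
        with open(testvars_path) as f:
            self.testvars = json.load(f)

    def run_test(self, test_case_cls):
        # The same dict object is handed to the test case, so in-place
        # updates made during the run are visible afterward.
        test_case_cls(self.testvars).run()

class SketchTestCase:
    def __init__(self, testvars):
        self.testvars = testvars

    def run(self):
        # A test can both read configuration and record output.
        self.testvars["ran_with_settle"] = self.testvars["settleTime"]
```

Because the dict is shared rather than copied, anything a test writes into self.testvars is available from runner.testvars after run_test returns.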
I wonder if we can change it so that it does not only accept a filename to read the JSON data from, but can also be modified when you have a runner instance. That would make it more flexible, so we can dynamically add additional configuration data for tests, which in most cases would have to be evaluated first inside the harness. Also, using specific command line options is easier to handle than writing a JSON file first, especially if you only handle things via bash scripts.

> runner = BaseMarionetteTestRunner()
> runner.testvars = { settleTime: 10, iterations: 2 }  # or enhance the dict
> runner.run()

As for getting values out, what about a property like testdata or similar?

> runner.run()
> compute_whatever(runner.testdata)

The requirement will be that testdata keeps everything from former tests, and also gets pushed into every following test. This means it cannot be reset.

Whether a handler, which would let the data be processed directly after each test, is helpful still needs to be evaluated. For our Mozmill test results we only do this at the very end.
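To sketch what the handler variant could look like: a dict on the runner accumulates data across tests and is never reset, while registered handlers are notified immediately after each contribution. All names here (add_test_data_handler, testdata, report) are hypothetical, taken from the proposals in this bug rather than any existing Marionette API.

```python
class HandlerRunner:
    def __init__(self):
        self.testdata = {}   # accumulates across tests, never reset
        self._handlers = []

    def add_test_data_handler(self, handler):
        self._handlers.append(handler)

    def report(self, data):
        # Called on behalf of a test case: merge into the persistent
        # testdata and notify any registered handlers right away.
        self.testdata.update(data)
        for handler in self._handlers:
            handler(data)

collected = []
runner = HandlerRunner()
runner.add_test_data_handler(collected.append)
runner.report({"foo": "bar"})   # as if emitted by test 1
runner.report({"baz": 1})       # as if emitted by test 2
```

A harness that only cares about the end state can ignore the handlers entirely and just read runner.testdata after the run, which matches the "only at the very end" usage described above.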
/r/2523 - Bug 1121139 - Allow individual cases to pass data to handlers defined on the test runner or test class.

Pull down this commit:

hg pull review -r 9948c73baa381bed50917eeca4a478b719354db9
Comment 5 is a strawman in MozReview. Beyond testvars, I think we want to have a separate facility for outputs. :whimboo, can you point me towards how Mozmill handles this currently? I'm having trouble imagining how command line options would be used to pass arbitrary data to the runner.
https://reviewboard.mozilla.org/r/2521/#review1679

::: testing/marionette/client/marionette/tests/unit/test_testcasedata.py
(Diff revision 1)
> +            "two" : {"Baz": 456},

The testcase as implemented here might not work, given that for a real test you have to pass those variables via the runner into the testcase class instance. Here you already define everything inside the testcase class.

So it's not what has been requested by this bug.

::: testing/marionette/client/marionette/tests/unit/test_testcasedata.py
(Diff revision 1)
> +        self.handle_test_data(self.marionette.execute_script("return this.importantData",

Same here. The data should not be evaluated inside the test but via the runner, so it needs to be transferred from the testcase class instance to the runner. The handlers (if we want them) would then have to be part of the runner.

::: testing/marionette/client/marionette/tests/unit/test_testcasedata.py
(Diff revision 1)
> +class TestTestCaseData(MarionetteTestCase):

I don't think this test can be implemented via the MarionetteTestCase. It should be a plain Python unit test.

You asked how Mozmill handles that, so here are some links to all the instances of `persisted`, which is an object passed back and forth between the harness and the tests. In Mozmill's case this even happens between Python and JS, but that's not something we need for Marionette, given that here we live entirely in Python land.

https://github.com/mozilla/mozmill/search?utf8=%E2%9C%93&q=persisted
Dave gave me some great links today which are indeed interesting to check. Here is one for the Gaia endurance tests, which simply add additional command line options and map those to properties on the EnduranceTestCaseMixin, so they are available in the test.

https://dxr.mozilla.org/mozilla-central/source/testing/marionette/client/marionette/runner/mixins/endurance.py#10 (endurance mixins)

I think that this mixin would even be great for Eric.

In case of our Firefox UI tests we could create our own mixins, which may not all have to land in mozilla-central. At least not for now.

Passing data out to the test runner might be another thing to implement; the Gaia tests don't really use it. As you can see in the above link, the checkpoint data is stored as a file. I'm not sure exactly where those files get processed.
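The mixin pattern described above can be sketched in a few lines: extra command line options are parsed by the harness and mapped onto attributes of a mixin, so tests read them directly. The class and option names below are illustrative, not the real endurance.py code.

```python
class OptionsMixinSketch:
    """Maps harness-provided options onto test case attributes."""

    def options_setup(self, iterations=1, delay=0, **kwargs):
        # kwargs absorbs options meant for other mixins, so several
        # option mixins can compose without knowing about each other.
        self.iterations = iterations
        self.delay = delay

class MyTestCase(OptionsMixinSketch):
    def setUp(self, **kwargs):
        # The harness forwards parsed command line options here.
        self.options_setup(**kwargs)

test = MyTestCase()
test.setUp(iterations=5, delay=2, unrelated_option=True)
```

After setUp, the test body can simply use self.iterations and self.delay, which is essentially what the Gaia endurance mixin does with its command line options.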
https://reviewboard.mozilla.org/r/2521/#review1687

> The testcase as implemented here might not work given that for a real test you have to pass those variables via the runner into the testcase class instance. Here you define everything already inside the testcase class.
> 
> So it's not what has been requested by this bug.

Sorry, this is sort of hard to write a test for. I should have said this works just as well if you put testvars in a file and do:

> runner = MarionetteTestRunner(options)
> runner.test_data_handlers.extend([h1, h2])
> runner.run_tests(tests)

Thanks, I'll take a look!
Given comment 10, is this a suitable equivalent to "persisted"? We could implement something in our own runner if we need to as well.
Flags: needinfo?(hskupin)
I think it's somewhat related to bug 1122583 comment 4, where Jonathan mentions a new global object for holding persisted data. I wonder if that could be combined so that we have access to that data from both Python and JavaScript code. But maybe that's not wise, because such a global would only exist in Firefox memory?

Otherwise I wonder if we need those handlers as proposed in our WIP. Can't we access such a 'persisted' object directly from within the test?
Flags: needinfo?(hskupin)
I think keeping this in Python is desirable. Persisting things seems like a separate use case. The 'persisted' object can be implemented in terms of these handlers, but it was mentioned somewhere else that this isn't blocking update tests, so if that's the case I'm not going to pursue this.
Where has it been mentioned that it is not blocking update tests? If you refer to my comment during this week's call, it was just a note to Bob for his implementation of the software update library. He can work around that, but for the real update tests to run we definitely need a way to pass data from the harness to the test case.

Also, why would we necessarily need callbacks for that? Why can't it be a property on the FirefoxTestCase class? We also have to keep in mind that data passed back from test 1 should not be removed but passed into test 2, which can itself update the object. At the very end the harness needs to be able to analyze the collected information.
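The callback-free alternative suggested here can be sketched as a single dict owned by the runner and exposed as a property on each test case: data written by test 1 is visible to test 2, and the harness can inspect everything at the end. All names below are hypothetical.

```python
class PersistedRunner:
    def __init__(self):
        self.persisted = {}   # shared across tests, never reset

    def run(self, *test_classes):
        for cls in test_classes:
            cls(self.persisted).run_test()

class PersistedCase:
    def __init__(self, persisted):
        self.persisted = persisted

class FirstTest(PersistedCase):
    def run_test(self):
        self.persisted["downloaded_build"] = "20150101"

class SecondTest(PersistedCase):
    def run_test(self):
        # Sees what FirstTest wrote, and can add to it.
        self.persisted["applied"] = self.persisted["downloaded_build"]

runner = PersistedRunner()
runner.run(FirstTest, SecondTest)
```

Since the dict is never replaced between tests, the harness only needs to read runner.persisted once the run is complete.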
(In reply to Henrik Skupin (:whimboo) from comment #14)
> Where has it been mentioned that it is not blocking update tests? If you
> refer to my comment during our this weeks call, it was just a note to Bob
> for his implementation of the software update library. He can workaround
> that, but for the real update tests to run we definitely need a way to pass
> data from the harness to the test case.
https://bugzilla.mozilla.org/show_bug.cgi?id=1133752#c11 I'll look at the tests themselves before proceeding.
> 
> Also why would we necessarily need callbacks for that? Why can't it be a
> property on the FirefoxTestCase class? Also we have to keep in mind that
> data as passed back from test 1 should not be removed, but is passed into
> test 2, which itself can make updates to the object. At the very end the
> harness needs to be able to analyze the collected information.
Callbacks are not necessary. I think it's a slightly better design because it encourages distinguishing between inputs and outputs, but for this use it probably doesn't matter.
Assignee: nobody → cmanchester
Blocks: 1129843
Status: NEW → ASSIGNED
I looked into this more this morning, and I'm starting to think we can implement our update tests without this functionality. The update tests use the persisted object for three purposes: providing inputs to tests, tracking the progress of the tests (including determining which test to run next), and aggregating results for assertions (assertions are often a function of the inputs provided).

I can see the need for this in Mozmill, where the tests are implemented inside Firefox and need to take measures to persist state between restarts, but this is not the case in Marionette based automation. We can keep whatever data we want in our software update library and in individual tests without risking it going away between restarts, and coordinating which test to run next can be factored into the control flow of a test class. If we do want to coordinate between test methods, we can use setUpClass and tearDownClass to set class variables.

I'd like to leave this for now in favor of other work for the update tests and return to it only if our work leads to a design that requires it.
Assignee: cmanchester → nobody
No longer blocks: 1129843
Status: ASSIGNED → NEW
This is necessary for update tests. It's not enough to only look at the tests; you also have to look at our former harness, which is not part of the mozmill-tests repository, as mentioned a couple of times in the past. Given that a reference is missing here, I will add it now. So please check the UpdateTestRun class in detail for everything we have to push from the harness to the testcase.

https://github.com/mozilla/mozmill-automation/blob/master/mozmill_automation/testrun.py#L622
Blocks: 1129843
(In reply to Henrik Skupin (:whimboo) from comment #17)
> This is necessary for update tests. It's not enough to only look at the
> tests but you also have to observe our former harness, which is not part of
> the mozmill-tests repository as mentioned a couple of times in the past.
> Given that a reference is missing here, I will add it now. So please check
> the UpdateTestRun class in details for everything we have to push from the
> harness to the testcase.
> 
> https://github.com/mozilla/mozmill-automation/blob/master/mozmill_automation/
> testrun.py#L622

This looks like the use case of inputs, which is covered well by testvars. I don't see this providing anything we can't already achieve.
We found a way around this for update tests.
No longer blocks: 1129843
To give more details on what Chris replied: for our tests I subclass the MarionetteTestCase class, and it gets a '**kwargs' argument for its __init__ method containing all the flags from the command line options.

But something I don't know yet is how to pass data from the runner into the testcase. I don't think that doing it via kwargs is a reasonable approach. I'm continuing to figure this out, but if it doesn't work I will re-add it as a blocker.
Henrik, is this still worth pursuing? Would it be helpful to firefox-ui-tests?
Flags: needinfo?(hskupin)
At the moment there is nothing which would improve our current situation. Arguments we have defined for the runner are passed through perfectly to the testcase, as can be seen here for the update tests:

https://github.com/mozilla/firefox-ui-tests/search?utf8=%E2%9C%93&q=update_target_buildid

Eric, you filed the bug. So what about you?
Flags: needinfo?(hskupin) → needinfo?(erahm)
(In reply to Henrik Skupin (:whimboo) from comment #24)
> At the moment there is nothing which would improve our current situation.
> Arguments we have defined for the runner are perfectly passed-through to the
> testcase as it can be seen here for the update tests:
> 
> https://github.com/mozilla/firefox-ui-tests/
> search?utf8=%E2%9C%93&q=update_target_buildid
> 
> Eric, you filed the bug. So what about you?

I worked around this by using |testvars| [1]; that's probably good enough.

[1] https://github.com/mozilla/areweslimyet/blob/master/benchtester/test_memory_usage.py#L363-L365
Flags: needinfo?(erahm)
I would tend to agree here. Let's give David the chance to make the final decision.
Flags: needinfo?(dburns)
If the workarounds are good enough, then I am happy to close.
Status: NEW → RESOLVED
Closed: 9 years ago
Flags: needinfo?(dburns)
Resolution: --- → WONTFIX
Product: Testing → Remote Protocol