Closed Bug 966441 Opened 10 years ago Closed 7 years ago

Run mozbase unit tests from test package

Categories

(Release Engineering :: General, defect)

defect
Not set
normal

Tracking

(Not tracked)

RESOLVED DUPLICATE of bug 1003417

People

(Reporter: dminor, Unassigned)

References

(Blocks 1 open bug)

Details

Attachments

(3 files)

We should run the mozbase unit tests from the test package instead of during "make check" to help avoid problems like Bug 963885.
Depends on: 967150
Depends on: 967515
The build system relies on mozbase too. Do you mean "in addition to" instead of "instead?"
(In reply to Gregory Szorc [:gps] from comment #1)
> The build system relies on mozbase too. Do you mean "in addition to" instead
> of "instead?"

The intention was "instead" but if we need to run these tests to validate that we can build, then maybe this is not worth pursuing.
(In reply to Dan Minor [:dminor] from comment #2)
> (In reply to Gregory Szorc [:gps] from comment #1)
> > The build system relies on mozbase too. Do you mean "in addition to" instead
> > of "instead?"
> 
> The intention was "instead" but if we need to run these tests to validate
> that we can build, then maybe this is not worth pursuing.

No, we don't need to run these tests to validate that we can build. Some of mozbase's functionality is used by the build system, but by no means all. We really want to separate out the concerns here so that we can validate that changes to mozbase are OK without bothering the sheriffs with hard-to-diagnose build failures. If a mozbase change is checked in that breaks the build, that should be obvious enough on its own.
(In reply to William Lachance (:wlach) from comment #3)
> (In reply to Dan Minor [:dminor] from comment #2)
> > (In reply to Gregory Szorc [:gps] from comment #1)
> > > The build system relies on mozbase too. Do you mean "in addition to" instead
> > > of "instead?"
> > 
> > The intention was "instead" but if we need to run these tests to validate
> > that we can build, then maybe this is not worth pursuing.
> 
> No, we don't need to run these tests to validate that we can build. Some of
> mozbase's functionality is used by the build system, but by no means all. We
> really want to separate out the concerns here so that we can validate
> changes to mozbase are ok without bothering the sheriffs with
> hard-to-diagnose build failures. If a mozbase change is checked in that
> breaks the build, that should be obvious enough on its own.

Good, the mozharness work is done and I'll be putting it up for review shortly pending a run on Ash.
Well, you could make a case for needing to run some tests in make check since the build slaves are completely separate from and different from the test slaves - if the DeviceManager tests that I shut you off over only really matter on a device, then running them on the Linux build slave is pointless, but if a mozbase change makes us do something incorrectly on a Linux build slave building Fennec, testing that on Tegras and Pandas is pointless. You could also hypothetically introduce wrongness on Win2K8 but not WinXP or Win7 or Win8, or on 10.7 but not 10.6 or 10.8. Dunno how likely wrongness that doesn't break the build entirely is, though.
(In reply to Phil Ringnalda (:philor) from comment #5)
> Well, you could make a case for needing to run some tests in make check
> since the build slaves are completely separate from and different than the
> test slaves - if the DeviceManager tests that I shut you off over only
> really matter on a device, then running them on the Linux build slave is
> pointless, but if a mozbase change makes us do something incorrectly on a
> Linux build slave building Fennec, testing that on Tegras and Pandas is
> pointless. You could also hypothetically introduce wrongness on Win2K8 but
> not WinXP or Win7 or Win8, or on 10.7 but not 10.6 or 10.8. Dunno how likely
> wrongness that doesn't break the build entirely is, though.

In practice a change to mozbase is much more likely to break a test harness in some hard-to-diagnose way than the build. Or at least that's been my experience so far. Always open to revisiting my assumptions with new data...

BTW, there are currently no mozbase/mozdevice tests that run against/on a tegra or panda (we simulate such a device on the machine running the test). Though now that I think about it, that might not be a bad thing to add...
I've been considering the possibility of running the build/Python tests in automation *before* we build. Otherwise, failures from building will mask test failures and it won't be clear we're dealing with detectable build system bustage because said tests won't run unless the build is successful. I would likely lump mozbase tests into this pre-build test step because mozbase regressions can regress the build in non-obvious manners.
(In reply to Gregory Szorc [:gps] from comment #7)
> I've been considering the possibility of running the build/Python tests in
> automation *before* we build. Otherwise, failures from building will mask
> test failures and it won't be clear we're dealing with detectable build
> system bustage because said tests won't run unless the build is successful.
> I would likely lump mozbase tests into this pre-build test step because
> mozbase regressions can regress the build in non-obvious manners.

If you want to run some subset of the mozbase tests before you build, that's fine with me. I still maintain we need a separate job for *all* the mozbase tests because of things like bug 963885.

It's important that we be able to continue to run things like the mozdevice tests, even though they are not currently 100% deterministic and strictly have nothing to do with the build. I don't see any other way of doing this while keeping the builds sheriffable.
Re-read comment 1: I'm not opposed to running mozbase tests on non-build slaves. I do have reservations about not running some of them on build slaves.
I've also switched to using the MozbaseMixin which we've been using on the pandas for several months now.

I did an Ash run here: https://tbpl.mozilla.org/?tree=Ash&rev=22a4b0c6dea4. It is noisy, but it looks like the desktop unittests are green where they ran.
Assignee: nobody → dminor
Status: NEW → ASSIGNED
Attachment #8374012 - Flags: review?(jlund)
(In reply to Gregory Szorc [:gps] from comment #9)
> Re-read comment 1: I'm not opposed to running mozbase tests on non-build
> slaves. I do have reservations about not running some of them on build
> slaves.

OK, so I guess it makes sense to file some kind of followup/dependent bug to do what you suggest (run a restricted subset of mozbase unit tests before the build). Do we need to block moving the mozbase tests to a separate job on that? Or should we just run the mozbase tests in both places for now?
Comment on attachment 8374012 [details] [diff] [review]
Add mozbase unit tests.

Review of attachment 8374012 [details] [diff] [review]:
-----------------------------------------------------------------

r+ once my Q's are explained and shown not to be cause for concern. :)

::: mozharness/mozilla/testing/unittest.py
@@ +90,5 @@
> +                        self.pass_count = int(summary_match_list[-1])
> +                    else:
> +                        # some suites do not report number of successes
> +                        self.pass_count = 1 
> +                        self.fail_count = 0

So I am not sure what's happening here. Do we set self.fail_count because we are not expecting a 'FAILED' line summary for mozbase tests? I'm just worried that by setting fail_count to 0 here, it might overwrite/contradict the fail_group condition below.

::: scripts/desktop_unittest.py
@@ -195,5 @@
> -        'mozcrash', 'mozinstall', 'mozdevice', 'mozprofile', 'mozprocess',
> -        'mozrunner'):
> -            self.register_virtualenv_module(m, url=os.path.join(mozbase_dir,
> -                m))
> -

please help ignorant me out. The mock module replaces 'mozfile' 'mozlog' 'mozinfo' etc...? Just trying to figure out why we can now rm all the modules from being added to the virtualenv.

@@ +250,5 @@
>                               "\nIf you meant to have options for this suite, "
>                               "please make sure they are specified in your "
>                               "config under %s_options" %
>                               (suite_category, suite_category))
> +                return base_cmd

bah, sorry you had to add that. That looks like my fault. I think I missed it till now because we always had options for suites.
Attachment #8374012 - Flags: review?(jlund) → review+
(In reply to Jordan Lund (:jlund) from comment #12)
> Comment on attachment 8374012 [details] [diff] [review]
> Add mozbase unit tests.
> 
> Review of attachment 8374012 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> r+ once my Q's are explained and shown not to be cause for concern. :)
> 
> ::: mozharness/mozilla/testing/unittest.py
> @@ +90,5 @@
> > +                        self.pass_count = int(summary_match_list[-1])
> > +                    else:
> > +                        # some suites do not report number of successes
> > +                        self.pass_count = 1 
> > +                        self.fail_count = 0
> 
> So I am not sure what's happening here. Do we set self.fail_count because we
> are not expecting a 'FAILED' line summary for mozbase tests? I'm just
> worried that by setting fail_count to 0 here, it might overwrite/contradict
> the fail_group condition below.
There will only ever be an 'OK' or 'FAILURE' line for the mozbase tests, so I need this fallback for this suite. This won't cause problems with any of our existing suites, but could trip someone up in the future. Would you like me to comment on this or use a different approach?
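For illustration, the parsing fallback being discussed amounts to roughly the following (a hedged sketch, not the actual mozharness code; the regexes and function name are illustrative):

```python
import re

def parse_counts(log_lines):
    """Sketch of the summary-parsing fallback: prefer explicit
    'Passed: N' / 'Failed: N' counts, but fall back to the bare
    unittest-style 'OK' / 'FAILED' line that mozbase suites emit."""
    text = "\n".join(log_lines)
    passed = re.findall(r"Passed:\s*(\d+)", text)
    failed = re.findall(r"Failed:\s*(\d+)", text)
    if passed:
        # Suite reported explicit counts; take the last summary seen.
        pass_count = int(passed[-1])
        fail_count = int(failed[-1]) if failed else 0
    else:
        # Some suites do not report a number of successes: a bare 'OK'
        # means everything passed; anything else counts as one failure.
        ok = any(line.strip() == "OK" for line in log_lines)
        pass_count, fail_count = (1, 0) if ok else (0, 1)
    return pass_count, fail_count
```

A separate condition like this leaves suites with explicit counts untouched, which is why it only risks confusing someone who later adds another count-less suite.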

> 
> ::: scripts/desktop_unittest.py
> @@ -195,5 @@
> > -        'mozcrash', 'mozinstall', 'mozdevice', 'mozprofile', 'mozprocess',
> > -        'mozrunner'):
> > -            self.register_virtualenv_module(m, url=os.path.join(mozbase_dir,
> > -                m))
> > -
> 
> please help ignorant me out. The mock module replaces 'mozfile' 'mozlog'
> 'mozinfo' etc...? Just trying to figure out why we can now rm all the
> modules from being added to virtualenv now.
>
 
The mozfile, mozlog, mozinfo stuff is replaced by using MozbaseMixin in the class. The MozbaseMixin contains this same code and is used by the android script already.
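The per-package registration that MozbaseMixin subsumes looks roughly like this (an illustrative sketch only; the class and method names stand in for the real mozharness API, and the package list is taken from the snippets quoted in this bug):

```python
import os

# Packages mentioned in the removed desktop_unittest.py loop.
MOZBASE_PACKAGES = (
    "mozfile", "mozlog", "mozinfo", "mozcrash", "mozinstall",
    "mozdevice", "mozprofile", "mozprocess", "mozrunner",
)

class MozbaseMixinSketch:
    """Sketch of the mixin idea: centralize mozbase package
    registration so desktop and android scripts stop repeating it."""

    def __init__(self):
        self.registered = []

    def register_virtualenv_module(self, name, url=None):
        # In mozharness this queues an install into the test
        # virtualenv; here we just record the request.
        self.registered.append((name, url))

    def register_mozbase_packages(self, mozbase_dir):
        for package in MOZBASE_PACKAGES:
            self.register_virtualenv_module(
                package, url=os.path.join(mozbase_dir, package))
```

The design win is the one described above: one mixin shared by both scripts instead of two copies of the same loop.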
> There will only ever be a 'OK' or 'FAILURE' line for the mozbase tests, so I
> need this for this suite. This won't cause problems with any of our existing
> suites, but could trip someone up in the future. Would you like me to
> comment on this or use a different approach?

ahh, I see. I don't see a major issue since this is in a separate condition. A comment though does sound like a good idea. Just in case someone, as you said, adds a similar test in this format.

> The mozfile, mozlog, mozinfo stuff is replaced by using MozbaseMixin in the
> class. The MozbaseMixin contains this same code and is used by the android
> script already.

:) I guess I should have looked at MozbaseMixin more... lgtm r+
Depends on: 971687
Mozharness change live in production.
Missed defining all_mozbase_suites in the mac and windows config files first time round.
Attachment #8376441 - Flags: review?(jlund)
Comment on attachment 8376441 [details] [diff] [review]
Add all_mozbase_suites to mac and windows configs

whoops, we both missed it. lgtm :)
Attachment #8376441 - Flags: review?(jlund) → review+
There are some new mozrunner tests that will only run if a binary is passed into test.py, e.g. python test.py -b path/to/firefox. We should configure platforms that have a simple binary (e.g. not emulators/tegras) to pass this in.

Might be better to file a follow up for this though.
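The binary-gated behaviour being described can be sketched with the standard unittest skip machinery (hypothetical test and variable names; the real mozrunner suite differs):

```python
import unittest

# BINARY would normally be filled in from the -b command-line option
# (e.g. python test.py -b path/to/firefox); None means no binary given.
BINARY = None

class InteractiveTestSketch(unittest.TestCase):
    # Skip rather than fail when no binary has been specified, which is
    # what the skip messages in the logs quoted later in this bug show.
    @unittest.skipIf(BINARY is None, "No binary has been specified.")
    def test_run_interactive(self):
        # Would launch BINARY in interactive mode and call wait().
        self.assertTrue(BINARY)
```

Running this suite without a binary reports the test as skipped, not failed, so the job stays green while still advertising that coverage is missing.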
Note the mozrunner tests will land in bug 949600.
(In reply to Andrew Halberstadt [:ahal] from comment #19)
> There are some new mozrunner tests that will only run if a binary is passed
> into test.py, e.g python test.py -b path/to/firefox. We should configure
> platforms that have a simple binary (e.g not emulators/tegras) to pass this
> in.
> 
> Might be better to file a follow up for this though.

Thanks for the heads up. If bug 949600 lands before this is closed, I'll fix it here, otherwise I'll do a follow up.
(In reply to Jordan Lund (:jlund) from comment #18)
> Comment on attachment 8376441 [details] [diff] [review]
> Add all_mozbase_suites to mac and windows configs
> 
> whoops, we both missed it. lgtm :)

Thanks, pushed to https://hg.mozilla.org/build/mozharness/rev/4fc68a913f08
We have test failures on linux due to 'ifconfig' not being in the path on the linux test machines:
https://tbpl.mozilla.org/php/getParsedLog.php?id=34850784&tree=Cedar&full=1#error0

Looks green on windows and os x.
Depends on: 974355
Something is in production.
Depends on: 976268
As ahal mentioned in comment 19, some new tests have landed which require the -b option to be set.
Attachment #8381453 - Flags: review?(jlund)
Nice. The mozrunner tests should also be runnable on b2g desktop builds, but it's probably ok to just get them working on firefox desktop for now.
Comment on attachment 8381453 [details] [diff] [review]
Set -b option for mozbase unit tests

lgtm :)
Attachment #8381453 - Flags: review?(jlund) → review+
In production.
John, shouldn't the latest inbound landings run those mozrunner tests now? When I check the log files they are still skipped because the binary has not been specified. Is something missing here still?

https://tbpl.mozilla.org/php/getParsedLog.php?id=35587777&tree=Mozilla-Inbound&full=1

test_run_interactive (test_interactive.MozrunnerInteractiveTestCase)
Bug 965183: Run process in interactive mode and call wait() ... skipped 'No binary has been specified.'
Flags: needinfo?(jhopkins)
Henrik, this bug is about getting things running from the test package and is only active on the Cedar branch right now, so nothing here should affect inbound.
Flags: needinfo?(jhopkins)
Depends on: 1003401
Depends on: 1003405
Depends on: 1003408
Depends on: 1003412
Depends on: 1015232
Depends on: 1065994
Depends on: 1125276
Unassigning myself so someone else can pick it up if they're interested.
Assignee: dminor → nobody
Status: ASSIGNED → NEW
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → DUPLICATE
Component: General Automation → General