Open Bug 1067696 Opened 10 years ago Updated 2 years ago

Coverage reports for mozbase unittests

Categories

(Testing :: Mozbase, defect)

x86_64
Linux

Tracking

(Not tracked)

People

(Reporter: chmanchester, Unassigned)

Details

Attachments

(1 file)

This came up when discussing tests with Armen today. I was curious so I took a look at what was available for statement coverage in python. The "coverage" package (https://pypi.python.org/pypi/coverage) was pretty easy to plug in to test.py. I put one of the reports on my people account: http://people.mozilla.org/~cmanchester/cov/

Does this seem useful?
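
For illustration, a minimal sketch of how the coverage API can be wired into a runner like test.py (the test layout and report directory here are assumptions, not the actual patch):

import unittest

import coverage  # https://pypi.python.org/pypi/coverage

cov = coverage.coverage()  # spelled coverage.Coverage() in newer releases
cov.start()

# Run the suite while coverage is recording.
suite = unittest.defaultTestLoader.discover("tests")  # assumed layout
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()
cov.html_report(directory="cov")  # writes the browsable HTML report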
That is exactly what I meant!
(In reply to Chris Manchester [:chmanchester] from comment #0)
> This came up when discussing tests with Armen today. I was curious so I took
> a look at what was available for statement coverage in python. The
> "coverage" package (https://pypi.python.org/pypi/coverage) was pretty easy
> to plug in to test.py. I put one of the reports on my people account:
> http://people.mozilla.org/~cmanchester/cov/
> 
> Does this seem useful?

The information certainly is interesting. I guess the question is how to make it actionable? Do we just want to make it a policy to run this occasionally and file bugs when we find stuff that looks uncovered?
My (hand-waving) assumption is that if we make this information easy to get, people will use it and make progress on coverage. Contributors could also learn the code by trying to increase coverage.

For instance, I could have used something like this to improve the coverage of device manager, assuming I knew how to do it without tests that require a physical device.
Is there also a way to show the lines of code that are not covered by tests yet? That would be pretty helpful. Otherwise you're standing in front of a forest and can't see the individual trees, especially for huge modules.
Yep, just click the module name :)

I think this is very useful. I don't know if it's worth generating reports or anything, though; it looks pretty easy to generate (was it just 'coverage test.py'?).

Note: if you want mozrunner tests, you need to pass in -b to test.py.
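
For reference, that might look something like this (assuming -b takes the path to a Firefox binary; the path is illustrative):

$ python test.py -b /path/to/firefox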
(In reply to Andrew Halberstadt [:ahal] from comment #6)
> Yep, just click the module name :)

Ah! Wonderful. I like that, and will most likely implement this via Travis for our GitHub-hosted projects.

> Note: if you want mozrunner tests, you need to pass in -b to test.py.

Meanwhile, that also applies to mozversion. :)
I'm happy with just documenting it somewhere so we can point people to it.
This is the patch I used to generate the reports. I don't know if we'll ever want them running in automation, so we should probably do some feature detection.
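
A minimal sketch of that feature detection, assuming we simply skip coverage when the package is unavailable (run_tests is a hypothetical stand-in for what test.py already does):

try:
    import coverage
except ImportError:
    coverage = None  # package not installed; run tests without coverage

cov = None
if coverage is not None:
    cov = coverage.coverage()
    cov.start()

run_tests()  # hypothetical: the existing test.py entry point

if cov is not None:
    cov.stop()
    cov.save()
    cov.html_report(directory="cov")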
Assignee: nobody → cmanchester
Status: NEW → ASSIGNED
The value of this kind of coverage metric is debatable enough that I wouldn't want to make a policy around this (boosting the metrics with meaningless tests is pretty easy). I think it's a pretty neat way to understand the relationship between tests and code but doesn't mean much in a vacuum.
Surely we'd want this as an option in the mach command (I don't actually know what the mach command for running mozbase tests *is*, but I assume we have one, and the fact that I can't find it is just incompetence).

Something like:

mach mozbase-test --with-coverage --cover-html-dir=path

(those options were chosen to match http://nose.readthedocs.org/en/latest/plugins/cover.html)
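
For illustration, a hedged sketch of what such a registration could look like with mach's decorators (the command name, option handling, and run_mozbase_tests helper are all assumptions):

from mach.decorators import Command, CommandArgument, CommandProvider

@CommandProvider
class MozbaseCommands(object):
    @Command('mozbase-test', category='testing',
             description='Run the mozbase unit tests.')
    @CommandArgument('--with-coverage', action='store_true',
                     help='Collect statement coverage while running.')
    @CommandArgument('--cover-html-dir', default='cov',
                     help='Directory to write the HTML coverage report to.')
    def mozbase_test(self, with_coverage=False, cover_html_dir='cov'):
        cov = None
        if with_coverage:
            import coverage
            cov = coverage.coverage()
            cov.start()
        result = run_mozbase_tests()  # hypothetical helper
        if cov is not None:
            cov.stop()
            cov.html_report(directory=cover_html_dir)
        return result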
So from the coverage docs, it looks like we could do:

$ coverage run test.py
$ coverage report
and/or
$ coverage html

If it's as simple as that, I don't think adding instrumentation gives us anything. I guess it might make it slightly easier to run from a mach command, but no, we don't currently have a mach command for mozbase.
FWIW, when developing and wanting to use this, I have previously used nose for test discovery, because it is a lot easier than the test.py route, particularly when multiple test files are involved.

I've filed bug 1068128 for adding a mach command for running these tests in general.
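
For reference, the nose equivalent using its coverage plugin looks roughly like this (the package path is illustrative):

$ pip install nose coverage
$ nosetests --with-coverage --cover-html --cover-html-dir=cov testing/mozbase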
The instrumentation also adds some default exclusions, but it looks like those could just as well go in a .coveragerc file.
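
A sketch of what such a .coveragerc might contain; the specific exclusions here are illustrative guesses, not the ones from the patch:

[run]
omit =
    */tests/*
    setup.py

[report]
exclude_lines =
    pragma: no cover
    if __name__ == .__main__.: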
See also bug 917363. That bug could probably be resumed if it would help here. Please consider rejecting my scope bloat bait :)

I'm not planning to pursue this. I'll leave it open, as I'm not sure what the state of coverage for Python tests is these days.

Assignee: cmanchester → nobody
Status: ASSIGNED → NEW
Severity: normal → S3