Open
Bug 1067696
Opened 10 years ago
Updated 2 years ago
Coverage reports for mozbase unittests
Categories
(Testing :: Mozbase, defect)
Tracking
(Not tracked)
NEW
People
(Reporter: chmanchester, Unassigned)
Details
Attachments
(1 file)
2.61 KB,
patch
This came up when discussing tests with Armen today. I was curious, so I took a look at what was available for statement coverage in Python. The "coverage" package (https://pypi.python.org/pypi/coverage) was pretty easy to plug into test.py. I put one of the reports on my people account: http://people.mozilla.org/~cmanchester/cov/

Does this seem useful?
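For anyone curious what a statement-coverage tool records under the hood, Python's standard-library trace module offers a rough analogue of what the coverage package does (a minimal sketch; sample() and the line bookkeeping here are illustrative, not part of test.py or the patch):

```python
import trace

def sample(x):
    # The final return below never runs for a positive input,
    # so its line will be absent from the recorded counts.
    if x > 0:
        return "positive"
    return "non-positive"

# count=True records how many times each line executes;
# trace=False suppresses the line-by-line printout.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(sample, 5)

# results().counts maps (filename, lineno) -> execution count;
# lines that never ran simply have no entry.
executed = sorted(lineno for (_fname, lineno) in tracer.results().counts)
print("executed lines:", executed)
```

The coverage package does the same bookkeeping but adds exclusion rules, branch coverage, and the HTML report linked above.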
Comment 1•10 years ago
That is exactly what I was meaning!
Comment 2•10 years ago
(In reply to Chris Manchester [:chmanchester] from comment #0)
> This came up when discussing tests with Armen today. I was curious so I took
> a look at what was available for statement coverage in python. The
> "coverage" package (https://pypi.python.org/pypi/coverage) was pretty easy
> to plug in to test.py. I put one of the reports on my people account:
> http://people.mozilla.org/~cmanchester/cov/
>
> Does this seem useful?

The information certainly is interesting. I guess the question is how to make it actionable? Do we just want to make it a policy to run this occasionally and file bugs when we find stuff that looks uncovered?
Comment 3•10 years ago
My (hand-waving) assumption is that if we make it easy for people to get this information, they can use it to improve coverage. I assume that contributors can learn by trying to increase the coverage. For instance, I could have used something like this to improve the coverage of device manager, assuming I knew how to do it without using tests that require a physical device.
Comment 4•10 years ago
Is there also a way to show the lines of code that are not yet covered by tests? That would be pretty helpful. Otherwise you stand in front of a forest and don't see the trees, especially for huge modules.
Comment 5•10 years ago
Just click on the file path, e.g. http://people.mozilla.org/~cmanchester/cov/mozinfo_mozinfo_mozinfo.html
Comment 6•10 years ago
Yep, just click the module name :) I think this is very useful. I don't know if it's worth generating reports or anything, though; it looks pretty easy to generate (was it just 'coverage test.py'?). Note: if you want the mozrunner tests, you need to pass -b to test.py.
Comment 7•10 years ago
(In reply to Andrew Halberstadt [:ahal] from comment #6)
> Yep, just click the module name :)

Ah! Wonderful. I like that, and will most likely implement this via Travis for our GitHub-hosted projects.

> Note: if you want mozrunner tests, you need to pass in -b to test.py.

That meanwhile also applies to mozversion. :)
Comment 8•10 years ago
I'm happy with just documenting it somewhere so we can point people to it.
Reporter
Comment 9•10 years ago
This is the patch I used to generate the reports. I don't know if we'll ever want them running in automation, so we should probably do some feature detection.
Reporter
Updated•10 years ago
Assignee: nobody → cmanchester
Status: NEW → ASSIGNED
Reporter
Comment 10•10 years ago
The value of this kind of coverage metric is debatable enough that I wouldn't want to make a policy around this (boosting the metrics with meaningless tests is pretty easy). I think it's a pretty neat way to understand the relationship between tests and code but doesn't mean much in a vacuum.
Comment 11•10 years ago
Surely we'd want this as an option in the mach command. (I don't actually know what the mach command for running mozbase tests *is*, but I assume we have one, and the fact that I can't find it is just incompetence.) Something like:

mach mozbase-test --with-coverage --cover-html-dir=path

(Those options were chosen to match http://nose.readthedocs.org/en/latest/plugins/cover.html.)
Comment 12•10 years ago
So from the coverage docs, it looks like we could do:

$ coverage run test.py
$ coverage report

and/or

$ coverage html

If it's as simple as that, I don't think adding instrumentation gives us anything. I guess it might make it slightly easier to run from a mach command, but no, we don't currently have a mach command for mozbase.
Comment 13•10 years ago
FWIW, when developing and wanting to use this, I have previously used nose to do test discovery, because it is a lot easier than that, particularly when there are multiple tests involved. I've filed bug 1068128 about adding a mach command for running these tests in general.
Reporter
Comment 14•10 years ago
The instrumentation also adds some default exclusions, but it looks like those could just as well go in a .coveragerc file.
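A .coveragerc carrying such exclusions might look like the following (a hypothetical sketch using coverage.py's real [run]/[report] options; the specific omit patterns and excluded lines are illustrative, not the ones from the attached patch):

```ini
# .coveragerc -- hypothetical defaults for the mozbase test run
[run]
# Don't measure the test files or packaging scaffolding themselves.
omit =
    */tests/*
    setup.py

[report]
# Lines matching these regexes are excluded from the report.
exclude_lines =
    pragma: no cover
    if __name__ == .__main__.:
```

With this in place, a plain `coverage run test.py` picks the settings up automatically, so the defaults don't need to live in instrumentation code.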
Comment 15•10 years ago
See also bug 917363. That bug could probably be resumed if it would help here. Please consider rejecting my scope bloat bait :)
Reporter
Comment 16•4 years ago
I'm not planning to pursue this. I'll leave this open, as I'm not sure what the state of coverage for python tests is these days.
Assignee: cmanchester → nobody
Status: ASSIGNED → NEW
Updated•2 years ago
Severity: normal → S3