Open Bug 1243519 Opened 7 years ago Updated 8 months ago

find a method for determining method/function names in the js debugger api code coverage toolchain

Categories

(Testing :: Code Coverage, defect)


Tracking

(Not tracked)

People

(Reporter: jmaher, Unassigned)

Details

Right now we get line-number coverage. It would be nice to have method names as well. I fully understand that method names are not trivial to get and this might not be possible.

Please discuss here and outline methods for making this happen.
I think the place to do this is the CoverageUtils module [1]. After each test there's a point where we iterate over every script known to the debugger, where a script is an object conforming to the Debugger.Script interface [2]. This interface has a "displayName" property that appears to be what we want, although there may be other properties we're interested in as well.
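A minimal sketch of what that iteration might produce. The real Debugger.Script objects only exist in privileged Firefox code (via something like dbg.findScripts()), so the script objects below are mocked stand-ins; `summarizeScripts` is a hypothetical helper, not part of CoverageUtils.jsm:

```javascript
// Sketch: collect a name for each script via its displayName property.
// In CoverageUtils.jsm the scripts would come from the Debugger API;
// here they are plain mock objects with the same fields.
function summarizeScripts(scripts) {
  return scripts.map(script => ({
    url: script.url,
    // displayName can be null for top-level / anonymous code.
    name: script.displayName || "(top-level)",
    startLine: script.startLine,
  }));
}

// Mocked stand-ins for Debugger.Script instances.
const scripts = [
  { url: "chrome://foo.js", displayName: "initWidget", startLine: 10 },
  { url: "chrome://foo.js", displayName: null, startLine: 1 },
];

console.log(summarizeScripts(scripts));
```

The null fallback matters: top-level script bodies have no display name, so any per-method report needs a catch-all bucket for them.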

[1] https://dxr.mozilla.org/mozilla-central/source/testing/modules/CoverageUtils.jsm
[2] https://developer.mozilla.org/en-US/docs/Tools/Debugger-API/Debugger.Script
Is it still the case that the 'findScripts' method is a future plan? The Debugger API reference for the method says "(future plan)", but it seems like we can already use it, since it's being used in CoverageUtils.jsm.
I also think it would be a good idea to check the source file name here, while we iterate over all the scripts, to determine whether it's a test, and skip it if so. 'isTest(...)' from 'runtests.py' seems like the method to use for this, but I'd like to try it first and see if there are any test files it misses.
isTest in runtests.py is a Python function; this coverage code runs in JavaScript inside the browser. The data from the test files could be useful for answering other questions, but for what we want to determine with code coverage, ignoring the test files is useful. This has me leaning towards post-processing, which will work with either .json files or lcov (.info) files :)
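A rough sketch of what lcov post-processing could look like. The `filterLcov` helper and the path patterns in `looksLikeTest` are assumptions for illustration, not the actual isTest() logic from runtests.py; it only relies on the standard lcov record layout (an `SF:` source-file line followed by data lines and `end_of_record`):

```javascript
// Sketch: drop lcov records whose SF: path looks like a test file.
function filterLcov(lcovText, isTestPath) {
  // Each record ends with "end_of_record"; split, filter, and rejoin.
  const records = lcovText.split("end_of_record\n").filter(r => r.trim());
  const kept = records.filter(record => {
    const m = record.match(/^SF:(.*)$/m);
    return !(m && isTestPath(m[1]));
  });
  return kept.map(r => r + "end_of_record\n").join("");
}

// Hypothetical test-path heuristic; the real filter would need to match
// whatever conventions the harness actually uses.
const looksLikeTest = path => /\/tests?\/|\/test_[^/]+$/.test(path);

const sample = [
  "SF:src/widget.js\nDA:1,5\nend_of_record\n",
  "SF:browser/tests/test_widget.js\nDA:1,2\nend_of_record\n",
].join("");

console.log(filterLcov(sample, looksLikeTest));
```

Doing this as a separate pass keeps the raw artifacts intact, so the test-file coverage can still be analyzed later if it turns out to be useful.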
I agree with doing it in the post-processing stage. That way, we could keep all the artifacts and use any of them later if we need to. I imagine there could eventually be code coverage for the test files as well (separate from the coverage data of the source files)?
(In reply to Greg Mierzwinski from comment #5)
> I imagine that
> there could eventually be code coverage for test files as well (separate
> from the coverage data of the source files)?

I don't think we're testing the tests, so there's no need to worry about code coverage of the test files.
Actually, testing the tests is an interesting project, but not as useful for developers or for getting this push and project to completion! One use case is finding parts of tests we never execute, which would imply areas where we have missing coverage or could clean up our tests. The same goes for our harnesses: the more random code we have sitting around, the easier it is to think we need to update or maintain it :)

Thanks for commenting on this!
Severity: normal → S3