Closed
Bug 859589
Opened 11 years ago
Closed 11 years ago
Mutt tests should print the total amount of passed/failed/skipped tests at the end
Categories
(Testing Graveyard :: Mozmill, defect)
Testing Graveyard
Mozmill
Tracking
(Not tracked)
RESOLVED
FIXED
People
(Reporter: jfrench, Assigned: souradeep.2011)
References
()
Details
(Whiteboard: [mentor=whimboo][lang=py][good first bug][mozmill-2.0][ateamtrack: p=mozmill q=2013q2 m=2])
Attachments
(1 file, 7 obsolete files)
1.70 KB, patch
whimboo: review+
Details | Diff | Splinter Review
The current "All tests were successful. Ship it!" message for Mutt test run completion could be improved. I would go so far as to say it should be replaced with a simple "(n) Passed, (n) Failed" message, identical to what Mozmill produces, for consistency's sake. Generally speaking, other considerations go into shipping anything, even for the test harness itself, and even when all tests do pass. Also, in general, code is not supposed to use exclamation marks in output messages. I will open a separate bug with an example of Mutt tests missing failures while still claiming the entire test run passed with that message, unless I can find an existing open bug for that.
Reporter
Updated•11 years ago
Summary: Remove or improve the "All tests were successful. Ship it!" message for Mutt tests. → Mutt tests - Remove or improve the "All tests were successful. Ship it!" message.
Comment 1•11 years ago
Yeah, that was just for fun for the initial landing of Mutt, and we never replaced this output. It might be good to get it updated.
Status: UNCONFIRMED → NEW
Component: Mozmill Tests → Mozmill
Ever confirmed: true
Product: Mozilla QA → Testing
Whiteboard: [mentor=whimboo][lang=py][good first bug]
Reporter
Comment 2•11 years ago
If it's not an urgent bug, feel free to assign it to me and I'll have an attempt at it after all my other bugs have been resolved and landed on all branches. I think I have about six bugs ahead of this one at present, so it could be a while. If you prefer to assign this to someone else, though, no worries.
Assignee
Comment 3•11 years ago
Hi, I want to work on this bug. I know Python, C, and Java, and I am keen to work on web development. I am new to the Mozilla code base, so I found this bug interesting. Can you please tell me how I can proceed in order to fix this bug? This is my GitHub profile: http://github.com/desouradeep
Comment 4•11 years ago
Hi Souradeep! Thanks for your interest in working on this bug. The easiest way to start is to fork/clone the mozmill repository on GitHub: https://github.com/mozilla/mozmill The code for mutt can be found under mutt/mutt. Given that you already have a GitHub account, I assume you are fine working with git repositories. If you have further questions, please ask, or join us in #automation on irc.mozilla.org.
Assignee: nobody → souradeep.2011
Status: NEW → ASSIGNED
Reporter
Comment 5•11 years ago
Hi Souradeep, in case it helps, here is a link on the Mozmill-specific process for code changes to the framework. Specifically, code changes from contributors are submitted by attaching a patch to this bug, rather than via pull requests on GitHub. This document describes that process briefly about halfway down: https://wiki.mozilla.org/Auto-tools/Projects/Mozmill/RepoSetup#Ready_for_a_review Beyond that, I defer to Henrik on the specifics of this Mutt bug fix: whether it's simply a removal of the line, or an implementation of Mozmill's "(n) Passed, (n) Failed" output for Mutt.
Assignee
Comment 6•11 years ago
OK, thanks for the information. I am working on it.
Assignee
Comment 7•11 years ago
Hi, I have a query. What exactly does (n) mean? Do I simply need to include it in the print statement, or does it denote something special?
Reporter
Comment 8•11 years ago
I was just using (n) as a placeholder in the bug description to represent the number of tests that get echoed as either failures or passes when the harness completes a test run. There is code in the Mozmill harness which accumulates that count and echoes it out at the end, but in Mutt that doesn't appear to exist. So I defer to Henrik on whether that is what we also intend with this bug (other than simply removing the line described).
Assignee
Comment 9•11 years ago
So, for failures, I should print the number of failures from pyresults plus the number of failures from jsresults, right?
Assignee
Comment 10•11 years ago
Attachment #737284 - Flags: review?(hskupin)
Comment 11•11 years ago
Comment on attachment 737284 [details] [diff] [review]
This patch improves the status message and also prints the number count of failures/passes.

Review of attachment 737284 [details] [diff] [review]:
-----------------------------------------------------------------
Thanks for this patch, Souradeep. I have taken a look at it and also tested it. I have some comments which you will find inline. Please let me know if they make sense to you.

::: mutt/mutt/mutt.py
@@ +96,4 @@
>
>
> def report(fail, pyresults=None, jsresults=None, options=None):
> +    fails_count = len(pyresults.failures)+len(pyresults.errors)+len(jsresults.fails)

With that code you will see an exception if you run only part of the tests, e.g. Python tests or JS tests only. It will result in:

    AttributeError: 'NoneType' object has no attribute 'fails'

You would have to check first which results are available before accessing their properties. It would be best to do this in the code blocks below, which already check for result details.

@@ +100,3 @@
> if not fail:
> +    mp = TestManifest(manifests=(options.manifest,), strict=False)
> +    test_total = len(mp.active_tests(disabled=False))

In `run()` we already make use of the exact same code. It may be worth having a class property which stores the active tests for py and js tests. Then we could reuse it wherever we want.

@@ +105,2 @@
> # Print the failures
> +    print "\n",fails_count,"Failed\n"

Please add a blank between operators. If you run the code against PEP 8 it should spit out all the problems. You can install the checker via https://pypi.python.org/pypi/pep8
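The guard the review asks for can be sketched as follows. This is a minimal illustration under the assumption that the result objects expose the attributes quoted above (failures/errors on pyresults, fails on jsresults); it is not the actual mutt code:

```python
def count_failures(pyresults=None, jsresults=None):
    """Sum up failures without assuming both result objects exist."""
    fails_count = 0
    if pyresults:
        # Python results track failures and errors separately
        fails_count += len(pyresults.failures) + len(pyresults.errors)
    if jsresults:
        fails_count += len(jsresults.fails)
    return fails_count
```

Running only the JS or only the Python suite then simply contributes nothing for the missing side, instead of raising the AttributeError shown above.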
Attachment #737284 - Flags: review?(hskupin) → review-
Assignee
Comment 12•11 years ago
I guess I will have to use try/except blocks for computing fails_count, because I need to print the Failed statement before the failures are printed. If I use the code blocks which check the result details to count the number of failures, I will not be able to print the failure count along with the Failed message, because the result details are checked after the "(n) Failed" message.
Assignee
Comment 13•11 years ago
Attachment #737284 - Attachment is obsolete: true
Attachment #737670 - Flags: review?(hskupin)
Comment 14•11 years ago
Comment on attachment 737670 [details] [diff] [review]
Improves the Passed/Failed messages

Review of attachment 737670 [details] [diff] [review]:
-----------------------------------------------------------------
Feel free to request review from both Henrik and me. Thanks for working on this bug! :)

::: mutt/mutt/mutt.py
@@ +96,5 @@
>
>
> def report(fail, pyresults=None, jsresults=None, options=None):
> +    fails_count = 0
> +    try: fails_count = len(pyresults.failures) + len(pyresults.errors)

You shouldn't need to do a try/except here. I would suggest using getattr with a default value, such as:

    py_failures = getattr(pyresults, 'failures', [])
    py_errors = getattr(pyresults, 'errors', [])
    js_fails = getattr(jsresults, 'fails', [])
    fail_count = len(py_failures) + len(py_errors) + len(js_fails)

@@ +105,3 @@
> if not fail:
> +    test_total = len(tests)
> +    print "%d Passed" % (test_total - fails_count)

You should be able to use pyresults.testsRun and jsresults.testsRun to get the total number of tests run, rather than setting a global.

@@ +108,4 @@
> return 0
>
> # Print the failures
> +    print "\n%d Failed\n" % (fails_count)

Rather than having one print if we're successful, and another if we fail, I think it would be better to output both regardless, so we'd have:

    Total passed: (total passed)
    Total failed: (total failed)
Attachment #737670 - Flags: review?(hskupin) → review-
Assignee
Comment 15•11 years ago
I couldn't figure out how to get the total number of JS tests, and had problems accessing jsresults.testsRun. When I did dir(jsresults), I didn't find any attribute called testsRun, so I got the total number of JS tests as 0. When I use testall there are attributes like passed, skipped, and fails, but they are empty. pyresults.testsRun worked perfectly.
Comment 16•11 years ago
|
||
jsresults should have: alltests, fails, passes, and skipped.
See: https://github.com/mozilla/mozmill/blob/master/mozmill/mozmill/__init__.py#L56
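Combining the getattr() suggestion from comment 14 with the jsresults attribute names listed above gives roughly this shape. It is a sketch under those assumptions (testsRun on pyresults, alltests/fails on jsresults), not the final patch:

```python
def tally(pyresults=None, jsresults=None):
    """Return (total tests run, total failures) using getattr defaults."""
    py_failures = getattr(pyresults, 'failures', [])
    py_errors = getattr(pyresults, 'errors', [])
    js_fails = getattr(jsresults, 'fails', [])
    fail_count = len(py_failures) + len(py_errors) + len(js_fails)

    # testsRun exists on pyresults only; jsresults exposes alltests instead
    py_test_count = getattr(pyresults, 'testsRun', 0)
    js_test_count = len(getattr(jsresults, 'alltests', []))
    return py_test_count + js_test_count, fail_count
```

With getattr defaults, a missing result object falls back to an empty list or zero, so the same expression works for py-only, js-only, and combined runs.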
Assignee
Comment 17•11 years ago
Attachment #737670 - Attachment is obsolete: true
Attachment #738025 - Flags: review?(tojonmz)
Attachment #738025 - Flags: review?(hskupin)
Comment 18•11 years ago
Comment on attachment 738025 [details] [diff] [review]
Improved the Passed/Failed messages

Review of attachment 738025 [details] [diff] [review]:
-----------------------------------------------------------------
A few style issues, but otherwise I really like these changes. I suspect you meant to ask for review from me rather than Jonathan. If so, please use :davehunt in future when requesting review. You may also be interested in PEP 8 (http://www.python.org/dev/peps/pep-0008/), which is the basic style guide we respect in our Python files and which explains a few of my comments.

::: mutt/mutt/mutt.py
@@ +99,5 @@
> +    py_failures = getattr(pyresults, 'failures', [])
> +    py_errors = getattr(pyresults, 'errors', [])
> +    js_fails = getattr(jsresults, 'fails', [])
> +    fail_count = len(py_failures) + len(py_errors) + len(js_fails)
> +

Could you remove the whitespace from any blank lines? It's here, and also on lines 107, 110, and 115. You might want to enable visible whitespace characters in your IDE, or change the behaviour so new lines are not automatically indented.

@@ +100,5 @@
> +    py_errors = getattr(pyresults, 'errors', [])
> +    js_fails = getattr(jsresults, 'fails', [])
> +    fail_count = len(py_failures) + len(py_errors) + len(js_fails)
> +
> +    py_test_count = getattr(pyresults,'testsRun',0)

Please add a space between the arguments, immediately following the comma. This applies to this line and line 105.

@@ +102,5 @@
> +    fail_count = len(py_failures) + len(py_errors) + len(js_fails)
> +
> +    py_test_count = getattr(pyresults,'testsRun',0)
> +    js_test_count = len(getattr(jsresults,'alltests',[]))
> +    test_total    = py_test_count + js_test_count

Remove the whitespace before the equals sign. These do not need to be aligned across lines.
Attachment #738025 - Flags: review?(tojonmz)
Attachment #738025 - Flags: review?(hskupin)
Attachment #738025 - Flags: review-
Assignee
Comment 19•11 years ago
Attachment #738067 - Flags: review?(dave.hunt)
Updated•11 years ago
Attachment #738067 - Flags: review?(hskupin)
Updated•11 years ago
Attachment #738025 - Attachment is obsolete: true
Comment 20•11 years ago
Comment on attachment 738067 [details] [diff] [review]
Deleted the unnecessary white spaces. Hope this one is good enough :)

Review of attachment 738067 [details] [diff] [review]:
-----------------------------------------------------------------
Thank you for the patch, Souradeep. It looks fine, but there are a few things I want to comment on to give it a better output and cleaner code. Please see my inline comments.

::: mutt/mutt/mutt.py
@@ +97,5 @@
>
> def report(fail, pyresults=None, jsresults=None, options=None):
> +    py_failures = getattr(pyresults, 'failures', [])
> +    py_errors = getattr(pyresults, 'errors', [])
> +    js_fails = getattr(jsresults, 'fails', [])

I would completely drop those getters here and initialize those variables with 0 only. Later, inside the if clauses for py and js tests, you can retrieve the appropriate counts for failures and total, and finally sum them up.

@@ +102,5 @@
> +    fail_count = len(py_failures) + len(py_errors) + len(js_fails)
> +
> +    py_test_count = getattr(pyresults, 'testsRun', 0)
> +    js_test_count = len(getattr(jsresults, 'alltests', []))
> +    test_total = py_test_count + js_test_count

Same with those. We could save ourselves the getattr() calls.

@@ +105,5 @@
> +    js_test_count = len(getattr(jsresults, 'alltests', []))
> +    test_total = py_test_count + js_test_count
> +
> +    print "Total passed: %d" % (test_total - fail_count)
> +    print "Total failed: %d" % (fail_count)

I would love it if we could print out the total numbers at the very end. With this patch, all the failure details for py and js tests will follow, as you can see in the if clauses. If there are a lot of failures, it's hard to find those numbers to get an overview.
Attachment #738067 - Flags: review?(hskupin)
Attachment #738067 - Flags: review?(dave.hunt)
Attachment #738067 - Flags: review-
Comment 21•11 years ago
To clarify, I believe Henrik is suggesting setting py_failure_count, py_test_count, js_failure_count, and js_test_count to 0 at the start of the method. Then, within the 'if pyresults:' block, set the correct test and failure counts, and the same for the 'if jsresults:' block. This will be possible because he's also suggested outputting the final results after these blocks instead of before.
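The pattern described above might look like this in outline: all counters initialized to zero, filled in inside the per-results blocks, totals printed last. This is a sketch, not the landed code (the real report() also prints the individual failure details, and mutt itself used Python 2 print statements rather than the print() function used here):

```python
def report(fail, pyresults=None, jsresults=None):
    """Print pass/fail totals after the per-suite detail blocks."""
    py_failure_count = js_failure_count = 0
    py_test_count = js_test_count = 0

    if pyresults:
        py_failure_count = len(pyresults.failures) + len(pyresults.errors)
        py_test_count = pyresults.testsRun
        # ... per-failure details would be printed here ...

    if jsresults:
        js_failure_count = len(jsresults.fails)
        js_test_count = len(jsresults.alltests)
        # ... per-failure details would be printed here ...

    # Totals come after the detail blocks, per the review comments
    fail_total = py_failure_count + js_failure_count
    test_total = py_test_count + js_test_count
    print("Total passed: %d" % (test_total - fail_total))
    print("Total failed: %d" % fail_total)
    return 1 if fail else 0
```

Because the counts are set inside the if blocks, a run with only one harness's results still prints sensible totals.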
Assignee
Comment 22•11 years ago
Attachment #738067 - Attachment is obsolete: true
Attachment #739556 - Flags: review?(hskupin)
Attachment #739556 - Flags: review?(dave.hunt)
Comment 23•11 years ago
Comment on attachment 739556 [details] [diff] [review]
Got rid of the getattr calls.

Review of attachment 739556 [details] [diff] [review]:
-----------------------------------------------------------------
We're almost there! I think with the next version we should be good to land it. Thanks Souradeep!

::: mutt/mutt/mutt.py
@@ +98,5 @@
> def report(fail, pyresults=None, jsresults=None, options=None):
> +    py_fail = 0
> +    js_fail = 0
> +    py_test_count = 0
> +    js_test_count = 0

These are all counts, so let's add a _count suffix to them all. I would also list them alphabetically so py and js are grouped.

@@ +110,2 @@
> for failure in pyresults.failures:
> +    py_fail += 1

Rather than incrementing within the loop, you can just use py_fail = len(pyresults.failures) + len(pyresults.errors).

@@ +123,3 @@
> for module in jsresults.fails:
>     for failure in module["fails"]:
> +        js_fail += 1

Same here, you can just do js_fail = len(jsresults.fails).

@@ +137,4 @@
> else:
>     print "No Javascript Failures"
>
> +    fail_count = py_fail + js_fail

This is also a total, so we should name it fail_total.

@@ +137,5 @@
> else:
>     print "No Javascript Failures"
>
> +    fail_count = py_fail + js_fail
> +    test_total = py_test_count + js_test_count

We could just compute the pass total, as we don't use the test total for anything else.

@@ +139,5 @@
> +    fail_count = py_fail + js_fail
> +    test_total = py_test_count + js_test_count
> +    print "Total passed: %d" % (test_total - fail_count)
> +    print "Total failed: %d" % (fail_count)

No need for the () when there's a single value.

@@ +143,5 @@
> +    print "Total failed: %d" % (fail_count)
> +
> +    if not fail:
> +        return 0
> +

Now that these lines are together, you can use an else statement.
Attachment #739556 - Flags: review?(hskupin)
Attachment #739556 - Flags: review?(dave.hunt)
Attachment #739556 - Flags: review-
Assignee
Comment 24•11 years ago
Attachment #739556 - Attachment is obsolete: true
Attachment #739646 - Flags: review?(hskupin)
Attachment #739646 - Flags: review?(dave.hunt)
Comment 25•11 years ago
Comment on attachment 739646 [details] [diff] [review]
Removed some variables to make the code cleaner.

Review of attachment 739646 [details] [diff] [review]:
-----------------------------------------------------------------
This looks way better than the last patch I checked. There are some inline comments I would still like to see addressed. It looks like it's nearly ready to land.

::: mutt/mutt/mutt.py
@@ +104,5 @@
> print "=" * 75
> if pyresults:
>     print "Python Failures:"
> +    fail_total = len(pyresults.failures) + len(pyresults.errors)
> +    test_total = pyresults.testsRun

Similar to the jsresults block, I would suggest we make use of the '+=' operator here. That way we do not overwrite any value set before, which would be more obvious when the code changes.

@@ +134,5 @@
> else:
>     print "No Javascript Failures"
>
> +    print "Total passed: %d" % (test_total - fail_total)
> +    print "Total failed: %d" % fail_total

Something I missed before: can you also please add the skip count, listed after the number of failed tests? With that the output would be perfect.

@@ +139,5 @@
> +
> +    if not fail:
> +        return 0
> +    else:
> +        return 1

You can return `int(!fail)` here.
Attachment #739646 - Flags: review?(hskupin)
Attachment #739646 - Flags: review?(dave.hunt)
Attachment #739646 - Flags: review-
Assignee
Comment 26•11 years ago
OK, so I got total_skipped from len(pyresults.skipped) and len(jsresults.skipped). And I guess you meant to return int(fail) rather than int(not fail). I will attach the patch as soon as you confirm. :)
Comment 27•11 years ago
Oh, yes. Thanks for catching that. That is what I meant.
Assignee
Comment 28•11 years ago
Attachment #739646 - Attachment is obsolete: true
Attachment #740982 - Flags: review?(hskupin)
Comment 29•11 years ago
Comment on attachment 740982 [details] [diff] [review]
Made the changes you requested

Review of attachment 740982 [details] [diff] [review]:
-----------------------------------------------------------------
::: mutt/mutt/mutt.py
@@ +106,5 @@
> if pyresults:
>     print "Python Failures:"
> +    fail_total += len(pyresults.failures) + len(pyresults.errors)
> +    test_total += pyresults.testsRun
> +    skipped_total += len(pyresults.skipped)

Looks like we aren't complete here. :/ When Python tests have been disabled in the manifest, they are most likely not seen by the pytest harness, because we only push the active ones. What is the definition of skipped tests for pytest? Where does it get the info from? I assume a test counts as skipped when the test itself sets something?

@@ +137,4 @@
> else:
>     print "No Javascript Failures"
>
> +    print "Total passed: %d" % (test_total - fail_total)

We should add a new line before the output, so it's not right after the 'No Javascript Failures' line when running mutt via 'mutt testpy'.
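For reference, the shape this round of review converges on: `+=` accumulation, a skipped total, a separating blank line, and the int(fail) return agreed on in comment 27. This is a reconstruction assembled from the quoted review snippets (with print() instead of Python 2 print statements), not the landed code:

```python
def report(fail, pyresults=None, jsresults=None):
    """Summarize pass/fail/skip totals across both harnesses."""
    fail_total = test_total = skipped_total = 0

    if pyresults:
        fail_total += len(pyresults.failures) + len(pyresults.errors)
        test_total += pyresults.testsRun
        skipped_total += len(pyresults.skipped)

    if jsresults:
        fail_total += len(jsresults.fails)
        test_total += len(jsresults.alltests)
        skipped_total += len(jsresults.skipped)

    # Blank line so the summary is not glued to the failure details
    print("\nTotal passed: %d" % (test_total - fail_total))
    print("Total failed: %d" % fail_total)
    print("Total skipped: %d" % skipped_total)
    return int(fail)
```

The `+=` accumulation means neither block overwrites values set by the other, which is the concern raised in the review above.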
Attachment #740982 - Flags: review?(hskupin) → review-
Assignee
Comment 30•11 years ago
Can you please elaborate on what I need to do with the manifest file?
Comment 31•11 years ago
I have checked the manifestparser and how it handles skipped/disabled tests, and I think we can safely push this out of this bug into a new one once the patch here has landed. I will file it once we are done here. Souradeep, would you mind doing the remaining update of the patch as given in my last review? Thanks.
Assignee
Comment 32•11 years ago
Attachment #740982 - Attachment is obsolete: true
Attachment #742988 - Flags: review?(hskupin)
Comment 33•11 years ago
Comment on attachment 742988 [details] [diff] [review]
Done with the changes you requested.

Review of attachment 742988 [details] [diff] [review]:
-----------------------------------------------------------------
That looks fantastic now! Thank you, Souradeep. I will get this landed and the other bug filed. If you are interested in working on another bug, please let me know, or grab one of our mentored bugs on your own.
Attachment #742988 - Flags: review?(hskupin) → review+
Comment 34•11 years ago
Landed on master: https://github.com/mozilla/mozmill/commit/fd25d7252fc5f1d560438b1019e2b7f4ee32b5e4
Status: ASSIGNED → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Summary: Mutt tests - Remove or improve the "All tests were successful. Ship it!" message. → Mutt tests should print the total amount of passed/failed/skipped tests at the end
Whiteboard: [mentor=whimboo][lang=py][good first bug] → [mentor=whimboo][lang=py][good first bug][mozmill-2.0][ateamtrack: p=mozmill q=2013q2 m=2]
Assignee
Comment 35•11 years ago
Yeah, I will work on more bugs, but only from June; I have my semester exams in between. :( Thanks a lot for helping me out here. :)
Comment 36•11 years ago
Great, I'm looking forward to it. For now, good luck with your exams!
Updated•8 years ago
Product: Testing → Testing Graveyard