Closed Bug 827867 Opened 12 years ago Closed 11 years ago

Add OutputParser to Gaia UI tests to print summary for tbpl's usage

Categories

(Release Engineering :: General, defect, P1)

x86
macOS
defect

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: armenzg, Assigned: armenzg)

References

Details

Attachments

(3 files, 1 obsolete file)

      No description provided.
What do you think?
Attachment #699244 - Flags: review?(aki)
Blocks: 802317
This is what the output looks like:

08:43:38     INFO -  passed: 14
08:43:38     INFO -  failed: 23
08:43:38     INFO -  todo: 0
08:43:38     INFO -  starting httpd
08:43:38    ERROR - Return code: 10
08:43:38     INFO - TinderboxPrint: results: 14/<em class="testfail">23</em>/0
08:43:38     INFO - # TBPL WARNING #
So I can point to it in my review comment.
Priority: -- → P1
Comment on attachment 699244 [details] [diff] [review]
refactor + add output parser for gaia ui tests

Overall, I like this patch.

From pyflakes:
scripts/b2g_panda.py:9: 're' imported but unused
scripts/marionette.py:228: undefined name 'failed'

(I got this from running ./unit.sh)

The latter is serious, since it would have broken all marionette emulator jobs.

>+    def parse_single_line(self, line):
>+        super(TestSummaryOutputParserHelper, self).parse_single_line(line)
>+        m = self.regex.match(line)
>+        if m:
>+            try:
>+                setattr(self, m.group(1), int(m.group(2)))
>+            except ValueError:
>+                # ignore bad values
>+                pass
>+
>+        # generate the TinderboxPrint line for TBPL
>+        emphasize_fail_text = '<em class="testfail">%s</em>'
>+        failed = "0"
>+        if self.passed == 0 and self.failed == 0:
>+            self.tsummary = emphasize_fail_text % "T-FAIL"
>+        else:
>+            if self.failed > 0:
>+                failed = emphasize_fail_text % str(self.failed)
>+            self.tsummary = "%d/%s/%d" % (self.passed, failed, self.todo)

As written, we're going to generate self.tsummary for every single line of output from the test (when parse_single_line() is called).  I haven't counted, but this could be upwards of hundreds or thousands of iterations.

I attached a patch to the bug that creates an OutputParser.finish() method that run_command() calls when we're done sending it output to parse; you can generate the tbpl line there.

Alternately, you can put the line generation inside the try: portion of the try/except, below the setattr().  That will generate the tbpl line three times every test run.
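For reference, here's a rough sketch of what I mean (illustrative only -- the default regex and the exact finish() wiring are assumptions based on this review comment, not necessarily what the final patch should look like):

import re

# OutputParser is imported from mozharness.base.log in the current patch
from mozharness.base.log import OutputParser

class TestSummaryOutputParserHelper(OutputParser):
    def __init__(self, regex=re.compile(r'(passed|failed|todo): (\d+)'), **kwargs):
        self.regex = regex
        self.passed = 0
        self.failed = 0
        self.todo = 0
        self.tsummary = None
        super(TestSummaryOutputParserHelper, self).__init__(**kwargs)

    def parse_single_line(self, line):
        # only accumulate counts here; this runs for every line of output
        super(TestSummaryOutputParserHelper, self).parse_single_line(line)
        m = self.regex.match(line)
        if m:
            try:
                setattr(self, m.group(1), int(m.group(2)))
            except ValueError:
                # ignore bad values
                pass

    def finish(self):
        # called once by run_command() when there's no more output to parse;
        # generate the TinderboxPrint summary here instead of per line
        emphasize_fail_text = '<em class="testfail">%s</em>'
        if self.passed == 0 and self.failed == 0:
            self.tsummary = emphasize_fail_text % "T-FAIL"
        else:
            failed = "0"
            if self.failed > 0:
                failed = emphasize_fail_text % str(self.failed)
            self.tsummary = "%d/%s/%d" % (self.passed, failed, self.todo)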

> # BaseLogger {{{1
> class BaseLogger(object):
>diff --git a/scripts/b2g_panda.py b/scripts/b2g_panda.py
>--- a/scripts/b2g_panda.py
>+++ b/scripts/b2g_panda.py
>@@ -6,6 +6,7 @@
> # ***** END LICENSE BLOCK *****
> 
> import os
>+import re

Unused.

>+        #string = "\npassed: 11\nfailed: 25\ntodo: 0\n"
>+        #cmd = ["echo", string]

Clean up on aisle 7 :)

>diff --git a/scripts/marionette.py b/scripts/marionette.py
>--- a/scripts/marionette.py
>+++ b/scripts/marionette.py
>@@ -14,7 +14,7 @@ import sys
> sys.path.insert(1, os.path.dirname(sys.path[0]))
> 
> from mozharness.base.errors import PythonErrorList, TarErrorList
>-from mozharness.base.log import INFO, ERROR, OutputParser
>+from mozharness.base.log import INFO, ERROR, TestSummaryOutputParserHelper 

Nit: trailing whitespace

> class MarionetteTest(TestingMixin, TooltoolMixin, EmulatorMixin, BaseScript):
>     config_options = [
>@@ -236,20 +224,8 @@ class MarionetteTest(TestingMixin, Toolt
>             level = ERROR
>             tbpl_status = TBPL_FAILURE
> 
>-        # generate the TinderboxPrint line for TBPL
>-        emphasize_fail_text = '<em class="testfail">%s</em>'
>-        failed = "0"
>-        if marionette_parser.passed == 0 and marionette_parser.failed == 0:
>-            tsummary = emphasize_fail_text % "T-FAIL"
>-        else:
>-            if marionette_parser.failed > 0:
>-                failed = emphasize_fail_text % str(marionette_parser.failed)
>-            tsummary = "%d/%s/%d" % (marionette_parser.passed,
>-                                     failed,
>-                                     marionette_parser.todo)
>-
>         # dump logcat output if there were failures
>-        if self.config.get('emulator') and (failed != "0" or 'T-FAIL' in tsummary):
>+        if self.config.get('emulator') and (failed != "0" or 'T-FAIL' in marionette_parser.tsummary):

No such thing as 'failed' here.  Maybe marionette_parser.failed ?
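i.e., something along these lines (sketch only; assumes the parser attributes from your patch above):

# use the parser's attributes instead of the removed local variable
if self.config.get('emulator') and (marionette_parser.failed > 0
                                    or 'T-FAIL' in marionette_parser.tsummary):
    # dump logcat output since there were failures
    ...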


Minusing for that and generating the tbpl line too many times, but you're close.
Attachment #699244 - Flags: review?(aki) → review-
Oh, also, could you put TestSummaryOutputParser in mozharness.mozilla.testing.unittest or something?

It definitely doesn't belong in mozharness.base.log (mozharness.base.* should avoid anything mozilla-specific).
I'm still fighting to be able to run unit.sh

Do you have any documentation about it?

For instance, you knew that I needed to create a virtualenv and install a specific version of Mercurial. Would you prefer that I file a bug to create a venv for the Mercurial tests?
Attachment #699244 - Attachment is obsolete: true
Attachment #699407 - Flags: feedback?(aki)
Attachment #699407 - Flags: feedback?(aki) → review?(aki)
(In reply to Armen Zambrano G. [:armenzg] from comment #6)
> I'm still fighting to be able to run unit.sh
> 
> Do you have any documentation about it?

Just the script header comments right now.

> For instance, you knew that I needed to create a virtualenv and install a
> specific version of Mercurial. Would you prefer that I file a bug to create a
> venv for the Mercurial tests?

Sure.  I'd rather the mercurial stuff work with more versions of mercurial, or maybe we should tear out the mercurial code and just use hgtool.  But I was thinking about a bug either way.
Comment on attachment 699407 [details] [diff] [review]
refactor + add output parser for gaia ui tests

Awesome.

You may want to coordinate with jgriffin since you both have decent-sized changes to land to these scripts, either to make sure things go green between landings, or to land at the same time to minimize churn.  He's waiting until morning to land his so he can make sure everything goes green.
Attachment #699407 - Flags: review?(aki) → review+
Attachment #699407 - Flags: checked-in+
tbpl now shows the gaia ui test results (see the 3rd job):
https://tbpl.mozilla.org/?tree=Cedar&jobname=b2g_panda%20cedar%20opt%20test%20gaia-ui-test
> from tbpl: results: 14/22/0
> from log:  10:27:22     INFO - TinderboxPrint: results: 14/<em class="testfail">22</em>/0

I will close as soon as I see marionette working as well.
Marionette jobs report properly as well.

> from tbpl: results: 132/2/0
> from log:  10:54:19     INFO - TinderboxPrint: results: 132/<em class="testfail">2</em>/0
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Might as well.
Attachment #700007 - Flags: review?(aki)
Attachment #700007 - Flags: review?(aki) → review+
Attachment #700007 - Flags: checked-in+
This is causing red desktop Mn tests on inbound:  https://tbpl.mozilla.org/?tree=Mozilla-Inbound&jobname=marionette
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
From a failing log:

Traceback (most recent call last):
  File "scripts/scripts/marionette.py", line 22, in <module>
    from mozharness.mozilla.testing.unittest import EmulatorMixin, TestSummaryOutputParserHelper
  File "/builds/slave/talos-slave/test/scripts/mozharness/mozilla/testing/unittest.py", line 51
    class DesktopUnittestOutputParser(OutputParser):
        ^
SyntaxError: invalid syntax
Landed a bustage fix.
Status: REOPENED → RESOLVED
Closed: 11 years ago → 11 years ago
Resolution: --- → FIXED
Product: mozilla.org → Release Engineering
Component: General Automation → General