Closed Bug 523758 Opened 14 years ago Closed 13 years ago

run mobile specific browser-chrome tests on maemo

(Release Engineering :: General, defect, P3)
(Not tracked)
(Reporter: jmaher, Assigned: mozilla)
(Whiteboard: [mobile][unittest])
(3 files, 5 obsolete files)

We have put effort into getting our browser-chrome tests for Fennec passing and cleaned up.  Now we want to see these on tinderbox, especially since they should all be green!

We will need to create another package from the objdir:
make package-mobile-tests

This will create a fennec-1.0.*tests.tar.bz2 file (similar to what we currently have with xulrunner.*tests.tar.bz2).  It will have to be copied to the device and unpacked just like the xulrunner tests file.

I need to add support to maemkit so we can run:
python --testtype=browser

We still need to sort out how well this will play if you unpack both the fennec*tests.tar.bz2 and the xulrunner*tests.tar.bz2 into the same directory, since all the Firefox browser-chrome tests will be there as well.
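A local simulation can show what the unpack-both-tarballs concern looks like in practice. Everything below is illustrative: the directory layouts and filenames are stand-ins for the real package contents, not taken from the actual tarballs.

```shell
# Simulate unpacking both test tarballs into one directory.
# Layouts/filenames here are illustrative stand-ins only.
WORK=$(mktemp -d)
mkdir -p "$WORK/xul/mochitest/tests" "$WORK/mobile/mochitest/browser"
tar -cjf "$WORK/xulrunner.tests.tar.bz2" -C "$WORK/xul" .
tar -cjf "$WORK/fennec-1.0.tests.tar.bz2" -C "$WORK/mobile" .
mkdir -p "$WORK/unpacked"
tar -xjf "$WORK/xulrunner.tests.tar.bz2" -C "$WORK/unpacked"
tar -xjf "$WORK/fennec-1.0.tests.tar.bz2" -C "$WORK/unpacked"
ls "$WORK/unpacked/mochitest"   # both sets of tests land side by side
```

The point of the sketch: both archives unpack into the same mochitest/ tree, so the harness sees the desktop and mobile test directories merged together.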

If there is anything else that needs to be done, please add it to the bug or create dependent bugs.
Out of curiosity, is there a reason these tests can't live inside the same tests tarball?
Fennec is 2 pieces: xulrunner and a mobile UI.  Right now the build structure is set up so that these are segmented into separate build directories.  So without modifying the script that makes the xulrunner tests package, or replicating all of that logic in the mobile tree, we end up with a solution like this.

I actually like this better because a developer doesn't have to pull the bulk of xulrunner's test data, but can grab just the mobile-related tests and run them.
Component: Release Engineering → Release Engineering: Future
Does moving this to Future mean this is delayed?  If so what would the timeline be?  Maybe we should revisit this when we have the fennec*tests.tar.bz2 file generated.
Mass move of bugs from Release Engineering:Future -> Release Engineering.
Component: Release Engineering: Future → Release Engineering
Priority: -- → P3
Whiteboard: [mobile][unittest]
Depends on: 535922
Now that we have clean builds of the browser-chrome test package, we should run these tests sooner rather than later.

We keep finding bugs in Firefox Makefiles that break our mobile packaging, so running this on the try server would be very beneficial.  Bug 575607 is a dup of this.

To run browser-chrome tests:
1) download the tests to the device
2) download fennec to the device
3) unpack both in the same directory
4) run: python mochitest/ --appname=fennec/fennec --xre-path=fennec --certificate-path=certs --utility-path=bin --browser-chrome --autorun
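The run step above can be sketched as a small command builder. The flag values mirror the command in this comment; the relative paths are an assumption that the current directory is where both tarballs were unpacked.

```python
import os

def build_browser_chrome_cmd():
    """Assemble the browser-chrome invocation from step 4 above.
    Relative paths assume cwd is the unpack directory (assumption)."""
    return [
        "python", os.path.join("mochitest", ""),
        "--appname=fennec/fennec",   # fennec binary from the build tarball
        "--xre-path=fennec",         # xulrunner sits alongside the binary
        "--certificate-path=certs",
        "--utility-path=bin",
        "--browser-chrome",          # browser-chrome rather than plain mochitest
        "--autorun",
    ]

if __name__ == "__main__":
    print(" ".join(build_browser_chrome_cmd()))
```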

Alternatively you can run this with the remote testing infrastructure that we
have for Android, or we could add an option to maemkit which would be just like
'chrome', but would change out the --chrome cli flag for --browser-chrome.
Putting it in maemkit with --test-type=browser-chrome would be the best in terms of our 'for' loop.  However, I believe I have a patch that will run this properly without the maemkit patch... testing in staging.
Hm.  Does this look right?

python fennec/mochitest/ --appname=fennec/fennec --xre-path=fennec --certificate-path=certs --utility-path=bin --browser-chrome --autorun --close-when-done
 in dir /builds (timeout 3600 secs)

Traceback (most recent call last):
  File "fennec/mochitest/", line 703, in <module>
  File "fennec/mochitest/", line 700, in main
  File "fennec/mochitest/", line 545, in runTests
  File "fennec/mochitest/", line 394, in startWebServer
  File "fennec/mochitest/", line 273, in start
    self._process = self._automation.Process([xpcshell] + args, env = env)
  File "/builds/fennec/mochitest/", line 173, in __init__
    universal_newlines, startupinfo, creationflags)
  File "/usr/lib/python2.6/", line 633, in __init__
    errread, errwrite)
  File "/usr/lib/python2.6/", line 1139, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
The command has 'python fennec/mochitest/' but it should be 'python mochitest/'; try removing the prepended fennec/.
Ah, we extract the tests tarball inside of /builds/fennec/ for everything else -- do I need to create an exception for this one test?
Attached patch wip browser-chrome configs (obsolete) — Splinter Review
staging only.
Works, though it crashes:

TEST-UNEXPECTED-FAIL | chrome://mochikit/content/browser/mobile/chrome/browser_FormAssistant.js | application timed out after 330 seconds with no output
INFO | | Application ran for: 0:05:46.281311
INFO | | Reading PID log: /var/tmp/tmpf_2AAnpidlog
PROCESS-CRASH | chrome://mochikit/content/browser/mobile/chrome/browser_FormAssistant.js | application crashed (minidump found)
No symbols path given, can't process dump.
Neither MINIDUMP_STACKWALK nor MINIDUMP_STACKWALK_CGI is set, can't process dump.
WARNING | automationutils.processLeakLog() | refcount logging is off, so leaks can't be detected!
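To get that minidump symbolicated, the harness needs to be told where a minidump_stackwalk binary lives via the environment variable it complains about. The path below is a hypothetical example, not the real location on the builders.

```shell
# Point the harness at a local minidump_stackwalk binary so crash
# dumps get processed. The path is a hypothetical example.
export MINIDUMP_STACKWALK=/builds/tools/breakpad/linux/minidump_stackwalk
echo "MINIDUMP_STACKWALK=$MINIDUMP_STACKWALK"
```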
Attached patch wip custom patch (obsolete) — Splinter Review
jhford would prefer to land the changes in maemkit that would reduce these patches to just enabling a unit test called "browser-chrome" and otherwise keeping everything the same... that definitely feels cleaner to me too.

Looks like we need to create a BrowserChrometest class that inherits from Chrometest (like Crashtest does) and overrides specific settings?
This is untested, so if you want me to run this first, that's perfectly valid.
Sorry for the large patch; I changed to unix fileformat (to remove ^M line endings) so the entire file diffs.  I really only changed a few lines in there.
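The subclass idea could look roughly like this. Note that Chrometest here is a stand-in with only the attributes discussed in this bug (testname, testdir); the real maemkit class has more to it.

```python
class Chrometest(object):
    """Stand-in for maemkit's chrome driver; attributes are assumed
    from the review comments in this bug, not the real source."""
    def __init__(self, testname="chrome"):
        self.testname = testname   # selects config_options[testname]
        self.testdir = "chrome"    # directory under testroot holding the tests

class BrowserChrometest(Chrometest):
    """Same driver, but swaps --chrome for --browser-chrome semantics."""
    def __init__(self):
        Chrometest.__init__(self, testname="browser-chrome")
        self.testdir = "browser"   # browser-chrome tests live under "browser"
```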
Assignee: nobody → aki
Attachment #456623 - Flags: review?(jmaher)
Attached patch buildbot-configs change. (obsolete) — Splinter Review
In theory this should be all that's needed once we have the maemkit patch checked in.
Attachment #454941 - Attachment is obsolete: true
Attachment #454946 - Attachment is obsolete: true
Comment on attachment 456624 [details] [diff] [review]
same patch, but with fileformat dos, so we don't get a huge whitespace diff

>diff --git a/ b/

>-    self.addOptions(mkit.config_options["chrome"])
>+    self.addOptions(mkit.config_options[testname])
>+    self.testname = testname

I would add a self.testdir as well; it could look something like this:
if self.testname == "browser-chrome":
  self.testdir = "browser"
else:
  self.testdir = "chrome"

>   def getTests(self):
>     fromdir = os.path.normpath(self.options["backupdir"])
>-    todir = os.path.normpath(os.path.join(self.options["testroot"], "chrome"))
>+    todir = os.path.normpath(os.path.join(self.options["testroot"], self.testname))

change this from self.testname to self.testdir

>     todir = os.path.normpath(self.options["backupdir"])
>-    fromdir = os.path.normpath(os.path.join(self.options["testroot"], "chrome"))
>+    fromdir = os.path.normpath(os.path.join(self.options["testroot"], self.testname))

change this from self.testname to self.testdir

>diff --git a/maemkit-maemo.cfg b/maemkit-maemo.cfg
>+autorun = True
>+close-when-done = True
>+utility-path = /builds/fennec/bin
>+certificate-path = /builds/fennec/certs
>+backupdir = browser-chrome.bak

maybe make this 'browser.bak' instead of 'browser-chrome.bak'

>diff --git a/maemkit.cfg b/maemkit.cfg
>+autorun = True
>+close-when-done = True
>+utility-path = /builds/fennec/bin
>+certificate-path = /builds/fennec/certs
>+backupdir = browser-chrome.bak

same here: browser.bak vs browser-chrome.bak

Overall, I think this is pretty straightforward.  r=me with the issues above fixed.
Attachment #456624 - Flags: review+
Attachment #456623 - Flags: review?(jmaher)
Comment on attachment 456624 [details] [diff] [review]
same patch, but with fileformat dos, so we don't get a huge whitespace diff
Checked in with Joel's changes, plus converting to unix fileformat.
Attachment #456624 - Flags: checked-in+
We evidently have to set a pref, "browser.tabs.remote" = false, to run these successfully on e10s-enabled builds (which includes m-c atm)... this should add a decent amount of complexity to our configs.

I'm probably going to shift focus to bug 555892 a bit then circle back to this one.
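One way to flip that pref, assuming the harness honors a user.js dropped into the generated test profile (which is how mochitest profiles normally take prefs; the exact plumbing for our configs is the open question above):

```
// user.js in the test profile (hypothetical location):
// disable remote tabs so browser-chrome runs on e10s-enabled builds.
user_pref("browser.tabs.remote", false);
```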
Attached patch weird (obsolete) — Splinter Review
So this is my configs patch that I tested on smm2, which in theory was supposed to disable ipc.  I got 2 of 3 tests green, one orange.

Then, looking at it, I'm not disabling ipc at all.

Not sure if we should leave it as is, or remove the |default_platform['disable_ipc'] = False| or what.
Attachment #456635 - Attachment is obsolete: true
Attachment #457242 - Flags: review?(jhford)
Attached patch should work, in theory (obsolete) — Splinter Review
I can run this through testing again tomorrow. Or, if we rip disable_ipc out entirely, this patch isn't needed.
Attachment #457243 - Flags: review?(jhford)
Comment on attachment 457242 [details] [diff] [review]

>-#default_platform['test_suites']['chrome']['test_type'] = 'mochitest'
>+#default_platform['test_suites']['chrome']['test_type'] = 'chrome'

I was under the impression that we ran chrome as a mochitest as it uses the mochitest framework?
(In reply to comment #21)
> I was under the impression that we ran chrome as a mochitest as it uses the
> mochitest framework?

Maybe, though there's a maemkit chrome driver that gets used if you use a test type of chrome instead of mochitest.  Joel?
Attachment #457243 - Flags: review?(jhford)
Attachment #457242 - Flags: review?(jhford)
Yup, should be chrome.
I'm holding off on this until e10s testing works... the disable-ipc stuff I've tried hasn't worked properly.
This worked in staging (where "worked" == launched and ran tests; 754 tests passed, 164 failed).
That's much better than before, when it wasn't launching.
Attachment #457242 - Attachment is obsolete: true
Attachment #457243 - Attachment is obsolete: true
Attachment #470638 - Flags: review?(jhford)
hey, that is good progress!  When I run this locally, I get 308 passed and 7 failed.  That seems like a big difference.
Locally, meaning linux desktop? or locally, meaning n900?
good point.  I am unable to get it to run with a build on my device (build from last week), so my data point is linux desktop.
Yeah, >800 tests seems too high. I'd love to have that many tests, but we should have a number closer to Joel's.
As I see it, we can a) block on landing until we figure out the test # disparity, or b) continue with review, land, and figure out test # disparity after it lands.

Do you have a preference?
I think the problem is that there are >800 tests that you are seeing and locally there are ~350.  I vote for landing this.  

Seeing logs will help determine what is going on and I am sure just a few small patches will get us to green.
Attachment #470638 - Flags: review?(jhford) → review+
Comment on attachment 470638 [details] [diff] [review]
enable browser-chrome tests for n900s/n810s
Attachment #470638 - Flags: checked-in+
Landed, and running. We're only running 12 tests due to the Awesomescreen test hanging.
Depends on: 593463
This is running -- should we use specific bugs, like bug 593463, to track any issues that keep this from going green?
I'm going to mark this as resolved, per comment 33.
Once bug 593463 is resolved we can file more bugs to get this running to completion and green, or maybe it'll Just Work after bug 593463.
Closed: 13 years ago
Resolution: --- → FIXED
Product: → Release Engineering