Closed Bug 523758 Opened 13 years ago Closed 12 years ago
run mobile specific browser-chrome tests on maemo
We have put effort into getting our browser-chrome tests for Fennec passing and cleaned up. Now we want to see these on tinderbox, especially since they should all be green!

We will need to create another package from the objdir:

  $(objdir)/mobile/mobile/installer
  make package-mobile-tests

This will create a fennec-1.0.*tests.tar.bz2 file (similar to what we currently have with xulrunner.*tests.tar.bz2). It will have to be copied to the device and unpacked just like the xulrunner tests file. I need to add support to maemkit so we can run:

  python maemkit-chunked.py --testtype=browser

It is still to be sorted out how well this will play if you unpack both the fennec*tests.tar.bz2 and the xulrunner*tests.tar.bz2 into the same directory, since all the Firefox browser-chrome tests will be there as well. If there is anything else that needs to be done, please add it to this bug or file dependent bugs.
Out of curiosity, is there a reason why these tests can't live inside the same tests tarball?
Fennec is two pieces: xulrunner and a mobile UI. Right now the build structure is set up such that these are segmented into separate build directories. So without modifying the script that makes the xulrunner tests package, or replicating all of that logic into the mobile tree, we have a solution like this. I actually like this better: a developer doesn't get the bulk of test data that xulrunner has, but can get all the mobile-related tests and just run them.
Component: Release Engineering → Release Engineering: Future
Does moving this to Future mean this is delayed? If so what would the timeline be? Maybe we should revisit this when we have the fennec*tests.tar.bz2 file generated.
Mass move of bugs from Release Engineering:Future -> Release Engineering. See http://coop.deadsquid.com/2010/02/kiss-the-future-goodbye/ for more details.
Component: Release Engineering: Future → Release Engineering
Priority: -- → P3
Now that we have clean builds of the browser-chrome test package, we should run these tests sooner rather than later. We keep finding bugs in Firefox Makefiles that break our mobile packaging, so running this on try server would be very beneficial. Bug 575607 is a dup of this one.

To run browser-chrome tests:

1) Download the tests to the device:
   http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-mobile-trunk/fennec-2.0a1pre.multi.linux-gnueabi-arm.tests.zip
2) Download fennec to the device:
   http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-mobile-trunk/fennec-2.0a1pre.multi.linux-gnueabi-arm.tar.bz2
3) Unpack both in the same directory.
4) Run:
   python mochitest/runtests.py --appname=fennec/fennec --xre-path=fennec --certificate-path=certs --utility-path=bin --browser-chrome --autorun --close-when-done

Alternatively, you can run this with the remote testing infrastructure that we have for Android, or we could add an option to maemkit that would be just like 'chrome', but would swap out the --chrome cli flag for --browser-chrome.
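The step-4 invocation above can be sketched as the argv list it boils down to. This is illustrative only; the paths assume steps 1–3 unpacked both archives into the current working directory:

```python
# Sketch: the runtests.py invocation from step 4, built as an argv list.
# Paths assume both the tests zip and the fennec tarball were unpacked
# into the current directory (per steps 1-3).
cmd = [
    "python", "mochitest/runtests.py",
    "--appname=fennec/fennec",     # the binary from the fennec tarball
    "--xre-path=fennec",           # XRE lives alongside the binary
    "--certificate-path=certs",    # from the tests zip
    "--utility-path=bin",          # xpcshell etc., also from the tests zip
    "--browser-chrome",            # browser-chrome instead of plain --chrome
    "--autorun",
    "--close-when-done",
]
print(" ".join(cmd))
```

Note the script is invoked as `mochitest/runtests.py`, not `fennec/mochitest/runtests.py`; getting that prefix wrong is an easy way to end up with path errors.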
Putting it in maemkit with --test-type=browser-chrome would be the best in terms of our 'for' loop. However, I believe I have a patch that will run this properly without the maemkit patch... testing in staging.
Hm. Does this look right?

python fennec/mochitest/runtests.py --appname=fennec/fennec --xre-path=fennec --certificate-path=certs --utility-path=bin --browser-chrome --autorun --close-when-done
 in dir /builds (timeout 3600 secs)

Traceback (most recent call last):
  File "fennec/mochitest/runtests.py", line 703, in <module>
    main()
  File "fennec/mochitest/runtests.py", line 700, in main
    sys.exit(mochitest.runTests(options))
  File "fennec/mochitest/runtests.py", line 545, in runTests
    self.startWebServer(options)
  File "fennec/mochitest/runtests.py", line 394, in startWebServer
    self.server.start()
  File "fennec/mochitest/runtests.py", line 273, in start
    self._process = self._automation.Process([xpcshell] + args, env = env)
  File "/builds/fennec/mochitest/automation.py", line 173, in __init__
    universal_newlines, startupinfo, creationflags)
  File "/usr/lib/python2.6/subprocess.py", line 633, in __init__
    errread, errwrite)
  File "/usr/lib/python2.6/subprocess.py", line 1139, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
You have 'python fennec/mochitest/runtests.py' and it should be 'python mochitest/runtests.py'; try removing the leading 'fennec/'.
Ah, we extract the tests tarball inside of /builds/fennec/ for everything else -- do I need to create an exception for this one test?
Staging only. Works, though it crashes:

TEST-UNEXPECTED-FAIL | chrome://mochikit/content/browser/mobile/chrome/browser_FormAssistant.js | application timed out after 330 seconds with no output
INFO | automation.py | Application ran for: 0:05:46.281311
INFO | automation.py | Reading PID log: /var/tmp/tmpf_2AAnpidlog
PROCESS-CRASH | chrome://mochikit/content/browser/mobile/chrome/browser_FormAssistant.js | application crashed (minidump found)
No symbols path given, can't process dump.
Neither MINIDUMP_STACKWALK nor MINIDUMP_STACKWALK_CGI is set, can't process dump.
WARNING | automationutils.processLeakLog() | refcount logging is off, so leaks can't be detected!
jhford would prefer to land the changes in maemkit, which would reduce these patches to just enabling a unit test called "browser-chrome" and otherwise keep everything the same... that definitely feels cleaner to me too. It looks like we need to create a BrowserChrometest class that inherits from Chrometest (like Crashtest does) and overrides specific settings?
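The subclass idea could look something like the sketch below. Chrometest here is a minimal stand-in, not maemkit's real class, and the attribute names (testname, testdir, cli_flag) are assumptions for illustration, not maemkit's actual API:

```python
# Hypothetical sketch of the BrowserChrometest idea: subclass the chrome
# driver and override only the bits that differ. "Chrometest" below is a
# stand-in for maemkit's real class; the attribute names are assumed.

class Chrometest:
    """Stand-in for maemkit's chrome test driver."""
    def __init__(self, testname="chrome"):
        self.testname = testname
        self.testdir = "chrome"      # directory the tests unpack into
        self.cli_flag = "--chrome"   # flag handed to runtests.py

class BrowserChrometest(Chrometest):
    """Same driver, but with the flag and test directory swapped."""
    def __init__(self):
        Chrometest.__init__(self, testname="browser-chrome")
        self.testdir = "browser"             # browser-chrome tests live under browser/
        self.cli_flag = "--browser-chrome"   # swap the mochitest cli flag
```

The point of the inheritance is that the 'for' loop over test types stays untouched; only the flag and directory differ between the two drivers.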
This is untested, so if you want me to run this first, that's perfectly valid. Sorry for the large patch; I changed maemkit.py to unix fileformat (to remove ^M line endings) so the entire file diffs. I really only changed a few lines in there.
Assignee: nobody → aki
This might help.
In theory this should be all that's needed once we have the maemkit patch checked in.
Comment on attachment 456624 [details] [diff] [review]
same patch, but with fileformat dos, so we don't get a huge whitespace diff

>diff --git a/chromedriver.py b/chromedriver.py
>-        self.addOptions(mkit.config_options["chrome"])
>+        self.addOptions(mkit.config_options[testname])
>+        self.testname = testname

I would add a self.testdir as well, and it could look something like this:

    if (self.testname == "browser-chrome"):
        self.testdir = "browser"
    else:
        self.testdir = "chrome"

>    def getTests(self):
>        fromdir = os.path.normpath(self.options["backupdir"])
>-        todir = os.path.normpath(os.path.join(self.options["testroot"], "chrome"))
>+        todir = os.path.normpath(os.path.join(self.options["testroot"], self.testname))

Change this from self.testname to self.testdir.

>        todir = os.path.normpath(self.options["backupdir"])
>-        fromdir = os.path.normpath(os.path.join(self.options["testroot"], "chrome"))
>+        fromdir = os.path.normpath(os.path.join(self.options["testroot"], self.testname))

Change this from self.testname to self.testdir.

>diff --git a/maemkit-maemo.cfg b/maemkit-maemo.cfg
>+[browser-chrome]
>+autorun = True
>+close-when-done = True
>+utility-path = /builds/fennec/bin
>+certificate-path = /builds/fennec/certs
>+backupdir = browser-chrome.bak

Maybe make this 'browser.bak' instead of 'browser-chrome.bak'.

>diff --git a/maemkit.cfg b/maemkit.cfg
>+[browser-chrome]
>+autorun = True
>+close-when-done = True
>+utility-path = /builds/fennec/bin
>+certificate-path = /builds/fennec/certs
>+backupdir = browser-chrome.bak

Same here: browser.bak vs browser-chrome.bak.

Overall, I think this is pretty straightforward. r=me with the issues above fixed.
Attachment #456624 - Flags: review+
Comment on attachment 456624 [details] [diff] [review]
same patch, but with fileformat dos, so we don't get a huge whitespace diff

http://hg.mozilla.org/qa/maemkit/rev/174c51814c77

Checked in with Joel's changes + maemkit.py -> unix fileformat.
Attachment #456624 - Flags: checked-in+
We evidently have to set a pref, "browser.tabs.remote" = false, to run these successfully on e10s-enabled builds (which includes m-c atm)... should add a decent amount of complexity to our configs. I'm probably going to shift focus to bug 555892 a bit then circle back to this one.
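As a sketch of what the extra config would amount to: assuming the pref is set through a prefs file appended to the test profile (the exact mechanism for our configs is still to be worked out), it would be a single line:

```
// Assumed prefs-file form of the e10s workaround from this comment.
user_pref("browser.tabs.remote", false);
```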
So this is my configs patch that I tested on smm2, that in theory was supposed to disable ipc. I got 2/3 green tests, one orange. Then, looking at it, I'm not disabling ipc at all. Not sure if we should leave it as is, or remove the |default_platform['disable_ipc'] = False| or what.
I can run this through testing again tomorrow. Or, if we rip disable_ipc out entirely, this patch isn't needed.
Comment on attachment 457242 [details] [diff] [review]
weird

>-#default_platform['test_suites']['chrome']['test_type'] = 'mochitest'
>+#default_platform['test_suites']['chrome']['test_type'] = 'chrome'

I was under the impression that we ran chrome as a mochitest, as it uses the mochitest framework?
(In reply to comment #21)
> I was under the impression that we ran chrome as a mochitest as it uses the
> mochitest framework?

Maybe, though there's a maemkit chrome driver that gets used if you use a test type of chrome instead of mochitest. Joel?
Yup, should be chrome. I'm holding off on this until e10s testing works... the disable-ipc stuff I've tried hasn't worked properly.
This worked in staging (where worked == launched and ran tests; 754 tests passed, 164 failed.) That's much better than before, when it wasn't launching. http://staging-mobile-master.build.mozilla.org:8020/builders/n900-gtk%20mozilla-central%20unit%20browser-chrome/builds/14/steps/run_browser-chrome/logs/stdio
hey, that is good progress! When I run this locally, I get 308 passed and 7 failed. That seems like a big difference.
Locally, meaning linux desktop? or locally, meaning n900?
good point. I am unable to get it to run with a build on my device (build from last week), so my data point is linux desktop.
Yeah, >800 tests seems too high. I'd love to have that many tests, but we should have a number closer to Joel's.
As I see it, we can either a) block on landing until we figure out the test-count disparity, or b) continue with review, land, and figure out the disparity after it lands. Do you have a preference?
I think the problem is that there are >800 tests that you are seeing and locally there are ~350. I vote for landing this. Seeing logs will help determine what is going on and I am sure just a few small patches will get us to green.
Attachment #470638 - Flags: review?(jhford) → review+
Comment on attachment 470638 [details] [diff] [review]
enable browser-chrome tests for n900s/n810s

http://hg.mozilla.org/build/buildbot-configs/rev/e37e1cb54ae0
Attachment #470638 - Flags: checked-in+
Landed, and running. We're only running 12 tests due to the Awesomescreen test hanging.
Status: NEW → ASSIGNED
This is running -- should we use specific bugs, like bug 593463, to track any issues that keep this from going green?
I'm going to mark this as resolved, per comment 33. Once bug 593463 is resolved, we can file more bugs to get this running to completion and green, or maybe it'll Just Work after bug 593463.
Status: ASSIGNED → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Product: mozilla.org → Release Engineering