Closed Bug 1341466
Opened 8 years ago; Closed 7 years ago
Android marionette test jobs do not retry when emulator becomes unresponsive
Categories: Remote Protocol :: Marionette, defect, P3
Status: RESOLVED FIXED
Target Milestone: mozilla59
Tracking: firefox59 fixed
People: Reporter: gbrown; Assignee: impossibus
References: Blocks 1 open bug
Attachments: 2 files
All of our Android emulator test jobs intermittently and infrequently fail when the emulator becomes unresponsive, for no known reason. Since we have been unable to find a way to run the emulator 100% reliably, android_emulator_unittest.py turns the job blue and retries whenever the test log contains text associated with this condition, like "Timeout exceeded for _runCmd":

https://dxr.mozilla.org/mozilla-central/rev/d0462b0948e0b1147dcce615bddcc46379bdadb2/testing/mozharness/mozharness/mozilla/testing/errors.py#101

However, it seems the Android Marionette jobs do not retry when the emulator becomes unresponsive, typically resulting in a failure reported in bug 1204281.
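The mozharness behavior described above boils down to scanning the log for known failure strings and flagging the job for retry. A minimal sketch of that idea follows; the pattern list and function name here are illustrative, not the actual contents of errors.py:

```python
import re

# Illustrative patterns modeled on the kinds of strings errors.py matches;
# the authoritative list lives in
# testing/mozharness/mozharness/mozilla/testing/errors.py.
EMULATOR_DEATH_PATTERNS = [
    re.compile(r"Timeout exceeded for _runCmd call"),
    re.compile(r"error: no devices/emulators found"),
]

def should_retry(log_text):
    """Return True if the log shows a known sign of an unresponsive emulator,
    in which case the harness would turn the job blue and retry it."""
    return any(p.search(log_text) for p in EMULATOR_DEATH_PATTERNS)
```

The key point of this bug is that such a check only helps if the harness exits promptly enough for the log to be parsed at all.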
Reporter
Comment 1 • 8 years ago
https://treeherder.mozilla.org/logviewer.html#?repo=mozilla-inbound&job_id=78213449&lineNumber=1924
https://treeherder.mozilla.org/logviewer.html#?repo=graphics&job_id=77389782&lineNumber=1957
Reporter
Comment 2 • 8 years ago
One issue is that the harness keeps trying to run tests, each of which takes a long time to fail, until the taskcluster job times out, so no mozharness log parsing ever happens.
Comment 3 • 8 years ago
Are there any next steps here?
Reporter
Comment 4 • 8 years ago
The first sign of trouble seems to be:

mozdevice Timeout exceeded for _runCmd call '/home/worker/workspace/build/android-sdk18/platform-tools/adb -s emulator-5554 shell ps'

I think someone should determine where that's coming from -- what code is querying the process list, and why -- and look at the error handling for that. Could we abort the test run when the first (or perhaps Nth) such failure is seen?

(I ask "why" because the android mochitest/reftest harnesses use getTopActivity()/dumpsys rather than ps to check that the browser is active: https://dxr.mozilla.org/mozilla-central/rev/e1135c6fdc9bcd80d38f7285b269e030716dcb72/build/mobile/remoteautomation.py#410 / bug 865944.)
Comment 5 • 8 years ago
I suspect this is in the new Marionette for Android jobs. I see :ato, :whimboo, :maja_zf, :automatedtester as the primary authors/reviewers of geckoinstance.py:

https://dxr.mozilla.org/mozilla-central/source/testing/marionette/client/marionette_driver/geckoinstance.py

I am not sure if this is the root cause of the failures, but I do recall a lot of work to get Marionette running on Android, and we should hear from the team that got it working as to where the query for 'ps' is coming from and whether they can switch to getTopActivity()/dumpsys instead.

For reference, here are some failures:
https://treeherder.mozilla.org/logviewer.html#?repo=mozilla-inbound&job_id=78213449&lineNumber=1924
Flags: needinfo?(hskupin)
Comment 6 • 8 years ago
I did close to no work on the harness; that was all done by Maja. I'm not sure whether she knows about those details, though. Let's ask her first before forwarding the ni? to e.g. Andrew or William, who both have a better understanding of mozdevice.
Flags: needinfo?(hskupin) → needinfo?(mjzffr)
Assignee
Comment 7 • 8 years ago
Why not getTopActivity? Sorry, I don't know. FennecRunner inherits is_running from DeviceRunner, which uses DeviceManagerADB.processExists (which calls ps), but I suppose we could override is_running to use getTopActivity if that's better.

I think this failure is due to a combination of bugs in Marionette's cleanup, in the exception handling in MarionetteTestCase, and in BaseMarionetteTestRunner's use of cleanup [0]. While investigating this, I also looked into Bug 1322993, which is related. Here's what I see based on the logs in Comment 5:

* The Marionette client tries to send a message to Fennec and sees socket-related errors, presumably because the emulator is dead or unresponsive.
* _send_message is wrapped with @do_process_check, which handles the socket errors by checking whether Fennec is alive, and killing it if necessary. [1]
* BaseRunner.wait calls |adb shell ps| over and over via processExist in testing/mozbase/mozdevice/mozdevice/devicemanager.py, which is why we see those _runCmd warnings in the log over and over.
* If the emulator is unresponsive, we don't get a return code from BaseRunner.wait in [1], so the Marionette client tries to clean up the Fennec process itself. However, cleanup on the emulator fails.
* The resulting exception in the test case's setUp gets caught in CommonTestCase [2], an error is recorded, and then the next test runs.

I'm happy to dig into this further, but it will have to wait until later next week.

[0] https://dxr.mozilla.org/mozilla-central/rev/b23d6277acca34a4b1be9a4c24efd3b999e47ec3/testing/marionette/harness/marionette_harness/runner/base.py#875
[1] https://dxr.mozilla.org/mozilla-central/rev/b23d6277acca34a4b1be9a4c24efd3b999e47ec3/testing/marionette/client/marionette_driver/marionette.py#796-805
[2] https://dxr.mozilla.org/mozilla-central/rev/b23d6277acca34a4b1be9a4c24efd3b999e47ec3/testing/marionette/harness/marionette_harness/marionette_test/testcases.py#154
Flags: needinfo?(mjzffr)
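The @do_process_check behavior described in Comment 7 can be sketched roughly as follows. This is a simplified illustration in the spirit of the decorator in marionette_driver, not the actual code; the `instance` attribute and its methods are hypothetical stand-ins:

```python
import functools
import socket

def do_process_check(func):
    """Simplified sketch of a decorator like Marionette's @do_process_check:
    if a command hits a socket error, check whether the managed application
    is still alive, clean it up if it is not, then re-raise the error."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        try:
            return func(self, *args, **kwargs)
        except (socket.error, socket.timeout):
            # Only act if this client actually manages an application instance.
            instance = getattr(self, "instance", None)
            if instance is not None and not instance.is_running():
                instance.close()  # kill the leftover process, release resources
            raise
    return wrapper
```

The failure mode in this bug is the step inside the except branch: on a dead emulator, `is_running()` and `close()` themselves hang on adb timeouts, so the cleanup that was supposed to be quick becomes the slow path.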
Reporter
Comment 8 • 8 years ago
Sorry, I probably shouldn't have brought up ps vs getTopActivity -- it's a side issue. The central issue is that once the emulator becomes unresponsive, adb commands will keep timing out, and if a harness doesn't handle that carefully, it's easy to spend too much time retrying and waiting. If Marionette can give up and exit in a timely manner, the existing mozharness mechanisms should notice the adb error and trigger a retry. Thanks for looking at this, Maja; it sounds like you are on the right track.
Comment 9 • 8 years ago
This seems to have lost momentum; what are the next steps here?
Assignee
Comment 11 • 8 years ago
I think this problem will show up a lot less once Bug 1322993 lands. Nevertheless, this needs to be fixed. Next steps are to tweak the error handling in do_process_check and CommonTestCase (see Comment 7 for context). This bug is on my to-do list, but so are a lot of other things. I'll do my best to prioritize in a sensible way.
Assignee: nobody → mjzffr
Assignee
Updated • 7 years ago
Assignee: mjzffr → nobody
Updated • 7 years ago
Priority: -- → P3
Assignee
Comment 15 • 7 years ago
Taking a look at this again. Using the attached log as a focus for the investigation.
Assignee
Comment 16 • 7 years ago
Henrik, I'm looking at adding additional error handling around Marionette.cleanup or _handle_socket_failure. Asking for your input since you've worked a lot on handling socket failures and cleanup in the past months.

One thing I'm not clear on is under what circumstances the harness stops attempting to run more tests. For example, when we close/clean up the browser instance and session, it seems the runner will try to start a new Marionette session for the next test [1]. Under that assumption, I think we should change CommonTestCase.run to stop running tests after many consecutive start-up/cleanup errors (e.g. after 3 tests in a row hit the same error).

[1] https://searchfox.org/mozilla-central/rev/a662f122c37704456457a526af90db4e3c0fd10e/testing/marionette/harness/marionette_harness/marionette_test/testcases.py#93
Flags: needinfo?(hskupin)
Comment 17 • 7 years ago
We only run `_handle_socket_failure` if Marionette is managing the application. I don't know if that is the case for Fennec. Does Marionette start the binary there, or simply connect to the open port? AFAIR we are doing the latter.

I'm not sure it is a clever idea to completely stop all the tests; I'm not aware that other harnesses do that. Instead they skip the current folder (mochitest) and continue with the next one. Maybe we can discuss this in our chitchat today?
Flags: needinfo?(hskupin)
Assignee
Comment 18 • 7 years ago
(In reply to Henrik Skupin (:whimboo) from comment #17)
> We only run `_handle_socket_failure` if Marionette is managing the
> application. I don't know if that is the case for Fennec. Does Marionette
> start the binary there, or simply connect to the open port? AFAIR we are
> doing the latter.

Mozharness starts the emulator, and the Marionette runner starts Fennec (and therefore manages the application).

Henrik, Andreas and I discussed this intermittent today. The main ideas that came up were:

1. Break Marionette tests into more directories and process one directory at a time, so that mozharness can parse the logs after each directory of tests is run.
2. Change how mozharness works so it can parse logs as the tests are being run.
3. Raise a custom exception in the Marionette driver when the same error (say, DMError) is detected n times in a row, to indicate that the application under test is unresponsive and allow the test runner to interrupt the run.

Joel, I was wondering if you have any advice on these or other possible options, based on your sense of how other harnesses deal with this kind of issue. If not, I'm leaning toward Option 3.
Flags: needinfo?(jmaher)
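Option 3 amounts to a small consecutive-error counter that escalates once the same error type has been seen n times in a row. A minimal sketch, assuming a threshold of 3; the class and exception names here are illustrative, not the names that eventually landed:

```python
class AppUnresponsiveError(Exception):
    """Raised when repeated identical errors suggest the application
    under test (or the emulator) is gone for good."""

class ErrorStreak:
    """Track consecutive occurrences of the same exception type and
    escalate after `threshold` in a row; a different error type resets
    the streak."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.count = 0
        self.last_type = None

    def record(self, exc):
        if type(exc) is self.last_type:
            self.count += 1
        else:
            self.last_type = type(exc)
            self.count = 1
        if self.count >= self.threshold:
            raise AppUnresponsiveError(
                "%s raised %d times in a row" % (self.last_type.__name__, self.count))
```

A runner would call `record()` in its per-test error handler; the escalated exception then propagates out of the test loop instead of being swallowed test after test.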
Comment 19 • 7 years ago
I think #3 is a good route. I imagine #1 would produce more errors as we would be cycling fennec more often.
Flags: needinfo?(jmaher)
Comment 20 • 7 years ago
Oh, that wasn't clear to me. Do we really get a very specific DMError if the emulator is not responsive anymore? If that is the case we could special-case it, as suggested in the call yesterday. But if not, and we have to somehow rely on the IOError, we should not do 3).

I suggested 1) because that is what Mochitest is using, right? It runs instances of Firefox per directory, and starts Firefox again for the next folder. As such, mozharness can parse the logs in intervals and will notice emulator issues way earlier. With 3) we always run the risk that some tests might want to test for the exception, and that we abort inappropriately too early. That's something I don't want to see happen.

So Joel or Geoff, can you please explain how Mochitests for Fennec are executed? I would feel better if we can design Marionette tests similarly.
Flags: needinfo?(jmaher)
Flags: needinfo?(gbrown)
Comment 21 • 7 years ago
I don't believe Android runs tests in a loop like we do for desktop -- instead we just run the entire set of tests chunked, vs. desktop, where we chunk the number of directories and run the set of directories in a loop. I know in the past (years ago) there were issues with Fennec starting/stopping, repeating in a log -- this might not be an issue with more recent Fennec and tooling.
Flags: needinfo?(jmaher)
Assignee
Comment 22 • 7 years ago
No, Option 3 does not rely on checking the IOError.
Comment hidden (mozreview-request)
Reporter
Comment 24 • 7 years ago
(In reply to Henrik Skupin (:whimboo) from comment #20)
> So Joel or Geoff, can you please explain how Mochitests for Fennec are
> getting executed? I would feel better if we can design Marionette tests
> similar.

Fennec mochitests are chunked, but not run by directory or manifest. For a given chunk, the python harness -- runtestsremote.py -- launches the browser on the remote device and initiates a mochitest run. Then runtestsremote.py polls the device to check that firefox is the "top activity" (firefox is alive and running in the foreground); I think it checks maybe once per minute. If firefox is running in the foreground, the harness pulls the latest log data from the device and reports it, and continues to wait; otherwise, the test run ends, summary reported, files cleaned up, etc.

If the emulator becomes unresponsive during a test run, that poll for top activity fails. It very likely raises DMError, but that particular DMError is handled, because we have seen intermittent, non-permanent conditions where DMError is raised specifically when getting the activity list to determine the top activity. We wait a bit, try to get the top activity again, and if that fails again, we end the test run and clean up. Cleanup involves talking to the device, and that process leads to more DMError, but we try not to retry during cleanup, so that ends quickly.

During all this, at least one DMError is reported to the log, which mozharness and taskcluster notice; they turn the job blue and automatically retry the job.
Here's a recent log: https://public-artifacts.taskcluster.net/H0WyWmuhTXWvhjWucrGjHQ/0/public/logs/live_backing.log

[task 2017-11-16T11:11:39.748Z] 11:11:39 INFO - 399 INFO TEST-START | gfx/layers/apz/test/mochitest/test_bug982141.html
[task 2017-11-16T11:11:50.348Z] 11:11:50 INFO - 400 INFO TEST-OK | gfx/layers/apz/test/mochitest/test_bug982141.html | took 9175ms
[task 2017-11-16T11:11:50.349Z] 11:11:50 INFO - 401 INFO TEST-START | gfx/layers/apz/test/mochitest/test_frame_reconstruction.html
[task 2017-11-16T11:12:10.378Z] 11:12:10 INFO - Failed to get top activity, retrying, once...
[task 2017-11-16T11:12:10.394Z] 11:12:10 INFO - INFO | automation.py | Application ran for: 0:23:50.910483
[task 2017-11-16T11:12:10.394Z] 11:12:10 INFO - INFO | zombiecheck | Reading PID log: /tmp/tmpXJDfIcpidlog
[task 2017-11-16T11:12:10.400Z] 11:12:10 INFO - /data/anr/traces.txt not found
[task 2017-11-16T11:12:10.424Z] 11:12:10 INFO - Traceback (most recent call last):
[task 2017-11-16T11:12:10.424Z] 11:12:10 INFO - File "/builds/worker/workspace/build/tests/mochitest/runtests.py", line 2744, in doTests
[task 2017-11-16T11:12:10.425Z] 11:12:10 INFO - marionette_args=marionette_args,
[task 2017-11-16T11:12:10.425Z] 11:12:10 INFO - File "/builds/worker/workspace/build/tests/mochitest/runtestsremote.py", line 296, in runApp
[task 2017-11-16T11:12:10.429Z] 11:12:10 INFO - ret, _ = self._automation.runApp(*args, **kwargs)
[task 2017-11-16T11:12:10.429Z] 11:12:10 INFO - File "/builds/worker/workspace/build/tests/mochitest/automation.py", line 561, in runApp
[task 2017-11-16T11:12:10.429Z] 11:12:10 INFO - crashed = self.checkForCrashes(os.path.join(profileDir, "minidumps"), symbolsPath)
[task 2017-11-16T11:12:10.429Z] 11:12:10 INFO - File "/builds/worker/workspace/build/tests/mochitest/remoteautomation.py", line 215, in checkForCrashes
[task 2017-11-16T11:12:10.429Z] 11:12:10 INFO - logcat = self._devicemanager.getLogcat(filterOutRegexps=fennecLogcatFilters)
[task 2017-11-16T11:12:10.429Z] 11:12:10 INFO - File "/builds/worker/workspace/build/venv/lib/python2.7/site-packages/mozdevice/devicemanager.py", line 176, in getLogcat
[task 2017-11-16T11:12:10.429Z] 11:12:10 INFO - timeout=self.short_timeout)
[task 2017-11-16T11:12:10.429Z] 11:12:10 INFO - File "/builds/worker/workspace/build/venv/lib/python2.7/site-packages/mozdevice/devicemanager.py", line 417, in shellCheckOutput
[task 2017-11-16T11:12:10.429Z] 11:12:10 INFO - "(output: '%s', retval: '%s')" % (cmd, output, retval))
[task 2017-11-16T11:12:10.430Z] 11:12:10 CRITICAL - DMError: Non-zero return code for command: ['/system/bin/logcat', '-v', 'time', '-d', 'dalvikvm:I', 'ConnectivityService:S', 'WifiMonitor:S', 'WifiStateTracker:S', 'wpa_supplicant:S', 'NetworkStateTracker:S'] (output: '', retval: 'None')
[task 2017-11-16T11:12:10.430Z] 11:12:10 INFO - 402 ERROR Automation Error: Received unexpected exception while running application
[task 2017-11-16T11:12:10.430Z] 11:12:10 INFO - Stopping web server
[task 2017-11-16T11:12:10.436Z] 11:12:10 INFO - Stopping web socket server
[task 2017-11-16T11:12:10.456Z] 11:12:10 INFO - Stopping ssltunnel
[task 2017-11-16T11:12:10.476Z] 11:12:10 INFO - leakcheck | refcount logging is off, so leaks can't be detected!
[task 2017-11-16T11:12:10.477Z] 11:12:10 INFO - runtests.py | Running tests: end.
[task 2017-11-16T11:12:10.485Z] 11:12:10 INFO - Unable to retrieve log file (/storage/sdcard/tests/logs/mochitest.log) from remote device
[task 2017-11-16T11:12:10.496Z] 11:12:10 INFO - mozdevice ERROR | Non-zero return code (1) from ['adb', 'shell', 'rm', '-r', '/storage/sdcard/tests/chrome']
[task 2017-11-16T11:12:10.497Z] 11:12:10 INFO - mozdevice ERROR | Output: ['error: no devices/emulators found']
[task 2017-11-16T11:12:10.505Z] 11:12:10 INFO - mozdevice ERROR | Non-zero return code (1) from ['adb', 'shell', 'rm', '-r', '/storage/sdcard/tests/profile/']
[task 2017-11-16T11:12:10.506Z] 11:12:10 INFO - mozdevice ERROR | Output: ['error: no devices/emulators found']
[task 2017-11-16T11:12:10.514Z] 11:12:10 INFO - mozdevice ERROR | Non-zero return code (1) from ['adb', 'shell', 'rm', '-r', '/storage/sdcard/tests/cache/']
[task 2017-11-16T11:12:10.515Z] 11:12:10 INFO - mozdevice ERROR | Output: ['error: no devices/emulators found']
[task 2017-11-16T11:12:10.526Z] 11:12:10 INFO - 403 ERROR Automation Error: Exception caught while running tests
[task 2017-11-16T11:12:10.530Z] 11:12:10 INFO - Traceback (most recent call last):
[task 2017-11-16T11:12:10.530Z] 11:12:10 INFO - File "/builds/worker/workspace/build/tests/mochitest/runtestsremote.py", line 377, in run_test_harness
[task 2017-11-16T11:12:10.530Z] 11:12:10 INFO - retVal = mochitest.runTests(options)
[task 2017-11-16T11:12:10.530Z] 11:12:10 INFO - File "/builds/worker/workspace/build/tests/mochitest/runtests.py", line 2540, in runTests
[task 2017-11-16T11:12:10.530Z] 11:12:10 INFO - return self.runMochitests(options, [t['path'] for t in tests])
[task 2017-11-16T11:12:10.530Z] 11:12:10 INFO - File "/builds/worker/workspace/build/tests/mochitest/runtests.py", line 2356, in runMochitests
[task 2017-11-16T11:12:10.530Z] 11:12:10 INFO - result = self.doTests(options, testsToRun)
[task 2017-11-16T11:12:10.530Z] 11:12:10 INFO - File "/builds/worker/workspace/build/tests/mochitest/runtests.py", line 2780, in doTests
[task 2017-11-16T11:12:10.530Z] 11:12:10 INFO - self.cleanup(options)
[task 2017-11-16T11:12:10.530Z] 11:12:10 INFO - File "/builds/worker/workspace/build/tests/mochitest/runtestsremote.py", line 72, in cleanup
[task 2017-11-16T11:12:10.530Z] 11:12:10 INFO - self._dm.getDirectory(self.remoteMozLog, blobberUploadDir)
[task 2017-11-16T11:12:10.530Z] 11:12:10 INFO - File "/builds/worker/workspace/build/venv/lib/python2.7/site-packages/mozdevice/devicemanagerADB.py", line 543, in getDirectory
[task 2017-11-16T11:12:10.530Z] 11:12:10 INFO - dir_util.copy_tree(localDir, originalLocal)
[task 2017-11-16T11:12:10.530Z] 11:12:10 INFO - File "/usr/lib/python2.7/distutils/dir_util.py", line 128, in copy_tree
[task 2017-11-16T11:12:10.530Z] 11:12:10 INFO - "cannot copy tree '%s': not a directory" % src
[task 2017-11-16T11:12:10.531Z] 11:12:10 INFO - DistutilsFileError: cannot copy tree '/tmp/tmpTBSEPG/mozlog': not a directory
[task 2017-11-16T11:12:10.531Z] 11:12:10 INFO - Stopping web server
[task 2017-11-16T11:12:10.531Z] 11:12:10 INFO - Failed to stop web server on http://127.0.0.1:8854/server/shutdown
[task 2017-11-16T11:12:10.531Z] 11:12:10 INFO - Traceback (most recent call last):
[task 2017-11-16T11:12:10.532Z] 11:12:10 INFO - File "/builds/worker/workspace/build/tests/mochitest/runtests.py", line 497, in stop
[task 2017-11-16T11:12:10.532Z] 11:12:10 INFO - with closing(urllib2.urlopen(self.shutdownURL)) as c:
[task 2017-11-16T11:12:10.532Z] 11:12:10 INFO - File "/usr/lib/python2.7/urllib2.py", line 154, in urlopen
[task 2017-11-16T11:12:10.533Z] 11:12:10 INFO - return opener.open(url, data, timeout)
[task 2017-11-16T11:12:10.533Z] 11:12:10 INFO - File "/usr/lib/python2.7/urllib2.py", line 429, in open
[task 2017-11-16T11:12:10.533Z] 11:12:10 INFO - response = self._open(req, data)
[task 2017-11-16T11:12:10.533Z] 11:12:10 INFO - File "/usr/lib/python2.7/urllib2.py", line 447, in _open
[task 2017-11-16T11:12:10.534Z] 11:12:10 INFO - '_open', req)
[task 2017-11-16T11:12:10.534Z] 11:12:10 INFO - File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
[task 2017-11-16T11:12:10.534Z] 11:12:10 INFO - result = func(*args)
[task 2017-11-16T11:12:10.535Z] 11:12:10 INFO - File "/usr/lib/python2.7/urllib2.py", line 1228, in http_open
[task 2017-11-16T11:12:10.535Z] 11:12:10 INFO - return self.do_open(httplib.HTTPConnection, req)
[task 2017-11-16T11:12:10.535Z] 11:12:10 INFO - File "/usr/lib/python2.7/urllib2.py", line 1198, in do_open
[task 2017-11-16T11:12:10.535Z] 11:12:10 INFO - raise URLError(err)
[task 2017-11-16T11:12:10.536Z] 11:12:10 INFO - URLError: <urlopen error [Errno 111] Connection refused>
[task 2017-11-16T11:12:10.536Z] 11:12:10 INFO - Stopping web socket server
[task 2017-11-16T11:12:10.536Z] 11:12:10 INFO - Stopping ssltunnel
[task 2017-11-16T11:12:10.539Z] 11:12:10 INFO - Unable to retrieve log file (/storage/sdcard/tests/logs/mochitest.log) from remote device
[task 2017-11-16T11:12:10.551Z] 11:12:10 INFO - mozdevice ERROR | Non-zero return code (1) from ['adb', 'shell', 'rm', '-r', '/storage/sdcard/tests/chrome']
[task 2017-11-16T11:12:10.551Z] 11:12:10 INFO - mozdevice ERROR | Output: ['error: no devices/emulators found']
[task 2017-11-16T11:12:10.560Z] 11:12:10 INFO - mozdevice ERROR | Non-zero return code (1) from ['adb', 'shell', 'rm', '-r', '/storage/sdcard/tests/profile/']
[task 2017-11-16T11:12:10.561Z] 11:12:10 INFO - mozdevice ERROR | Output: ['error: no devices/emulators found']
[task 2017-11-16T11:12:10.570Z] 11:12:10 INFO - mozdevice ERROR | Non-zero return code (1) from ['adb', 'shell', 'rm', '-r', '/storage/sdcard/tests/cache/']
[task 2017-11-16T11:12:10.570Z] 11:12:10 INFO - mozdevice ERROR | Output: ['error: no devices/emulators found']
[task 2017-11-16T11:12:10.581Z] 11:12:10 INFO - Traceback (most recent call last):
[task 2017-11-16T11:12:10.581Z] 11:12:10 INFO - File "/builds/worker/workspace/build/tests/mochitest/runtestsremote.py", line 405, in <module>
[task 2017-11-16T11:12:10.584Z] 11:12:10 INFO - sys.exit(main())
[task 2017-11-16T11:12:10.584Z] 11:12:10 INFO - File "/builds/worker/workspace/build/tests/mochitest/runtestsremote.py", line 401, in main
[task 2017-11-16T11:12:10.584Z] 11:12:10 INFO - return run_test_harness(parser, options)
[task 2017-11-16T11:12:10.584Z] 11:12:10 INFO - File "/builds/worker/workspace/build/tests/mochitest/runtestsremote.py", line 383, in run_test_harness
[task 2017-11-16T11:12:10.584Z] 11:12:10 INFO - mochitest.cleanup(options)
[task 2017-11-16T11:12:10.584Z] 11:12:10 INFO - File "/builds/worker/workspace/build/tests/mochitest/runtestsremote.py", line 72, in cleanup
[task 2017-11-16T11:12:10.584Z] 11:12:10 INFO - self._dm.getDirectory(self.remoteMozLog, blobberUploadDir)
[task 2017-11-16T11:12:10.584Z] 11:12:10 INFO - File "/builds/worker/workspace/build/venv/lib/python2.7/site-packages/mozdevice/devicemanagerADB.py", line 543, in getDirectory
[task 2017-11-16T11:12:10.584Z] 11:12:10 INFO - dir_util.copy_tree(localDir, originalLocal)
[task 2017-11-16T11:12:10.584Z] 11:12:10 INFO - File "/usr/lib/python2.7/distutils/dir_util.py", line 128, in copy_tree
[task 2017-11-16T11:12:10.584Z] 11:12:10 INFO - "cannot copy tree '%s': not a directory" % src
[task 2017-11-16T11:12:10.584Z] 11:12:10 INFO - distutils.errors.DistutilsFileError: cannot copy tree '/tmp/tmp_RQ7YK/mozlog': not a directory
[task 2017-11-16T11:12:10.618Z] 11:12:10 ERROR - Return code: 1
[task 2017-11-16T11:12:10.618Z] 11:12:10 ERROR - No tests run or test summary not found
[task 2017-11-16T11:12:10.619Z] 11:12:10 INFO - TinderboxPrint: mochitest<br/><em class="testfail">T-FAIL</em>
[task 2017-11-16T11:12:10.619Z] 11:12:10 INFO - ##### mochitest log ends
Flags: needinfo?(gbrown)
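The check-wait-retry-give-up polling that Comment 24 describes can be sketched like this. This is a simplified illustration; `DMError` stands in for mozdevice's exception, and the callable interface is a hypothetical stand-in for the runtestsremote.py internals, not the actual code:

```python
import time

class DMError(Exception):
    """Stand-in for mozdevice's DMError."""

def browser_is_alive(get_top_activity, app_name, retry_delay=2.0):
    """Check whether the browser is the foreground ("top") activity.

    One check, one retry after a short delay, then give up: a single
    DMError can be a transient condition even on a healthy emulator, so
    only two consecutive failures end the test run."""
    for attempt in (1, 2):
        try:
            if get_top_activity() == app_name:
                return True
        except DMError:
            pass  # possibly transient; the second attempt decides
        if attempt == 1:
            time.sleep(retry_delay)
    return False
```

The harness would call this once per poll interval; a False result triggers the end-of-run path (summary, cleanup), while the DMError already written to the log lets mozharness turn the job blue and retry it.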
Assignee
Comment 25 • 7 years ago
Try run to simulate recurring DMErrors -- it gets retried now:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=be9e643016c585e5778d4fd809b305fc23146ae3
Comment hidden (mozreview-request)
Assignee
Updated • 7 years ago
Assignee: nobody → mjzffr
Comment 27 • 7 years ago
mozreview-review
Comment on attachment 8929052 [details]
Bug 1341466 - Stop Marionette test run when Android emulator is unresponsive;

https://reviewboard.mozilla.org/r/200360/#review215768
Attachment #8929052 - Flags: review?(dburns) → review+
Comment 28 • 7 years ago
Pushed by mjzffr@gmail.com:
https://hg.mozilla.org/integration/autoland/rev/99914fd76852
Stop Marionette test run when Android emulator is unresponsive; r=automatedtester
Comment 29 • 7 years ago
Pushed by apavel@mozilla.com:
https://hg.mozilla.org/mozilla-central/rev/08fbe54daea3
Stop Marionette test run when Android emulator is unresponsive; r=automatedtester
Comment 30 • 7 years ago
This should be fixed now! Thank you Maja!
Status: NEW → RESOLVED
Closed: 7 years ago
status-firefox59: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla59
Comment 31 • 7 years ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/08fbe54daea3
Comment 32 • 7 years ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/99914fd76852
Updated • 2 years ago
Product: Testing → Remote Protocol