Closed
Bug 1418553
Opened 7 years ago
Closed 7 years ago
Intermittent Automation Error: mozprocess timed out after 1000 seconds running ['python', '-u', '/builds/worker/workspace/build/tests/mochitest/runtests.py'] due to dom/html/test/test_bug1260704.html
Categories
(Core :: DOM: Events, defect, P5)
RESOLVED DUPLICATE of bug 1416929
People
(Reporter: intermittent-bug-filer, Unassigned)
Details
(Keywords: intermittent-failure)
Comment hidden (Intermittent Failures Robot)
Comment 2•7 years ago
The hang here always happens for dom/html/test/test_bug1260704.html, and is not Marionette related.
https://dxr.mozilla.org/mozilla-central/rev/b056526be38e96b3e381b7e90cd8254ad1d96d9d/dom/html/test/test_bug1260704.html
If the registered event listener in that test doesn't fire, shouldn't the Mochikit harness cause the test to abort after a given number of seconds?
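For reference, a test that waits on an event usually has roughly this shape (a minimal sketch, not the actual contents of test_bug1260704.html; the event name here is hypothetical):

<script>
SimpleTest.waitForExplicitFinish();

// If this listener never fires, SimpleTest.finish() is never called,
// and the harness is expected to abort the test once the per-test
// timeout elapses -- rather than hanging until the 1000 second
// mozprocess timeout.
window.addEventListener("some-event", function() {
  ok(true, "event fired");
  SimpleTest.finish();
});
</script>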
Component: Marionette → DOM: Events
Flags: needinfo?(gbrown)
Product: Testing → Core
Summary: Intermittent Automation Error: mozprocess timed out after 1000 seconds running ['/builds/worker/workspace/build/venv/bin/python', '-u', '/builds/worker/workspace/build/tests/mochitest/runtests.py', '--disable-e10s', '--total-chunks', '10', ' → Intermittent Automation Error: mozprocess timed out after 1000 seconds running ['python', '-u', '/builds/worker/workspace/build/tests/mochitest/runtests.py'] due to dom/html/test/test_bug1260704.html
Version: Version 3 → unspecified
Comment 3•7 years ago
Right now there are 2 failures in OF that do not occur after test_bug1260704 -- inevitable mis-stars.
The other 2 failures in OF which do occur after test_bug1260704 are certainly interesting. Why didn't the harness time out and abort before we hit 1000 seconds? I can't think of a reason, but I notice that both happened on linux64-jsdcov. I wonder if coverage affects harness timeouts, and/or if there's some coverage operation happening outside the scope of the harness timeouts which is hanging or taking too long? :gmierz -- Any ideas?
https://treeherder.mozilla.org/logviewer.html#?repo=mozilla-central&job_id=145750005
https://treeherder.mozilla.org/logviewer.html#?repo=mozilla-central&job_id=145915782
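To illustrate the scope question (a generic sketch, not the actual TestRunner code -- abortTest() and collectCoverage() are hypothetical names): if the per-test watchdog is disarmed before post-test work runs, a hang in that work is only caught by the outer mozprocess timeout.

function runWithWatchdog(test, timeoutMs) {
  let timer = setTimeout(() => abortTest("per-test timeout"), timeoutMs);
  test();
  clearTimeout(timer); // watchdog disarmed here

  // Anything from this point on -- for example, per-test coverage
  // collection -- runs outside the per-test timeout. If it hangs,
  // nothing fires until the 1000 second mozprocess timeout.
  collectCoverage();
}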
Flags: needinfo?(gbrown) → needinfo?(gmierz2)
Comment 4•7 years ago
I'm not too sure about this, but coincidentally I was looking at this error yesterday and tried changing where finalize() was called for the CoverageCollector, and found that it then failed at a different test [1]. If that difference is significant, we can change where finalize() is called.
Going off your thought that something in code coverage is hanging, I would imagine it would have something to do with the JS debugger we are using to collect coverage [2]. I don't think I've ever seen the actual collection process (creating the JSONs) hang before, but I don't know about the debugger.
Another possibility is one of the imports causing havoc again (osfile.jsm has in the past).
[1]: https://treeherder.mozilla.org/#/jobs?repo=try&revision=372e721a3091ee4b87a385bd92286ba2dd023645
[2]: https://dxr.mozilla.org/mozilla-central/source/testing/modules/CoverageUtils.jsm#25
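For context, the collector in CoverageUtils.jsm is driven roughly like this (a sketch; the method names match the linked file [2], but the import path, variable names, and call sites are simplified assumptions):

const {CoverageCollector} =
  Components.utils.import("resource://testing-common/CoverageUtils.jsm", {});

// The constructor creates a SpiderMonkey Debugger with
// collectCoverageInfo = true and adds all globals as debuggees.
let collector = new CoverageCollector(coverageOutputDir); // hypothetical path

// After each test, write that test's hit counts out as JSON.
collector.recordTestCoverage(testName);

// At the end of the run, detach the debugger. The try push in [1]
// moves this call, which changes where the hang shows up.
collector.finalize();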
Flags: needinfo?(gmierz2)
Comment hidden (Intermittent Failures Robot)
Updated•7 years ago
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → DUPLICATE