Intermittent TEST-UNEXPECTED-TIMEOUT | comm/mail/base/test/browser/browser_mailContext.js | waiting for Gloda to finish indexing
Categories
(Thunderbird :: General, defect)
Tracking
(thunderbird143 fixed)
| Tracking | Status
---|---|---
thunderbird143 | --- | fixed
People
(Reporter: intermittent-bug-filer, Assigned: darktrojan)
Details
(Keywords: intermittent-failure, intermittent-testcase, test-disabled)
Attachments
(4 files)
Filed by: mkmelin [at] iki.fi
Parsed log: https://treeherder.mozilla.org/logviewer?job_id=458376509&repo=comm-central
Full log: https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/PDydVPTmQRamwSgzSlQajg/runs/0/artifacts/public/logs/live_backing.log
Comment 1•1 year ago
Comment 2•1 year ago
Oh, reading up a bit higher in the log, it looks like there was just some sort of test-timeout, and so we got force-killed to interrupt that. So the crash was an intentional crash, from the harness killing us.
Here's the paper trail (notice "Killing process: 1051" followed by "PROCESS-CRASH" for PID 1051):
[task 2024-05-16T00:15:35.335Z] 00:15:35 INFO - TEST-UNEXPECTED-TIMEOUT | comm/mail/base/test/browser/browser_mailContext.js | application timed out after 370 seconds with no output
[task 2024-05-16T00:15:35.335Z] 00:15:35 INFO - TEST-INFO took 390830ms
[task 2024-05-16T00:15:35.335Z] 00:15:35 INFO - Buffered messages finished
[task 2024-05-16T00:15:35.336Z] 00:15:35 WARNING - Force-terminating active process(es).
[...]
[task 2024-05-16T00:15:35.465Z] 00:15:35 INFO - Killing process: 1051
[...]
[task 2024-05-16T00:15:46.072Z] 00:15:46 INFO - PROCESS-CRASH | application crashed [@ mozilla::nsDisplayListCollection::~nsDisplayListCollection] | comm/mail/base/test/browser/browser.ini
[task 2024-05-16T00:15:46.072Z] 00:15:46 INFO - Process type: main
[task 2024-05-16T00:15:46.072Z] 00:15:46 INFO - Process pid: 1051
So this was really just a test-timeout, not a crash.
One thing I notice in the screenshot is a popup saying "Your disk is almost full":
https://firefoxci.taskcluster-artifacts.net/PDydVPTmQRamwSgzSlQajg/0/public/test_info/mozilla-test-fail-screenshot_45izsj11.png
Not sure how close we were to being full (and I do see "disk_size": "233.47 GiB" at the top of the log, which seems large). But I wonder if that might be associated with what made us time out?
Comment 3•1 year ago
--> Bouncing over to Thunderbird|General, ni=mkmelin for any further classification he wants to do here.
9 comments hidden (Intermittent Failures Robot)
Assignee
Comment 14•1 year ago
I'm going to disable this test on macOS debug, as it fails there far too often. For some reason the whole program locks up if we open too many context menus in quick succession.
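For reference, the disable annotation would look roughly like this in the ini-style manifest referenced in the log above (a sketch only; the exact condition and comment are illustrative, not the landed change):

```ini
[browser_mailContext.js]
# Locks up when many context menus are opened in quick succession.
skip-if = os == 'mac' && debug
```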
Comment 15•1 year ago
24 comments hidden (Intermittent Failures Robot)
Comment 40•7 months ago
Down on line 678 we do this. Maybe a follow-up to bug 1933104 (which added that).
Updated•7 months ago
Comment 41•7 months ago
Pushed by mkmelin@iki.fi:
https://hg.mozilla.org/comm-central/rev/05953ce3ebe2
try to fix intermittent comm/mail/base/test/browser/browser_mailContext.js failure. r=tobyp
Comment hidden (Intermittent Failures Robot)
Assignee
Comment 43•7 months ago
The current spate of failures is happening because we get here without a row and with retry true. At that point it just gives up. I don't know why it's happening; the scrollToIndex call should result in a row being ready before the end of the event loop, i.e. before the requestAnimationFrame callback runs. I've tried replacing it with setTimeout and that fails too sometimes.
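For context, here is a minimal sketch of the retry pattern being described (the function and method names like getRowAtIndex are illustrative placeholders, not the actual widget code): scroll the target row into view, wait for the next animation frame, and retry once if the row still hasn't materialized.

```js
// Sketch only: dispatch "contextmenu" on a row that may be out of view.
// Assumes a tree widget with scrollToIndex() and getRowAtIndex();
// these names are placeholders for the real widget API.
function dispatchContextMenuAt(tree, index, retry = false) {
  tree.scrollToIndex(index); // Row should exist before the next frame runs.
  requestAnimationFrame(() => {
    const row = tree.getRowAtIndex(index);
    if (!row) {
      if (retry) {
        return; // Second miss: give up. This is the failure observed above.
      }
      dispatchContextMenuAt(tree, index, true); // First miss: retry once.
      return;
    }
    row.dispatchEvent(new MouseEvent("contextmenu", { bubbles: true }));
  });
}
```

Swapping requestAnimationFrame for setTimeout in a pattern like this only moves the race around, which matches the observation that both variants fail intermittently.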
Comment hidden (Intermittent Failures Robot)
Assignee
Comment 45•7 months ago
Assignee
Updated•7 months ago
Comment 46•7 months ago
Pushed by mkmelin@iki.fi:
https://hg.mozilla.org/comm-central/rev/45efc0f59964
Fix contextmenu event when the current row is out of view. r=mkmelin
11 comments hidden (Intermittent Failures Robot)
Assignee
Comment 58•3 months ago
"waiting for Gloda to finish indexing" seems to be the only failure mode happening regularly, so I'm changing the bug title to that to avoid missing new failures.
8 comments hidden (Intermittent Failures Robot)
Assignee
Comment 67•21 days ago
I know what's causing Gloda indexing to be so incredibly slow on macOS 14. It's called "macOS 14".
Assignee
Comment 68•21 days ago
This is a bizarre bug. The tests I've changed here (mostly the first one) are failing when run on macOS 14.70. I've tracked it down to the part of the MIME parser where it goes to the OS and asks for a file extension for a given content type. The OS is responding very slowly, and since it's getting asked the same thing many times, it's taking too long and the test is timing out. I've hard-coded the extension for "text/plain" as "txt". Because it is.
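A minimal sketch of the workaround being described, assuming the OS-backed lookup goes through nsIMIMEService (the helper name and the map are illustrative, not the actual patch): check a hard-coded table first, so the common case never hits the slow system call.

```js
// Sketch only: short-circuit the extension lookup for well-known types.
// getExtensionForType and HARDCODED_EXTENSIONS are illustrative names.
const HARDCODED_EXTENSIONS = new Map([
  ["text/plain", "txt"], // Because it is.
]);

function getExtensionForType(contentType) {
  const known = HARDCODED_EXTENSIONS.get(contentType);
  if (known) {
    return known; // No round trip to the OS for the common case.
  }
  // Fall back to the OS-backed lookup that proved slow on macOS 14.
  return Cc["@mozilla.org/mime;1"]
    .getService(Ci.nsIMIMEService)
    .getPrimaryExtension(contentType, "");
}
```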
Assignee
Comment 69•18 days ago
Let's open a new bug if we get any more intermittent failures from this test. We've already made three attempts at fixing various things.
Comment 70•18 days ago
Pushed by corey@thunderbird.net:
https://hg.mozilla.org/comm-central/rev/0ef2997da726
Improve performance of Gloda indexing in tests. r=aleca
Updated•18 days ago
Comment hidden (Intermittent Failures Robot)
Description