Bug 584589 (Closed): Opened 14 years ago, Closed 14 years ago

Talos orange: intermittent browser frozen error in ts_cold_generated_med

Categories

(Core Graveyard :: File Handling, defect)

x86
Windows Server 2003
defect
Not set
normal

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: ehsan.akhgari, Unassigned)

References

Details

(Keywords: intermittent-failure, Whiteboard: [same underlying cause of bug 584613])

http://tinderbox.mozilla.org/showlog.cgi?log=Firefox/1280967867.1280971318.22573.gz
Rev3 WINNT 6.1 mozilla-central talos dirty on 2010/08/04 17:24:27
Running test ts_cold_generated_med: Started Wed, 04 Aug 2010 17:43:31
Screen width/height:1280/1024 colorDepth:24
Browser inner width/height: 1008/655
Browser outer width/height: 1024/768
NOISE:
NOISE: __FAILbrowser frozen__FAIL
NOISE:
NOISE: __FAILbrowser frozen__FAIL
Failed ts_cold_generated_med: Stopped Wed, 04 Aug 2010 18:03:55
FAIL: Busted: ts_cold_generated_med
FAIL: browser frozen
Completed test ts_cold_generated_med:
Blocks: 438871
Whiteboard: [orange]
Is there some reason to believe this isn't a code problem?
No!
Component: Release Engineering → Places
Product: mozilla.org → Toolkit
QA Contact: release → places
Version: other → Trunk
um, why is this a places problem?
I guess because nobody knows the source of the problem, Nick decided to put it on your plate without any evidence that this is not in fact a releng problem. ;-)
Because we're seeing it on the talos medium and max profiles, but not the normal ts.
http://tinderbox.mozilla.org/showlog.cgi?log=Firefox/1280967092.1280970706.20258.gz
Personally, I wouldn't be at all surprised to find that this is actually bug 584613, and talos is blaming a failure to open some file on the browser being frozen.
I might buy that if it was the first startup attempt, but it's not:
[snippety snip snip]
NOISE: __startTimestamp1280969411726__endTimestamp
NOISE: __startSecondTimestamp1280969412734__endSecondTimestamp
NOISE: __start_report671__end_report
NOISE:
NOISE: __startTimestamp1280969427769__endTimestamp
NOISE: __startSecondTimestamp1280969428788__endSecondTimestamp
NOISE: __start_report674__end_report
NOISE:
NOISE: __startTimestamp1280969443815__endTimestamp
NOISE: __startSecondTimestamp1280969444859__endSecondTimestamp
NOISE:
NOISE: __FAILbrowser frozen__FAIL
NOISE:
NOISE: __FAILbrowser frozen__FAIL
That's from the log in comment #6.
Sure, but so far I've seen no sign that the failure to open files is anything but random - http://tinderbox.mozilla.org/showlog.cgi?log=Firefox/1280977559.1280978879.24126.gz is mochitest-a11y failing to find httpd.js and server.js after multiple other test suites had run with them.
Could component registration be failing intermittently?
http://tinderbox.mozilla.org/showlog.cgi?log=Firefox/1280982935.1280988619.32613.gz
(I'm sort of ignoring the "timeout exceeded" in ts, kind of.)
Did this start at about the same time as bug 584613? It's a strange coincidence.
(In reply to comment #7)
> I might buy that if it was the first startup attempt, but it's not:
That bug is random; rerunning the same test will find the missing file, doh!
The backout in bug 584613 seems to have greatly reduced these failures, or solved this completely.
Depends on: 584613
Resolving since this has disappeared after the backout. It ended up having the same cause as bug 584613.
Status: NEW → RESOLVED
Closed: 14 years ago
Component: Places → File Handling
Product: Toolkit → Core
QA Contact: places → file-handling
Resolution: --- → FIXED
Whiteboard: [orange] → [orange][same underlying cause of bug 584613]
Blocks: 286382
Whiteboard: [orange][same underlying cause of bug 584613] → [same underlying cause of bug 584613]
Product: Core → Core Graveyard