Closed Bug 909734 Opened 11 years ago Closed 11 years ago

Sometimes Firefox needs more than 5s to display the main window after a restart

Categories

Product/Component: Core :: General
Type: defect
Version: 26 Branch
Priority: Not set
Severity: major

Tracking


RESOLVED FIXED
mozilla26
Tracking Status
firefox25 --- unaffected
firefox26 - fixed

People

(Reporter: whimboo, Assigned: bhackett1024)

References

Details

(Keywords: regression)

This started recently, as you can see in the discussion on bug 906591. It looks like bug 897655 caused it on mozilla-central; that's the only affected branch.

For some of our Mozmill tests for Firefox the browser window opens very late (>5s), while Mozmill has already started running a test. Because Mozmill cannot find the browser window, those tests fail.

I don't have a testcase yet, but will work on it.

This should block Firefox 26.
Flags: needinfo?
Component: XUL → General
Any STR in a normal browser environment?  Barring a more straightforward bug, the only way off-thread parsing would cause the browser to run much slower than usual is if the parsing thread is being starved by the OS.
Flags: needinfo?
I will work on that early tomorrow. I have to head out for the rest of the day soon. First thing I will have is most likely a Mozmill test.
Flags: needinfo?(hskupin)
Version: 24 Branch → 26 Branch
I don't have a minimized testcase yet, but to reproduce it (at least on Linux and OS X) you can run the following steps:

1. Create a new virtualenv for Python
2. Install `mozmill-automation==2.0rc5`
3. Run `testrun_functional %path_to_firefox_nightly%`

For at least one of the restart tests you should see the failure '"message": "controller(): Window has been initialized.",' and the main window will not be displayed for a while.
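When checking a run's report for this failure by hand, you can grep for the signature quoted above. A minimal sketch (the report filename and the JSON line are made up for illustration; a real Mozmill report contains many more fields):

```shell
# Hypothetical report file standing in for real testrun output.
log=mozmill-report.json
cat > "$log" <<'EOF'
{"message": "controller(): Window has been initialized."}
EOF

# Flag any run whose report contains the failure signature.
if grep -q 'controller(): Window has been initialized\.' "$log"; then
  echo "failure signature found"
fi
```

This only detects the failure message; it does not tell you why the main window was late.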
As per comment 1, with no evidence of a user-facing impact here, this is not a release-blocking issue. Please renominate if something comes to light indicating a user-visible perf hit that we can actually track to a regressing landing.
Hm, I cannot reproduce this anymore on latest Nightly. Also we haven't seen those failures in the last two days. Mario, can you please check if that has been fixed in between? I would like to know a possible changeset. Thanks.
Flags: needinfo?(mario.garbi)
This is the pushlog covering the fix; I suspect that bug 904147 handled the issue:

http://hg.mozilla.org/integration/mozilla-inbound/pushloghtml?fromchange=610f5f410e3b&tochange=954a7a7a5051
Flags: needinfo?(mario.garbi)
Mario, I don't think that this is right. The dates in that pushlog are from Aug 23rd, but we first noticed the problem on Aug 27th.
Indeed, I checked again and managed to reproduce it with a newer build. I was using 5 runs of the restart tests to try to reproduce it, since we don't have a single test that fails every time. I will double-check in the future to avoid false positives.

I will provide the changeset as soon as possible as there have been more than 100 changesets between the last failing build and the first passing one.
(In reply to mario garbi from comment #8)
> I will provide the changeset as soon as possible as there have been more
> than 100 changesets between the last failing build and the first passing one.

Mario, this bug has been waiting for that changeset for over a week now. When can we expect that information?
Flags: needinfo?(mario.garbi)
QA Contact: mario.garbi
I started investigating the issue, but I had little time last week because I had Build Master duty and a few P1 bugs took priority. I will look into this today and come back with results.
Flags: needinfo?(mario.garbi)
Please do not cancel needinfo as long as the requested information hasn't been given. Thanks.
Flags: needinfo?(mario.garbi)
Whiteboard: [qa-automation-blocked]
Sorry it took so long, but because of false-positive test results and the intermittent failures, over the last week I had to test each GOOD build several times to be sure it really no longer fails. I managed to find the last BAD build and the first GOOD build.
Last BAD:
https://ftp.mozilla.org/pub/mozilla.org/firefox/tinderbox-builds/mozilla-inbound-macosx64/1377625664/ 
First GOOD:
https://ftp.mozilla.org/pub/mozilla.org/firefox/tinderbox-builds/mozilla-inbound-macosx64/1377625784/ 

It seems that the issue was fixed in bug 908301.
Pushlog:
http://hg.mozilla.org/integration/mozilla-inbound/pushloghtml?fromchange=39ee92c06d6b&tochange=ca06d27f049f
Changeset:
http://hg.mozilla.org/integration/mozilla-inbound/rev/ca06d27f049f

Because of the problems with false positives, I tested the first GOOD build over 10 times to be sure.
Dashboard for first Good build - 20130827104944:
http://mozmill-crowd.blargon7.com/#/functional/reports?branch=26.0&platform=Mac&from=2013-09-09&to=2013-09-09
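The bisection described above can be sketched as a binary search over the tinderbox build timestamps, re-running each candidate several times because the failure is intermittent. Here `run_tests` is a stand-in for running `testrun_functional` against the downloaded build, and the timestamps other than the two quoted above are invented for illustration:

```shell
RUNS=5

# Stand-in for a real test run: simulates a regression fixed at
# timestamp 1377625784 (passes for that build and anything newer).
run_tests() {
  [ "$1" -ge 1377625784 ]
}

# A build counts as GOOD only if every one of $RUNS runs passes,
# to guard against intermittent failures producing false positives.
is_good() {
  for _ in $(seq "$RUNS"); do
    run_tests "$1" || return 1
  done
  return 0
}

# Candidate build timestamps in chronological order (mostly invented).
builds=(1377625424 1377625544 1377625664 1377625784 1377625904)

# Binary search for the first GOOD build.
lo=0
hi=$(( ${#builds[@]} - 1 ))
while [ "$lo" -lt "$hi" ]; do
  mid=$(( (lo + hi) / 2 ))
  if is_good "${builds[$mid]}"; then
    hi=$mid
  else
    lo=$(( mid + 1 ))
  fi
done
echo "first GOOD build: ${builds[$lo]}"
```

In practice each `run_tests` call means downloading the tinderbox build for that timestamp and running the functional testrun against it, so the re-run count is a trade-off between confidence and bisection time.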
Flags: needinfo?(mario.garbi)
That sounds way better. Thanks, Mario. Given that we haven't seen this failure anymore since the patch landed on mozilla-central, we can be fairly sure it has been fixed.
Status: NEW → RESOLVED
Closed: 11 years ago
Depends on: 908301
Resolution: --- → FIXED
Resolution: FIXED → WORKSFORME
We know which checkin fixed this issue, so by our definition it is FIXED, not WORKSFORME.
Resolution: WORKSFORME → FIXED
Assignee: nobody → bhackett1024
Target Milestone: --- → mozilla26