Closed Bug 1200504 Opened 7 years ago Closed 7 years ago

Performance regression in many applications


(Firefox OS Graveyard :: Performance, defect)

Gonk (Firefox OS)
Not set


(blocking-b2g:2.5+, firefox43 fixed)

FxOS-S7 (18Sep)


(Reporter: mlien, Unassigned)



(Keywords: perf, regression)


(1 file)

On Fri, Aug. 28 2015, Raptor detected possible regressions in multiple applications. This could point to an issue in the System app or the platform.


Device: Aries-kk
Memory: 2048 MB
Branch: master


Regressing applications:

Clock: regressed by 135ms
Contacts: regressed by 88ms
E-Mail: regressed by 86ms
Gallery: regressed by 131ms
Messages: regressed by 202ms
Music: regressed by 53ms
Phone: regressed by 87ms
Video: regressed by 296ms


Gaia revision: 9cad0f6b5e42cb79
Previous Gaia revision: fa15462b29258fde

Gecko revision: b7c4ba3b0981
Previous Gecko revision: ed5a8aad52e8
Are we sure the numbers are correct? In the 'Test Startup Limit' chart, for example, I see missing numbers for runs 8-30.
We were already chasing false regressions once before, and it turned out that failed tests caused a hiccup in the reporting system, so the numbers showed up in the wrong chart.
Yes, these numbers are correct. The difference between the dashboard and this report is that the dashboard shows p95 values, while comment 0 shows median values. As for the missing numbers: Raptor tests have been failing frequently lately, and when a test fails it doesn't upload data, even if the run is nearly finished.
I ran b2gbisectbot [1] with command:

  $ node index.js ed5a8aad52e8 b7c4ba3b0981 fa15462b29258fde 9cad0f6b5e42cb79 communications/dialer 910.000

and it reports:

  The first bad revision is:
  changeset:   259709:dab532549095
  user:        Cervantes Yu <>
  date:        Fri Aug 28 17:57:44 2015 +0800
  summary:     Bug 1166207 - Load preload.js in the Nuwa process. r=khuey

I have manually double-checked that it is the culprit.

Flags: needinfo?(cyu)
Depends on: 1166207
Flags: needinfo?(cyu)
Attachment #8658074 - Flags: review?(khuey)
This profile tells the story of this perf regression:

It's a profile of the preallocated app process. Inside nsBaseAppShell::OnProcessNextEvent(), we spend ~30% of the time in clock_gettime() because mFavorPerf == 0 (it hasn't been bumped).

Each time we enter this branch, we have to spend at least 10 ms before breaking out of the loop. If no native events are being dispatched, we busy-loop in it. In theory the loop should allow a dispatched runnable to run as soon as possible, but in practice it doesn't: IPC messages are received with much larger latency.

To fix this problem, the initialization of the PresShell for about:blank is moved to after we fork the content process, when we create its content viewer. Deep inside PresShell initialization, the content sink bumps nsBaseAppShell::mFavorPerf, so we no longer enter the branch that busy-loops for native events and burns time in clock_gettime().
changeset:  4d520d9c6b850e3476cbff22319cae050a5e2fd6
user:       Cervantes Yu <>
date:       Tue Sep 08 16:11:00 2015 +0800
summary:    Bug 1200504: Initialize the PresShell for about:blank after fork to fix the app launch performance regression. r=khuey
This has caused Mulet Mochitests and Reftests to go perma-orange.
(In reply to Cervantes Yu [:cyu] [:cervantes] from comment #12)

Retesting with a fix on non-Nuwa platforms.
changeset:  288378746b475e75f43e3ab69abfae169764fe73
user:       Cervantes Yu <>
date:       Wed Sep 09 18:04:59 2015 +0800
summary:    Bug 1200504: Initialize the PresShell for about:blank after fork to fix the app launch performance regression. r=khuey
Closed: 7 years ago
Resolution: --- → FIXED
Target Milestone: --- → FxOS-S7 (18Sep)