On Tue, Aug 28 2015, Raptor detected possible regressions in multiple applications. This could point to an issue in the System app or the platform.

---
Device: Aries-kk
Memory: 2048
Branch: master
---
Regressing applications:

Clock: regressed by 135ms
Contacts: regressed by 88ms
E-Mail: regressed by 86ms
Gallery: regressed by 131ms
Messages: regressed by 202ms
Music: regressed by 53ms
Phone: regressed by 87ms
Video: regressed by 296ms
---
Gaia revision: 9cad0f6b5e42cb79
Previous Gaia revision: fa15462b29258fde
Gecko revision: b7c4ba3b0981
Previous Gecko revision: ed5a8aad52e8
Performance numbers on Raptor: http://raptor.mozilla.org/#/dashboard/script/apps.js?device=aries&branch=master&memory=2048&series=coldlaunch.visuallyLoaded
blocking-b2g: --- → 2.5+
Are we sure the numbers are correct? In the 'Test Startup Limit' chart, for example, I see missing numbers for 8-30. We were already chasing false regressions once, and it turned out that failed tests caused a hiccup in the reporting system and the numbers showed up in the wrong chart.
Yes, these numbers are correct. The differences between the dashboard and the numbers here are because the dashboard shows p95 values while comment 0 shows median values. As for the missing numbers: the Raptor test has recently been failing frequently, and on failure it doesn't upload data to raptor.mozilla.org even when the number of test runs is almost complete.
I ran b2gbisectbot with the command:

$ node index.js ed5a8aad52e8 b7c4ba3b0981 fa15462b29258fde 9cad0f6b5e42cb79 raptor.sh communications/dialer 910.000

and it reports:

The first bad revision is:
changeset: 259709:dab532549095
user: Cervantes Yu <email@example.com>
date: Fri Aug 28 17:57:44 2015 +0800
summary: Bug 1166207 - Load preload.js in the Nuwa process. r=khuey

I have manually double-confirmed that it's the culprit.

https://github.com/janus926/b2gbisectbot
Attachment #8658074 - Flags: review?(khuey)
This profile tells the story of this perf regression: https://people.mozilla.org/~bgirard/cleopatra/#report=532e520bcfa111272b7f54f03fdb9aa215b5ae8d&selection=0,1,2,3,4,5,6,3,4,7,8,9,10,16

It's the profile of the preallocated app process. Inside nsBaseAppShell::OnProcessNextEvent(), we spend ~30% of the time in clock_gettime() because mFavorPerf == 0 (it hasn't been bumped): http://hg.mozilla.org/mozilla-central/file/a13c1f26e351/widget/nsBaseAppShell.cpp#l271

Each time we enter this branch we need to spend at least 10 ms to break out of the loop, and if no native events are being dispatched, we busy-loop in it. In theory the branch should allow a dispatched runnable to run ASAP, but in practice it doesn't: the IPC messages are received with a much larger latency.

To fix this problem, the initialization of the presentation shell (and its content viewer) for about:blank is moved to after we fork the content process. Deep inside the initialization of the PresShell, the content sink bumps nsBaseAppShell::mFavorPerf, so we no longer enter the branch that busy-loops for native events and spends time in clock_gettime().
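The hot path described above can be sketched as follows. This is an illustrative simplification, not the real Gecko code: the struct, method signature, and callback are invented for the sketch, but the shape of the branch (a ~10 ms native-event polling window guarded by mFavorPerf, with a clock read per iteration) mirrors the behavior seen in the profile:

```cpp
#include <chrono>
#include <functional>

// Illustrative sketch of the nsBaseAppShell::OnProcessNextEvent() hot path.
struct AppShellSketch {
    int mFavorPerf = 0;  // bumped by the content sink once a PresShell initializes

    // Returns true if it took the busy-polling branch.
    bool OnProcessNextEvent(const std::function<bool()>& processNextNativeEvent) {
        using namespace std::chrono;
        if (mFavorPerf <= 0) {
            // Poll native events for up to ~10 ms before yielding back.
            auto limit = steady_clock::now() + milliseconds(10);
            while (steady_clock::now() < limit) {
                // With no native events pending, each iteration is essentially
                // just a clock read (clock_gettime under steady_clock), which
                // is why ~30% of the profiled time lands in clock_gettime().
                if (processNextNativeEvent())
                    continue;  // keep draining native events within the window
            }
            return true;
        }
        // Performance favored: dispatched Gecko runnables are serviced promptly.
        return false;
    }
};
```

The fix lands by ensuring mFavorPerf is already non-zero in the forked content process (via the PresShell initialization), so the `mFavorPerf <= 0` branch is never entered.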
Attachment #8658074 - Flags: review?(khuey) → review+
url: https://hg.mozilla.org/integration/b2g-inbound/rev/4d520d9c6b850e3476cbff22319cae050a5e2fd6
changeset: 4d520d9c6b850e3476cbff22319cae050a5e2fd6
user: Cervantes Yu <firstname.lastname@example.org>
date: Tue Sep 08 16:11:00 2015 +0800
description: Bug 1200504: Initialize the PresShell for about:blank after fork to fix the app launch performance regression. r=khuey
This has caused Mulet Mochitests and Reftests to go perma-orange.
(In reply to Cervantes Yu [:cyu] [:cervantes] from comment #12) > https://treeherder.mozilla.org/#/jobs?repo=try&revision=d8abf9ca6401 Retesting with a fix on non-Nuwa platforms.
url: https://hg.mozilla.org/integration/b2g-inbound/rev/288378746b475e75f43e3ab69abfae169764fe73
changeset: 288378746b475e75f43e3ab69abfae169764fe73
user: Cervantes Yu <email@example.com>
date: Wed Sep 09 18:04:59 2015 +0800
description: Bug 1200504: Initialize the PresShell for about:blank after fork to fix the app launch performance regression. r=khuey