Windows chrome seems to be a 25% pageload hit

Status: ASSIGNED
Product: Firefox
Component: General
Priority: --
Severity: major
Keywords: perf
Reporter: bz
Assignee: Unassigned
Firefox Tracking Flags: Not tracked
Reported: 7 years ago
Last updated: 3 years ago
Attachments: 7 current (six image/png charts, 28-71 KB, plus one 44 KB .xlsx spreadsheet) and 9 obsolete.

When the talos changes including the fix for bug 651659 happened, Windows Tp4 went up 25% or so on both Win7 and WinXP.

The corresponding change on Linux was closer to 7%.

7% is not great, but 25% is just bad.  We need to investigate this.  Can we get a per-page breakdown and see what pages regressed, then try to reproduce outside the harness?
compare-talos can do a per-page breakdown
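(As a rough illustration of the kind of per-page breakdown we're after, assuming the per-page medians for an old and a new build can be dumped as plain page -> milliseconds maps; the page names and numbers below are placeholders:)

// Placeholder data: median load time (ms) per page for an old and a new build.
let oldTimes = { "page-a.example.com": 180, "page-b.example.com": 210 };
let newTimes = { "page-a.example.com": 185, "page-b.example.com": 290 };

let deltas = Object.keys(oldTimes).map(function(page) {
  let pct = (newTimes[page] - oldTimes[page]) / oldTimes[page] * 100;
  return { page: page, pct: pct };
});

deltas.sort(function(a, b) { return b.pct - a.pct; });   // worst regressions first
deltas.forEach(function(d) {
  console.log(d.page + ": " + d.pct.toFixed(1) + "%");
});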

Platform -> Windows?
OS: Mac OS X → Windows 7
Apparently we had no regression testing here for a year and nine months.
Since we haven't had this data since bug 509124 landed, I think it might be a good idea to do a bunch of builds starting from then and run a few tp(4|5) talos runs in steps to figure out what may have regressed things along the way.  Month increments will give us a rough idea, and then we can drill down from there.
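(As a rough sketch of that month-increment selection; the dates and revs below are placeholders, and the dated list of pushes is assumed to come from the pushlog:)

let candidates = [
  { date: "2009-08-01", rev: "000000000000" },
  { date: "2009-09-03", rev: "111111111111" },
  { date: "2009-09-28", rev: "222222222222" },
  { date: "2009-10-05", rev: "333333333333" },
];

let picked = [];
let lastMonth = null;
candidates.forEach(function(c) {
  let month = c.date.slice(0, 7);   // "YYYY-MM"
  if (month !== lastMonth) {        // take the first push we see in each month
    picked.push(c.rev);
    lastMonth = month;
  }
});
console.log(picked.join("\n"));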

Does this seem reasonable to folks?  If so, I can file the bug to get a project branch under way to do this.
Yes, I think that's a good idea.
Shawn, that sounds like a good plan.

Depends on: 656339
Going with the try server approach instead.  The first batch of changes I'm going to push is the following changesets (in one-month increments):
"5798118100e1",
"6119349e864e",
"f668f71d3a22",
"0d98ff9aef32",
"f4e5b9438d17",
"aa6dac5f1d10",
"73f49689630a",
"389f18660b76",
"0b3c38dad8a7",
"2c63457bc391",
"b69e999098ce",
"13c6e0ecf8a7",
"796b87a413fb",
"df5f653ea413",
"43dc221c45ad",
"b062f63a98fb",
"abe884259481",
"4e0501a0c5e5",
"d429b038ccea",
"d66b9df51eff",
"ee377a1a5e31",
"77c181f69a0a",

That's a lot, so I'm going to write a script to help me automate this...
Tiny bit 'o JS to make the magical command line:
// Paste the m-c changesets to test into this array.
let changes = [
/* changes go here */
];
let commands = [];
changes.forEach(function(cset) {
  // For each changeset: pop all applied mq patches, update to that revision,
  // re-apply the patch, and force-push to try (logging the push output).
  commands.push("hg qpop -a");
  commands.push("hg update -C " + cset);
  commands.push("hg qpush");
  commands.push("hg push -f try >> pushes.txt");
});
// Emit everything as one chained shell command line
// (use print() instead of console.log in the classic js shell).
console.log(commands.join(" && "));
The changesets that I care about on the try server:
"5f39932f196b",
"bf685f38c9b5",
"cf56e4f324ad",
"729dff6c0b2b",
"01345ca1bff0",
"2092ab1e51f3",
"809585433048",
"b945665780ed",
"245499284308",
"75bdd27f6cca",
"9172a29e7c3c",
"be6750d0d62a",
"a3e953a7385c",
"65f14c2ed46f",
"6107b1ba8f4a",
"c0dbd0eb1b37",
"ecd6229af903",
"75e3e72a8668",
"685c7ee565a3",
"d09814d089d8",
"2ee4eae021cf",
"bcae74095eea",
73f49689630a (pushed with 809585433048) appears to be broken (sadfaces): https://ftp.mozilla.org/pub/mozilla.org/firefox/try-builds/sdwilsh@shawnwilsher.com-809585433048/try-win32/try-win32-build362.txt.gz

I'm going to ignore it for now, but that does mean we have a two month window if stuff jumps there.
Assignee: nobody → sdwilsh
Status: NEW → ASSIGNED
Only one bad changeset out of the bunch.  I just finished triggering the rest of the additional talos runs I'll need, so we should have results in a couple of hours.  Going to write a script to pull all the data I need out of the graph server...
I'm using this gist to track the mappings of mozilla-central changesets to try changesets: https://gist.github.com/969299
I'm learning that our tp4 numbers for Windows 7 used to be really noisy, so I'm only looking at XP numbers.  During our 1 year 9 month time frame, we managed to regress XP tp4 times by 15%, which would put us close to our Linux numbers:
http://perf.snarkfest.net/compare-talos/index.html?oldRevs=5f39932f196b&newRev=bcae74095eea&tests=tp4&submit=true

Interestingly enough, we actually started off well, with performance wins in tp4 on XP all the way through aa6dac5f1d10:
http://perf.snarkfest.net/compare-talos/index.html?oldRevs=5f39932f196b&newRev=2092ab1e51f3&tests=tp4&submit=true

Then I lose a month because one changeset didn't build, but during that two-month span we lost everything we gained:
http://perf.snarkfest.net/compare-talos/index.html?oldRevs=2092ab1e51f3&newRev=b945665780ed&tests=tp4&submit=true
The set of pushes in that regression range is https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=aa6dac5f1d10&tochange=389f18660b76

The next month we regress it even more:
http://perf.snarkfest.net/compare-talos/index.html?oldRevs=b945665780ed&newRev=245499284308&tests=tp4&submit=true
The range of pushes on that one is https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=389f18660b76&tochange=0b3c38dad8a7

We then gain back 7%, but our numbers start to get noisy, which is sadfaces:
http://perf.snarkfest.net/compare-talos/index.html?oldRevs=245499284308&newRev=75bdd27f6cca&tests=tp4&submit=true

Our numbers start to get less noisy, and we get another win (~9%):
http://perf.snarkfest.net/compare-talos/index.html?oldRevs=be6750d0d62a&newRev=a3e953a7385c&tests=tp4&submit=true

We lose all that and more the following month though:
http://perf.snarkfest.net/compare-talos/index.html?oldRevs=a3e953a7385c&newRev=65f14c2ed46f&tests=tp4&submit=true
The pushes then were https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=796b87a413fb&tochange=df5f653ea413

The next month also regresses things pretty badly:
http://perf.snarkfest.net/compare-talos/index.html?oldRevs=65f14c2ed46f&newRev=6107b1ba8f4a&tests=tp4&submit=true
https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=df5f653ea413&tochange=43dc221c45ad

The numbers start to go up and down more, so I'm going to invest some time making a graph of all this and figure out what builds I need to start getting more data for.
Created attachment 532342 [details]
tp4 times

Here is a chart of the tp4 times that I generated on the try server.  There are some areas where I'm going to start breaking it down on a week-by-week basis based on this data.

The code to get the data from the graph server lives here: https://gist.github.com/971090
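(As a sketch only: the URL and the JSON shape below are made up for illustration and are not the actual graph server API, which is what the gist above handles. It assumes the server can return per-run results as JSON, and runs under Node 18+ or a browser console:)

let url = "https://graphs.example.org/api/test/runs?testid=tp4&machine=winxp";  // hypothetical endpoint

fetch(url)
  .then(function(resp) { return resp.json(); })
  .then(function(data) {
    // Assume data.values is an array of per-run tp4 results in milliseconds.
    let values = data.values.slice().sort(function(a, b) { return a - b; });
    let median = values[Math.floor(values.length / 2)];
    console.log("median tp4: " + median + " ms over " + values.length + " runs");
  });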
Based on the graph in attachment 532342 [details], I'm going to do weekly builds from aa6dac5f1d10 to 0b3c38dad8a7, 796b87a413fb to 43dc221c45ad, and 4e0501a0c5e5 to d429b038ccea.  That's a lot of changes...
from aa6dac5f1d10 to 0b3c38dad8a7:
"a2945f365208",
"edb10c83d7ce",
"31cf1c8d1672",
"d43741a452c8",
"4218e786e430",
"633a33a635f3",
"ab85a037d0a6",
"23f377267915",
"389f18660b76",
"f4f2f895891e",
"d37f64402601",
"fb6fa1f88790",
"94a52b6b6d4e",

from 796b87a413fb to 43dc221c45ad:
"cbf6e0a17783",
"01fa971e62ee",
"1730f1358f13",
"dd0de36fc6f4",
"a73c063e52cb",
"842d82ff333c",
"9c2484dac245",
"2e5c5d72f4fc",

from 4e0501a0c5e5 to d429b038ccea:
"ac1ddab6de59",
"bfd144a54f0a",
"6a78c8b01e9b",
"0f7eea1692b2",

All 24 changesets have been pushed to the try server, and I've updated the gist tracking the mapping of m-c to try server revs (https://gist.github.com/969299).  I sure do hope nobody wanted to use the try server after me today for Windows results.
Created attachment 532713 [details]
tp4 times (with additional runs as noted in comment 16)
With my new data, there are three very serious-looking regressions that stand out.

fb6fa1f88790 to 94a52b6b6d4e: https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=fb6fa1f88790&tochange=94a52b6b6d4e

842d82ff333c to 9c2484dac245: https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=842d82ff333c&tochange=9c2484dac245

bfd144a54f0a to 6a78c8b01e9b: https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=bfd144a54f0a&tochange=6a78c8b01e9b

I can go day by day for these changes (which is another 21 pushes), but I'm not looking forward to that.
Can you just bisect instead (still using try to build+test the bisection changesets)?
(In reply to comment #19)
> Can you just bisect instead (still using try to build+test the bisection
> changesets)?
I could, but then I really need to wait until I get results for each step, which means I have to wait at least five hours between steps.  Instead of going every day though, I am going to go through the push log and see things that I think might change the numbers, and we can go from there.
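(For anyone following along, a rough sketch of skimming a regression range programmatically. This assumes hg.mozilla.org also exposes the pushlog as JSON at json-pushes with the same fromchange/tochange parameters, returning pushes keyed by id with "user" and "changesets" fields; that shape is an assumption and should be double-checked:)

let url = "https://hg.mozilla.org/mozilla-central/json-pushes" +
          "?fromchange=fb6fa1f88790&tochange=94a52b6b6d4e";

fetch(url)
  .then(function(resp) { return resp.json(); })
  .then(function(pushes) {
    Object.keys(pushes).forEach(function(id) {
      let p = pushes[id];
      // Print who pushed and the tip changeset of each push in the range.
      console.log(p.user + " " + p.changesets[p.changesets.length - 1]);
    });
  });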

For fb6fa1f88790 to 94a52b6b6d4e, I am pushing the following m-c changes:
"dd135c502a95",
"c8f0660250e0",
"008e74cf781b",
"0b2e87e6b84e",
"d31c87f6d202",
"4918e122e6eb",
"466f5249ac33",
"aa2b262d938a",
"4b67362ac4cb",
For 842d82ff333c to 9c2484dac245, I am pushing the following m-c changes:
"c257bfb8cad0",
"4d7110bb65ec",
"3fd2c9ce4e9c",
"f26eacbf7a3b",
"8dcc5e960d5c",
"d6e2fc48375c",
"c5c3d5c78727",
"b5af89c610e2",
For bfd144a54f0a to 6a78c8b01e9b, I am pushing the following m-c changes:
"ba3fe7ee56b9",
"059044e44314",
"d384e2adf22e",
"a12d11c8912d",
"274e546e9da9",
"3c87074d5f50",
"019e47cc11e2",
"847a825087f2",
"d49d590321e9",
I'm still waiting for all those talos runs that I triggered this morning to finish up before I post more graphs.
Created attachment 533369 [details]
fb6fa1f88790 to 94a52b6b6d4e

Very small regression here for xp.  Changes in the range:
https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=008e74cf781b&tochange=0b2e87e6b84e
I suspect this is bug 147777
Created attachment 533380 [details]
842d82ff333c to 9c2484dac245

Only one regression here, and its range is https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=d6e2fc48375c&tochange=c5c3d5c78727

That's going to be bug 596812 which we ended up backing out.
Created attachment 533383 [details]
bfd144a54f0a to 6a78c8b01e9b

Two regressions here.  The first range is https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=059044e44314&tochange=d384e2adf22e, which I strongly suspect is bug 541656.  I'm going to push ec65b5c5f68a to confirm or deny that.

The second range is https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=3c87074d5f50&tochange=019e47cc11e2, and I'm not exactly sure what it is, so I'm going to do some more runs here.
(In reply to comment #26)
> Two regressions here.  The first range is
> https://hg.mozilla.org/mozilla-central/
> pushloghtml?fromchange=059044e44314&tochange=d384e2adf22e, which I strongly
> suspect is bug 541656.  I'm going to push ec65b5c5f68a to confirm or deny
> that.
Saving some effort here and pushing a few more too.  The changesets I'm pushing are:
"ec65b5c5f68a",
"b0325a41a167",
"8590bb1f1104",

> The second range is
> https://hg.mozilla.org/mozilla-central/
> pushloghtml?fromchange=3c87074d5f50&tochange=019e47cc11e2, and I'm not
> exactly sure what it is, so I'm going to do some more runs here.
I only see two that might be an issue here, so I'm pushing these two:
"64cb344a0145",
"9c815db836e3",
It should be noted that all this work I'm doing is just looking for a tp4 regression.  It's quite possible that we've simply always been this bad on Windows because the numbers crept up slowly, in which case it's going to be hard to identify what the cause(s) were.
I started to look more closely at some of the other regressions that are on attachment 532713 [details].  Going to push more changes for 1730f1358f13 to dd0de36fc6f4:
"a592e44b492a",
"fd13b6ce36bd",
"f5cbf5252653",
"7b4ebf471dd0",
"75d2145d1bd3",
"4bb022d84a31",
"c1bb86ae655a",
"b96de58efb5d",
"8e0fce7d5b49",
"ce4dbcbc75a2",

df5f653ea413 to a73c063e52cb:
"875f1912a091",
"cfa340639ce6",
"7cf62a2a821e",
"8b4dc40a4138",
"22c4d4151710",
"cdb90b48f19f",
"191d6bb957b0",
"b47978b94fc9",
"bb235f96b9af",
"8b2cfb269187",
"46310d87f848",

(these two week-long spans had a ton of pushes...)
Created attachment 534016 [details]
bfd144a54f0a to 6a78c8b01e9b

This is the same as attachment 533383 [details] (bfd144a54f0a to 6a78c8b01e9b) but with a few more data points to give us better resolution.  There are two big regressions here:
ec65b5c5f68a to d384e2adf22e which is just bug 541656
3c87074d5f50 to 9c815db836e3 which is https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=3c87074d5f50&tochange=9c815db836e3

I'm surprised by the second one, not the first.  The first is responsible for a 10% tp4 hit on xp: http://tinyurl.com/3l6szdd.  The second is responsible for a 5% tp4 regression on xp (and a win on 7 for whatever reason): http://tinyurl.com/3aq2noe

Boris - you should probably look at the second one and see if anything jumps out at you.
Attachment #533383 - Attachment is obsolete: true
Er... bug 541656 is a 10% Tp4 hit??

For the 5% hit, I wonder whether we end up in a situation where we have more session history and session saving takes longer or something.... would session saving not happen in the nochrome case?

The other patches in the 5% regression range wouldn't affect Windows-only Tp only-when-chrome-is-involved stuff, I would think.  Except if the safe output stream thing is used from somewhere in chrome and using the Unicode names is way slower for some reason?
Created attachment 534052 [details]
1730f1358f13 to dd0de36fc6f4

attachment 532713 [details] had two other regressions that I decided to dig into in the second box.  The first was between 1730f1358f13 and dd0de36fc6f4, which is what this graph looks at.  Windows 7 data was useless here, so I'm only showing xp.

Two show up here.  The first regression is https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=a592e44b492a&tochange=fd13b6ce36bd, which I suspect is bug 581212 (Direct3D).  The second regression is https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=8e0fce7d5b49&tochange=ce4dbcbc75a2, which has a whole bunch of layout changes from dbaron.
Created attachment 534055 [details]
df5f653ea413 to a73c063e52cb

The second regression I investigated further in the second box of attachment 532713 [details] was from df5f653ea413 to a73c063e52cb.  There's only one thing that I think is worth looking at here, and that's https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=7cf62a2a821e&tochange=8b4dc40a4138 which is nearly an 8% tp4 regression (http://tinyurl.com/44vyych), but I don't really know which changeset is at fault here.
The second range in comment 32 also has a Windows-only smoothscrolling change.  And the layout changes might matter if the Windows theme uses a lot more border-radius than the others...

The range from comment 33 has a Windows-only theme CSS change and some Windows-only widgetry changes that should only matter for fullscreen.
(In reply to comment #32)
> Two show up here, the first regression is
> https://hg.mozilla.org/mozilla-central/
> pushloghtml?fromchange=a592e44b492a&tochange=fd13b6ce36bd which I suspect is
> bug 581212 (Direct3D).
no chrome: http://tinyurl.com/3npl64k ~6% regression on xp
chrome: http://tinyurl.com/43qasjc 10.4% regression on xp
(In reply to comment #34)
Yeah, I think I need to push some more revisions here to get a better picture (still).  Will do that shortly.
In order to rule the Direct3D bug in or out, I'm pushing these changes:
"37f65e179e42",
"b57dda2ee56d",

(In reply to comment #34)
> The second range in comment 32 also has a Windows-only smoothscrolling
> change.  And the layout changes might matter if the Windows theme uses a lot
> more border-radius than the others...
The smooth scrolling change was backed out in the next push though, so it isn't a factor here.  margaret said we use border-radius on Windows and Linux, but svg filters on OS X.  I'm pushing these changesets (the one right before the border-radius change, and that change itself) to see if it really is the cause:
"94a0c347256d",
"8adb2f64c138",

> The range from comment 33 has a Windows-only theme CSS change and some
> Windows-only widgetry changes that should only matter for fullscreen.
I'd be surprised if that was it, but I'm going to push these changes to figure out what exactly is at fault here:
"10257eea7533",
"3406ae8889f9",
"69dd0ebbd3bc",
Created attachment 534139 [details]
xlsx file

I've put a number of hours into this file at this point, so I'd like to have it somewhere other than my own computer...

Depends on: 658852
(In reply to comment #37)
> The smooth scrolling change was backed out in the next push though, so it
> isn't a factor here.  margaret said we use border-radius on Windows and
> Linux, but use svg filters on OS X.  I'm pushing these changesets (right
> before the border radius change, and that change) to see if it really is the
> cause:
> "94a0c347256d",
> "8adb2f64c138",
Sadly, 94a0c347256d (08b058b35ae9) doesn't build on windows (the build hangs).
Created attachment 535099 [details]
1730f1358f13 to dd0de36fc6f4

This clearly shows a regression in either bug 581212 or bug 593618 (https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=b57dda2ee56d&tochange=fd13b6ce36bd).  Compare-talos says it's a 9.44% regression (http://tinyurl.com/3jk4ydq)

As for the second regression in this range, it looks like we actually had two regressions (sadfaces!).  Like I said in comment 39, 94a0c347256d (which is right before dbaron's stuff) didn't build, but pretty much everything before it landed and was then backed out, so I really think it is that stuff: https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=8e0fce7d5b49&tochange=8adb2f64c138.  This accounts for a 2.22% tp regression (http://tinyurl.com/3v58e7z).

The second range is right after the first one and contains one backout: https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=8adb2f64c138&tochange=ce4dbcbc75a2, and I'm highly skeptical that it caused a regression here.
Attachment #534052 - Attachment is obsolete: true
(cc'ing joe and dbaron to look at comment 40)
Created attachment 535105 [details]
df5f653ea413 to a73c063e52cb

I think this pretty clearly fingers bug 594882 or bug 556734 (Honza pushed both at the same time) (https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=7cf62a2a821e&tochange=10257eea7533).  This is responsible for a 9.49% regression on XP (http://tinyurl.com/3r4shsb)
Attachment #534055 - Attachment is obsolete: true
(In reply to comment #40)
> Created attachment 535099 [details]
> 1730f1358f13 to dd0de36fc6f4
> 
> This clearly shows a regression in either bug 581212 or bug 593618
> (https://hg.mozilla.org/mozilla-central/
> pushloghtml?fromchange=b57dda2ee56d&tochange=fd13b6ce36bd).  Compare-talos
> says it's a a 9.44% regression (http://tinyurl.com/3jk4ydq)

Bug 593618 alone wouldn't make a difference without bug 581212. We should set layers.acceleration.disabled to true, push it to try and see if layers acceleration still accounts for a tp regression.
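(A minimal sketch of forcing that pref for the try run via the test profile's user.js; the pref name is taken from the comment above, everything else here is generic:)

// user.js in the test profile: turn hardware acceleration off so the
// comparison isolates the layers work from bug 581212.
user_pref("layers.acceleration.disabled", true);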
(In reply to comment #40)
> As for the second regression in this range, it looks like we actually had
> two regressions (sadfaces!).  Like I said in comment 39, 94a0c347256d (which
> is right before dbaron's stuff) didn't build, but pretty much everything
> before it landed and was then backed out, so I really think it is that
> stuff:
> https://hg.mozilla.org/mozilla-central/
> pushloghtml?fromchange=8e0fce7d5b49&tochange=8adb2f64c138.  This accounts
> for a 2.22% tp regression (http://tinyurl.com/3v58e7z).

tn later made some significant performance improvements to the rounded clipping stuff (bug 459144 / bug 485501), which I suspect is the most likely culprit here.
(In reply to comment #42)
> Created attachment 535105 [details]
> df5f653ea413 to a73c063e52cb
> 
> I think this pretty clearly fingers bug 594882 or bug 556734 (Honza pushed
> both at the same time)
> (https://hg.mozilla.org/mozilla-central/
> pushloghtml?fromchange=7cf62a2a821e&tochange=10257eea7533).  This is
> responsible for a 9.49% regression on XP (http://tinyurl.com/3r4shsb)

Is this verified information?

bug 556734 - the XSS change is not the cause, for sure

bug 594882 - this re-lands the async cache entry open (bug 513008) plus a small fix for a crash introduced by that bug; either of these changes could introduce a regression on its own

Shawn, could you please check the regression with changeset http://hg.mozilla.org/mozilla-central/rev/72d2863f43c7, where only the async cache open bug is landed?  I would like to separate how much of the regression comes from the async read itself and how much from my crash fix, which delays the network operation for a short time.

Anyway, both probably need to be fixed somehow (likely by some parallelism or prewarming in this area).
Created attachment 535135 [details]
tp4 times

It's getting hard to keep track of everything, so I'm summarizing everything to date now.

The first major regression (xp: 8% http://tinyurl.com/3sxdgn3) was bug 147777 (dbaron).
The second major regression (xp: 9% http://tinyurl.com/3jk4ydq) was bug 581212 (joe).
The third major regression (xp: 9% http://tinyurl.com/3r4shsb) was either bug 594882 or bug 556734 (honza).
The fourth major regression was bug 596812, but it was backed out.
The fifth major regression (xp: 10% http://tinyurl.com/3l6szdd) was bug 541656 (dao).  Bug 658852 was filed to deal with this, but it only won us back 22 of the 36 milliseconds we lost.
The sixth and final major regression we had is currently unclear.  I've pushed three more changesets to narrow it down to one commit in that push (http://tinyurl.com/432k66d) by Mossop:
"3c87074d5f50",
"f00b81064d57",
"4b90fd0c1c4d",
Attachment #532342 - Attachment is obsolete: true
Attachment #532713 - Attachment is obsolete: true
...and my summary totally missed the stuff I figured out today.  After I get the data mentioned in comment 46, I'll post the updated version...
(In reply to comment #44)
> (In reply to comment #40)
> > As for the second regression in this range, it looks like we actually had
> > two regressions (sadfaces!).  Like I said in comment 39, 94a0c347256d (which
> > is right before dbaron's stuff) didn't build, but pretty much everything
> > before it landed and was then backed out, so I really think it is that
> > stuff:
> > https://hg.mozilla.org/mozilla-central/
> > pushloghtml?fromchange=8e0fce7d5b49&tochange=8adb2f64c138.  This accounts
> > for a 2.22% tp regression (http://tinyurl.com/3v58e7z).
> 
> tn later made some significant performance improvements to the rounded
> clipping stuff (bug 459144 / bug 485501), which I suspect is the most likely
> culprit here.

Bug 628745 is what sped the rounded clipping stuff up.  Bug 626536 is another optimization that sped up the same area, and it was applicable even before bug 459144 / bug 485501, in case we want to see whether we won back this regression.
Created attachment 535140 [details]
xlsx file
Attachment #534139 - Attachment is obsolete: true
(In reply to comment #43)
> Bug 593618 alone wouldn't make a difference without bug 581212. We should
> set layers.acceleration.disabled to true, push it to try and see if layers
> acceleration still accounts for a tp regression.
Agreed that this is bug 581212 only.  You want me to push just m-c tip (known good, of course) to try with that pref on and off and see whether there's a regression, right?

(In reply to comment #44 and comment #48)
> tn later made some significant performance improvements to the rounded
> clipping stuff (bug 459144 / bug 485501), which I suspect is the most likely
> culprit here.
So, should I push bug 626536 and bug 628745 on top of your push to see if that makes the regression go away?

(In reply to comment #45)
> Is this verified information?
Not 100% sure what you are asking here.

> bug 594882 - this re-lands the async cache entry open (bug 513008) plus a
> small fix for a crash introduced by that bug; either of these changes could
> introduce a regression on its own
> 
> Shawn, could you please check the regression with changeset
> http://hg.mozilla.org/mozilla-central/rev/72d2863f43c7, where only the async
> cache open bug is landed?  I would like to separate how much of the
> regression comes from the async read itself and how much from my crash fix,
> which delays the network operation for a short time.
I'm not 100% sure what you want me to do here either.
(In reply to comment #50)
> (In reply to comment #43)
> > Bug 593618 alone wouldn't make a difference without bug 581212. We should
> > set layers.acceleration.disabled to true, push it to try and see if layers
> > acceleration still accounts for a tp regression.
> Agreed about this being bug 581212 only.  You want to see if pushing to try
> just m-c tip (known good, of course) with that pref on and off has a
> regression, right?

Yes.
(In reply to comment #50)
> (In reply to comment #44 and comment #48)
> > tn later made some significant performance improvements to the rounded
> > clipping stuff (bug 459144 / bug 485501), which I suspect is the most likely
> > culprit here.
> So, should I push bug 626536 and bug 628745 on top of your push to see if
> that makes the regression go away?

Bug 626536 and bug 628745 probably won't apply on top of dbaron's push due to changes that happened in between. The quickest thing might be to push the changesets of bug 626536 and bug 628745 and the changesets before to see if they had a positive effect at the time of their landing.
Created attachment 535391 [details]
bfd144a54f0a to 6a78c8b01e9b

This indicates that bug 629291 (bz) is to blame for the second regression (xp: 4.25% http://tinyurl.com/3w5muz8) here.
Attachment #534016 - Attachment is obsolete: true
Created attachment 535404 [details]
tp4 times

Alright, this contains all data points and all identified regressions.  I'm going to include a new summary here too.

The first major regression (xp: 8% http://tinyurl.com/3sxdgn3) was bug 147777 (dbaron).
The second major regression (xp: 9% http://tinyurl.com/3jk4ydq) was bug 581212 (joe).
The third major regression (xp: 2.22% http://tinyurl.com/3v58e7z) was either bug 459144 or bug 485501 (dbaron).
The fourth major regression (xp: 9% http://tinyurl.com/3r4shsb) was either bug 594882 or bug 556734 (honza).
The fifth major regression was bug 596812, but it was backed out.
The sixth major regression (xp: 10% http://tinyurl.com/3l6szdd) was bug 541656 (dao).  Bug 658852 was filed to deal with this, but it only won us back 22 of the 36 milliseconds we lost.
The seventh and final major regression (xp: 4.25% http://tinyurl.com/3w5muz8) was bug 629291 (bz).
Attachment #535135 - Attachment is obsolete: true
Created attachment 535406 [details]
xlsx file
Attachment #535140 - Attachment is obsolete: true
(In reply to comment #54)
> The seventh and final major regression (xp: 4.25% http://tinyurl.com/3w5muz8)
> was bug 629291 (bz).

Looking at the details for this, it seems one site (forumfree.net) is a huge regression and everything else is about the same.

Depends on: 660264
There's really no way the file output stream thing should have affected that.. unless the Unicode filename op is way slower for some reason and this affected session storage or something.
(In reply to comment #57)
> There's really no way the file output stream thing should have affected that..
> unless the Unicode filename op is way slower for some reason and this affected
> session storage or something.
Session restore might be tickling that.  The results are reproducible, so it's not like it's a one-off bad run.
(In reply to comment #54)
> The sixth major regression (xp: 10% http://tinyurl.com/3l6szdd) was bug
> 541656 (dao).  Bug 658852 was filed to deal with this, but it only won us
> back 22 of the 36 milliseconds we lost.

Given bug 658852 comment 8, I think the difference can be attributed to the platform having changed. I don't see anything else in bug 541656's patch that could affect page load time.
(In reply to comment #52)
> Bug 626536 and bug 628745 probably won't apply on top of dbaron's push due
> to changes that happened in between. The quickest thing might be to push the
> changesets of bug 626536 and bug 628745 and the changesets before to see if
> they had a positive effect at the time of their landing.

I did this. Neither bug seemed to move the tp4 numbers.
I ran a try server job to compare talos tp5o (the current version of tpX) with and without chrome (note: by default we run with chrome, i.e. a secondary window that controls the browser).  Looking at Windows XP, Windows 7, and Windows 8, I don't see a regression worth noting:

        chrome    nochrome
winxp   233.05    228.72
win7    236.03    235.05
win8    231.55    232.35
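
(Spelled out as percentages relative to nochrome, those numbers work out to roughly +1.9% on XP, +0.4% on Win7, and -0.3% on Win8:)

// Percent change of 'chrome' relative to 'nochrome' from the table above.
let results = {
  winxp: { chrome: 233.05, nochrome: 228.72 },
  win7:  { chrome: 236.03, nochrome: 235.05 },
  win8:  { chrome: 231.55, nochrome: 232.35 },
};
Object.keys(results).forEach(function(os) {
  let r = results[os];
  let pct = (r.chrome - r.nochrome) / r.nochrome * 100;
  console.log(os + ": " + pct.toFixed(1) + "%");   // ~+1.9%, +0.4%, -0.3%
});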


I also looked at the reported counters, responsiveness, and shutdown times; all of them were similar across the two modes (like the tp5o_paint numbers mentioned above).
Let me know if there is other testing desired, otherwise I recommend closing this bug.
(In reply to Joel Maher (:jmaher) from comment #61)
> (note, we run by default with chrome- a secondary
> window that controls the browser).

Hmm, I thought chrome vs. nochrome meant "loaded in the Firefox UI" vs. content loaded directly into an otherwise empty window. Is that not the case?
Assignee: sdwilsh → nobody
'chrome' is when we have a controlling window that accesses the browser to loadURI.  'nochrome' is when we just loadURI directly and the extension runs inside the browser.


Here is the code in pageloader, useBrowserChrome is the 'chrome' option which we run by default for tp5o:
http://hg.mozilla.org/build/talos/file/8cf5b862d113/talos/pageloader/chrome/pageloader.js#l131
(In reply to :Gavin Sharp (use gavin@gavinsharp.com for email) from comment #62)
> Hmm, I thought chrome vs. nochrome meant "loaded in the Firefox UI" vs.
> content loaded directly into an otherwise empty window. Is that not the case?

Looking at comment 0, this bug seems to be about "load in the Firefox UI" vs. "load in an empty window", so we have more than one meaning for what "chrome" means.