Closed
Bug 388939
Opened 17 years ago
Closed 17 years ago
bad Tdhtml regression on phlox (Mac PPC) tinderbox
Categories
(Core :: General, defect)
Tracking
RESOLVED
WONTFIX
People
(Reporter: kairo, Unassigned)
Details
(Keywords: perf, regression)
Attachments
(1 file)
Something made Tdhtml on my "phlox" PPC tinderbox go from 820ms to 2306ms on trunk. It's surely not the box itself, as the 1.8 branch is still at its usual Tdhtml values and Ts/Txul are normal as well. The box is building SeaMonkey trunk; see http://tinderbox.mozilla.org/showbuilds.cgi?tree=SeaMonkey-Ports&hours=24&maxdate=1184713473&legend=0 for a tinderbox page from just when it turned orange with that regression.

It took me a while to find out that the failing test was just Tdhtml taking way longer than the configured timeout. When I bumped the timeout up enough that the test could complete again, this is what it returned with: http://tinderbox.mozilla.org/showbuilds.cgi?tree=SeaMonkey-Ports&hours=24&maxdate=1184930389

Tdhtml output before the regression:

Document http://www.mozilla.org/performance/test-cases/dhtml/runTests.html loaded successfully
Test Average Data
============================================================
colorfade: 1427 1411,1390,1466,1482,1385
diagball: 1734 1713,1762,1713,1733,1751
fadespacing: 2401 2374,2406,2402,2409,2413
imageslide: 409 401,417,420,416,393
layers1: 1891 1939,1955,1820,1903,1839
layers2: 32 33,32,31,31,31
layers4: 32 33,31,32,32,31
layers5: 1250 1243,1229,1325,1220,1233
layers6: 72 73,73,73,71,70
meter: 1197 1316,1208,1138,1178,1146
movingtext: 1147 1046,1152,1188,1188,1159
mozilla: 2899 2876,2917,2900,2895,2907
replaceimages: 2083 2137,2075,2051,2061,2090
scrolling: 4162 4213,4122,4171,4152,4154
slidein: 2669 2892,2625,2584,2593,2653
slidingballs: 1359 1289,1293,1356,1414,1441
zoom: 645 628,890,442,652,611
_x_x_mozilla_dhtml,820

Tdhtml output after the regression:

Test Average Data
============================================================
colorfade: 2009 1730,2077,2073,2079,2085
diagball: 2739 2391,3105,2101,3178,2920
fadespacing: 24015 24360,23347,24122,24129,24117
imageslide: 2093 2191,2423,2418,2423,1008
layers1: 3257 3096,3367,3358,3361,3104
layers2: 32 33,32,32,32,32
layers4: 31 31,32,32,31,31
layers5: 1915 1761,1685,1968,2055,2106
layers6: 73 73,73,73,72,72
meter: 25053 25081,25267,25017,25264,24638
movingtext: 5313 5662,4693,6419,5048,4742
mozilla: 21642 21157,21775,21754,21757,21769
replaceimages: 2940 2224,2734,3119,3120,3502
scrolling: 4225 4240,4193,4235,4217,4242
slidein: 14352 14401,13472,14658,14675,14555
slidingballs: 3368 3449,3276,3067,3231,3818
zoom: 6841 7164,6673,6806,6665,6899
_x_x_mozilla_dhtml,2306

Regression timeframe ("C" link from the tinderbox cycle that went orange with it):
http://bonsai.mozilla.org/cvsquery.cgi?module=MozillaTinderboxAll&date=explicit&mindate=1184691840&maxdate=1184710499
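The dump() output above follows a simple "name: average run1,run2,..." shape, so a short script can recompute the per-test averages and the overall slowdown. A minimal sketch (not part of the original tooling), hard-coding one line from the post-regression run above:

```python
# A minimal sketch (not from the original tooling): each dump() result
# line above has the shape "name: average run1,run2,...", so we can
# recompute an average and the overall before/after slowdown.
before_total, after_total = 820, 2306  # _x_x_mozilla_dhtml totals above

def parse_line(line):
    """Split one "name: average d1,d2,..." result line into its parts."""
    name, rest = line.split(":", 1)
    avg, data = rest.split()
    return name.strip(), int(avg), [int(d) for d in data.split(",")]

# One line copied from the post-regression output above.
name, avg, runs = parse_line("fadespacing: 24015 24360,23347,24122,24129,24117")

# The reported average is the rounded mean of the five runs.
assert avg == round(sum(runs) / len(runs))
print(f"{name}: {avg}ms; overall slowdown {after_total / before_total:.2f}x")
```

The overall slowdown works out to roughly 2.8x, consistent with the 820ms-to-2306ms jump reported above.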
Updated•17 years ago
Keywords: perf, regression
Reporter
Comment 1•17 years ago
This sounds like it was caused either by the move to gcc4 or by the cocoa changes shortly after that. CCing the assignees of those two bugs plus mento, who reviewed the gcc4 change.
Comment 2•17 years ago
This is (I'm sure) a really dumb question ... but how do you run the test at http://www.mozilla.org/performance/test-cases/dhtml/runTests.html and make it generate the results you've posted here? (Just loading the URL doesn't do the job.)
Comment 3•17 years ago
The results are reported using dump(), so you'll need to start the test from the command line and make sure that browser.dom.window.dump.enabled is set to true in about:config. The results should show up in the console after the test finishes.
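The pref can also be set ahead of time rather than through about:config. A minimal sketch of doing that via a profile's user.js; the "./test-profile" path is hypothetical, so point it at a real profile directory before launching the browser from a terminal:

```python
# Sketch: enable the dump() pref ahead of time by writing it into a
# profile's user.js, instead of flipping it in about:config.
# "./test-profile" is a hypothetical path; use your real profile dir.
from pathlib import Path

profile = Path("./test-profile")
profile.mkdir(parents=True, exist_ok=True)

# browser.dom.window.dump.enabled is the pref named above; user.js
# entries are applied at every browser startup.
(profile / "user.js").write_text(
    'user_pref("browser.dom.window.dump.enabled", true);\n'
)
print((profile / "user.js").read_text(), end="")
```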
Comment 4•17 years ago
Thanks, Adam! With your instructions I just tried the test at http://www.mozilla.org/performance/test-cases/dhtml/runTests.html with yesterday's Minefield nightly (2007-07-19-04-trunk) on my dual 1GHz PowerPC G4 ... and had no problems:

Test Average Data
============================================================
colorfade: 1688 1456,1790,1736,1722,1735
diagball: 2138 2063,2139,2167,2153,2166
fadespacing: 2803 2834,2795,2755,2875,2756
imageslide: 520 468,527,538,539,527
layers1: 2688 2623,2651,2714,2678,2772
layers2: 43 43,43,42,44,43
layers4: 43 44,42,43,42,43
layers5: 2056 2106,2036,2034,2040,2062
layers6: 99 101,98,99,97,100
meter: 1374 1262,1446,1324,1502,1338
movingtext: 1385 1263,1292,1268,1805,1295
mozilla: 3365 3273,3312,3466,3385,3389
replaceimages: 2074 1829,2099,2093,2187,2161
scrolling: 5405 5353,5391,5390,5487,5403
slidein: 3114 3241,3048,3103,3035,3142
slidingballs: 3154 3076,3125,3143,3256,3172
zoom: 633 905,530,572,587,572
_x_x_mozilla_dhtml,1052

I'll try today's Minefield, SeaMonkey and Camino nightlies and report back again.
Comment 5•17 years ago
I ran the test at http://www.mozilla.org/performance/test-cases/dhtml/runTests.html on all of today's trunk nightlies, and again had no problems. There must be some additional factor(s) involved.

Minefield 2007-07-20-04-trunk
Test Average Data
============================================================
colorfade: 1720 1594,1801,1736,1735,1736
diagball: 2129 2167,2245,2177,1854,2203
fadespacing: 2798 2831,2795,2783,2694,2888
imageslide: 511 567,480,533,494,480
layers1: 2704 2682,2676,2765,2704,2693
layers2: 43 44,44,43,43,42
layers4: 44 44,44,43,44,43
layers5: 2109 2197,2129,2082,2067,2072
layers6: 101 99,101,100,102,101
meter: 1330 1292,1309,1348,1364,1336
movingtext: 1358 1349,1398,1356,1316,1369
mozilla: 3473 3547,3413,3508,3494,3403
replaceimages: 2128 1823,2161,2173,2281,2203
scrolling: 5532 5488,5496,5595,5477,5604
slidein: 3138 3291,3095,3108,3104,3092
slidingballs: 3282 2998,3545,3279,3334,3254
zoom: 695 906,587,666,651,667
_x_x_mozilla_dhtml,1067

Camino 2007-07-20-01-trunk
Test Average Data
============================================================
colorfade: 1697 1693,1702,1692,1699,1698
diagball: 2103 2092,2113,2103,2104,2103
fadespacing: 2784 2797,2791,2778,2778,2776
imageslide: 591 566,587,600,601,601
layers1: 2454 2442,2439,2456,2460,2471
layers2: 43 43,42,43,43,43
layers4: 42 43,42,44,41,42
layers5: 1914 1913,1910,1919,1913,1914
layers6: 97 96,97,97,97,98
meter: 2753 2755,2757,2754,2757,2742
movingtext: 1884 1870,1881,1896,1894,1880
mozilla: 3460 3468,3459,3463,3445,3463
replaceimages: 2804 2015,2505,3149,3166,3185
scrolling: 5198 5179,5161,5200,5221,5229
slidein: 3161 3089,3175,3179,3181,3179
slidingballs: 2899 2942,2861,2878,2895,2917
zoom: 1668 1578,1698,1684,1685,1696
_x_x_mozilla_dhtml,1190

SeaMonkey 2007-07-20-01-trunk
Test Average Data
============================================================
colorfade: 1749 1607,1819,1777,1750,1791
diagball: 2159 2003,2151,2215,2206,2219
fadespacing: 2824 2827,2824,2809,2823,2836
imageslide: 553 603,525,564,537,537
layers1: 2572 2258,2648,2652,2654,2650
layers2: 49 50,49,49,49,49
layers4: 49 49,48,48,50,49
layers5: 2103 2231,2063,2059,2075,2088
layers6: 109 111,108,107,108,109
meter: 1296 1269,1312,1325,1328,1246
movingtext: 1283 1154,1321,1347,1339,1255
mozilla: 3507 3530,3564,3352,3543,3548
replaceimages: 2248 1975,2294,2326,2314,2330
scrolling: 5543 5586,5520,5544,5528,5539
slidein: 3131 3296,3097,3097,3069,3098
slidingballs: 3432 3507,3327,3456,3471,3401
zoom: 688 842,645,698,631,623
_x_x_mozilla_dhtml,1092
Comment 6•17 years ago
> There must be some additional factor(s) involved.
My PowerPC G4's second CPU isn't one of them -- I disabled it and got
more-or-less the same results with Minefield 2007-07-20-04:
Test Average Data
============================================================
colorfade: 1734 1489,1904,1854,1704,1720
diagball: 2271 2107,2310,2348,2322,2270
fadespacing: 2903 2863,2933,3010,2802,2907
imageslide: 500 512,514,432,513,527
layers1: 2977 2874,2948,2906,3099,3059
layers2: 44 43,44,45,43,44
layers4: 44 47,43,42,43,43
layers5: 2191 2191,2184,2185,2193,2201
layers6: 104 105,103,102,104,108
meter: 2000 2025,1967,1882,2109,2018
movingtext: 1325 1191,1403,1381,1306,1342
mozilla: 3578 3607,3522,3523,3504,3733
replaceimages: 2232 1973,2309,2271,2313,2294
scrolling: 5814 5703,5761,5862,5898,5845
slidein: 3266 3538,3216,3193,3165,3219
slidingballs: 3490 3326,3545,3596,3505,3479
zoom: 888 908,881,816,886,951
_x_x_mozilla_dhtml,1141
Comment 7•17 years ago
Steven, this regressed between the 2007-07-17 and 2007-07-18 nightlies, so I think you'd need to test those to see if there was a regression (or am I missing something?).
Comment 8•17 years ago
> or am I missing something?

Yes. I don't need to compare the results of the 2007-07-17 and 2007-07-18 nightlies to see that I don't experience anything like the gross performance regression that Robert reported. And if the regression had disappeared between then and now, it'd still be very unlikely that either the change to gcc4 on PPC or my popup patch (bug 387164) had anything to do with it.

But I don't see the regression with any of the 2007-07-18 nightlies, either (Minefield 2007-07-18-04-trunk, SeaMonkey 2007-07-18-01-trunk and Camino 2007-07-18-01-trunk). And I see almost no difference between the performance of the Minefield 2007-07-17-04-trunk and 2007-07-18-04-trunk nightlies.

These two changes aren't _totally_ off the hook ... but they're pretty close to it. Robert needs to figure out what the "additional factors" that I mentioned in comment #5 might be. Then I'll try to use them to reproduce the regression on my system.
Comment 9•17 years ago
Minefield 2007-07-17-04-trunk
Test Average Data
============================================================
colorfade: 1515 1330,1527,1549,1619,1551
diagball: 1708 1727,1754,1692,1674,1693
fadespacing: 2472 2443,2490,2443,2499,2487
imageslide: 407 399,407,407,413,407
layers1: 2681 2635,2717,2646,2662,2744
layers2: 44 44,42,43,45,44
layers4: 43 43,42,44,42,43
layers5: 2081 2074,2049,2058,2152,2071
layers6: 101 101,100,102,100,102
meter: 1275 1290,1250,1276,1286,1273
movingtext: 1245 1241,1237,1283,1233,1229
mozilla: 2926 2891,2920,2950,2945,2925
replaceimages: 2102 1800,2125,2151,2166,2268
scrolling: 5688 5685,5637,5719,5748,5652
slidein: 2723 2641,2741,2732,2739,2760
slidingballs: 2751 2746,2728,2723,2836,2722
zoom: 626 672,609,599,601,650
_x_x_mozilla_dhtml,981

Minefield 2007-07-18-04-trunk
Test Average Data
============================================================
colorfade: 1659 1476,1761,1735,1601,1721
diagball: 2223 2293,2241,2240,2123,2216
fadespacing: 2807 2794,2776,2818,2830,2816
imageslide: 511 551,500,487,519,499
layers1: 2682 2578,2700,2694,2719,2721
layers2: 43 44,43,42,43,43
layers4: 43 43,42,43,43,43
layers5: 2190 2277,2173,2173,2153,2175
layers6: 99 97,99,100,101,100
meter: 1310 1309,1311,1336,1297,1297
movingtext: 1323 1136,1303,1423,1359,1395
mozilla: 3491 3516,3521,3535,3452,3429
replaceimages: 2155 1899,2167,2201,2296,2214
scrolling: 5508 5392,5522,5563,5480,5582
slidein: 3220 3311,3169,3182,3263,3173
slidingballs: 3274 3393,3321,3241,3176,3238
zoom: 716 979,701,634,607,660
_x_x_mozilla_dhtml,1068
Reporter
Comment 10•17 years ago
And this comment #9 shows that you apparently also are seeing a regression, even though it is smaller...
Comment 11•17 years ago
> And this comment #9 shows that you apparently also are seeing a
> regression, even though it is smaller...

Fair enough. I didn't notice it at first. But I don't know if it's large enough to be visibly reproducible in continued tests. So please try to figure out why your regression was so much larger.

The bottom line (of course) is that I need a pretty reliable baseline against which to measure what happens when I back out my popup patch (bug 387164), or try compiling with GCC 3 instead of GCC 4. (I haven't yet had time to do an apples-to-apples comparison with either of these factors.)
Comment 12•17 years ago
I redid my test of Minefield 2007-07-17-04-trunk and 2007-07-18-04-trunk, and (more or less) confirmed a 7-8% regression for the whole dhtml test at http://www.mozilla.org/performance/test-cases/dhtml/runTests.html. But I also did other tests that eliminate the switch from gcc3 to gcc4 and my popup patch (bug 387164) as possible causes of this regression.

I built four different copies of Minefield -- all starting from source that I downloaded via CVS at around 7:00 PDT on 2007-07-16, and all using the following .mozconfig file:

. $topsrcdir/browser/config/mozconfig
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-firefox
ac_add_options --enable-optimize
ac_add_options --disable-tests

For my first build (which I call "gcc3") I left the source unchanged, but ran gcc_select and chose 3.3. For the second build ("gcc4") I applied the patch for bug 385221 and ran gcc_select to choose 4.0. For the third ("gcc3 popup") I added the patch for bug 387164 and ran gcc_select to choose 3.3. For the fourth ("gcc4 popup") I just ran gcc_select to choose 4.0.

All my results fall within a 2-3% range, and it's actually "gcc4 popup" that gets the best results. (In all my tests I ran Minefield, quit it, and then ran it again, to avoid any possible influence from the infamous "double-bounce". Then the first thing I did was to run the test at http://www.mozilla.org/performance/test-cases/dhtml/runTests.html.)
Comment 13•17 years ago
Side note, probably unrelated: In all the Minefield versions I tested (the nightlies and the ones I built) I noticed that the movements in many of the dhtml tests were quite jerky (not smooth). This doesn't happen in Camino, or in any of the 1.8 branch browsers.
Reporter
Comment 14•17 years ago
Do you have numbers for the branch on your machine for comparison? phlox gets a Tdhtml number similar to before the regression on both branches as well, even now:

1.8 branch from today:
Test Average Data
============================================================
colorfade: 1392 1374,1372,1395,1412,1407
diagball: 1660 1657,1660,1659,1661,1661
fadespacing: 2302 2292,2305,2301,2304,2310
imageslide: 405 397,409,405,407,408
layers1: 2674 2691,2659,2685,2653,2680
layers2: 56 57,56,55,55,55
layers4: 55 56,55,55,55,55
layers5: 1774 1767,1790,1781,1748,1783
layers6: 144 144,144,145,146,142
meter: 1178 1188,1174,1174,1178,1175
movingtext: 979 986,989,976,971,971
mozilla: 2777 2697,2770,2791,2851,2778
replaceimages: 1115 1071,1106,1149,1137,1114
scrolling: 4116 4155,4101,4085,4120,4120
slidein: 2562 2775,2495,2511,2517,2511
slidingballs: 961 1065,925,927,959,928
zoom: 672 668,682,673,668,671
_x_x_mozilla_dhtml,878

1.8.0 branch:
Test Average Data
============================================================
colorfade: 1394 1352,1393,1411,1408,1406
diagball: 1660 1633,1678,1661,1663,1665
fadespacing: 2313 2297,2327,2315,2316,2309
imageslide: 404 388,405,408,411,407
layers1: 2667 2689,2664,2689,2643,2650
layers2: 58 60,58,58,57,59
layers4: 58 59,58,59,57,59
layers5: 1766 1761,1762,1772,1744,1791
layers6: 148 150,149,150,145,147
meter: 1165 1179,1158,1155,1169,1166
movingtext: 988 967,991,1001,993,988
mozilla: 2777 2708,2793,2781,2831,2772
replaceimages: 1283 1266,1284,1286,1286,1292
scrolling: 4078 4131,4057,4084,4074,4046
slidein: 2572 2833,2501,2517,2499,2512
slidingballs: 966 1102,948,944,919,918
zoom: 676 692,672,671,673,670
_x_x_mozilla_dhtml,892
Reporter
Comment 15•17 years ago
Interesting... the cycle that first went orange looks like it still used gcc-3.3 for building, even though the patch to use gcc4 was already in! See http://tinderbox.mozilla.org/showlog.cgi?log=SeaMonkey-Ports/1184710500.19343.gz&fulltext=1

Anyways, it didn't generate Tdhtml numbers from that cycle, as it hit the timeout. A few cycles later I clobbered the build (it has used gcc4 since then), and a few cycles after that I increased the timeout enough to get numbers - so the regressed numbers actually are gcc4 numbers.

The worst regressions are in the fadespacing, meter, movingtext, mozilla, slidein, and zoom tests - is there anything those have in common that is not tested in others? On the other hand, layers2, layers4, layers6, and scrolling did not regress at all - anything special about those?
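The grouping above can be checked numerically. A minimal sketch (using only the per-test averages quoted in comment #0, not any official tooling) that ranks the tests by their after/before ratio:

```python
# Sketch using the per-test averages quoted in comment #0 (before and
# after the regression) to rank tests by their slowdown ratio.
before = {"colorfade": 1427, "diagball": 1734, "fadespacing": 2401,
          "imageslide": 409, "layers1": 1891, "layers2": 32, "layers4": 32,
          "layers5": 1250, "layers6": 72, "meter": 1197, "movingtext": 1147,
          "mozilla": 2899, "replaceimages": 2083, "scrolling": 4162,
          "slidein": 2669, "slidingballs": 1359, "zoom": 645}
after = {"colorfade": 2009, "diagball": 2739, "fadespacing": 24015,
         "imageslide": 2093, "layers1": 3257, "layers2": 32, "layers4": 31,
         "layers5": 1915, "layers6": 73, "meter": 25053, "movingtext": 5313,
         "mozilla": 21642, "replaceimages": 2940, "scrolling": 4225,
         "slidein": 14352, "slidingballs": 3368, "zoom": 6841}

# Sort descending by slowdown ratio.
ratios = sorted(((after[t] / before[t], t) for t in before), reverse=True)
for ratio, test in ratios:
    print(f"{test}: {ratio:.2f}x")
```

By this measure meter (about 21x), zoom and fadespacing (about 10x) lead, while layers2, layers4, layers6 and scrolling stay essentially flat, matching the two groups named above; imageslide also regressed by more than 5x.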
Comment 16•17 years ago
> The worst regressions are in fadespacing, meter, movingtext,
> mozilla, slidein, and zoom tests - is there anything those have in
> common that is not tested in others?

If you're asking me, I don't know :-)

Do you still get the terrible results (on phlox) that you reported in comment #0? I wonder if your builds have some components compiled with gcc3 and others compiled with gcc4. What results do you get from running gcc_select on phlox? (If it's anything other than "4.0", this may explain your troubles.)
Reporter
Comment 17•17 years ago
phlox:~ robert$ gcc_select
Current default compiler:
gcc version 4.0.1 (Apple Computer, Inc. build 5247)

And yes, the results are still the same, see http://tinderbox.mozilla.org/showbuilds.cgi?tree=SeaMonkey-Ports
Comment 18•17 years ago
What does phlox's .mozconfig file look like?
Comment 19•17 years ago
> What does phlox's .mozconfig file look like?
Never mind. I can see it in phlox's latest log.
Comment 20•17 years ago
How, exactly, do you run the dhtml test (and the other two tests) on phlox? I'd like to try reproducing your procedure, and see if this makes any difference. Where can I download the builds that phlox produces?
Reporter
Comment 21•17 years ago
The latest phlox build is always in http://ftp.mozilla.org/pub/mozilla.org/seamonkey/tinderbox-builds/phlox-trunk/

The tests are run via tinderbox, which is launched from a terminal window. The exact command that calls the test should be in the log; the call is created and launched through the tinderbox scripts, see http://mxr.mozilla.org/mozilla/source/tools/tinderbox/build-seamonkey-util.pl#2853
Comment 22•17 years ago
Thanks for the info. I just downloaded the current phlox tinderbox build (from the link you gave me) and ran the dhtml tests on it (http://www.mozilla.org/performance/test-cases/dhtml/runTests.html):

Test Average Data
============================================================
colorfade: 1729 1577,1750,1765,1790,1764
diagball: 2168 2115,2192,2178,2204,2152
fadespacing: 2820 2840,2852,2824,2757,2825
imageslide: 543 548,542,536,550,537
layers1: 2728 2756,2711,2722,2723,2729
layers2: 48 47,49,49,49,48
layers4: 48 49,49,48,48,48
layers5: 2133 2256,2098,2100,2100,2112
layers6: 108 108,108,108,106,108
meter: 1315 1369,1343,1298,1259,1304
movingtext: 1309 1272,1308,1308,1348,1308
mozilla: 3548 3504,3566,3578,3525,3566
replaceimages: 2973 2668,3003,3048,3066,3081
scrolling: 5647 5624,5628,5642,5672,5668
slidein: 3108 3285,3071,3057,3070,3057
slidingballs: 3428 3365,3427,3457,3448,3444
zoom: 626 793,617,595,563,564
_x_x_mozilla_dhtml,1108

As you'll see, I don't get the terrible results you reported in comment #0 and are still getting. I now think the difference between my 7%-8% regression and your gross regression has something to do with how phlox is set up (or maybe is caused by a flaw in the tests themselves). The 7%-8% regression still needs to be explained, though. I'll keep working on it.
Reporter
Comment 23•17 years ago
Maybe something just makes a Mac Mini with 10.4.10 worse than your machine somehow, and it just reacts worse to the same changes. It's a pretty stock installation; after I received the machine I only installed the build environment and updates, nothing else.
Comment 24•17 years ago
Here's something more to try: Run 'top' and look at the usage statistics for the WindowServer daemon. Two people have reported WindowServer resource usage going through the roof after running recent Camino nightlies for a few hours, one of them at bug 389683.

I've now been running today's phlox SeaMonkey nightly for about half an hour, and here are my current statistics on WindowServer (from 'top -l 1'):

60 WindowServ 0.4% 0:20.68 3 227 456 5.11M 28.2M 31.3M 224M

I'll continue running the phlox SeaMonkey nightly for the rest of the day, and load some Flash-heavy pages into it.
Reporter
Comment 25•17 years ago
No build on phlox ever runs more than a few minutes, so anything that only comes up after hours of running can't affect it. This might be connected in some way, though, perhaps with all of these being symptoms of the same problem. phlox currently has

60 WindowServ 0.0% 75:28.33 2 139 121 796K 4.90M 4.71M 130M

which is all in all quite moderate (note the machine has been up for 22 days now, continuously compiling and testing SeaMonkey builds all that time; that's where the 75:28.33 number comes from). I need to watch top during a branch and a trunk test run and compare those numbers; that might give us more insight, though.
Comment 26•17 years ago
> I need to watch top during a branch and a trunk test run and compare
> those numbers,
Please do so ... but I already suspect you won't see anything.
I just tried this during a dhtml test (on the phlox SeaMonkey
nightly), and the WindowServer stats hardly budged.
Comment 27•17 years ago

phlox appears to be a PPC mini, which means it's a slower Mac (good for running perf tests!). It would have been interesting to see if boxset (Camino's 933 MHz G4 trunk perf box) had a similar regression during this time period, but unfortunately that was when boxset was down for upgrade to 10.4 (and boxset hadn't been running tests previously, so we can't even say whether or not it jumped).

Running perf tests on new, top-of-the-line Intel Xserves (which is how the Firefox tree works) isn't very useful in spotting perf regressions that most users will experience. :(
Comment 28•17 years ago
> Running perf tests on new, top-of-the-line Intel Xserves (which is
> how the Firefox tree works) isn't very useful in spotting perf
> regressions that most users will experience. :(
True enough. And it's particularly bad for _this_ problem, which is
PPC-only.
(I see no dhtml performance regression at all (not even a fraction of
one percent) on my MacBook Pro.)
Comment 29•17 years ago
(Following up comment #24) For what it's worth, I've now had the phlox SeaMonkey nightly open for several hours, with several tabs open (including one with a real _nasty_ load of Flash objects (http://espn.go.com/), and another with a single very CPU-intensive Flash object (http://rogerdean.com)). I've used it throughout the day for various tests and things.

The last thing I did was to close all the tabs but one and rerun the dhtml tests there. The results were quite a bit worse than when I'd run them just after I started the phlox SeaMonkey nightly (overall 16%-17% worse) -- I'm not sure what this means.

After all this the load on my WindowServer has hardly changed:

60 WindowServ 0.0% 10:15.03 3 238 447 3.75M 29.8M 31.3M 223M
Reporter
Comment 30•17 years ago
WindowServer looks pretty normal during trunk Tdhtml runs. There is a "slight" difference in data on the seamonkey-bin process between trunk and branch after a Tdhtml run.

After Tdhtml tests on trunk:

PID COMMAND %CPU TIME #TH #PRTS #MREGS RPRVT RSHRD RSIZE VSIZE
1016 seamonkey- 0.0% 1:56.74 7 100 501 29.1M 28.8M 46.6M 1.20G

After Tdhtml tests on 1.8 branch:

PID COMMAND %CPU TIME #TH #PRTS #MREGS RPRVT RSHRD RSIZE VSIZE
27384 seamonkey- 0.0% 1:37.13 5 84 325 13.0M 25.9M 27.5M 1005M

Data on trunk/branch builds is nearly the same during other tests, so this is not Tdhtml-specific, and it is the same shortly after startup, so those sizes are pretty consistent differences between branch and trunk.
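The size of the gap is easier to see as percentages. A minimal sketch (not from the original report) that converts the top-style sizes quoted above and quantifies the trunk-vs-branch difference for seamonkey-bin:

```python
# Sketch: convert top(1)-style sizes (e.g. "46.6M") quoted above into
# bytes and quantify the trunk-vs-branch gap for the seamonkey-bin
# process after a Tdhtml run.
def to_bytes(size):
    """Turn a top-style size like '46.6M' into a byte count."""
    units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}
    return int(float(size[:-1]) * units[size[-1]])

# Columns copied from the two top lines quoted above.
trunk = {"RPRVT": "29.1M", "RSHRD": "28.8M", "RSIZE": "46.6M"}
branch = {"RPRVT": "13.0M", "RSHRD": "25.9M", "RSIZE": "27.5M"}

for col in trunk:
    growth = to_bytes(trunk[col]) / to_bytes(branch[col]) - 1
    print(f"{col}: trunk is {growth:+.0%} vs branch")
```

By this measure trunk's resident size (RSIZE) is roughly 69% larger than branch's, with private memory (RPRVT) more than doubled.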
Reporter
Comment 31•17 years ago
Interestingly, this decreased slightly to numbers between 2000ms and 2100ms with that checkin window: http://bonsai.mozilla.org/cvsquery.cgi?module=MozillaTinderboxAll&date=explicit&mindate=1190183640&maxdate=1190213219
Reporter
Comment 32•17 years ago
Trunk perf numbers are still worse than branch for SeaMonkey on phlox, but I have given up hope that we'll find what's causing it, so I'm marking this WONTFIX.
Status: NEW → RESOLVED
Closed: 17 years ago
Resolution: --- → WONTFIX