Closed Bug 180241 Opened 22 years ago Closed 8 years ago

lower paint delay timeout from 1200ms to 250ms

Categories

(Core :: Layout, defect, P1)

Tracking

RESOLVED FIXED
mozilla1.3beta

People

(Reporter: kerz, Assigned: dbaron, NeedInfo)

References

Details

(Keywords: embed, perf, topembed-, Whiteboard: [wgate])

Attachments

(3 files, 1 obsolete file)

In talking to hyatt on IRC, he told me our current paint suppression in Mozilla
is set at 1200ms, whereas Chimera's is set at 500 and Phoenix's is set at 250.
It seems like Mozilla's is set awfully high compared to its kids' times and
should be brought down.  This will hurt our time on the iBench test, I think,
but it will speed up perceived pageload times, which seems like a good
tradeoff.
Alias: paint
Is the current jrgm page load test a good test to measure the change in
performance?  If so, can we try a patched build with the page loader?
I don't think this will affect the pageload stuff that much; I think the main
hit will be iBench, because there is no pause between the loading of pages.
The numbers I remember are:

Win9X: 3000 ms
WinNT: 750 ms
Everything else: 1200 ms (selectable by pref)

The Windows numbers were tuned based on jrgm's page loader test.  Perhaps it's
time to retune the timeout on other platforms as well?  It might also be worth
checking whether a one-size-fits-all value is good enough or whether this
should be a platform-specific value.

Another thing: is the timeout value really the problem, or does Moz need to be
more aggressive at unsuppressing before hitting the timeout?
Actually, I believe the knob that this bug refers to is PAINTLOCK_EVENT_DELAY
in nsPresShell.cpp, while the knob that is 3000 on win9x and 750 on winNT is
(WIN9X_)?PAINT_STARVATION_LIMIT in plevent.c. They're separate, but tightly
related parameters (IIRC).

But you're right that these may not be 'one size fits all platforms' values.
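
To make the distinction concrete, here is a minimal, self-contained C++ sketch
of the two knobs as they are described in this thread: a per-document
first-paint suppression delay (PAINTLOCK_EVENT_DELAY, overridable by the
nglayout.initialpaint.delay pref, as noted later in the thread) and an
event-queue cap on how long pending paints may be starved
(PAINT_STARVATION_LIMIT). This is an illustration of the concept only, not the
actual nsPresShell.cpp / plevent.c code, and the helper function names are
made up.

// Illustrative only - NOT Mozilla code. The constant names and default values
// are taken from the comments in this bug; the logic is a simplified model.
#include <chrono>
#include <iostream>
#include <optional>

using Millis = std::chrono::milliseconds;

// Compiled-in default for first-paint suppression (the nsPresShell.cpp knob).
constexpr Millis PAINTLOCK_EVENT_DELAY{1200};

// Event-queue cap on how long pending paints may be starved (the plevent.c
// knob); per-platform values as quoted above.
constexpr Millis PAINT_STARVATION_LIMIT_WIN9X{3000};
constexpr Millis PAINT_STARVATION_LIMIT_WINNT{750};

// Effective first-paint delay: a user pref (nglayout.initialpaint.delay)
// overrides the compiled-in default when present.
Millis EffectivePaintDelay(std::optional<Millis> prefValue) {
  return prefValue.value_or(PAINTLOCK_EVENT_DELAY);
}

// Per-document decision: should we still suppress painting for a document
// whose load started `elapsed` ago?
bool ShouldSuppressFirstPaint(Millis elapsed, std::optional<Millis> prefValue) {
  return elapsed < EffectivePaintDelay(prefValue);
}

// Queue-wide decision, independent of the above: must the event queue force a
// pending paint through even though other events are still queued?
bool MustProcessPendingPaint(Millis paintPendingFor, bool isWin9x) {
  const Millis limit =
      isWin9x ? PAINT_STARVATION_LIMIT_WIN9X : PAINT_STARVATION_LIMIT_WINNT;
  return paintPendingFor >= limit;
}

int main() {
  // With the proposed pref of 250ms, a page that has been loading for 300ms
  // is no longer paint-suppressed; a paint pending 800ms on WinNT must run.
  std::cout << std::boolalpha
            << ShouldSuppressFirstPaint(Millis{300}, Millis{250}) << '\n'   // false
            << MustProcessPendingPaint(Millis{800}, /*isWin9x=*/false) << '\n';  // true
  return 0;
}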

Priority: -- → P1
Target Milestone: --- → Future
QA Contact: ian → kerz
Keywords: perf
Keywords: mozilla1.3
Whiteboard: [wgate]
What are the current plans to approach this problem?
Blocks: 71668
Target Milestone: Future → ---
Mozilla feels 100% faster to me with a DSL connection and the paint suppression
set to 250.
The default seems to be too high, and we should really change it to 500 or 250.
This sounds like something we can easily tune to get some perceived performance
boost. Perhaps we need to have the user set a pref for their connection speed,
and set this (and some other prefs) to appropriate values based on that?
BTW: the default value of that pref in recent Phoenix nightlies and the 0.5
release is 250ms, in contrast to Mozilla, where it's still 1200ms.
http://lxr.mozilla.org/mozilla/search?string=nglayout.initialpaint.delay says
that the current value is 250 for both Phoenix and Chimera, I think.  Perhaps we
should use 250?
That'll be fine - according to user feedback, that sounds like a good choice.
I can't test this, since testing performance over a remote X connection doesn't
work very well.  Does this make a difference?
The same for me as comment #6!
Keywords: embed
Attachment #110150 - Flags: superreview+
Taking.
Assignee: other → dbaron
Target Milestone: --- → mozilla1.3beta
Fix checked in, 2002-12-26 13:03 PDT.
Status: NEW → RESOLVED
Closed: 22 years ago
Resolution: --- → FIXED
This fix doesn't seem to help performance; it actually made the numbers
slightly worse in most cases.  Mac Tp got hit the hardest!

We should back out this patch, and figure out the most optimal value, with more
analysis.  Thanks!

tinderbox btek (linux)
------------------------
tp before: 1061 ms
tp after : 1093 ms    ( + 32 ms, or 3.02% )

tinderbox comet (linux)
------------------------
tp before: 1460 ms
tp after : 1464 ms    ( + 4 ms )
txul before: 465 ms
txul after : 468 ms   ( + 3 ms )

tinderbox luna  (linux)
------------------------
tp before: 1165 ms
tp after : 1184 ms    ( + 19 ms, or + 1.63% )
txul before: 1165 ms
txul after : 1156 ms  ( - 9 ms )
ts before : 3209 ms
ts after  : 3212 ms   ( + 3 ms )

tinderbox darwin (mac X)
-------------------------
tp before: 665 ms
tp after : 737 ms     ( + 72ms, or + 10.83% )
ts before: 3532 ms
ts after : 3575 ms    ( + 43ms, or + 1.22% )

tinderbox beast (win2k)
-------------------------
ts before: 1391 ms
ts after : 1431 ms    ( + 40ms, or + 2.87% ) 
txul before: 343 ms
txul after : 344 ms   ( + 1ms )
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Just because the page finishes loading later doesn't mean the perceived
performance is slower.
I'd like to leave it this way for a few days and see what people think of it. 
(I've heard many comments that Phoenix and Chimera seem to load pages faster --
this is probably why.)

Also, it would be useful to get feedback on the difference in perception of
speed between 250 and 500.  Any comments?
I agree with dbaron. The pageload numbers don't necessarily reflect user
perception of speed, particularly on slower links. Nor do they take into account
program responsiveness to user events during pageload, which is another reason
we may want to tune for apparently worse pageload numbers.
more data from testerbox page:

tinderbox fuego (linux)
-------------------------
tp before: 2953 ms
tp after : 3007 ms     ( + 54ms, or + 1.83% )
txul before: 3012 ms
txul after : 3042 ms   ( + 30ms, or + 1%)
ts before: 6397 ms
ts after : 6424 ms     ( + 27ms, or + 0.42% )

tinderbox maple (linux) 
-------------------------
tp before: 1154 ms
tp after : 1181 ms     ( + 27ms, or + 2.34% )
txul before: 1147 ms
txul after : 1154 ms   ( + 7ms )
ts before: 3235 ms
ts after : 3246 ms     ( + 11ms )

tinderbox mecca (linux)
-------------------------
tp before: 2156 ms
tp after : 2216 ms     ( + 60ms, or + 2.78% )
txul before: 2149 ms
txul after : 2154 ms   ( + 5ms )
ts before: 5984 ms
ts after : 6000 ms     ( + 16ms )

tinderbox rheeeet (linux)
-------------------------
txul before: 11416 ms
txul after : 11420 ms  ( + 4ms )
ts before: 27877 ms
ts after : 28532 ms    ( + 655ms, or + 2.35% )

tinderbox darwin (mac X)
-------------------------
txul before: 1609 ms
txul after : 1615 ms   ( + 6ms )
ts before: 7034 ms
ts after : 7043 ms     ( + 9ms )
Okay, I'm done with getting all statistical data in the bug.

I think the perceived performance improvement is good, if it is really
noticeable.  My concern is that we should try to determine a value (through
more analysis!) and find one that can give us both perceived and actual
improvement, which is the step missing here...

The hard reality is that people run benchmarks to determine where we are, and
the numbers determine the score.  I have no problem taking this fix if we have
proven ourselves.  There could be a value out there that gives us both?

I think the vast majority of people will rate Mozilla on its perceived
performance (I do agree with David & Simon here).
I suggest leaving 250ms in for a testing period; then we might try out a higher
value to get feedback and tune this value (to incorporate both perceived and
purely statistical measured performance).
Switching from Word to Mozilla (or Excel to Mozilla) seems a lot snappier now
that I use the Win32 nightly from 28 December 2002 - I'm on Win2000 Pro SP3.

Build ID:
Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.3b) Gecko/20021227

I also find general operation, i.e. switching tabs with rendered pages and
loading pages, a bit smoother.  So I think this area could bear improvement.
It might make sense to fix the following bugs before doing more fine-tuning:

bug 162122 stop paint suppression when vertical scrollbar appears
bug 162123 stop paint suppression when html finishes loading (don't wait for 
images)
Forget those pageload times; what matters is the time from the click until you
see the page, so this delay should be part of that time (which it isn't!).
I've played around comparing
user_pref("nglayout.initialpaint.delay", 250);
and
user_pref("nglayout.initialpaint.delay", 5000);
(at least from the code it seems like nglayout.initialpaint.delay is the pref
that overrides  PAINTLOCK_EVENT_DELAY). 
I have a K6-2/500 and a 768kbps connection, and the difference is hardly
noticeable and could be in my imagination only.  Tab switching and the back
button seem a bit snappier with 250.  Following links could be, too, but this could just
be wishful thinking. When loading many tabs in the background (I like to
middle-click lots of links on Slashdot) Mozilla's responsiveness seems to be
worse with 250 than with 5000, though.
You don't see it?  Are you sure you changed it for the right profile? :)
Surfing a web forum (like heise.de) is WAY faster.
Yes, *perceived* performance (under Win32/XP) has increased considerably --
especially on pages using DHTML menus, and both <iframe> and <applet> tags. 
Mozilla seems nice and responsive now on such pages.  I honestly noticed this
"performance boost" while browsing using last night's build.  Being curious, I
wanted to understand what caused this performance gain...and here I am.  Thanks!
If I understand correctly, tp measures
1. The amount of time it takes to load a sequence of pages without pausing, with
the onload event of each page triggering the load of the next.

Can we measure times more closely related to useful/perceived speed?  For example:
2. The amount of time before onload is called.
3. The amount of time before the first screenful stops changing.
4. The amount of time before text in the first screenful stops moving.
5. The amount of time before anything (or any text) appears in the content area.
In my experience, any value between 250ms and 500ms will greatly improve
perceived performance.  Values below 250ms cause complex pages to be reflowed
multiple times and as such are not advisable.
Answer to comment 27: I have only one profile, so no way to get it wrong. I even
checked with about:config that the value has been set. Today I downloaded and
installed 20021229 where the 250 ms is in the source code (I checked this, of
course). And of course I removed the nglayout pref from both user.js and
prefs.js (without Mozilla running) and I checked with about:config that the pref
was gone. Guess what, my initial observation is confirmed. My system does NOT, I
repeat NOT, show any significant improvement (not on heise.de or slashdot.org
anyway). There seems to be a very slight speed gain but I wouldn't notice it if
I wasn't looking for it and it might just as well be in my imagination.
Sorry to burst your bubble, but apparently this is no magic bullet. It's great
if it gives some people a performance boost, but my UA string will remain set to
"Slozilla" :-)
Opinion - at least some people seem really happy with the change, i.e.
starting paints sooner, at the expense of slightly slower completion.  On the
other hand, as Cathleen says, it will be a 'ding' to standard benchmark
results.  The deeper problem here is having one fixed paintdelay time applying
to everything.  It's a good idea, but overly simplistic in implementation.  By
that I mean, typical real usage will often involve a mix of cache loads and
network loads.  I suspect the perceived performance benefit may be largely due
to very quick rendering of cached content, rather than 'waiting around' for
the paintdelay time to expire and/or completion of network stream content
parts.  There are probably also cases of real benefit in elapsed-time
performance: cached content can render during time otherwise spent doing
nothing much except waiting for network stream content.

I guess I am saying this... I think the 'paintdelay default change' is
important because it 'points toward' a performance fix... but simply reducing
the paintdelay constant value is not the best fix for this problem... just my
opinion.

  
It occurs to me the purpose/benefit of paintdelay may not be clear to everyone;
the intent of it is to reduce the amount of time spent in unproductive layout
and rendering of woefully incomplete content... until such time that a
'meaningful amount' of layout and content is available.  Watching frames being
constructed and contents dancing around as layout 'evolves' is costly to
performance.

Also, paintdelay reduces and hides Mozilla overheads in handling small images,
i.e. handling image frames and placeholder gifs.  Small gifs are usually just
painted in final form rather than with several paints for frame, placeholder,
then final form.

Thirdly, the paintdelay often allows enough time to establish the existence
or non-existence of scrollbars.  The significance of this is that scrollbars,
when they become required, reduce the visible window area and scrollable
window area.  All content previously laid out and rendered needs to be
re-laid out and re-rendered under the new window size when scrollbar state
changes.

Fourth, at least, is to allow Mozilla a while to fetch and parse content
without contention from 'user animations' or whatever CPU-intensive code may
be included in the page content.

Fifth (which may be fixed now?) is (or was?) the way Mozilla paints partial
images: each repaint of partial image content, as it arrives, re-paints all
available image content, i.e. the first paint might paint band 1, the second
paint paints bands 1 and 2, the third repaint paints bands 1, 2, 3, and so
forth.  Image data ends up going through many unproductive paints.

These are at least some of the things that paintdelay helped.
Sam
Reconsidering myself... actually a lot of things (things that are in obtuse
ways related to paintdelay) have been fixed since the number 1200 was somehow
decided a long time ago... so given NOW, a smaller number is probably useful,
better, and not very costly.  I think maybe 250 is too small, however?  Taking
a 3%-11% hit in benchmarks is hard.  Maybe 500 ms is a better compromise?

(Still, I think the problem/opportunity is deeper than a change to the delay
constant value.)
Some small tests with different settings
Clearly 1200 is too high for user perception, but 250 leads to reflows and bad
benchmarks. The obvious approach should be to try various intermediate values
(400 to 700) and find the sweet spot.

Perhaps a PRNG attached to the nightly build system?  :)
I would like to see benchmark results for 500ms.

re: comment 29, Tp is a carefully constructed median average (I think;
jrgm is the Tp master) of individually-timed pageloads.  There's some handshaking
going on to time this, so it should be (and is) fairly repeatable.  Look at
the "raw data" link on any Tp graph to see the raw data before we munge
a Tp value out of it.
Re: comment 29

point 1 is mostly right, except it does pause (default 1 second) between page
loads.  Specifically what is measured is the time from when the load of "page
N+1" is initiated in "page N" and when the onload event is fired for "page
N+1".

point 2, then, is the same thing as point 1, no?

For point 3 and 4, I don't know if there is a way to actually retrieve those
values from Gecko (in C++ code).

Point 5, I guess, would be the time of the first paint, although I don't
really know if that would actually map to what an end-user perceives as
"better".

The overall Tp number is, as mcafee notes, the average of the median time to
load each page. (Actually, it also reports the overall average as well; median
is just a way to toss out the occasional aberrant result). It is an arbitrary
measure, and probably too heavily weighted to 'images already in cache'
loadtimes. But we will get measured on the similar iBench test, so we should
make the right tradeoff between this arbitrary number and giving the end-user
the right "feel".

You can "study for the test", and paint suppression is (partially) doing that,
although it is a real gain to the degree that it suppresses paints that
end-users don't really need or want to see.

My own perception is that 250 is too low, but then I'm looking at this on a
high-speed connection. It would be good to get some measurements, and end-user
comments, based on a setting of 500 ms.
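
To make the Tp aggregation jrgm describes concrete, here is a small,
self-contained C++ sketch that computes a Tp-like number the same way: take
the median of each page's repeated load times (tossing the occasional aberrant
run), then average those medians.  The timings in main() are made up; this is
an illustration of the calculation, not the actual page-loader harness.

// Illustrative only - not the real Tp harness.
#include <algorithm>
#include <iostream>
#include <vector>

// Median of one page's repeated load times (ms). Taking the median tosses out
// the occasional aberrant run, as described above.
double Median(std::vector<double> runs) {
  std::sort(runs.begin(), runs.end());
  const size_t n = runs.size();
  return n % 2 ? runs[n / 2] : (runs[n / 2 - 1] + runs[n / 2]) / 2.0;
}

// Tp-style aggregate: the mean of the per-page medians.
double TpLikeAggregate(const std::vector<std::vector<double>>& perPageRuns) {
  double sum = 0.0;
  for (const auto& runs : perPageRuns) sum += Median(runs);
  return sum / perPageRuns.size();
}

int main() {
  // Hypothetical timings for three pages, five runs each (ms).
  std::vector<std::vector<double>> runs = {
      {980, 1010, 995, 2400, 1002},   // the aberrant 2400ms run is tossed out
      {430, 445, 440, 438, 450},
      {1600, 1590, 1610, 1605, 1598},
  };
  std::cout << "Tp-like aggregate: " << TpLikeAggregate(runs) << " ms\n";
  return 0;
}
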
Cool, I didn't know Tp paused between pages.  In that case, 2 is the same as 1.

3 and 4 are probably the hardest to measure, but if I'm right that they're more
closely related to useful/perceived speed than Tp, it might be worth it to
measure them in addition to measuring Tp.
By the way, if we could measure 3 and 4 for other browsers, then it wouldn't be
as important to do well on the ibench test, because we could say "but we do much
better than IE on a test of when the first screenful of a page stabilizes".
My impression after a while of normal usage on 512K and 2MB connections is that
250ms is a tiny bit too short (whereas 1200ms was way too long). I think 500ms
would be a good compromise. Of course we can try to find an even better value
somewhere between these two, but I don't believe it would have much effect on
the perceived speed.
Hi All!

First of all many thanks for this awesome performance improvement within the
last 1.3b (nightly builds)!!!

TO POINT IT OUT CLEARLY: Mozilla/Gecko now seems to be much faster than M$
Internet Exploiter and Opera!

So I'm surprised that you at Mozilla/Netscape worry about engineering
benchmarks that nobody outside takes note of.

Mozilla relies mostly on its community, and we should use the chance to beat
the 'Root of All Evil' in the field of user perception.  Where else could this
be done?

(I don't know anyone who uses Mozilla for benchmark reasons, nor have I seen
any in a magazine; I only see the market share reports.)


BTW:
I'm here at work on a 2MB connection, and this may be less significant on
slower connections (this could be determined in the setup and preferences).

This could be most significant on W2K & Co.  Good, so this could hurt the dark
side all the more.


END:
Don't worry about this little melodramatic appeal.  I was a little surprised
myself.

Regards, Jan
I wonder how machine (CPU, RAM-speed etc.) dependent this is. I'd expect higher
values to be better (overall) on slow machines (less recalculations) and lower
values to be much better on faster computers (no unnecessary delays). The
problem is, for what machines do we want to finetune? Or should we try to
measure the machine-speed somehow once and then set the value accordingly etc.?
It just seems a bit weird to set a value in real-time units when the things
taking place happen, to a large degree, on a machine-dependent time scale and
only the user interaction is measurable in seconds.
Just played around with the
user_pref("nglayout.initialpaint.delay", 100);
setting: Whooohooo! :D 

For me reducing this timer made Mozilla approach IE's speed, even on complex
(forum) pages. Really fantastic. The same is true on pages that are using slow
banner servers. In the old situation you had to wait for 1.2 seconds just
because some silly banner couldn't be loaded immediately.

BTW, I also tried setting this to 50ms; that was even snappier, but it gives a
kind of jumpy, IE-like feeling to the browser. IMO this is really a great
performance improvement. IMO Mozilla at this point only had two major drawbacks: 
- (perceived) page rendering is/was slower than IE's
- the large amount of memory that it requires.

IMO the first point is fixed now :D I guess that should enable me to convert
some more people to Mozilla ;)

I tested with a Linux machine (512MB, Athlon 2000XP), build 2002121005 through a
10Mbit connection. Tested slashdot, www.tweakers.net (slow ads) and some phpBB
sites (one of which is 4ms away from me).


Furthermore, I completely agree with Arthur; what he mentioned was also the
first thing I thought when reading this bug report.  IMO 250ms is something
_very_ different on an Athlon 2000XP than on a 500MHz laptop with a slow video
card.  So having some kind of dynamic or performance-related delay would be
ideal IMO (a rough sketch of that idea follows below).

Anyway, reducing this time this much really made a huge (perceived) difference
IMO! Thanks!
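
A rough sketch of how the dynamic-delay idea in the last two comments could
look: calibrate the machine with a quick burst of layout-ish work and scale
the initial-paint delay from that, clamped to the 250ms-1200ms range discussed
in this bug.  Everything here (the calibration loop, the scale factor, the
clamping) is a made-up illustration of the idea, not something that exists in
the tree.

// Illustrative only - a hypothetical way to pick a machine-dependent delay.
#include <algorithm>
#include <chrono>
#include <iostream>
#include <vector>

// Very rough stand-in for "how fast is this machine at layout-like work":
// time a fixed amount of CPU/memory churn. Purely a calibration placeholder.
static std::chrono::milliseconds CalibrateMachine() {
  const auto start = std::chrono::steady_clock::now();
  std::vector<int> v(200000);
  for (int pass = 0; pass < 50; ++pass) {
    for (size_t i = 0; i < v.size(); ++i) v[i] = static_cast<int>(i ^ pass);
    std::sort(v.begin(), v.end());
  }
  return std::chrono::duration_cast<std::chrono::milliseconds>(
      std::chrono::steady_clock::now() - start);
}

// Map the calibration time onto a paint delay: slower machines get a longer
// delay (fewer wasted reflows), faster machines a shorter one, clamped to the
// range discussed in this bug. The scale factor of 2 is arbitrary.
static int ChoosePaintDelayMs() {
  const long long calibrationMs = CalibrateMachine().count();
  const long long proposed = calibrationMs * 2;
  return static_cast<int>(std::clamp(proposed, 250LL, 1200LL));
}

int main() {
  std::cout << "nglayout.initialpaint.delay candidate: "
            << ChoosePaintDelayMs() << " ms\n";
  return 0;
}
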
re comment 43: Fine-tune for slower machines. On a fast machine Mozilla's speed
is not really a problem. It's on slower machines where Mozilla still doesn't see
the taillights of IE and Opera. Unfortunately it seems that on slow machines
decreasing the paint timeout doesn't really help much.
Re: comment 43. I think any tuning of these parameters should also take network
speed into account. See also bug 69931.
There are additional "knobs" which control how quickly you see content.
The content notification interval, set through the pref
"content.notify.interval", controls how frequently frames are constructed.

On Win32 there is also the PAINT_STARVATION_LIMIT, which is set to a constant
of 750ms on WinXP and 3 seconds on Win9x.  This determines how long pending
paints are allowed to be left in the event queue without processing them.

If you lower the paint suppression delay and set both of these values close to
0, you will see the initial page content painted very quickly.  The
disadvantage is that you will see page elements jump around as additional
content is loaded, and page load times will be affected.  The current value
for each setting was chosen to not impact page load on a variety of machines.

I doubt that we will ever be able to show content as quickly as possible
without affecting page load time.  This is because the user's impression of page load
time is based on when they see the first chunk or page of content, while the
overall page load is measured based on the onload handler firing.  

Any painting that happens before the onload handler fires will probably have a
noticeable impact on page load.  The current paint suppression logic does not
unsuppress the painting until after the onload handler has fired.  This
effectively removes the page paint time from the performance measurement on
jrgm's page load test.  The only reason you see the pages paint at all before
going to the next page in jrgm's test is that the test sets a timer which
delays the load of the next page, allowing enough time for the current page to
paint before the load of the next page is initiated.

One proposed solution is to unsuppress as soon as a vertical scrollbar
appears.  This appears to be a rational solution, but it will also have a
cost.  The scrollbar appearing is driven by frames being constructed, which
means the content notification interval may need to be smaller than it is
currently so frame construction happens more frequently.  It also means that
pages will paint before the onload handler fires, which means the paint time
will become part of the performance measurement and we will look slower on the
page load tests.
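
A simplified model of the unsuppression conditions being weighed in this
comment and in bug 162122: paint stays suppressed until the initial-paint
delay expires, onload fires, or (the proposed addition) the vertical-scrollbar
decision has been made.  This is not the real nsPresShell.cpp logic, just a
sketch of the trade-off being described.

// Illustrative only - a simplified model, not the real paint suppression code.
#include <iostream>

struct PageLoadState {
  int msSinceLoadStart = 0;            // time since the load began
  bool onloadFired = false;            // onload handler has run
  bool verticalScrollbarKnown = false; // proposed trigger (bug 162122)
};

bool ShouldUnsuppressPaint(const PageLoadState& s, int initialPaintDelayMs) {
  if (s.msSinceLoadStart >= initialPaintDelayMs) return true; // timeout hit
  if (s.onloadFired) return true;                             // page is done
  if (s.verticalScrollbarKnown) return true; // proposed: the scrollbar decision
                                             // has been made, so layout is
                                             // reasonably stable
  return false;
}

int main() {
  PageLoadState s;
  s.msSinceLoadStart = 120;        // well before a 250ms delay expires
  s.verticalScrollbarKnown = true; // but the scrollbar decision is already made
  std::cout << std::boolalpha << ShouldUnsuppressPaint(s, 250) << '\n'; // true
  return 0;
}
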
adding topembed
Keywords: topembed
From a recent n.p.m.performance posting in the thread "Impressivest Mozilla 
1.3b Performance"
"1.2.1 was abnormally slow on my machine. I tried a 1.3 nightly to try the new 
spam features, and was blown away by how much faster things were."
My opinion is that the value 250 is too low.  As mentioned, on more complex
tables I see more reflows.  One good example is:
http://mozilla.deskmod.com/?show=showcat&cat_name=mozilla

Also, when a page loads it shows the content much faster, but the complete
loading time of the page is now longer.  So, for example, you now have to wait
longer to access DHTML menus that are built when the page has finished
loading.  Most DHTML menus work like this; an example is:
http://www.idbibank.com/
Good point - your suggested value?
Something higher than 250 ;-)
Haven't tested any other values.
Note the bandwidth-limiting proxies (bug 23459, comment 4), which can help with
testing the possible settings at various connection speeds.
(one slight caveat with the bandwidth-limiting proxies... they work by setting
up buckets of, e.g., 8KB, and dole them out on a timer. What this means is 
that the aggregate transfer rate is 8KB/s over some period of time, but the
"burst" rate can be whatever is the normal connection speed (e.g., 10Mbps)).
Discussed in edt.  Minusing.  Please renominate with a clear reason why needed
for topembed plus if appropriate.
Keywords: topembed → topembed-
The new default value of 250 is at least in part responsible for a bad
regression with regards to the Preferences window (see bug 199267). If I set the
value to anything below 1000 I'm seeing very visible and very annoying reflows
(e.g. text fields being inserted and buttons jumping around) in the Preferences
UI. As I've said in comment 45, the lower paint timeout doesn't improve things
much for "slow" machines. If it causes regressions as bad as bug 199267 on these
machines, the paint timeout should not be lowered. Mozilla's performance should
be fine-tuned for slower machines because it's on those machines where Mozilla
needs to catch up with Opera and IE badly.
How about marking this WFM now, or should we investigate tweaking this value given current reflow performance?
On 56k dialup, with an older computer (512MB RAM, 700MHz Duron CPU), I find a value of 5000 suits me - it cuts way down on those annoying reflows mentioned in comment #56. Obviously, this would be an unacceptable delay on fast machines; my point is that any value is going to be a compromise. 250 is obviously too low for slower machines and connections; how much perceived speed improvement does it really provide on faster machines? When I first altered this setting I tried 1000, and I think that would be acceptable.
QA Contact: kerz → layout
This bug is really old. Should it still be opened?
No.
Status: REOPENED → RESOLVED
Closed: 22 years ago → 13 years ago
Resolution: --- → INCOMPLETE
Based on perceived increases in FX speed from Shield Study 1 and 2.

5ms performed better (in perceived speed) than 50, 250, and 1000.
Attachment #8748926 - Flags: review?(dbaron)
Status: RESOLVED → REOPENED
Flags: needinfo?(dbaron)
Resolution: INCOMPLETE → ---
Attachment #8748926 - Attachment is obsolete: true
Attachment #8748926 - Flags: review?(dbaron)
Attachment #8748929 - Flags: review?(dbaron)
Comment on attachment 8748929 [details] [diff] [review]
bug.180241.5ms.patch

Change it to 5ms based on Shield Studies 1 & 2.

5ms performed better in perceived performance than 50, 250, and 1000.
Are the results of this study available somewhere?

And do you have talos numbers for this change, and possibly numbers for other performance benchmarks?  (Does the press still look at iBench?  Or other similar things?)
Flags: needinfo?(dbaron) → needinfo?(glind)
Assignee: dbaron → glind
I don't, and I don't know how to get them :) Study results will be posted in the next few days.  We will chat a bit more soon.
Flags: needinfo?(glind) → needinfo?(dbaron)
There still are browser shootouts periodically, which include pageload times (and loading multiple tabs).  In theory this might affect them a little; I suspect not a lot.

Do *please* test (if not done already) with connections similar to what people get with a) poor wifi connections, which can greatly slow getting "initial" loads done, b) typical 2nd/3rd-world conditions, c) Android, and d) low-spec but still usable desktop machines.  And maybe a congested shared link (i.e. bufferbloat-induced delays loading additional data - 40-500ms RTT values).

McManus, bz, any thoughts?

(Note: I'm not dissing this idea; the 250ms value is *ancient* and could use updating, but what the right target should be is more nebulous.  And perceived speed is very important, if it doesn't come with "glitches".)
Flags: needinfo?(mcmanus)
Flags: needinfo?(bzbarsky)
I don't have any thoughts without seeing the study methodology, but I have serious doubts about "5ms" being different from "0" in any practical sense.  Was 0 tested?

> I don't, and I don't know how to get them

You get Talos numbers by pushing to try, for a start.
Flags: needinfo?(bzbarsky)
the metric I trust is speed index from web page test - I would be interested in seeing how the paint timeout impacts that. PLT is only modestly interesting, PLT over localhost (which is what talos gives you) is even less so because the data arrives at an unrealistic rate.
Flags: needinfo?(mcmanus)
(In reply to Patrick McManus [:mcmanus] from comment #69)
> the metric I trust is speed index from web page test - I would be interested
> in seeing how the paint timeout impacts that. PLT is only modestly
> interesting, PLT over localhost (which is what talos gives you) is even less
> so because the data arrives at an unrealistic rate.

Agreed.  For Gregg's benefit (mostly), where/how does he measure the speed index?  (I've seen it go by, and it's in my bookmarks *somewhere*).  As always with this bug, perception of speed != pageload time, especially in unrealistic network situations.  For perception, things like ordering and when certain things unblock is critical, not when the last bit is processed (in many cases, not all).  I do believe that the current setting is dusty and can use re-examination, but I agree with bz that 5ms is likely way too low in practice unless this no longer has any useful effects due to redesigns.
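
For reference, WebPageTest's Speed Index is roughly the area above the visual
progress curve: the integral over time of how visually incomplete the page
still is, so an earlier first paint lowers it even if onload moves later.
Below is a small C++ sketch of that calculation, assuming visual-completeness
samples are already available (collecting them is the hard part and is not
shown); the sample data is invented.

// Illustrative only - an approximation of the Speed Index calculation.
#include <iostream>
#include <vector>

struct Sample {
  double timeMs;        // time since navigation start
  double completeness;  // visual completeness, 0..100 percent
};

// Step-integrate (1 - completeness/100) over time, i.e. the area above the
// visual progress curve. Lower is better.
double SpeedIndex(const std::vector<Sample>& samples) {
  double index = 0.0;
  for (size_t i = 1; i < samples.size(); ++i) {
    const double dt = samples[i].timeMs - samples[i - 1].timeMs;
    index += dt * (1.0 - samples[i - 1].completeness / 100.0);
  }
  return index;
}

int main() {
  // Hypothetical run A: paints early, onload at 1200ms.
  std::vector<Sample> earlyPaint = {{0, 0}, {300, 60}, {700, 90}, {1200, 100}};
  // Hypothetical run B: nothing visible until 1100ms, onload at 1150ms.
  std::vector<Sample> latePaint = {{0, 0}, {1100, 0}, {1150, 100}};
  std::cout << "Speed Index A: " << SpeedIndex(earlyPaint) << '\n';  // 510
  std::cout << "Speed Index B: " << SpeedIndex(latePaint) << '\n';   // 1150
  // Run B "finishes" sooner, yet has the much worse (higher) Speed Index.
  return 0;
}
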
(In reply to Gregg Lind (User Advocacy - Heartbeat - Test Pilot) from comment #66)
> I don't, and I don't know how to get them :)

See:
https://wiki.mozilla.org/ReleaseEngineering/TryServer
https://treeherder.mozilla.org/perf.html#/comparechooser

But you probably really want the numbers that mcmanus suggests, although it would be good to know what's going to happen to talos.

And I agree that it seems unlikely for there to be a difference between 0ms and 5ms; it seems like effectively zero.
Flags: needinfo?(dbaron)
I tried setting the pref to 5ms, and it seemed much faster - but on Yahoo I had repeated problems, browser lockups with slow script warnings. Changed to 100ms, everything was smooth again.

I have DSL from a provider 50 miles away and get 1.3+ Mbps.  My machine has an AMD A4 5300 APU and 8GB of RAM.  There are lots of people with worse than that - 5ms is definitely too low.
Are the study results available somewhere?
Flags: needinfo?(glind)
See Also: → 1271691
Depends on: 1283302
I'm going to re-close this bug, which overloads a discussion from a decade ago. Please post relevant comments in bug 1283302, where we have a patch and test results.
Status: REOPENED → RESOLVED
Closed: 13 years ago → 8 years ago
Resolution: --- → INCOMPLETE
Comment on attachment 8748929 [details] [diff] [review]
bug.180241.5ms.patch

This is over in bug 1283302 now.
Attachment #8748929 - Flags: review?(dbaron)
@ Gregg Lind - Half a year has passed (bug #180241 comment 73) and still no results from Shield Studies 1 & 2.
This was in fact fixed in comment 15, so restoring that state to reflect work that did happen here, and adjusting the summary to reflect that as well.
Resolution: DUPLICATE → FIXED
Summary: Paint Timeout is too high → lower paint delay timeout from 1200ms to 250ms
Assignee: glind → dbaron