Bug 180241 - lower paint delay timeout from 1200ms to 250ms
Keywords: embed, perf, topembed-
Product: Core
Classification: Components
Component: Layout
Version: Trunk
Hardware: All
OS: All
Priority: P1
Severity: major
Votes: 7
Target Milestone: mozilla1.3beta
Assigned To: David Baron :dbaron: ⌚️UTC-8
QA Contact: Jet Villegas (:jet)
Depends on:
Blocks: 71668
Reported: 2002-11-14 17:24 PST by Jason Kersey
Modified: 2016-11-15 15:33 PST
CC: 76 users
dbaron: needinfo? (glind)

patch to change default to 250ms (1.64 KB, patch)
2002-12-26 07:35 PST, David Baron :dbaron: ⌚️UTC-8
rjesup: review+
bzbarsky: superreview+
Small Comparison Test (3.86 KB, text/html)
2002-12-30 02:09 PST, Frank Zimmer
no flags
change it to 5ms based on Shield Study 1 & 2 (806 bytes, patch)
2016-05-04 16:33 PDT, Gregg Lind (Fx Strategy and Insights - Shield - Heartbeat )
no flags
bug.180241.5ms.patch (1.06 KB, patch)
2016-05-04 16:38 PDT, Gregg Lind (Fx Strategy and Insights - Shield - Heartbeat )
no flags

Description Jason Kersey 2002-11-14 17:24:39 PST
In talking to hyatt on IRC, he told me our current paint suppression delay in
Mozilla is set at 1200ms, whereas Chimera's is set at 500 and Phoenix's is set
at 250. Mozilla's seems awfully high compared to its kids' times, and should
be brought down lower. This will hurt our time on the iBench test, I think,
but it will speed up perceived pageload times, which seems like a good idea.
Comment 1 Hong Kwon 2002-11-14 17:47:17 PST
Is the current jrgm page load test a good test to measure the change in
performance?  If so, can we try a patched build with the page loader?
Comment 2 Jason Kersey 2002-11-14 17:56:16 PST
I don't think this will affect the pageload stuff that much; I think the main
hit will be iBench, because there is no pause between the loading of pages.
Comment 3 Nicholas Allen 2002-11-14 20:23:21 PST
The numbers I remember are:

Win9X: 3000 ms
WinNT: 750 ms
Everything else: 1200 ms (selectable by pref)

The windows numbers were tuned based on jrgm's page loader test.  Perhaps it's
time to retune the timeout on other platforms as well?  It might also be an idea
to see if the one-size-fits-all value is good enough or if this should be a
platform specific value.

Another thing: is the timeout value really the problem, or does Moz need to be
more aggressive at unsuppressing before hitting the timeout?
Comment 4 John Morrison 2002-11-14 21:02:22 PST
Actually, I believe the knob that this bug refers to is PAINTLOCK_EVENT_DELAY
in nsPresShell.cpp, while the knob that is 3000 on win9x and 750 on winNT is
(WIN9X_)?PAINT_STARVATION_LIMIT in plevent.c. They're separate, but tightly
related parameters (IIRC).

But you're right that these may not be 'one size fits all platforms' values.

Comment 5 Markus Hübner 2002-12-11 02:02:04 PST
What are the current plans to approach this problem?
Comment 6 Matthias Versen [:Matti] 2002-12-22 08:37:32 PST
Mozilla feels 100% faster for me with a DSL connection and paint suppression
set to 250.
The default seems to be too high, and we should really change it to 500 or 250.
Comment 7 Simon Fraser 2002-12-23 17:10:11 PST
This sounds like something we can easily tune to get some perceived performance
boost. Perhaps we need to have the user set a pref for their connection speed,
and set this (and some other prefs) to appropriate values based on that?
Comment 8 Markus Hübner 2002-12-25 09:30:00 PST
btw: the default value of that in recent Phoenix nightlies and 0.5 release is 
250ms, in contrast to Mozilla, where it's still 1200ms.
Comment 9 Christian :Biesinger (don't email me, ping me on IRC) 2002-12-26 05:49:38 PST
the file to be changed would be
Comment 10 David Baron :dbaron: ⌚️UTC-8 2002-12-26 07:30:17 PST
That says the current value is 250 for both Phoenix and Chimera, I think.
Perhaps we should use 250?
Comment 11 Markus Hübner 2002-12-26 07:31:25 PST
That'll be fine - according to user feedback, that sounds like a good choice.
Comment 12 David Baron :dbaron: ⌚️UTC-8 2002-12-26 07:35:24 PST
Created attachment 110150 [details] [diff] [review]
patch to change default to 250ms

I can't test this, since testing performance over a remote X connection doesn't
work very well.  Does this make a difference?
Comment 13 Markus Hübner 2002-12-26 07:37:43 PST
The same for me as comment #6!
Comment 14 David Baron :dbaron: ⌚️UTC-8 2002-12-26 12:59:35 PST
Comment 15 David Baron :dbaron: ⌚️UTC-8 2002-12-26 13:04:26 PST
Fix checked in, 2002-12-26 13:03 PDT.
Comment 16 Cathleen 2002-12-26 16:21:14 PST
This fix doesn't seem to help performance; it actually made the numbers
slightly worse in most cases. Mac Tp got hit the hardest!

We should back out this patch and figure out the optimal value, with more
analysis. Thanks!

tinderbox btek (linux)
tp before: 1061 ms
tp after : 1093 ms    ( + 32 ms, or 3.02% )

tinderbox comet (linux)
tp before: 1460 ms
tp after : 1464 ms    ( + 4 ms )
txul before: 465 ms
txul after : 468 ms   ( + 3 ms )

tinderbox luna  (linux)
tp before: 1165 ms
tp after : 1184 ms    ( + 19 ms, or + 1.63% )
txul before: 1165 ms
txul after : 1156 ms  ( - 9 ms )
ts before : 3209 ms
ts after  : 3212 ms   ( + 3 ms )

tinderbox darwin (mac X)
tp before: 665 ms
tp after : 737 ms     ( + 72ms, or + 10.83% )
ts before: 3532 ms
ts after : 3575 ms    ( + 43ms, or + 1.22% )

tinderbox beast (win2k)
ts before: 1391 ms
ts after : 1431 ms    ( + 40ms, or + 2.87% ) 
txul before: 343 ms
txul after : 344 ms   ( + 1ms )
Comment 17 David Baron :dbaron: ⌚️UTC-8 2002-12-26 16:29:44 PST
Just because the page finishes loading later doesn't mean the perceived
performance is slower.
Comment 18 David Baron :dbaron: ⌚️UTC-8 2002-12-26 16:31:19 PST
I'd like to leave it this way for a few days and see what people think of it. 
(I've heard many comments that Phoenix and Chimera seem to load pages faster --
this is probably why.)

Also, it would be useful to get feedback on the difference in perception of
speed between 250 and 500.  Any comments?
Comment 19 Simon Fraser 2002-12-26 16:42:55 PST
I agree with dbaron. The pageload numbers don't necessarily reflect user
perception of speed, particularly on slower links. Nor do they take into account
program responsiveness to user events during pageload, which is another reason
we may want to tune for apparently worse pageload numbers.
Comment 20 Cathleen 2002-12-26 17:04:14 PST
more data from testerbox page:

tinderbox fuego (linux)
tp before: 2953 ms
tp after : 3007 ms     ( + 54ms, or + 1.83% )
txul before: 3012 ms
txul after : 3042 ms   ( + 30ms, or + 1%)
ts before: 6397 ms
ts after : 6424 ms     ( + 27ms, or + 0.42% )

tinderbox maple (linux) 
tp before: 1154 ms
tp after : 1181 ms     ( + 27ms, or + 2.34% )
txul before: 1147 ms
txul after : 1154 ms   ( + 7ms )
ts before: 3235 ms
ts after : 3246 ms     ( + 11ms )

tinderbox mecca (linux)
tp before: 2156 ms
tp after : 2216 ms     ( + 60ms, or + 2.78% )
txul before: 2149 ms
txul after : 2154 ms   ( + 5ms )
ts before: 5984 ms
ts after : 6000 ms     ( + 16ms )

tinderbox rheeeet (linux)
txul before: 11416 ms
txul after : 11420 ms  ( + 4ms )
ts before: 27877 ms
ts after : 28532 ms    ( + 655ms, or + 2.35% )

tinderbox darwin (mac X)
txul before: 1609 ms
txul after : 1615 ms   ( + 6ms )
ts before: 7034 ms
ts after : 7043 ms     ( + 9ms )
Comment 21 Cathleen 2002-12-26 17:17:53 PST
Okay, I'm done with getting all the statistical data into the bug.

I think the perceived performance improvement is good, if it is really
noticeable. My concern is that we should try to determine, through more
analysis, a value that gives us both perceived and actual improvement, which
is the step missing here...

The hard reality is that people run benchmarks to determine where we are, and
the numbers determine the score. I have no problem taking this fix if we have
proven ourselves. Could there be a value out there that gives us both?

Comment 22 Markus Hübner 2002-12-27 03:06:14 PST
I think the vast majority of people will rate Mozilla by its perceived
performance (I agree with David & Simon here).
I suggest leaving it at 250ms for a testing period; then we can try a higher
value to gather feedback and tune it (to incorporate both perceived and
purely statistical measured performance).
Comment 23 thebeastwitheyesthatstared 2002-12-27 15:20:09 PST
Switching from Word to Mozilla (or Excel to Mozilla) seems a lot snappier now
that I use the win32 nightly from 28 December 2002. I'm on Win2000 Pro SP3.

Build ID:
Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.3b) Gecko/20021227

I also find general operation, i.e. switching between tabs with rendered pages
and loading pages, a bit smoother. So I think this area could bear improvement.
Comment 24 Jesse Ruderman 2002-12-27 18:36:54 PST
It might make sense to fix the following bugs before doing more fine-tuning:

bug 162122 stop paint suppression when vertical scrollbar appears
bug 162123 stop paint suppression when html finishes loading (don't wait for 
Comment 25 Kai Lahmann (is there, where MNG is) 2002-12-28 03:32:36 PST
Forget those pageload times; what matters is the time from the click until you
see the page, so this delay should be part of that time (which it isn't!)
Comment 26 haferfrost 2002-12-28 05:55:15 PST
I've played around comparing
user_pref("nglayout.initialpaint.delay", 250);
user_pref("nglayout.initialpaint.delay", 5000);
(at least from the code it seems like nglayout.initialpaint.delay is the pref
that overrides PAINTLOCK_EVENT_DELAY).
I have a K6-2/500 and a 768kbps connection, and the difference is hardly
noticeable and could be in my imagination only. Tab switching and the back
button seem a bit snappier with 250. Following links could be, too, but this
could just be wishful thinking. When loading many tabs in the background (I
like to middle-click lots of links on Slashdot), Mozilla's responsiveness
seems to be worse with 250 than with 5000, though.
Comment 27 Kai Lahmann (is there, where MNG is) 2002-12-28 11:13:23 PST
you don't see it? Sure you changed it for the right profile? :)
Surfing a Web-Forum (like is WAY faster
Comment 28 jrofkar 2002-12-28 14:44:37 PST
Yes, *perceived* performance (under Win32/XP) has increased considerably --
especially on pages using DHTML menus, and both <iframe> and <applet> tags. 
Mozilla seems nice and responsive now on such pages.  I honestly noticed this
"performance boost" while browsing using last night's build.  Being curious, I
wanted to understand what caused this performance gain...and here I am.  Thanks!
Comment 29 Jesse Ruderman 2002-12-28 17:22:44 PST
If I understand correctly, tp measures
1. The amount of time it takes to load a sequence of pages without pausing, with
the onload event of each page triggering the load of the next.

Can we measure times more closely related to useful/perceived speed?  For example:
2. The amount of time before onload is called.
3. The amount of time before the first screenful stops changing.
4. The amount of time before text in the first screenful stops moving.
5. The amount of time before anything (or any text) appears in the content area.
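Metric 3 above ("time before the first screenful stops changing") can be made
concrete: from a log of repaint timestamps for the first screenful, find the
first paint after which a quiet window of a given length passes with no
further repaint; an online harness using this rule would declare the
screenful stable at that point. A minimal sketch of the idea (hypothetical
helper, not part of any Mozilla test harness):

```javascript
// Given repaint timestamps (ms) for the first screenful, return the
// timestamp of the paint that begins the first quiet window of at
// least `quietMs` - i.e. when an observer using this rule would
// declare the screenful stable.
function stabilizationTime(paintTimes, quietMs) {
  const t = [...paintTimes].sort((a, b) => a - b);
  for (let i = 0; i < t.length; i++) {
    const next = t[i + 1];
    if (next === undefined || next - t[i] >= quietMs) {
      return t[i]; // quiet window begins here
    }
  }
  return 0; // no paints at all: stable from the start
}
```

For example, with quietMs = 500, repaints at 100, 150, and 180 ms (with a
straggler at 900 ms) would be declared stable at 180 ms, since that is where
the first 500 ms gap begins.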
Comment 30 Sören 'Chucker' Kuklau (gone) 2002-12-29 08:21:46 PST
In my experience, any value between 250ms and 500ms greatly improves perceived
performance. Values below 250ms cause complex pages to be reflowed multiple
times and as such are not advisable.
Comment 31 haferfrost 2002-12-29 14:02:39 PST
Answer to comment 27: I have only one profile, so no way to get it wrong. I even
checked with about:config that the value has been set. Today I downloaded and
installed 20021229 where the 250 ms is in the source code (I checked this, of
course). And of course I removed the nglayout pref from both user.js and
prefs.js (without Mozilla running) and I checked with about:config that the pref
was gone. Guess what, my initial observation is confirmed. My system does NOT, I
repeat NOT, show any significant improvement (not on or
anyway). There seems to be a very slight speed gain but I wouldn't notice it if
I wasn't looking for it and it might just as well be in my imagination.
Sorry to burst your bubble, but apparently this is no magic bullet. It's great
if it gives some people a performance boost, but my UA string will remain set to
"Slozilla" :-)
Comment 32 Sam Emrick 2002-12-29 18:01:47 PST
Opinion - at least some people seem really happy with the change, i.e.
starting paints sooner at the expense of slightly slower completion. On the
other hand, as Cathleen says, it will be a 'ding' to standard benchmark
results. The deeper problem here is having one fixed paintdelay time applying
to everything. It's a good idea, but overly simplistic in implementation. By
that I mean typical real usage will often involve a mix of cache loads and
network loads. I suspect the perceived performance benefit may be largely due
to very quick rendering of cached content, rather than 'waiting around' for
the paintdelay time to expire and/or for the completion of network stream
content. There are probably also cases of real benefit in elapsed time:
cached content can render during time otherwise spent doing nothing much
except waiting for network stream content.

I guess I am saying this... I think the 'paintdelay default change' is
important because it 'points toward' a performance fix... but simply reducing
the paintdelay constant value is not the best fix for this problem... just my
opinion.

Comment 33 Sam Emrick 2002-12-29 18:53:39 PST
It occurs to me that the purpose/benefit of paintdelay may not be clear to
everyone. The intent of it is to reduce the amount of time spent in
unproductive layout and rendering of woefully incomplete content, until such
time that a 'meaningful amount' of layout and content is available. Watching
frames being constructed and contents dancing around as layout 'evolves' is
costly to performance.

Also, paintdelay reduces and hides Mozilla's overhead in handling small
images, i.e. handling image frames and placeholder GIFs. Small GIFs are
usually just painted in final form rather than with several paints for frame,
placeholder, then final form.

Thirdly, the paintdelay often allows enough time to establish the existence
or non-existence of scrollbars. The significance of this is that scrollbars,
when they become required, reduce the visible window area and scrollable
window area. All content previously laid out and rendered needs to be re-laid
out and re-rendered at the new window sizes when the scrollbar state changes.

Fourth, at least, is to allow Mozilla a while to fetch and parse content
without contention from 'user animations' or whatever CPU-intensive code may
be included in the page content.

Fifth (which may be fixed now?) is (or was?) the way Mozilla paints partial
images: each repaint of partial image content, as it arrives, repaints all
available image content. I.e., the first paint might paint band 1, the second
paints bands 1 and 2, the third paints bands 1, 2, and 3, and so forth. Image
data ends up going through many unproductive paints.

These are at least some of the things that paintdelay helped.
Comment 34 Sam Emrick 2002-12-29 19:42:25 PST
Reconsidering... actually a lot of things (things that are in obtuse ways
related to paintdelay) have been fixed since the number 1200 was somehow
decided a long time ago. Given NOW... a smaller number is probably useful,
better, and not very costly. I think maybe 250 is too small, however? Taking a
3%-11% hit in benchmarks is hard. Maybe 500 ms is a better compromise?

(Still, I think the problem/opportunity is deeper than a change to the delay
constant.)
Comment 35 Frank Zimmer 2002-12-30 02:09:05 PST
Created attachment 110334 [details]
Small Comparison Test 

Some small tests with different settings
Comment 36 Frankie 2003-01-02 06:34:57 PST
Clearly 1200 is too high for user perception, but 250 leads to reflows and bad
benchmarks. The obvious approach should be to try various intermediate values
(400 to 700) and find the sweet spot.

Perhaps a PRNG attached to the nightly build system?  :)
Comment 37 Chris McAfee 2003-01-02 15:38:56 PST
I would like to see benchmark results for 500ms.

re: comment 29, Tp is a carefully constructed median average (? I think,
jrgm is Tp master) of individually-timed pageloads.  There's some handshaking
going on to time this, so it should be (and is) fairly repeatable.  Look at
the "raw data" link on any Tp graph to see the raw data before we munge
a Tp value out of it.
Comment 38 John Morrison 2003-01-02 16:29:02 PST
Re: comment 29

point 1 is mostly right, except it does pause (default 1 second) between page
loads. Specifically, what is measured is the time from when the load of "page
N+1" is initiated in "page N" to when the onload event is fired for "page
N+1".

point 2, then, is the same thing as point 1, no?

For point 3 and 4, I don't know if there is a way to actually retrieve those
values from Gecko (in C++ code).

Point 5, I guess, would be the time of the first paint, although I don't
really know if that would actually map to what an end-user perceives as the
page starting to appear.

The overall Tp number is, as mcafee notes, the average of the median time to
load each page. (Actually, it also reports the overall average as well; the
median is just a way to toss out the occasional aberrant result.) It is an
arbitrary measure, and probably too heavily weighted toward 'images already
in cache' load times. But we will get measured on the similar iBench test, so
we should make the right tradeoff between this arbitrary number and giving
the end-user the right "feel".

You can "study for the test", and paint suppression is (partially) doing that,
although it is a real gain to the degree that it suppresses paints that
end-users don't really need or want to see.

My own perception is that 250 is too low, but then I'm looking at this on a
high-speed connection. It would be good to get some measurements, and end-user
comments, based on a setting of 500 ms.
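The Tp aggregation described above - the median load time per page, then the
mean of those medians - can be sketched as follows (hypothetical helper
names; the real numbers come from the tinderbox page-loader harness):

```javascript
// Median of an array of load times (ms); the median per page tosses
// out the occasional aberrant run, as noted above.
function median(xs) {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Tp-style summary: one array of repeated load times per page;
// take the median for each page, then average the medians.
function tpSummary(runsPerPage) {
  const medians = runsPerPage.map(median);
  return medians.reduce((a, b) => a + b, 0) / medians.length;
}
```

For example, two pages with runs [100, 200, 300] and [400, 1000, 400] give
per-page medians of 200 and 400, so the summary is 300 ms.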
Comment 39 Jesse Ruderman 2003-01-02 17:16:29 PST
Cool, I didn't know Tp paused between pages.  In that case, 2 is the same as 1.

3 and 4 are probably the hardest to measure, but if I'm right that they're more
closely related to useful/perceived speed than Tp, it might be worth it to
measure them in addition to measuring Tp.
Comment 40 Jesse Ruderman 2003-01-02 17:30:07 PST
By the way, if we could measure 3 and 4 for other browsers, then it wouldn't be
as important to do well on the ibench test, because we could say "but we do much
better than IE on a test of when the first screenful of a page stabilizes".
Comment 41 Ere Maijala (slow) 2003-01-02 23:11:02 PST
My impressions after a while of normal usage on 512K and 2MB connections is that
250ms is a tiny bit too short (whereas 1200ms was way too long). I think 500ms
would be a good compromise. Of course we can try to find an even better value
somewhere between these two, but I don't believe it would have much effect on
the perceived speed.
Comment 42 Jan Rasche 2003-01-04 15:58:19 PST
Hi All!

First of all many thanks for this awesome performance improvement within the
last 1.3b (nightly builds)!!!

TO POINT IT OUT CLEARLY: Mozilla/Gecko seems to be much faster now than M$
Internet Exploiter and Opera!

So I'm surprised that you at Mozilla/Netscape worry about some engineering
benchmarks that nobody outside takes note of.

Mozilla relies mostly on its community, and we should use the chance to beat
the 'Root of All Evil' in the field of user perception. Where else could this
be done?

(I do not know anyone who uses Mozilla for benchmark reasons, nor have I seen
any in a magazine; I only see the market share reports.)

I'm here at work on a 2MB connection, and it may be less significant on
slower connections (this could be determined in the setup and preferences).

This could be most significant on W2K & Co. Good, so this could hurt the dark
side all the more.

Do not worry about this little melodramatic appeal. I was a little surprised
too.

Regards, Jan
Comment 43 Arthur 2003-01-05 02:10:13 PST
I wonder how machine-dependent (CPU, RAM speed, etc.) this is. I'd expect
higher values to be better (overall) on slow machines (fewer recalculations)
and lower values to be much better on faster computers (no unnecessary
delays). The problem is: for which machines do we want to fine-tune? Or
should we try to measure the machine speed once and then set the value
accordingly? It just seems a bit odd to set a value in real-time units when
the events involved largely take place on a machine-dependent time scale and
only the user interaction is measurable in seconds.
Comment 44 Bart van Bragt 2003-01-06 02:11:30 PST
Just played around with the
user_pref("nglayout.initialpaint.delay", 100);
setting: Whooohooo! :D 

For me reducing this timer made Mozilla approach IE's speed, even on complex
(forum) pages. Really fantastic. The same is true on pages that are using slow
banner servers. In the old situation you had to wait for 1.2 seconds just
because some silly banner couldn't be loaded immediately.

BTW I also tried setting this to 50ms, that was even snappier but it gives a
kind of jumpy IE like feeling to the browser. IMO this is really a great
performance improvement. IMO Mozilla at this point only had two major drawbacks: 
- (perceived) pagerendering is/was slower than IE's
- The large amounts of memory that it requires.

IMO the first point is fixed now :D I guess that should enable me to convert
some more people to Mozilla ;)

I tested with a Linux machine (512MB, Athlon 2000XP), build 2002121005 through a
10Mbit connection. Tested slashdot, (slow ads) and some phpBB
sites (one of which is 4ms away from me).

Furthermore, I completely agree with Arthur; what he mentioned was also the
first thing I thought when reading this bug report. IMO 250ms is something
_very_ different on an Athlon 2000XP than on a 500MHz laptop with a slow
video card. So having some kind of dynamic or performance-related delay would
be ideal IMO.

Anyway, reducing this time this much really made a huge (perceived) difference
IMO! Thanks!
Comment 45 haferfrost 2003-01-06 04:51:25 PST
re comment 43: Fine-tune for slower machines. On a fast machine Mozilla's speed
is not really a problem. It's on slower machines where Mozilla still doesn't see
the taillights of IE and Opera. Unfortunately it seems that on slow machines
decreasing the paint timeout doesn't really help much.
Comment 46 Simon Fraser 2003-01-06 10:12:51 PST
Re: comment 43. I think any tuning of these parameters should also take network
speed into account. See also bug 69931.
Comment 47 Kevin McCluskey (gone) 2003-01-06 12:52:13 PST
There are additional "knobs" which control how quickly you see content.
The content notification interval, set through the pref
"content.notify.interval", controls how frequently frames are constructed.

On Win32 there is also the PAINT_STARVATION_LIMIT, which is set to a constant
of 750ms on WinXP and 3 seconds on Win9x. This determines how long pending
paints are allowed to sit in the event queue without being processed.

If you lower the paint suppression delay and bring both of these values close
to 0, you will see the initial page content painted very quickly. The
disadvantage is that you will see page elements jump around as additional
content is loaded, and page load times will be affected. The current value
for each setting was chosen not to impact page load on a variety of machines.

I doubt that we will ever be able to show content as quickly as possible
without affecting page load time. This is because the user's impression of
page load time is based on when they see the first chunk or page of content,
while the overall page load is measured by when the onload handler fires.

Any painting that happens before the onload handler fires will probably have
a noticeable impact on page load. The current paint suppression logic does
not unsuppress painting until after the onload handler has fired. This
effectively removes the page paint time from the performance measurement on
jrgm's page load test. The only reason you see the pages paint at all before
going to the next page in jrgm's test is that the test sets a timer which
delays the load of the next page, allowing enough time for the current page
to paint before the next load is initiated.

One proposed solution is to unsuppress as soon as a vertical scrollbar
appears. This appears to be a rational solution, but it will also have a
cost. The scrollbar appearing is driven by frames being constructed, which
means the content notification interval may need to be smaller than it is
currently so that frame construction happens more frequently. It also means
that pages will paint before the onload handler fires, which means the paint
time will become part of the performance measurement and we will look slower
on the page load tests.
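For anyone wanting to experiment with the two pref-controlled knobs Kevin
mentions, a user.js sketch follows. The values are placeholders for
experimentation, not recommendations; PAINT_STARVATION_LIMIT is a
compile-time constant with no pref, and the microsecond unit for
content.notify.interval is an assumption to verify against the source.

```javascript
// user.js sketch - experiment with the paint-related prefs from this comment.
// Values below are illustrative only.
user_pref("nglayout.initialpaint.delay", 250);  // initial paint suppression timeout (ms)
user_pref("content.notify.interval", 120000);   // frame-construction interval (assumed µs)
```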
Comment 48 Michele Carlson 2003-01-14 10:40:18 PST
adding topembed
Comment 49 Markus Hübner 2003-01-19 08:53:42 PST
From a recent n.p.m.performance posting in the thread "Impressivest Mozilla 
1.3b Performance"
"1.2.1 was abnormally slow on my machine. I tried a 1.3 nightly to try the new 
spam features, and was blown away by how much faster things were."
Comment 50 José Jeria 2003-01-31 03:34:05 PST
My opinion is that the value 250 is too low. As mentioned, on more complex
tables I see more reflows. One good example is:

Also, when a page loads it shows the content much faster, but the complete
loading time of the page is now longer. So, for example, you now have to wait
longer to access DHTML menus that are built when the page has finished
loading. Most DHTML menus work like this; an example is:
Comment 51 Markus Hübner 2003-01-31 03:43:32 PST
Good point - your suggested value?
Comment 52 José Jeria 2003-01-31 03:48:23 PST
Something higher than 250 ;-)
I haven't tested any other values.
Comment 53 Myk Melez [:myk] [@mykmelez] 2003-02-18 18:28:34 PST
Note the bandwidth-limiting proxies (bug 23459, comment 4), which can help with
testing the possible settings at various connection speeds.
Comment 54 John Morrison 2003-02-18 18:51:43 PST
(one slight caveat with the bandwidth-limiting proxies... they work by setting
up buckets of, e.g., 8KB, and dole them out on a timer. What this means is 
that the aggregate transfer rate is 8KB/s over some period of time, but the
"burst" rate can be whatever is the normal connection speed (e.g., 10Mbps)).
Comment 55 Michael Buckland 2003-03-05 13:13:51 PST
Discussed in edt.  Minusing.  Please renominate with a clear reason why needed
for topembed plus if appropriate.
Comment 56 haferfrost 2003-03-26 03:12:46 PST
The new default value of 250 is at least in part responsible for a bad
regression with regards to the Preferences window (see bug 199267). If I set the
value to anything below 1000 I'm seeing very visible and very annoying reflows
(e.g. text fields being inserted and buttons jumping around) in the Preferences
UI. As I've said in comment 45, the lower paint timeout doesn't improve things
much for "slow" machines. If it causes regressions as bad as bug 199267 on these
machines, the paint timeout should not be lowered. Mozilla's performance should
be fine-tuned for slower machines because it's on those machines where Mozilla
needs to catch up with Opera and IE badly.
Comment 57 Markus Hübner 2006-01-29 23:38:03 PST
How about resolving this WFM now, or should we investigate tweaking this value with current reflow performance?
Comment 58 Spencer Selander [greenknight] 2007-08-04 19:06:53 PDT
On 56k dialup, with an older computer (512MB RAM, 700MHz Duron CPU), I find a value of 5000 suits me - it cuts way down on those annoying reflows mentioned in comment #56. Obviously, this would be an unacceptable delay on fast machines; my point is that any value is going to be a compromise. 250 is obviously too low for slower machines and connections; how much perceived speed improvement does it really provide on faster machines? When I first altered this setting I tried 1000; I think that would be acceptable.
Comment 59 Marco Castelluccio [:marco] 2011-07-22 11:52:26 PDT
This bug is really old. Should it still be opened?
Comment 60 Robert O'Callahan (:roc) (email my personal email if necessary) 2011-07-31 20:24:52 PDT
Comment 61 Gregg Lind (Fx Strategy and Insights - Shield - Heartbeat ) 2016-05-04 16:26:30 PDT
*** Bug 549294 has been marked as a duplicate of this bug. ***
Comment 62 Gregg Lind (Fx Strategy and Insights - Shield - Heartbeat ) 2016-05-04 16:33:30 PDT
Created attachment 8748926 [details] [diff] [review]
change it to 5ms based on Shield Study 1 & 2

Based on perceived increases in FX speed from Shield Study 1 and 2.

5ms performed better (in perceived speed) than 50, 250, and 1000.
Comment 63 Gregg Lind (Fx Strategy and Insights - Shield - Heartbeat ) 2016-05-04 16:38:44 PDT
Created attachment 8748929 [details] [diff] [review]
Comment 64 Gregg Lind (Fx Strategy and Insights - Shield - Heartbeat ) 2016-05-04 16:41:39 PDT
Comment on attachment 8748929 [details] [diff] [review]

change it to 5ms based on Shield Study 1 & 2.

5ms performed better in perceived performance than 50, 250, 1000.
Comment 65 David Baron :dbaron: ⌚️UTC-8 2016-05-04 16:54:00 PDT
Are the results of this study available somewhere?

And do you have talos numbers for this change, and possibly numbers for other performance benchmarks?  (Does the press still look at iBench?  Or other similar things?)
Comment 66 Gregg Lind (Fx Strategy and Insights - Shield - Heartbeat ) 2016-05-05 07:44:15 PDT
I don't, and I don't know how to get them :) Study results will be posted in the next few days.  We will chat a bit more soon.
Comment 67 Randell Jesup [:jesup] 2016-05-05 11:10:18 PDT
There still are browser shootouts periodically, which include pageload times (and loading multiple tabs).  In theory this might affect them a little; I suspect not a lot.

Do *please* test (if not done already) with connections similar to what people get with a) poor wifi connections, which can greatly slow getting "initial" loads done, b) typical 2nd/3rd-world conditions, c) Android and d) low-spec but still usable desktop machines).  And maybe congested shared link (i.e. bufferbloat-induced delays loading additional data - 40-500ms RTT values).

McManus, bz, any thoughts?

(Note: I'm not dissing this idea; the 250ms value is *ancient* and could use updating, but what the right target should be is more nebulous.  And perceived speed is very important, if it doesn't come with "glitches".)
Comment 68 Boris Zbarsky [:bz] (still a bit busy) 2016-05-05 11:23:46 PDT
I don't have any thoughts without seeing the study methodology, but I have serious doubts about "5ms" being different from "0" in any practical sense.  Was 0 tested?

> I don't, and I don't know how to get them

You get Talos numbers by pushing to try, for a start.
Comment 69 Patrick McManus [:mcmanus] 2016-05-06 05:55:05 PDT
the metric I trust is speed index from web page test - I would be interested in seeing how the paint timeout impacts that. PLT is only modestly interesting, PLT over localhost (which is what talos gives you) is even less so because the data arrives at an unrealistic rate.
Comment 70 Randell Jesup [:jesup] 2016-05-06 07:39:42 PDT
(In reply to Patrick McManus [:mcmanus] from comment #69)
> the metric I trust is speed index from web page test - I would be interested
> in seeing how the paint timeout impacts that. PLT is only modestly
> interesting, PLT over localhost (which is what talos gives you) is even less
> so because the data arrives at an unrealistic rate.

Agreed.  For Gregg's benefit (mostly), where/how does he measure the speed index?  (I've seen it go by, and it's in my bookmarks *somewhere*).  As always with this bug, perception of speed != pageload time, especially in unrealistic network situations.  For perception, things like ordering and when certain things unblock is critical, not when the last bit is processed (in many cases, not all).  I do believe that the current setting is dusty and can use re-examination, but I agree with bz that 5ms is likely way too low in practice unless this no longer has any useful effects due to redesigns.
Comment 71 David Baron :dbaron: ⌚️UTC-8 2016-05-06 15:32:41 PDT
(In reply to Gregg Lind (User Advocacy - Heartbeat - Test Pilot) from comment #66)
> I don't, and I don't know how to get them :)


But you probably really want the numbers that mcmanus suggests, although it would be good to know what's going to happen to talos.

And I agree that it seems unlikely for there to be a difference between 0ms and 5ms; it seems like effectively zero.
Comment 72 Spencer Selander [greenknight] 2016-05-07 01:07:53 PDT
I tried setting the pref to 5ms, and it seemed much faster - but on Yahoo I had repeated problems, browser lockups with slow script warnings. Changed to 100ms, everything was smooth again.

I have DSL from a provider 50 miles away, get 1.3+ mbps. Machine has an AMD A4 5300 APU, 8GB of RAM. There are lots of people with worse than that - 5ms is definitely too low.
Comment 73 David Baron :dbaron: ⌚️UTC-8 2016-05-17 21:40:21 PDT
Are the study results available somewhere?
Comment 74 Jet Villegas (:jet) 2016-07-07 13:57:17 PDT
I'm going to re-close this bug, which overloads a discussion from a decade
ago. Please post relevant comments in bug 1283302, where we have a patch and
test results.
Comment 75 David Baron :dbaron: ⌚️UTC-8 2016-07-12 14:59:20 PDT
Comment on attachment 8748929 [details] [diff] [review]

This is over in bug 1283302 now.
Comment 76 Nikhil Pandey 2016-11-10 04:41:12 PST Comment hidden (spam)
Comment 77 Virtual_ManPL [:Virtual] - (ni? me) 2016-11-13 01:29:35 PST

*** This bug has been marked as a duplicate of bug 1283302 ***
Comment 78 Virtual_ManPL [:Virtual] - (ni? me) 2016-11-13 02:22:40 PST
@ Gregg Lind - Half a year passed (bug #180241 Comment 73) and still no results of Shield Study 1 & 2.
Comment 79 David Baron :dbaron: ⌚️UTC-8 2016-11-15 15:32:19 PST
This was in fact fixed in comment 15, so restoring that state to reflect work that did happen here, and adjusting the summary to reflect that as well.
