Bug 51202 (Open) -- opened 24 years ago, updated 7 months ago

Dynamically determine optimum incremental reflow interval

Categories: Core :: Layout (enhancement, P3)
Tracking: Target Milestone Future
People: Reporter mpt; Unassigned
Keywords: perf
This is adapted from my comment in bug 17325.

Incremental reflow balances two factors against each other: the need to show 
useful content immediately for the user to read, and the need to finish the whole 
loading/rendering process quickly. The better you do with one, the worse you do 
with the other.

Each factor has its own limiting variable. The faster the Internet connection, 
the more content you can pull in before the user will be expecting another 
incremental reflow. And the faster the computer, the more incremental reflows you 
can perform without noticeably hurting the total load/render time.

To some extent, these variables are a product of the system on which Mozilla is 
installed. But they are also a product of what else the user is doing with their 
Internet connection or with their computer at the time, how fast the content 
provider (e.g. the Web server) happens to be, and how complex the layout of the 
particular document is. As such, it is inappropriate to use a session-wide pref 
to determine how often reflows occur for a particular document -- even assuming the 
average user could make an intelligent decision about it, which is probably not 
the case.

What is needed, then, is a way of dynamically determining how often incremental 
reflows should occur, in such a way that the layout engine performs optimally 
(from the user's point of view) based on how fast or slow content is being 
received, and how fast or slow previous reflows for the particular document were 
performed (as an indicator of how long further reflows will take).

I suggest the following algorithm. Do an incremental reflow whenever any of the 
following situations occur:
* the connection has been closed (for whatever reason, including that the
  document has finished loading);
* (2 seconds + x) has passed since the last incremental reflow, and you have any
  new content to display (or new object dimensions to lay out).

x = t * M
t = the time which the previous incremental reflow of this page took to complete
    (so it's zero if this is the first rendering)
M = some constant (a value of about 6, probably, but this could be worked out in
    testing labs)
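
For concreteness, here is a minimal sketch of that decision rule. This is not
Gecko's actual reflow code; the class and method names (ReflowScheduler,
ShouldReflowNow, NoteReflowFinished) are invented for illustration, and it
assumes the first interval is timed from page-load start:

  #include <chrono>

  // Hypothetical sketch only -- not proposed as the real implementation.
  class ReflowScheduler {
    using Clock = std::chrono::steady_clock;

   public:
    explicit ReflowScheduler(double m = 6.0)
        : mM(m), mLastReflowEnd(Clock::now()) {}  // time first interval from load start

    // Ask this whenever new content (or new object dimensions) is pending.
    bool ShouldReflowNow(bool connectionClosed) const {
      if (connectionClosed) {
        return true;  // always reflow when the connection closes / the load finishes
      }
      const double sinceLast =
          std::chrono::duration<double>(Clock::now() - mLastReflowEnd).count();
      const double x = mLastReflowSeconds * mM;  // x = t * M (zero before the first reflow)
      return sinceLast >= 2.0 + x;               // the "(2 seconds + x)" rule
    }

    // Record how long the reflow that just finished took,
    // so the next interval scales with it.
    void NoteReflowFinished(double tookSeconds) {
      mLastReflowSeconds = tookSeconds;  // t
      mLastReflowEnd = Clock::now();
    }

   private:
    double mM;                        // M, roughly 6
    double mLastReflowSeconds = 0.0;  // t, zero until the first reflow completes
    Clock::time_point mLastReflowEnd;
  };

The caller would ask ShouldReflowNow() whenever parsed content is waiting, and
feed the measured duration of each incremental reflow back in through
NoteReflowFinished().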

This way,
* we don't have to second-guess the speed of the computer, since we've
  stipulated that Mozilla should never spend more than 1/M of its time
  reflowing (see the note after this list);
* we don't have to second-guess the speed of the connection, since we don't
  start a reflow just to display a small additional amount (unless it's been at
  least (2 seconds + x) since the last reflow);
* we don't need Yet Another Pref for setting the reflow interval, because
  Mozilla works the optimal interval out for itself as the page loads.
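
A quick sanity check of the 1/M bound mentioned in the list (assuming reflows
never overlap): each reflow of duration t is followed by a pause of at least
(2 seconds + t * M) before the next one can start, so the fraction of time
spent reflowing is at most t / (t + 2 + t * M), which is strictly less than
t / (t * M) = 1/M.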

This algorithm, with the same value of M, should produce appropriate results no 
matter whether you have a G4 with a 14.4kbps modem (where reflows will take place 
at a steady interval of 2 seconds), or a 486 with a T1 line (where reflows will 
hardly happen at all, because the time the first reflow took will tell us not to 
do another one until the page has finished loading).
Why (2 seconds + x) rather than just x?  Or maybe a constant lower than 2
seconds?
The 2 seconds is a minimum time for content to stay in the same place (between 
reflows), so that the user has a chance of reading some of it. If we reflowed 
more frequently than that, content would jump around so fast that it would be 
unusable, and we might as well not have bothered incrementally rendering it in 
the first place.
The only content that jumps around on incremental reflow is tables.  For long
pages, it's nice to hold down PgDn as the content comes in to scan for
something.  Do we want to design a browser so that the current tables-as-layout
hacks work better than web pages based on CSS?  It could at least be a pref...
Please let it be a (hidden?) pref. I think continuous incremental reflow is great ...
The whole idea of this is to calculate an optimum reflow frequency for every page
*without* the user having to constantly fine tune a pref. If this algorithm were 
itself a pref, this might as well be WONTFIX, because it would add complexity 
for no real gain in usability.

If David is correct, and HTML tables are unique among all HTML/XML content in 
causing content to jump about between incremental reflows (and that large 
semantic tables will rarely be used), then the 2 second minimum can be removed -- 
so the time to the next reflow would be just t * M. (The optimum value of M would 
obviously be larger in this case.) So fast machines would have near-continuous 
reflow, while slow machines would have much less frequent reflow.
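
To put illustrative numbers on that (these are examples, not measurements): with
M = 6, a machine whose last reflow of the page took 50 ms would wait only
t * M = 300 ms before the next one, giving near-continuous updates, while a
machine or page where the reflow took 500 ms would wait 3 seconds.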
> The whole idea of this is to calculate an optimum
> reflow frequency for every page
> *without* the user having to constantly fine tune a pref.

Yes, and this is good. But I don't like the extra 2 second delay, and would 
like to be able to turn it off (a hidden pref is OK).

> [...] then the 2 second minimum can be removed -- 
> so the time to the next reflow would be just t * M.

Sounds good to me.
Re-assigning 5 bugs to Heikki from Clayton's bug list.  Please triage.  Thanks!
Assignee: clayton → heikki
If the computer is fast enough, I'd like to have a page update _at least_ every
1-2 seconds. This is the whole reason why the current behaviour is bad: the UI
often freezes for a very long time. If the 2 seconds constant is dropped and M
is small enough, this could be the solution.
Talked with vidur, reassigning to him. Also futuring per his request.

This bug has been marked "future" because the original Netscape engineer working 
on this is over-burdened. If you feel this is an error, that you or another
known resource will be working on this bug, or if it blocks your work in some way 
-- please attach your concern to the bug for reconsideration.
Assignee: heikki → vidur
Target Milestone: --- → Future
Keywords: mozilla1.1
Well, please don't forget HTML chat systems. I'm sure you hate them. ;) But they 
are out there and should work fine with Mozilla. Those HTML chats are streaming 
websites; there is no such thing as "page load time" because the page never 
finishes loading. All that counts is how fast new text is rendered. 
I absolutely vote for NOT including this 2 second limit, it's evil.
I developed some of those chat systems a while ago and I know how long 2 
seconds can be. The key was to display new content in less than one second, but 
that is useless if the browser can't keep up. At the moment Mozilla works quite 
well with HTML chats, but it feels slow.
Maybe the algorithm could also try to figure out whether the layout would jump 
around or not? If the content can be shown without moving things around, it 
should be shown ASAP!
Just my 2 cents. And I hope this will be addressed soon; it would make Gecko so 
much more appealing. :) Quality, page load time and behaviour are already 
great; just incremental page loading speed could be better...
You are correct.  If you measure connection speed by assuming data comes down 
in a steady stream, then you break chat systems, as well as most CGI scripts.

Many CGI scripts go to a database, so if there is a delay in data coming in, it 
is probably just because the CGI is doing something, rather than because the 
connection is slow.  A CGI script working over a T1 will show up in spurts: 1K 
in 0.1s, nothing for 2s, 2K in 0.2s, nothing for 3s, 1K in 0.1s.  But an 
algorithm that uses elapsed time to determine data rate will think it's a really 
slow connection, because it appears to have taken 5.4s to send 4K of data -- 
about 0.7K/s -- when the real data rate was 4K/0.4s = 10K/s.  Off by more than 
a factor of 13.

If we're going to guess the connection speed we'll have to use a better measure 
than the speed of data coming in.
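
To make the arithmetic in the CGI example concrete, here is a tiny standalone
sketch (the burst sizes and gaps are the ones quoted above; this is an
illustration of why a wall-clock estimate misleads, not proposed browser code):

  #include <cstdio>

  int main() {
    // (seconds spent actually receiving, kilobytes received) for each burst
    const double bursts[][2] = { {0.1, 1.0}, {0.2, 2.0}, {0.1, 1.0} };
    const double idleGaps = 2.0 + 3.0;  // server-side pauses between the bursts

    double recvTime = 0.0, kbytes = 0.0;
    for (const auto& b : bursts) {
      recvTime += b[0];
      kbytes += b[1];
    }

    // Wall-clock estimate counts the idle gaps: about 0.74 K/s.
    const double naive = kbytes / (recvTime + idleGaps);
    // Counting only the time data was actually flowing: 10 K/s.
    const double burstRate = kbytes / recvTime;

    std::printf("naive: %.2f K/s, burst: %.2f K/s\n", naive, burstRate);
    return 0;
  }

An estimator that only accumulates time while bytes are actually arriving (or
that caps the idle gaps it counts) would not mistake a bursty CGI or chat
stream for a slow link.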
Blocks: 114584
I realized later that David was wrong: tables aren't the only, or even (in 
standards mode) the most common, cause of incremental reflow -- images are.

> This is the whole reason why the current behaviour is bad: the UI often
> freezes for a very long time.

This bug is not about the UI freezing. That's bug 83732, and other bugs (which I 
can't find) about having the UI and the Web page layout in different threads.

I'd be happy with dropping the 2 seconds to 0.5 seconds to make chats work.
No longer blocks: 114584
Blocks: 114584
I noticed it's not mentioned here how reflows work currently... is it just a
(very short) fixed interval? Seems like it tries too hard.
Keywords: mozilla1.0, perf
@Jeremy: Information about how reflow currently works is at
http://www.mozilla.org/newlayout/doc/reflow.html
Keywords: mozilla1.2 → mozilla1.3
Blocks: 203448
Keywords: qawanted
Assignee: vidur → nobody
QA Contact: chrispetersen → layout
Removing QAwanted from this bug since it didn't come with any specific request and it was added 8 years ago. Please re-add if you need anything specific from QA.
Keywords: qawanted
Severity: normal → S3