Closed Bug 292965 Opened 20 years ago Closed 19 years ago

Improve eviction algorithm for fastback/bfcache

Categories

(Core :: DOM: Navigation, defect)

Priority: Not set
Severity: major

Tracking


RESOLVED FIXED
mozilla1.8beta5

People

(Reporter: bzbarsky, Assigned: marria)

References

Details

(Keywords: fixed1.8, memory-footprint, Whiteboard: [needs review cbiesinger])

Attachments

(2 files, 3 obsolete files)

In brief, keeping 5 (or however many) viewers per docshell means we'll have too
many cached presentations around in many cases.  On Windows, where the cached
DOM will keep alive all the images and hence all the GDI resources used by them,
this approach will probably cause us to have painting issues very quickly any
time someone uses lots of tabs and browses in all of them.

David suggested that we should have "a global MRU list (perhaps around 25?)". 
That's probably the most reasonable approach.
I'll take this -- unless someone objects.

/be
Assignee: nobody → brendan
brendan's not sure he will be able to get to this, but I think it needs to be
fixed for beta.  Pulling back onto my list.
Assignee: brendan → bryner
Target Milestone: --- → mozilla1.8beta4
Flags: blocking1.8b4+
Blocks: 305462
Marria has a patch almost done for this.  Should be able to get it in for 1.5 
beta.
Assignee: bryner → marria
Attachment #193762 - Flags: superreview?(bryner)
Attachment #193762 - Flags: review?(cbiesinger)
Comment on attachment 193762 [details] [diff] [review]
add global content viewer cache eviction

Looks good.  Just to give reviewers a general idea of the eviction mechanism,
this implements a per-tab limit of 3 content viewers (usage data shows that
people don't tend to go really far back).  There is a second, global limit on
the number of content viewers that scales with the amount of physical memory,
up to a maximum of 8.
Attachment #193762 - Flags: superreview?(bryner) → superreview+
+    // Index into the SHTransaction list, indicating the previous and current
+    // transaction at the time that this DocShell begins to load
+    PRInt32                    mPreviousTransIndex;
+    PRInt32                    mLoadedTransIndex;

Is there a reason not to put these onto the nsISHEntry?
It seems like a convenient place to put them since at the time that these values
are accessed (in Show()) there isn't a good way to get to an SHEntry.  It's also
a little strange that there are 2 SHEntries per docshell, but you only need one
set of indexes per docshell, for eviction after the load of that docshell.
Comment on attachment 193762 [details] [diff] [review]
add global content viewer cache eviction

A couple of quick and minor comments, unsolicited but I hope helpful anyway:

Naming nit -- "Total" instead of "Global" (or uncapitalized in the pref name)
avoids the connotation of global that there are "local" viewers as well as
global viewers being counted, e.g., by
browser.sessionhistory.max_global_viewers.

>+  PRInt32 viewers = 0;
>+  double x = log(kBytesD)/log(2.0) - 14;
>+  if (x > 0) {
>+    viewers    = (PRInt32)(x * x - x + 2.001); // add .001 for rounding
>+    viewers   /= 4;
>+  } else {
>+    viewers    = 0;

No need for both the viewers = 0 initialization and the else clause here.

>+  }

/be
Thanks for the feedback Brendan.  I will apply both of these suggestions to my
patch.
Comment on attachment 193762 [details] [diff] [review]
add global content viewer cache eviction

Could you diff with a bit more context? Say, -u8

camino needs to be patched as well:
/camino/resources/application/all-camino.js, line 132 --
pref("browser.sessionhistory.max_viewers", 3);

+PRInt32
+nsSHistory::GetMaxGlobalViewers()

Why not PRUint32?

in that function:
+  if (LL_CMP(bytes, ==, LL_ZERO))
+    return 0;

LL_IS_ZERO

It looks like the SHistory destructor doesn't properly maintain the doubly
linked list, doesn't it need to set this->mNextSHistory->mPrevSHistory too?

However, is there a need for this list to be doubly-linked?


What does "Tab" mean in this context? Docshell doesn't know about tabs. The
current application may not support tabs. does it mean "the current shistory
tree"?

   PRInt32 startIndex, endIndex;

How about a comment like:
  // These indices give the range of SHEntries whose content viewers will be
  // evicted

-  for (PRInt32 i = startIndex; trans && i < endIndex; ++i) {
+  for (PRInt32 i = startIndex; i < endIndex; ++i) {

Why's the null check no longer needed?

@@ -660,7 +861,6 @@ nsSHistory::UpdateIndex()
     // We've just finished a history navigation (back or forward), so enforce
     // the max number of content viewers.

-    EvictContentViewers(mIndex, mRequestedIndex);
     mIndex = mRequestedIndex;

The comment is now wrong.

So is subframe navigation the reason why you moved the EvictContentViewers
calls into docshell?


+nsSHEntry::GetAnyContentViewer(nsISHEntry **aOwnerEntry,
+				nsIContentViewer **aResult)
+{
+  // Find a content viewer in the root node or any of its children,
+  // assuming that there is only one content viewer total in any one
+  // nsSHEntry tree

So is this just a bugfix in this code, for the case of a subframe navigation
where the main sh entry wouldn't have a content viewer but a child would? (Is
this even the case?)

If I'm wrong here... please explain what this is for :)

nsSHistory.cpp:
+static PRInt32  gHistoryMaxViewers = 3;

should this continue to be a pref? If not, this could as well be a constant...

+  // Default preference is negative

not in all products (embedders)...

+  if (gHistoryMaxGlobalViewers < gHistoryMaxViewers) {
+    gHistoryMaxViewers = gHistoryMaxGlobalViewers;

This would be a bit easier to read for me if you wrote the if:
+  if (gHistoryMaxViewers > gHistoryMaxGlobalViewers ) {


The patch makes gHistoryMaxGlobalViewers a completely non-live pref... apps with
live profile switching maybe won't appreciate that... In fact, this will make
it impossible to set this pref in the profile for some applications (where
docshell gets initialized before the profile is selected).

maybe nsDocShellModule.cpp should have a _DTOR too, which unsets gInitialized,
to avoid assertions if someone shuts down xpcom and reinitializes it in the
same app.

What's the use case for the evictAll parameter in EvictGlobalContentViewer?
There does not seem to be any caller that passes PR_TRUE.
Also, why is this method in the IDL, when it's only called from nsSHistory.cpp?
It looks like it could be a static method.

+    NS_ASSERTION(totalContentViewers - gHistoryMaxGlobalViewers == 1,
+		  "More than 1 over our global limit on ContentViewers");

(I note that this assertion only makes sense while the pref is not live)

nsDocShell.cpp:
-    mPrefs->GetIntPref("browser.sessionhistory.max_viewers", &maxViewers);
+    mPrefs->GetIntPref("browser.sessionhistory.max_global_viewers",
+			&maxViewers);
     if (maxViewers == 0)
	 return PR_FALSE;

If it's -1 and only 32 MB RAM are present, this check is wrong...

@@ -5244,7 +5280,7 @@ nsDocShell::RestoreFromHistory()

     if (mContentViewer) {
	 mContentViewer->Close(mSavingOldViewer ? mOSHE.get() : nsnull);
-	 mContentViewer->Destroy();
+	 viewer->SetPreviousViewer(mContentViewer);
     }

     mContentViewer.swap(viewer);

Why this change?
Attachment #193762 - Flags: review?(cbiesinger) → review-
(In reply to comment #10)
> What's the use case for the evictAll parameter in EvictGlobalContentViewer?
> There does not seem to be any caller that passes PR_TRUE.
> Also, why is this method in the IDL, when it's only called from nsSHistory.cpp?
> It looks like it could be a static method.

I can answer this part.  I had asked Marria to include scriptable evict-all
functionality so that it can be used to fix bug 305462.
(In reply to comment #10)
> camino needs to be patched as well:
> /camino/resources/application/all-camino.js, line 132 --
> pref("browser.sessionhistory.max_viewers", 3);

Should this really be in all the app pref files rather than just in all.js (with
any apps that want to override the defaults still able to)?  I'd think since the
code reading the pref is in core code, the default should live in all.js.
(In reply to comment #11)
> I can answer this part.  I had asked Marria to include scriptable evict-all
> functionality so that it can be used to fix bug 305462.

Oh, ok... but how does that code get an nsISHistoryInternal instance?
Good question... maybe it would be better to implement that as an observer
service notification.
So this walks all session history transactions for all currently existing
toplevel docshells every time we make a page traversal, right?  This could
easily be up into the thousands of items (we keep 50 transactions by default,
and having 20 toplevel docshells is not uncommon).

Is there a reason not to just store global pointers to the very few shentries
we're actually caching a content viewer in and only walk those?
Comment on attachment 193762 [details] [diff] [review]
add global content viewer cache eviction

Another quick thought: prclist.h defines macros for a header-based circular
doubly-linked list, which gives the desired O(1) remove time.  Why reinvent,
even if it's small?

/be
(In reply to comment #10)
> (From update of attachment 193762 [details] [diff] [review] [edit])
> Could you diff with a bit more context? Say, -u8

sure, will do this in the next post of this patch

> camino needs to be patched as well:
> /camino/resources/application/all-camino.js, line 132 --
> pref("browser.sessionhistory.max_viewers", 3);

ok, I'll change this. I think I'm going to actually put the new pref stuff in
all.js, as somebody else recommended

> +PRInt32
> +nsSHistory::GetMaxGlobalViewers()
> 
> Why not PRUint32?

changed this

> in that function:
> +  if (LL_CMP(bytes, ==, LL_ZERO))
> +    return 0;
> 
> LL_IS_ZERO
> 

changed this

> It looks like the SHistory destructor doesn't properly maintain the doubly
> linked list, doesn't it need to set this->mNextSHistory->mPrevSHistory too?
> 
> However, is there a need for this list to be doubly-linked?

Oops, you're right that there is an error in the destructor.  I made this doubly
linked for constant time removal.  I think I'm going to take Brendan's advice
and use the linked list defined in prclist.h

> 
> What does "Tab" mean in this context? Docshell doesn't know about tabs. The
> current application may not support tabs. does it mean "the current shistory
> tree"?

yes, it means the current shistory tree.  I adjusted the comments appropriately.

>    PRInt32 startIndex, endIndex;
> 
> How about a comment like:
>   // These indices give the range of SHEntries whose content viewers will be
>   // evicted

ok, added

> -  for (PRInt32 i = startIndex; trans && i < endIndex; ++i) {
> +  for (PRInt32 i = startIndex; i < endIndex; ++i) {
> 
> Why's the null check no longer needed?

It's not possible for the transaction to be null, since the startIndex and
endIndex must be within the range 0 to mLength-1

> @@ -660,7 +861,6 @@ nsSHistory::UpdateIndex()
>      // We've just finished a history navigation (back or forward), so enforce
>      // the max number of content viewers.
> 
> -    EvictContentViewers(mIndex, mRequestedIndex);
>      mIndex = mRequestedIndex;
> 
> The comment is now wrong.

took this out

> So is subframe navigation the reason why you moved the EvictContentViewers
> calls into docshell?

I took out the evict call in UpdateIndex because at that point, the content
viewer from the previously viewed page is not inserted into history yet.  That
means we would be missing that content viewer while iterating through the
history entries looking for a candidate content viewer to evict.

> +nsSHEntry::GetAnyContentViewer(nsISHEntry **aOwnerEntry,
> +				nsIContentViewer **aResult)
> +{
> +  // Find a content viewer in the root node or any of its children,
> +  // assuming that there is only one content viewer total in any one
> +  // nsSHEntry tree
> 
> So is this just a bugfix in this code, for the case of a subframe navigation
> where the main sh entry wouldn't have a content viewer but a child would? (Is
> this even the case?)
> 
> If I'm wrong here... please explain what this is for :)

yes, in subframe navigation the content viewer is cached in a child sh entry, so
that's what this code is for.

> nsSHistory.cpp:
> +static PRInt32  gHistoryMaxViewers = 3;
> 
> should this continue to be a pref? If not, this could as well be a constant...

I don't think it needs to be a pref, so I made it constant

> +  if (gHistoryMaxGlobalViewers < gHistoryMaxViewers) {
> +    gHistoryMaxViewers = gHistoryMaxGlobalViewers;
> 
> This would be a bit easier to read for me if you wrote the if:
> +  if (gHistoryMaxViewers > gHistoryMaxGlobalViewers ) {

I actually ended up taking this out after making the pref live (see below.)

> The patch makes gHistoryMaxGlobalViewers a completely non-live pref... apps
> with
> live profile switching maybe won't appreciate that... In fact... This will make
> it impossible to set this pref in the profile for some applications (where
> docshell gets initialized before the profile is selected).

Ok, I'm making it a live pref.  Also, I'm assuming that if the pref is negative,
we calculate number of content viewers to cache based on available memory. 
Otherwise we assume that we should leave the pref value as is.

> maybe nsDocShellModule.cpp should have a _DTOR too, which unsets gInitialized,
> to
> avoid assertions if someone shuts down xpcom and reinitializes it in the same
> app.

Sounds like a good idea

> What's the use case for the evictAll parameter in EvictGlobalContentViewer?
> There does not seem to be any caller that passes PR_TRUE.
> Also, why is this method in the IDL, when it's only called from nsSHistory.cpp?
> It looks like it could be a static method.

Brian had asked me to add that, as he mentioned in a follow up post, but after
we discussed it, he thinks there is a better way to do that.  I'm taking this
function out of the interface and taking out the evictAll parameter.

> +    NS_ASSERTION(totalContentViewers - gHistoryMaxGlobalViewers == 1,
> +		  "More than 1 over our global limit on ContentViewers");
> 
> (I note that this assertion only makes sense while the pref is not live)

true, I'm changing it to calculate how many content viewers need to be evicted
instead of assuming it is always 1.

> nsDocShell.cpp:
> -    mPrefs->GetIntPref("browser.sessionhistory.max_viewers", &maxViewers);
> +    mPrefs->GetIntPref("browser.sessionhistory.max_global_viewers",
> +			&maxViewers);
>      if (maxViewers == 0)
> 	 return PR_FALSE;
> 
> If it's -1 and only 32 MB RAM are present, this check is wrong...

If only 32 MB RAM are present, then it would be 0, not -1 right?  Anyway, it is
probably safer to check maxViewers <= 0, so I made that change.

> @@ -5244,7 +5280,7 @@ nsDocShell::RestoreFromHistory()
> 
>      if (mContentViewer) {
> 	 mContentViewer->Close(mSavingOldViewer ? mOSHE.get() : nsnull);
> -	 mContentViewer->Destroy();
> +	 viewer->SetPreviousViewer(mContentViewer);
>      }
> 
>      mContentViewer.swap(viewer);
> 
> Why this change?

This makes RestoreFromHistory more similar to the normal flow through
SetupNewViewer, where the old viewer is stored as the previous viewer until it
is destroyed in the Show() for the new content viewer.  Also, we use the
presence of a previous viewer as the criteria for when to evict content viewers
in Show().  Make sense?

Thanks for your feedback, I'll be posting a new patch with all of those changes
shortly.
Marria

(In reply to comment #12)
> (In reply to comment #10)
> > camino needs to be patched as well:
> > /camino/resources/application/all-camino.js, line 132 --
> > pref("browser.sessionhistory.max_viewers", 3);
> 
> Should this really be in all the app pref files rather than just in all.js (with
> any apps that want to override the defaults still able to)?  I'd think since the
> code reading the pref is in core code, the default should live in all.js.

I think originally bryner wanted this to be opt-in for the different apps, but
he said he is fine with putting the pref in all.js now.  I'll make that change.
thanks,
Marria
(In reply to comment #15)
> So this walks all session history transactions for all currently existing
> toplevel docshells every time we make a page traversal, right?  This could
> easily be up into the thousands of items (we keep 50 transactions by default,
> and having 20 toplevel docshells is not uncommon).
> 
> Is there a reason not to just store global pointers to the very few shentries
> we're actually caching a content viewer in and only walk those?

I've been thinking about this as well.  I agree that it's wasteful to iterate
through the whole list of shentry objects, but we can optimize this by only
walking through the shentry objects within the window that could possibly have
a content viewer.  This would mean (mIndex - 3) to (mIndex + 3) for each
SHistory object.  This is not much worse than keeping a separate list of
shentries.  I'll add this change to my patch.
thanks,
Marria
(In reply to comment #16)
> (From update of attachment 193762 [details] [diff] [review] [edit])
> Another quick thought: prclist.h defines macros for a header-based circular
> doubly-linked list, which gives the desired O(1) remove time.  Why reinvent,
> even if it's small?

true, this makes sense.  I'll incorporate the existing linked list implementation.
thanks,
Marria
>> If it's -1 and only 32 MB RAM are present, this check is wrong...
>If only 32 MB RAM are present, then it would be 0, not -1 right?  Anyway, it is
>probably safer to check maxViewers <= 0, so I made that change.

Checking <= 0 would disable fastback entirely, would it not?

(It would be wrong because the pref would be -1, while the calculated value
would be 0.)

(In reply to comment #21)
> >> If it's -1 and only 32 MB RAM are present, this check is wrong...
> >If only 32 MB RAM are present, then it would be 0, not -1 right?  Anyway, it is
> >probably safer to check maxViewers <= 0, so I made that change.
> 
> Checking <= 0 would disable fastback entirely, would it not?
> 
> (It would be wrong because the pref would be -1, while the calculated value
> would be 0.)

You are right, that's not quite what I want to do.  I really need to check the
calculated value, not the pref.  Will fix appropriately.
thanks,
Marria
Attached patch revised patch after feedback (obsolete) — Splinter Review
Attachment #193762 - Attachment is obsolete: true
Attachment #194708 - Flags: superreview?(cbiesinger)
Attachment #194708 - Flags: review?(bryner)
Attachment #194708 - Flags: superreview?(cbiesinger)
Attachment #194708 - Flags: superreview?(bryner)
Attachment #194708 - Flags: review?(cbiesinger)
Attachment #194708 - Flags: review?(bryner)
Comment on attachment 194708 [details] [diff] [review]
revised patch after feedback

looks good.
Attachment #194708 - Flags: superreview?(bryner) → superreview+
Comment on attachment 194708 [details] [diff] [review]
revised patch after feedback

Some thoughts on EvictGlobalContentViewer:

- The gross structure is:

  while (shouldTryEviction) {
    [find a victim if we can, computing totalContentViewers]

    if (totalContentViewers > sHistoryMaxTotalViewers && evictViewer) {
      [evict the victim]

      if (totalContentViewers - sHistoryMaxTotalViewers == 1) {
	shouldTryEviction = PR_FALSE;
      }
    } else {
      // couldn't find a content viewer to evict, so we are done
      shouldTryEviction = PR_FALSE;
    }
  }

  but this could be simplified, unindenting the bulk of the code and avoiding
  the shouldTryEviction flag altogether:

  PRInt32 totalContentViewers;
  do {
    [find a victim if we can, computing totalContentViewers]

    if (!evictViewer || totalContentViewers <= sHistoryMaxTotalViewers) {
      break;
    }

    [evict the victim]
  } while (--totalContentViewers > sHistoryMaxTotalViewers);

- The [find a victim if we can] part is itself a loop.	In that loop, we have:

      PRInt32 startIndex = PR_MAX(0, shist->mIndex - gHistoryMaxViewers);
      PRInt32 endIndex = PR_MIN(shist->mLength - 1,
				shist->mIndex + gHistoryMaxViewers);

  The terms being min'ed and max'ed are not consistent.  If there are at most
  gHistoryMaxViewers, then if all are used by shist and one is at
  shist->mIndex, the least index that might cache a viewer is

    shist->mIndex - gHistoryMaxViewers + 1

  Since endIndex is the last index, not the fencepost, the second argument to
  PR_MIN should likewise be shist->mIndex + gHistoryMaxViewers - 1.

  This assumes the SHEntry at shist->mIndex caches a viewer.  Since some pages
  have content that disables bfcache, isn't it the case that SHEntries caching
  viewers are not necessarily contiguous in shist?

- With a PRCList, to implement an LRU cache, it suffices to append new elements
  to the end of the list (as this patch does), remove least recently used ones
  from the front when evicting, and promote re-used elements by unlinking them
  and appending to the list on each "use".

  Suppose we maintain such a list only of SHEntries that cache content viewers,
  by appending to the list only when caching?  Then we merely need to promote
  on history navigation back or forward to an SHEntry that already caches a
  viewer, via unlink and append.

  We maintain a count of the number of cached viewers in another global, along
  with the PRClist.  When the counter equals gHistoryMaxViewers (or perhaps one
  less than that limit, see below) and a navigation step wants to cache another
  viewer, we evict from the head of the list.

  Here I am assuming we can always know that only one viewer is being cached,
  but clearly the current patch evicts as many as it needs to in order to
  restore the (total <= limit) constraint.  Without writing the code, I don't
  want to get hung up on this detail, or on whether we should try to avoid
  creating one more viewer than the limit.

  The main idea is to avoid O(N^2) growth rate in the EvictGlobalContentViewers
  by maintaining a global list only of entries (not history objects) that cache
  viewers, and by reorganizing that list to put more recently used entries
  after less recently used ones, leaving the least recently used entry at the
  front.

Let me know if I'm missing something, or if this works.  Thanks,

/be
(In reply to comment #25)

>   The terms being min'ed and max'ed are not consistent.  If there are at most

There will never be a cached viewer at mIndex, because that's the page 
currently being viewed.  If you have navigated forward through a series of 
pages, you could have content viewers at mIndex - 3, mIndex - 2, mIndex - 1.  
If you've gone backwards through a series of pages, you could have them at 
mIndex + 1, mIndex + 2, mIndex + 3. (where gHistoryMaxViewers == 3, 
obviously).  So, I believe the arguments to min/max there are correct.

>   have content that disables bfcache, isn't it the case that SHEntries
>   caching viewers are not necessarily contiguous in shist?

That's true, but we don't allow them to exist outside of the range

[(mIndex - gHistoryMaxViewers), (mIndex + gHistoryMaxViewers)]

for each SHistory.  Was your concern that we would break out early if there 
was an entry without a cached viewer, or that the loop conditions aren't 
correct to find all of the viewers?

>   The main idea is to avoid O(N^2) growth rate in the EvictGlobalContentViewers

I don't think any part of this method is O(N^2).  If N is the number of open 
tabs, we will examine N*(2*gHistoryMaxViewers+1) entries to count the cached 
content viewers and locate one for eviction.  This will be by far the common 
case for navigation.  The case that doesn't perform as well is when more than 
one viewer needs to be evicted, that is, when the pref value changes.  In that 
case, the running time becomes O(M*N), where M is the number of viewers to 
evict.  This number is not going to be large, and typically not more than 8.  
In any case, it doesn't make sense to me to optimize for that case if there is 
overhead involved.  We could build up a sorted array of viewers based on 
distance from focus, for example, but that's wasted work when only evicting a 
single viewer.

The design here is really not LRU-based at all... from a memory-usage and user 
usage-pattern perspective it makes a lot more sense, I think, to only allow 
entries within a small distance of the current focus per SHistory.  For 
example, just because you haven't reached your global limit doesn't mean you 
should waste memory assuming the user might go back 8 pages.  We have a much 
better navigational context available here than, say, the memory cache, and I 
believe that allows us to do a better job than pure LRU eviction.
Whiteboard: [needs review cbiesinger]
(In reply to comment #26)
> (In reply to comment #25)
> 
> There will never be a cached viewer at mIndex, because that's the page 
> currently being viewed.

Oh, sorry about that -- I thought we were counting that page's content viewer in
the quota, but you're right, that doesn't make sense.

> Was your concern that we would break out early if there 
> was an entry without a cached viewer, or that the loop conditions aren't 
> correct to find all of the viewers?

The latter, but now that you point out how the "sliding window" covers entries
whether or not their docs qualified for content viewer caching (something I
should have learned from reading the patch), my next concern is that we are not
caching up to the per-tab quota as people might expect.  It may complicate
things to skip non-viewer-caching entries, but it seems like the right thing
given the quota structure and pref names.

If we almost always cache, then it doesn't matter much.

> I don't think any part of this method is O(N^2).

Growth rate is O(N^2) for linear search because O(N) compounds as N*(N+1)/2.  I
agree we don't need to overdesign this, but if the LRU techniques with doubly
linked lists could be used, the code would be smaller.

Plus, and this is why I brought this up: people always abuse these mechanisms,
it's a corollary of Parkinson's Law, or maybe Murphy's ;-).  I will bet real
money that a bunch of users will follow some "Firefox performance hacks" advice
on the web, or perhaps even download a tweaked distribution, and wind up with
large (and growing) N and M.

I hear of users with hundreds of tabs in dozens of windows.  Such users don't
often have enough RAM to benefit across all those histories from bfcache, but
then why search all of them when evicting (which will happen often)?

Sure, such nutty (or not! some VIPs I know do this ;-)) users are hard cases,
but if we can make the code deal gracefully with them and with the more normal
design target, why not?

> We could build up a sorted array of viewers based on 
> distance from focus, for example, but that's wasted work when only evicting a 
> single viewer.

Agreed, there's no point in pessimizing by keeping such extra books all the time.

> The design here is really not LRU-based at all...

Yeah, that's the point of the distance from focus stuff -- I was pretty much
ignoring that just to promote the LRU technique :-P.

Why is evicting the content viewer furthest from the focus better?  That has no
feedback or measuring loop to learn whether the victim was actually more likely
to be hit again than a nearer entry-with-viewer, when surfing within a tab.  If
anyone has empirical studies shedding light on this, please share.

And independent of that question, keeping gSHistoryList in LRU order seems like
a better idea.  If I open three tabs, A, B, and C, and surf this page reference
string:

  A1 B1 C1 A2 B2 C2 A3 B3 C3

I'd like A1 to be evicted, and the current patch and a global LRU both do that.
 But if the string is

  A1 B1 C1 A2 B2 C3 A1 B3 C3 D4

then I probably don't want anything cached in A evicted.  This pattern and ones
like it come up a lot with breadth-first tab browsing.  Our simple policy for
which tab you go back to when closing another tab, which ignores chronological
order, is predicated on it.

/be
(In reply to comment #27)
> > I don't think any part of this method is O(N^2).
> Growth rate is O(N^2) for linear search because O(N) compounds as N*(N+1)/2.

I'm still not quite sure what growth you're referring to:

4 tabs --> examine 4*(2*3 + 1) == 28 entries
10 tabs --> examine 10*(2*3 + 1) == 70 entries
20 tabs --> examine 20*(2*3 + 1) == 140 entries

This is the number of entries we'll need to iterate for each page navigation, 
to find an eviction candidate.  

> on the web, or perhaps even download a tweaked distribution, and wind up with
> large (and growing) N and M.

How do you end up with consistently large M?  That implies that the number of 
pages you need to evict at each traversal is always more than 1.  I don't see 
how this is possible given the way page navigation works - we're only caching 
one viewer, so we will only need to evict one viewer.

> I hear of users with hundreds of tabs in dozens of windows.  Such users don't
> often have enough RAM to benefit across all those histories from bfcache, but
> then why search all of them when evicting (which will happen often)?

Agreed that it's not ideal, but to avoid it I think you need more bookkeeping, 
given the eviction strategy.

> but if we can make the code deal gracefully with them and with the more
> normal design target, why not?

Diminishing returns + limited time + higher priority things to work on, mostly.

> Why is evicting the content viewer furthest from the focus better?  That has no
> feedback or measuring loop to learn whether the victim was actually more likely
> to be hit again than a nearer entry-with-viewer, when surfing within a tab.  If
> anyone has empirical studies shedding light on this, please share.

I've seen some data exactly like this.  It was unfortunately in hardcopy form 
(I'll try to find a link).  The gist of it is that frequency of revisiting a 
page falls off exponentially with distance backwards from the current position.

> And independent of that question, keeping gSHistoryList in LRU order seems
> like a better idea.  If I open three tabs, A, B, and C, and surf this page

I agree that that may improve the choice of which tab to evict from in the 
event of a tie in distance-from-focus, and it's minimal bookkeeping on each 
traversal.  I was actually thinking about that improvement being done 
separately.
(In reply to comment #28)
> (In reply to comment #27)
> > > I don't think any part of this method is O(N^2).
> > Growth rate is O(N^2) for linear search because O(N) compounds as N*(N+1)/2.
> 
> I'm still not quite sure what growth you're referring to:
> 
> 4 tabs --> examine 4*(2*3 + 1) == 28 entries
> 10 tabs --> examine 10*(2*3 + 1) == 70 entries
> 20 tabs --> examine 20*(2*3 + 1) == 140 entries

I meant the standard linear search asymptotic analysis: as the number of items
in a list grows linearly, the cumulative number of comparisons to find an item
grows quadratically.  The cost we're counting must be paid when navigating with
a full bfcache, not when creating a new tab, so counting tabs does not measure
the growth rate.

> How do you end up with consistently large M?  That implies that the number of 
> pages you need to evict at each traversal is always more than 1.  I don't see 
> how this is possible given the way page navigation works - we're only caching 
> one viewer, so we will only need to evict one viewer.

I must have meant a different M -- there are (or were) two independent prefs
that can be set.

Should we keep the pref for max_viewers?  It's well-known and used.  The patch
removes it in spite of its user base, and it's an independent tuneable from
max_total_viewers, so could be kept just on general principles -- unless the
fear is that people will jack it up too high and expose the growth rate problem.

> I've seen some data exactly like this.  It was unfortunately in hardcopy form 
> (I'll try to find a link).  The gist of it is that frequency of revisiting a 
> page falls off exponentially with distance backwards from the current
> position.

I believe this gist in general.

Here's my thing: every day I deal with apps such as bugzilla, maps.google.com,
my stinking bank, and others, and I often go back N for N > 1 -- much more often
than I go back 1.  Yahoo! groups with its interstitial ads comes to mind too.  I
suppose many of these apps want me to just keep going forward, but I don't want
to -- I want to get back to a "home page" of some sort.  (Sometimes I have to
shift-reload, but not always.)

Given counterexamples such as this, I think LRU works better.  And in the cases
where likelihood does drop exponentially, it works just as well.

I'm talking about LRU within a tab's sliding window of history entries, as an
additional layer under the global gSHistoryList LRU proposal.

> I agree that that may improve the choice of which tab to evict from in the 
> event of a tie in distance-from-focus, and it's minimal bookkeeping on each 
> traversal.  I was actually thinking about that improvement being done 
> separately.

Either is good, if we have till 1.8b5/beta2 to get this right.  Or did you want
to land a patch for 1.8b4/beta1?

/be
(In reply to comment #29)
> I meant the standard linear search asymptotic analysis: as the number of items
> in a list grows linearly, the cumulative number of comparisons to find an item
> grows quadratically.  The cost we're counting must be paid when navigating with
> a full bfcache, not when creating a new tab, so counting tabs does not measure
> the growth rate.
> 

Ah, so you mean the total number of entries traversed across all navigations? 
It's still not clear to me what the N is in your O(N^2).

> I must have meant a different M -- there are (or were) two independent prefs
> that can be set.

No longer, see below.

> 
> Should we keep the pref for max_viewers?  It's well-known and used.  The patch
> removes it in spite of its user base, and it's an independent tuneable from
> max_total_viewers, so could be kept just on general principles -- unless the
> fear is that people will jack it up too high and expose the growth rate problem.
> 

I don't think having shipped a pref in an alpha release is really a
justification for keeping it.  I just don't see the use case for tuning it
personally, and I don't like adding new knobs to turn where the testing coverage
on non-default values is spotty or nonexistent.

> Here's my thing: every day I deal with apps such as bugzilla, maps.google.com,
> my stinking bank, and others, and I often go back N for N > 1 -- much more often
> than I go back 1.  Yahoo! groups with its interstitial ads comes to mind too.  I

Ok, how about N > 3?

> Given counterexamples such as this, I think LRU works better.  And in the cases
> where likelihood does drop exponentially, it works just as well.
> 
> I'm talking about LRU within a tab's sliding window of history entries, as an
> additional layer under the global gSHistoryList LRU proposal.
> 

So "evict from the least recently used tab starting with the least recently used
entry"?  I think you're right that it's equivalent in terms of eviction order in
the common use case, except for one thing: Don't you think there's an advantage
to not even wasting the user's memory caching a page that they're very unlikely
(per the usage study) to go back to?  Seems like that's worth considering,
especially given our inability to measure a content viewer's memory usage at
runtime.
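For illustration, a minimal sketch of the two-layer eviction order under discussion: evict from the least recently used tab, starting with that tab's least recently used cached entry.  The `Tab` struct, `evictOne`, and the integer entry ids are hypothetical stand-ins, not the actual nsSHistory/nsDocShell structures.

```cpp
#include <list>
#include <vector>

// Hypothetical model: each tab keeps its cached entries in LRU order,
// and the caller keeps the tabs themselves ordered by recency of use.
struct Tab {
    std::list<int> cachedEntries;  // front = least recently used entry
};

// Evict one cached presentation: take the least recently used entry of
// the least recently used tab that has anything cached.  Returns the
// evicted entry id, or -1 if nothing is cached anywhere.
int evictOne(std::vector<Tab*>& tabLRU) {  // front = least recently used tab
    for (Tab* tab : tabLRU) {
        if (!tab->cachedEntries.empty()) {
            int victim = tab->cachedEntries.front();
            tab->cachedEntries.pop_front();
            return victim;
        }
    }
    return -1;
}
```

In the common case where revisit likelihood drops off with distance, this produces the same victims as the distance-based scheme; it differs only when a user habitually goes back more than one step.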

> Either is good, if we have till 1.8b5/beta2 to get this right.  Or did you want
> to land a patch for 1.8b4/beta1?

I did want to, but clearly that won't happen if we rework this to be an
LRU-based eviction scheme.  If we go that route, I think we should finalize as
much of the design as possible on the wiki first... doing that in the first
place would have saved a lot of time.  I definitely underestimated the amount of
interest in this.
Sorry, I didn't mean to pre-empt biesi's review here.  We can figure out the
best design and patch the patch after it lands on the trunk, and gets some
testing.  With the right testers, and possibly some instrumentation, we could
get results showing where the replacement alg. differs from an LRU one.

/be
Target Milestone: mozilla1.8beta4 → mozilla1.8beta5
Comment on attachment 194708 [details] [diff] [review]
revised patch after feedback

+nsDocShell::GetLoadedTransIndex(PRInt32 *aLoadedTransIndex)
+{
+    *aLoadedTransIndex = mLoadedTransIndex;

hmm, I wonder if it might be better to make this |return
mSessionHistory->GetIndex(aLoadedTransIndex);|...

(with a null check, I suppose)

This'd also remove this line, which kinda makes me nervous:
+	 mLoadedTransIndex = mPreviousTransIndex + 1;


docshell/shistory/public/nsISHEntry.idl
+    /** Return any content viewer present in or below this node in the
+	 nsSHEntry tree */
+    nsIContentViewer getAnyContentViewer(out nsISHEntry ownerEntry);

maybe this comment should mention when this is different from contentViewer


I think that the docs in nsISHistoryInternal expose too much implementation
details... but maybe that's ok for an *Internal interface.

+class nsSHistoryPrefObserver : public nsIObserver

ok.. hm... maybe pref observer should evict the content viewers as necessary to
ensure they are within the limits of the new pref value?

nsSHistory.h:
+  static PRInt32  sHistoryMaxTotalViewers;

why not make this private? (probably requires making the pref observer a
friend)

Sorry for the delay. r=biesi
Attachment #194708 - Flags: review?(cbiesinger) → review+
Comment on attachment 194708 [details] [diff] [review]
revised patch after feedback

oh, also:
docshell/shistory/src/nsSHistory.cpp
-}
+}
\ No newline at end of file

please re-add that newline :)
(In reply to comment #32)
> (From update of attachment 194708 [details] [diff] [review] [edit])
> +nsDocShell::GetLoadedTransIndex(PRInt32 *aLoadedTransIndex)
> +{
> +    *aLoadedTransIndex = mLoadedTransIndex;
> hmm, I wonder if it might be better to make this |return
> mSessionHistory->GetIndex(aLoadedTransIndex);|...
> (with a null check, I suppose)

This actually isn't equivalent, because we don't evict until after the content 
viewer is shown (when painting is unsuppressed).  Consider the case where I 
navigate 2 different iframes, like this:

function navigate() {
  frame1.src = "http://foo.com";
  frame2.src = "http://bar.com";
}

If the starting index was 5, it's now 7, since we increment that index at the 
beginning of the load.

Now, if frame1 finishes loading, we'll go to evict a content viewer.  With the 
patch as-is, we'll consider the entries at index 2 and index 3 for eviction.  
Then, when the second iframe loads, we'll again consider the entry at index 3 
for eviction.  This is extra work, plus it seems simpler if we always consider 
the loads independently.  So I'd prefer to keep this as-is.

> This'd also remove this line, which kinda makes me nervous:
> +	 mLoadedTransIndex = mPreviousTransIndex + 1;

It seems pretty well-defined that AddEntry will increment the current index by 
1.
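A hedged sketch of the distance-based candidate selection being discussed: with a per-tab limit of 3 viewers, entries farther than that from the index recorded at load time become eviction candidates.  `kMaxViewersPerTab` and `evictionCandidates` are made-up names, and the exact window arithmetic in the patch may differ from this simplification.

```cpp
#include <cstdlib>
#include <vector>

const int kMaxViewersPerTab = 3;

// Return the history indices whose distance from the index recorded at
// load time exceeds the per-tab viewer limit; these are the entries
// whose cached presentations may be evicted.
std::vector<int> evictionCandidates(int loadedIndex, int historyLength) {
    std::vector<int> candidates;
    for (int i = 0; i < historyLength; ++i) {
        if (std::abs(i - loadedIndex) > kMaxViewersPerTab)
            candidates.push_back(i);
    }
    return candidates;
}
```

With a loaded index of 7 in an 8-entry history (as in the two-iframe example above), indices 0 through 3 fall outside the window, which is why recording the index per load avoids reconsidering the same candidates twice.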
ok... it's fine with me to leave it as-is, then.
Attached patch final patch to check in (trunk) (obsolete) — Splinter Review
Attachment #194708 - Attachment is obsolete: true
Please land on trunk ASAP if you want this in the branch.
The previous patch had a bug that shows up when the session history reaches its
maximum number of entries.  The oldest entries are dropped off the front of the
list, so all of the indices need to shift by the number of entries that were
removed.  This patch fixes that, fixes the subframe navigation case to refetch
the index rather than adding 1, and finally adds in a missing return NS_OK in
nsSHistory::Startup (which I think caused the orange on balsa when this went in
originally).

I tested this approach on beast and found that it fixed the performance
problem.
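A minimal model of the index-shifting fix described above: when the session history hits its maximum and the oldest transactions drop off the front, every remembered index (mPreviousTransIndex, mLoadedTransIndex) must shift down by the number of entries removed.  `History`, `addEntry`, and `loadedIndex` here are illustrative stand-ins for the nsSHistory/nsDocShell bookkeeping, not the real API.

```cpp
#include <deque>

// Hypothetical session history with a cap on stored transactions.
struct History {
    std::deque<int> entries;
    int maxEntries = 3;
    int loadedIndex = -1;  // remembered index into `entries`

    void addEntry(int id) {
        entries.push_back(id);
        loadedIndex = (int)entries.size() - 1;
        if ((int)entries.size() > maxEntries) {
            int removed = (int)entries.size() - maxEntries;
            entries.erase(entries.begin(), entries.begin() + removed);
            loadedIndex -= removed;  // the fix: shift with the list
        }
    }
};
```

Without the `loadedIndex -= removed` adjustment, the remembered index keeps pointing one past where the entry actually sits once the front of the list has been trimmed, which is the bug the final patch corrects.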
Attachment #195988 - Attachment is obsolete: true
Attachment #196943 - Flags: review?(cbiesinger)
Attachment #196943 - Flags: review?(cbiesinger) → review+
checked in on the trunk. no perf regression this time.
Status: NEW → RESOLVED
Closed: 19 years ago
Resolution: --- → FIXED
Comment on attachment 196943 [details] [diff] [review]
patch that doesn't leak

requesting branch approval
Attachment #196943 - Flags: approval1.8b5?
Attachment #196943 - Flags: approval1.8b5?
Attached patch branch patchSplinter Review
This is actually the patch that would go in on the branch.
Attachment #197210 - Flags: approval1.8b5?
Attachment #197210 - Flags: approval1.8b5? → approval1.8b5+
Checked in on the branch.
Keywords: fixed1.8
Component: History: Session → Document Navigation
QA Contact: history.session → docshell