Closed Bug 38486 Opened 24 years ago Closed 19 years ago

Caching DOM and JS context in memory might provide faster back/forward

Categories: Core :: DOM: Core & HTML, enhancement, P3

Tracking: RESOLVED DUPLICATE of bug 274784; Target Milestone: Future

People: Reporter: sicking, Unassigned

Keywords: dom0, perf, topperf; Whiteboard: parity-opera, see comment 68

It would be really nice if Mozilla could keep the DOM and the JavaScript
context in memory when leaving a page, so that if the user clicks back Mozilla
wouldn't have to reload the page. This also works when going forward. This
makes it possible to go back to dynamic pages and also makes it a lot faster
to go back.

Note that this should only happen when clicking back and forward, not when you
click a link to a page that happens to be DOM-cached. In that case the page
should be loaded the normal way (so there might actually be two different DOMs
for the same page)

There would have to be some mechanism that deallocs the DOM/JS when they are 
too far 'back'.
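
A rough sketch of what such an eviction mechanism could look like (all names
invented; this is not actual Mozilla code): a bounded map from history index
to cached document, evicting whatever sits farthest from the current position.

    #include <cstddef>
    #include <cstdlib>
    #include <map>

    struct CachedDocument;   // would hold the live DOM tree and JS context

    class SessionDomCache {
    public:
        explicit SessionDomCache(std::size_t maxEntries) : mMax(maxEntries) {}

        // Store the document for history entry `index`; if over budget,
        // dealloc the entry farthest from the current position.
        void Put(int index, int currentIndex, CachedDocument* doc) {
            mEntries[index] = doc;
            if (mEntries.size() > mMax) {
                auto victim = mEntries.begin();
                for (auto it = mEntries.begin(); it != mEntries.end(); ++it) {
                    if (std::abs(it->first - currentIndex) >
                        std::abs(victim->first - currentIndex)) {
                        victim = it;
                    }
                }
                mEntries.erase(victim);   // the DOM/JS that is too far 'back'
            }
        }

        // Only back/forward consults this; link clicks load the normal way.
        CachedDocument* Take(int index) {
            auto it = mEntries.find(index);
            if (it == mEntries.end()) return nullptr;
            CachedDocument* doc = it->second;
            mEntries.erase(it);
            return doc;
        }

    private:
        std::size_t mMax;
        std::map<int, CachedDocument*> mEntries;   // history index -> DOM
    };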
Really not sure who should own this, but I'm sending it over to the DOM folks
for an initial look.
Assignee: asadotzler → jst
Component: Browser-General → DOM Level 0
QA Contact: jelwell → desale
(just trying to trick you into doing this)
I think this would automatically fix bug 22320, bug 35566, bug 16806 and
probably a few more...
Saving the DOM and JS contexts in memory is not the solution here, and it
would be a huge waste of memory. Once the cache is working perfectly the gain
from doing something like this is very small, since parsing a small file is
*really* fast as long as the file comes from the cache; parsing a big file can
take a while, but saving the DOM for a big file in memory is out of the
question.

IMO this adds far more complexity than the gain in doing this is worth, and
it's not as if Mozilla's footprint is too small as it is, so I'm marking this
WONTFIX.

Jonas, there are other far more elegant solutions to all of the bugs you
mentioned, storing frame state in the session history being one of them.

Status: NEW → RESOLVED
Closed: 24 years ago
Resolution: --- → WONTFIX
I agree that the bugs above are not a satisfactory reason for implementing
this feature. However, this feature is the only way to get the ability to go
back to a dynamic page. Loading from cache will only get you back to the URL,
not to the page as you left it.

We are currently going from having just static pages to having complete
applications in your browser; that is why we have DOM and JS. As we all know,
Mozilla is the browser to use if you want to make more powerful pages, so
marking this WONTFIX is a step in the wrong direction IMHO.

I agree that waiting until the bloat is down is a good idea, but please don't
mark it WONTFIX.
I would really like to have the ability to go back to pages other than normal 
static HTML pages. I think this RFE is the only way to get this working so I'm 
reopening this bug. Feel free to mark it WONTFIX again if you disagree with me 
or think that "back to dynamic pages" should be filed as another RFE
Sorry, forgot to actually reopen...
Status: RESOLVED → REOPENED
Resolution: WONTFIX → ---
Target Milestone: --- → Future
Setting target milestone to future to get this off the PDT radar.
Adding perf keyword since implementing this would increase performance on 
back/forward
Keywords: perf
Some thoughts I just posted to porkjockeys on this issue....

I'd worked out a design for this with Travis way back, when he was reworking
webshell and session history (with radha.)  I thought he'd posted it, but I
can't find it on mozilla.org anywhere.

At the time, we were debating whether it was worth the in-memory bloat to
store the DOM, as well as the extra mechanism for saying "I have x bytes to
use for such a cache" or "I am willing to store x number of pages in the back
direction," and maybe "I am also willing to store y number of pages in the
forward direction."  We decided we'd need to run some experiements to see how
much time DOM creation was taking (once the cache was fully operational), vs.
the rest of the cost of bringing up the document (primarily style resolution
and frame construction), and then decide if it was worth adding the extra
code and complexity and bloat for saving that amount of time.  I don't mean
to phrase it in negative terms, but I want to be clear that "just caching the
DOM" isn't free.  Bindu sends around some great timing numbers every week
that break down the layout process into content creation, frame construction,
etc.  Bindu or Marc, can you post a pointer to the latest in case anyone is
interested?

We discussed going further:  what about caching the frame tree as well (and
therefore the computation for style resolution.)  If the containing window had
not changed size, you'd get extremely fast display.  Even if it had changed
size, you'd get just a resize reflow rather than a full reconstruction.

With such a system, you'd get the added benefit that the tear-down cost of
the previously-displayed page could be delayed until after the next page is
shown, maybe even spun out on a separate thread.

Being past the deadline for adding functionality, I think this is pretty
academic.  But I'd love to see the spec posted (if it can be found), or have
someone rework the design with real numbers plugged in so we can make an
in/out decision for 6.01.
This would be the best way to implement "History Lists" as defined in RFC 2616
(see http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.13 and
bug 56346 "Need to cache *all* pages for session history purposes").

RFC 2616 says "a history mechanism is meant to show exactly what the user saw at
the time when the resource was retrieved", but IMHO it's even better to show
what they saw at the time when they left the resource.

But it would cost a lot of memory. So how about saving the DOM on disk instead of
memory (could be a pref)? IMHO, performance is not the main benefit of this RFE.
I think the best thing would be to keep a certain number of DOMs in memory and
after that revert to using the cached file. (I think this was originally
discussed in the NGs but it didn't seem to make it in here.)

This could even be combined with buster's idea to keep the frame tree in
memory. For example you could have the previous page's frame tree/DOM/JS in
memory, the three preceding pages' DOM/JS in memory, and the pages further
back just have the response cached.
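
A sketch of that tiered retention policy, just to make the proposal concrete
(invented names, not real code):

    enum class RetentionLevel {
        FrameTreeDomJs,   // previous page: frame tree + DOM + JS in memory
        DomJs,            // next few pages back: DOM + JS context only
        ResponseOnly      // everything older: just the cached response
    };

    // `distance` = how many steps back in session history the entry sits.
    RetentionLevel LevelFor(int distance) {
        if (distance <= 1) return RetentionLevel::FrameTreeDomJs;
        if (distance <= 4) return RetentionLevel::DomJs;   // three more pages
        return RetentionLevel::ResponseOnly;
    }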
Is there an event that gets fired when we leave the page?  If so, a page might
expect to never be returned to in the state it left itself, and storing the DOM
might break some pages.
Yes, there is the onunload event on the window/document/body that fires for all
documents that have such an event when you leave the document/close the window.

Saving the current DOM and JS state is extremely complicated and *will not*
work unless content developers are aware of it, and they're not, so I wouldn't
expect that to ever be done...
In this case, yes, I don't think this can be done in a way which is consistent
and 100% correct.

The only way I can see that we could get to a state where this could be done is
if a page could opt into this feature, which would mean a change to W3C
standards (not sure which one is best).
Read my last comments as applying to dynamic pages.  Obviously you still could
cache the DOM and stuff for static pages so as to get a speedup, if it was
worthwhile.
Keywords: dom0
Regarding breaking existing pages:

IE does this and I've never heard anybody flaming them for it. OK, just
because IE does something doesn't mean it's good or standards-compliant ;)

Regarding breaking standards:
I'm not sure if the standards say anything about whether it's OK to restore
the state of a page when going "back". We do restore form values, which could
break pages the same way restoring the DOM or JS context can. In fact
restoring only part of the state (form values but not DOM/JS) sounds even more
dangerous IMHO.
This sounds more like a Networking: Cache [rfe]; it doesn't really sound like
DOM 0 except for the IE bit. Maybe DOM Other?

Not sure if it is related but see example technique 2 in guideline 2 in the
UAAG-TECHS document
http://www.w3.org/TR/2001/WD-UAAG10-TECHS-20010622/guidelines.html#gl-content-access
*** Bug 113721 has been marked as a duplicate of this bug. ***
Blocks: 91351
Blocks: 33269
OK, since my bug got duped against this: how about we give this (keeping the
docshells around as a cache) a try?  This idea would produce a very
user-noticeable speedup; click ... Back is one of the most common things done
on the web.  Think eBay if nothing else.

As best I can tell, firing OnUnloads (when we move a page into the cache) and
OnLoads (when moving it out of the cache) should be sufficient, if combined
with logic to decide when not to cache pages (or to throw away a cached page,
for example if its expiry time has passed).
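
To make that concrete, a rough sketch of the event-firing policy proposed
above (hypothetical names; this is not the real docshell API):

    #include <map>

    struct DocShell {
        bool expired = false;
        bool HasExpired() const { return expired; }
        void FireUnloadEvent() { /* dispatch onunload to the document */ }
        void FireLoadEvent()   { /* dispatch onload to the document */ }
    };

    using DocShellCache = std::map<int, DocShell*>;   // index -> parked page

    // On navigation away: park the docshell unless the page has expired.
    bool MaybeCache(DocShellCache& cache, int index, DocShell* shell) {
        if (shell->HasExpired())
            return false;            // throw the page away instead
        shell->FireUnloadEvent();    // fired as it moves into the cache
        cache[index] = shell;
        return true;
    }

    // On back/forward: restore the parked docshell if we still have it.
    DocShell* MaybeRestore(DocShellCache& cache, int index) {
        auto it = cache.find(index);
        if (it == cache.end())
            return nullptr;          // fall back to a normal (re)load
        DocShell* shell = it->second;
        cache.erase(it);
        shell->FireLoadEvent();      // fired as it moves out of the cache
        return shell;
    }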

Some random measurements on a slow Win95 machine (233MMX, 64MB): forward and
back between an eBay search list and a listing (after doing it several times
to make sure everything was cached) took circa 15 sec for a recent Mozilla
build, and circa 7 seconds for IE.  I'll do some measurements on faster
machines and post after lunch.  My belief is that if we cache the docshell we
should be able to do it in a second or two.
How would we get the scripts written into the page into the right state when
we return to it?

If we save the textual contents of the scripts and reevaluate from the
beginning, we'd have the problem that |document.write| wouldn't put things in
the right place.

If we instead save all the script objects, we'd be firing onload handlers with
all the variables set that should be unset, etc.

Or is there a simple way around this?  (Would you just reconstruct the way we do
now for pages that have scripts?)
Expiry dates and no-cache headers should IMHO not be respected when going back
(or forward). When I click back I expect to get back to the page as I left it,
even if it has changed on the server.

However, when I click on a link it should always respect cache headers, even
if the same URL exists in the "history list".

I think we currently do this when getting unparsed pages from cache.
Randell, how about we concentrate on making our runtime memory footprint even
close to acceptable before we even think about keeping a few extra DOM trees in
memory for fast back/forward? IMO doing that is a far better use of time.
JST: You have a good point that our footprint is too large.  However, the
difference in apparent speed/usability to the user for footprint reductions
(while hard to quantify due to machine differences) is low; the difference in
perceived speed for improving Back may be much larger.  I have 300+MB on my
machine at home; I'm willing (on that machine) to trade some RAM for speed.


On investigation, the browser I've worked on had these requirements for using
cached documents (views in spyglass parlance):

Don't use the view cache if:
a) the document expired
b) if it's a frameset, unless it's a wysiwyg frameset (?)
c) if the page has OnLoad, OnUnload, OnBlur, OnFocus, onClick, onMouseMove,
   onMouseOut, onMouseOver, onReset, onSubmit, onFocus, onSelect
   (ok, that's probably all of them), or a script (that we understand)

When we do the cached load:
a) start loads for any missing images
b) change window name, scroll position, etc
c) no need to fire onloads (since we don't cache if it has them)

When adding to the cache:
a) no need to fire onunloads (since we don't cache if it has them)
b) if it's in the cache already, destroy the old copy (if size > 1)

As you might guess, this was done to make JS contexts work correctly (the
original spyglass browser didn't have JS, we added it ourselves).
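
Those rules boil down to a simple predicate. A sketch (field names invented,
reconstructed from the list above; not the actual Spyglass code):

    struct PageInfo {
        bool expired;
        bool isFrameset;
        bool isWysiwygFrameset;
        bool hasEventHandlers;   // onLoad, onUnload, onClick, onMouseOver, ...
        bool hasScript;          // any script we understand
    };

    bool CanUseViewCache(const PageInfo& page) {
        if (page.expired) return false;                               // (a)
        if (page.isFrameset && !page.isWysiwygFrameset) return false; // (b)
        if (page.hasEventHandlers || page.hasScript) return false;    // (c)
        return true;
    }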

On the other hand, I also know that back does seem slow, and in my simple test
it was circa twice as slow as IE on a slow machine.  On a faster Win98 machine
(650 PIII), the difference was harder to measure, but IE was noticeably faster -
almost instantaneous for back/forward.  Total time was <1.5s, much of which was
reaction-time delay, and probably 1/2 that time was waiting for the scrollbars
to settle (i.e. for non-visible portions to finish laying out).  Mozilla was
~3s, and felt much slower.

With jprof pages (large with no scripts or images), on the slow machine IE takes
about .75s to go back from one jprof to another; Mozilla takes 9.5s.  I also
tried this on the spyglass-based browser on a fast machine; back and forward
take <0.25s.

Doing this caching only for pages with no scripts knocks out a lot of pages,
but the caching would still help you with a lot of things like HTML docs,
etc.

BTW, you could make the argument that nothing needs to be re-run (other than
restarting anims, etc) on back/forward, and that those just move you around a
virtual stack of open pages (much as if the user kept using "Open in new Tab"
and closing tabs).  This solves some of the issues with going back/forward to
dynamic content.  (This is to an extent what I think sicking was saying).  The
counter-argument is that NS 4.x (and IE I assume) fire onload/onunload, though
it would be interesting to verify exactly what they do.
This would be a nice speed-up to Mozilla. It could be implemented in an
efficient manner. It is too big of a change for 1.0 at this point, though.
Nominating for Mozilla 1.1.
Keywords: mozilla1.1
As this one blocks bug 33269 the target milestone should be reset.
Keywords: topperf
Here's an idea (it might be dumb because I don't know the internals of
Mozilla): after loading the document and parsing the source, make a copy of
the DOM tree and JS context before running onLoad. All the document.write's
that happen immediately can be run, though, because they would happen next
time too[1]. After going forward, throw away the normal tree and JS stuff like
we do currently, but keep the copies in memory (up to a user-selected amount,
of course). When the user presses back, continue with the saved DOM tree and
JS context. The only thing that needs to be done is firing onLoad and running
the page normally; of course the DOM tree and JS context should be copied
again so that the user can use back to return to the page again after going
forward/following a link another time. This all comes down to whether or not
the DOM tree and JS context can be copied efficiently; I'm afraid that both
would need to be implemented as copy-on-write for this to make any sense.

[1] Some advertisers with document.write()s generating a new file name for
every page view might be unhappy. Bad luck.
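
Whether this is feasible hinges on that copy-on-write point. A minimal sketch
of the idea (hypothetical types, not Gecko code): the snapshot shares nodes
with the live tree, and a node is cloned only when someone writes to it.

    #include <memory>
    #include <string>

    // Hypothetical copy-on-write DOM node; snapshots share structure.
    struct Node {
        std::string tag;
        std::shared_ptr<Node> firstChild;
        std::shared_ptr<Node> nextSibling;
    };

    // Taking the pre-onLoad snapshot is O(1): both trees share every node.
    std::shared_ptr<Node> Snapshot(const std::shared_ptr<Node>& root) {
        return root;
    }

    // Before mutating a node, clone it if the snapshot still shares it.
    // (A full implementation would also have to path-copy up to the root.)
    void EnsureUnique(std::shared_ptr<Node>& node) {
        if (node && node.use_count() > 1)
            node = std::make_shared<Node>(*node);
    }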
*** Bug 84286 has been marked as a duplicate of this bug. ***
No longer blocks: 33269
*** Bug 33269 has been marked as a duplicate of this bug. ***
This proposal would eat ridiculous amounts of memory (we're already too fat)
which would cause us to take even longer to swap in and out of memory and which
would therefore just slow us down.

It has huge problems with implementation (i.e. what to do with scripts) and has
been repeatedly shot down by the DOM module owner (jst) and what is currently
the closest we have to a Layout module owner (dbaron).

I suggest we go back to WONTFIX.

sicking: You say that IE does this. Could you provide some evidence? Cheers.
I can tell for certain that we did this in MozillaClassic, in the Mariner work,
because I touched the code.  I wouldn't be surprised at all to discover that IE
did the same thing, since it's a great way to make back-forward -- perhaps the
most common user actions in a browser, after link-clicking -- go really quickly.
(As long as your DOM and other representations aren't enormous, that is.)  It
certainly made an enormous difference in back/forward speed, and Mariner's
back/forward time was faster to begin with.

I'm not sure what Mariner did about scripts in this case -- probably just fired
onLoad -- but CVS would know.
I'm sure IE keeps data cached in some post-processed form for back/forward.
I have a large html file, and I did some tests in IE5.
I load the file: 13 seconds.
I reload it: 13 seconds.
I browse to a blank document, then hit back: 6 seconds.

Caching the DOM doesn't sound like a good solution to me, but what if the parser
created a flat structure from which the DOM could be quickly constructed?

The size will be known, so it can be placed in the cache and obey the limits set
in the prefs.  Since it will be attached to the original file in the cache, it
will be used in all cases where it has been determined that the file hasn't
changed, not just back/forward.  And I don't see any possible conflicts with
scripts and standards.  It should behave exactly the same, just faster.

To slim things down further, this cache can point to byte offsets in the
original file for the start and end of strings, rather than contain those
strings itself.
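
A sketch of what such a flat, offset-based structure might look like (the
layout is invented, just to illustrate the idea):

    #include <cstdint>
    #include <fstream>
    #include <string>

    // A flattened parse node that points back into the cached source file
    // instead of owning copies of its strings.
    struct FlatNode {
        std::uint16_t tagId;        // numeric tag id, e.g. for DIV
        std::uint32_t textStart;    // byte offset of the text in the file
        std::uint32_t textLength;   // length of the run in bytes
        std::uint32_t firstChild;   // index into the flat node array
        std::uint32_t nextSibling;  // index of next sibling, 0 if none
    };

    // The actual string is recovered only while rebuilding the DOM.
    std::string ReadText(std::ifstream& cachedFile, const FlatNode& node) {
        std::string out(node.textLength, '\0');
        cachedFile.seekg(node.textStart);
        cachedFile.read(&out[0], node.textLength);
        return out;
    }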
Well, one problem is that you have to spend the time serializing that stuff to
disk, when it's likely that, for any randomly-selected page, you're not going to
go [<-Back] to it.

We could extend the fastload code for this, I suppose.  We talked long ago about
doing this for scripts, because we've had the serialization code there for ages,
but at the time the cache wasn't done, and we had other stuff to do.

Sounds like someone should do an HTML fastload patch and post some numbers.
*** Bug 155050 has been marked as a duplicate of this bug. ***
I'm not able to get IE to do this. I no longer remember how I came to the
conclusion that IE did this, but I could very well have been wrong.
Blocks: 164421
Who's working on this one?
Keywords: mozilla1.2
Nobody is, AFAIK.
Assignee: jst → nobody
Status: REOPENED → NEW
Blocks: majorbugs
*** Bug 167310 has been marked as a duplicate of this bug. ***
*** Bug 169518 has been marked as a duplicate of this bug. ***
Keywords: mozilla1.1
Forward and Back Button Behavior in Internet Explorer:
http://support.microsoft.com/default.aspx?scid=KB;EN-US;Q199805

Opera cache:
http://elektrans.opera.com/resources/history-cache.html

Performance and the XML DOM:
http://msdn.microsoft.com/library/en-us/dnservice/html/service06202001.asp

(Dunno if there's anything of interest in these links but it's worth a shot.)



For those wanting a speedy back button now,
you can always do something silly like setting
browser.cache.check_doc_frequency to 2, then
mapping your middle mouse button to CTRL-F5 ...
*** Bug 117247 has been marked as a duplicate of this bug. ***
*** Bug 180534 has been marked as a duplicate of this bug. ***
Perhaps I'm missing something, but wouldn't keeping the entire page in some sort
of hidden window/tab someplace so that it could be recalled speedily prevent us
from firing onUnload events to those pages?  It seems like this approach has
exactly the same problems as any other suggestion with respect to script states.

I don't think it's going to be safe to do this, unless the page doesn't have an
onUnload handler (or any recurring timer events that it depends on?).  For those
types of pages, it makes the most sense to reload the page from a cache and
re-enter it (calling onLoad again, etc.).

The URLs in comment #39 describe the best behavior to target, in my opinion.

Perhaps what we need instead of caching the DOM/JS context is just to store
every page in the cache at retrieval time.  Pages that wouldn't ordinarily get
cached for any length of time (as expressed through HTTP caching headers) would
somehow be flagged as volatile and would be ignored/destroyed (as if expired)
for "real" requests (loading it through any other means besides Forward/Back).

Even this approach would need to be done carefully, as an external resource
might change and be loaded (recached) through a later request, changing the
appearance or behavior of the original page when a user clicks Back.  Tricky.
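
A sketch of that volatile flag (hypothetical names):

    struct HistoryCacheEntry {
        bool isVolatile = false;   // set when HTTP headers forbid caching
    };

    // Volatile entries exist only for Forward/Back; any "real" request
    // treats them as already expired, so they are ignored/destroyed.
    bool UsableFor(const HistoryCacheEntry& entry, bool isHistoryNavigation) {
        return isHistoryNavigation || !entry.isVolatile;
    }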

(FYI, this bug was just posted in a Slashdot comment along with a "call to
arms", so I imagine the votes will be going up a bit.)
In Parrot, you have the parser output a machine-readable bytecode -- something
the interpreter can quickly translate and execute. We could do something similar
here: separate the parsing and execution steps with an intermediary bytecode of
sorts -- something that both executes and caches efficiently.

Apple does window caching in the OS X WindowServer.  Hiding and showing windows
is done quite effectively without lots and lots of memory, and they're working with
32-bit images.  The system appears to be working quite efficiently; hiding and
showing is smooth and responsive. 

There's likely a way to store freshly parsed DOM trees without eating lots of
resources.  What's the current figure, without optimizations or compression, for
the current test suite (individual or entire)?

(Hello from the Slashdot crowd.  Yes, I voted; attention is good.)
How about this?

Cache the tree of the one or two most recently visited pages (the page you
will get from hitting the back button) in memory, with older pages in the disk
cache.

When you hit the back button, the tree of the previous page will be loaded
(fast); after the page has been displayed, move one disk cache entry to
memory.

In short: make a stack of page trees with the few topmost pages in memory,
while the trees below are in the disk cache; when the top of the stack is
popped, move one page tree from disk cache to memory.

I am no professional programmer, so maybe I am entirely wrong; any comment
about this?
Please read the whole discussion before posting. The big problem isn't how the
old pages should be stored (cache, hidden tabs or something else) but how to
make the scripting work. Namely, onLoad and onUnload actions need to be working
as specified.

I think the hidden tabs solution sounds like the easiest one and the performance
of changing between different tabs is very good already. The solution to
scripting problem is that when a new link is followed we work just like
normally, but in addition to that we reload the previous document *from cache*
in a hidden tab. Or we almost reload it--we must stop just before onLoad takes
action. After user presses back we just display the hidden tab, fire the
onUnload action in the now-hidden tab and fire the onLoad event handler. After
that, we reload the forward document again in a hidden tab to be ready in case
the user decides to press forward button. Again, the reloading should halt just
before firing onLoad event.

Pros: easy to implement (just create a tab every time we need to add to history,
hide some of them and rewire back and forward buttons).
Cons: excessive reloading can waste a lot of CPU cycles, pressing back multiple
times and then following another link requires removal of multiple hidden tabs,
keeping the whole page structure ready can take quite a lot of memory.

Could somebody with enough knowledge of XUL create a simple demo? No need to
hide tabs or anything; simply make it work so that clicking a link follows a
link just like today, but in addition opens a new tab that reloads current page
from cache. Back button should be modified to change that "previous page" tab
instead. Is there any way to load document up to the position the onLoad event
should fire?

If this seems to be working, future optimizations could include reusing hidden
"history tabs", priorisation of reloading history tabs and loading of foreground
tabs, and a more intelligent way to decide when reloading the page is really
required instead of simply launching onLoad and onUnload events.

One thing to consider is window resize performance if this gets implemented. If
I currently load multiple complex pages in tabs and resize the browser window
the repaint can take quite some time because all the non-visible background tabs
take CPU power to resize too.
Could this be made optional? I think this is a good idea, but people with
slower computers may not.
I think the idea of creating hidden tabs is very interesting. Another idea is
to have tabs open by default in the foreground with the back page in the
background. If the user goes back, the new tab is hidden; if they go forward,
the background tabs gradually scroll off into the disk cache.
Is there already a bug filed for the slow-resize-with-multiple-tabs thing?
*** Bug 189824 has been marked as a duplicate of this bug. ***
I'm the original reporter of one of the bugs marked as a duplicate of this
bug ( http://bugzilla.mozilla.org/show_bug.cgi?id=117247 ).  An interesting
feature of 117247 is that it got fixed without 38486 being fixed.  If
anyone else following this has a bug that's a less-than-obvious duplicate,
it might be worth retrying it.  In fact, I find it's generally worth retrying
my open (and closed) bugs every now and then.  Surprising things can happen
without any explicit change in bug status in Bugzilla :-)
Mozilla should keep the DOM around for pages generated entirely with DOM2 in
order to prevent dataloss.  Currently, going Back to a page generated entirely
with DOM2  gives a blank page.  (In one of my bookmarklets, I work around this
bug by setting target="_blank" on all links.)

Once that's fixed, determining whether it's worth doing the same for other pages
(perf vs. footprint) might be easy.
Keywords: dataloss
Whiteboard: parity-opera
Keywords: mozilla1.2
Summary: [FEATURE] Keep DOM and JS context in memory to provide fast access when clicking back → Keep DOM and JS context in memory to provide fast access when clicking back
*** Bug 223303 has been marked as a duplicate of this bug. ***
This BUG was opened 3.5 YEARS ago.
Jay Davis: stop spamming bugs with useless comments.  Or should I write
COMMENTS, to ape your shouting YEARS.  Unless you're here to help implement,
please follow bugzilla etiquette.

/be
I cannot believe that "Nobody's working on this", as the bug states at top. It
has 111 votes right now, it is a huge performance failing of Mozilla if not
the biggest right now, and it is not a blocker.  Maybe now that 3 1/2 years
have gone by, it should be a blocker.  I really doubt that, the way things are
going, anything will ever be done with this before the Mozilla suite is
retired. I will personally put up $1000 to get this fixed, if it is fixed
within 1 month, and as long as the person who fixes it doesn't do it for
Firebird for at least 6 months, because I'm sick of the recent trend of all
Firebird new features never making it over to Mozilla.  After 1 month, I will
offer $600 until 2 months is up.  After that, $300. Others here might want to
consider adding their own $ reward.  Programmers can try to work out together
ahead of time how they could split the payoff.
sorry for the spam... slight summary tweak to make this easier to find.
Summary: Keep DOM and JS context in memory to provide fast access when clicking back → Cache DOM and JS context in memory to provide fast access when clicking back
Actually, whatever the specifics turn out to be, a reasonably good solution
that reaches 80% or more of Opera's back/forward performance and avoids
reloading the majority of currently reloaded pages will perfectly satisfy me.
Mozilla doesn't measure up right now.  This has to be fixed or it's going to
kill it.
*** Bug 228748 has been marked as a duplicate of this bug. ***
*** Bug 229980 has been marked as a duplicate of this bug. ***
Quoted from section 13.13 (page 98) of RFC2616 (HTTP 1.1 protocol):

   History mechanisms and caches are different. In particular history
   mechanisms SHOULD NOT try to show a semantically transparent view of
   the current state of a resource. Rather, a history mechanism is meant
   to show exactly what the user saw at the time when the resource was
   retrieved.

   By default, an expiration time does not apply to history mechanisms.
   If the entity is still in storage, a history mechanism SHOULD display
   it even if the entity has expired, unless the user has specifically
   configured the agent to refresh expired history documents.

In other words, Mozilla should *never* be trying to reload pages when the user
clicks "back" or "forward".  The fact that Mozilla does this is highly
annoying, particularly when a page was produced by a POST.  I think step 1
here is to make "back" and "forward" always load from the cache (even if
expired), and step 2 is to somehow save transient state like JavaScript.  IMO
step 1 is far more important than step 2, and fortunately it is also far
easier to implement.
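
Step 1 amounts to a two-policy cache lookup. A sketch (invented names) of the
RFC 2616 behavior:

    struct CacheEntry {
        bool expired = false;
        bool refreshExpiredHistory = false;   // the RFC 2616 user pref
    };

    enum class LoadKind { HistoryNavigation, NormalNavigation };

    bool ShouldServeFromCache(const CacheEntry& entry, LoadKind kind) {
        if (kind == LoadKind::HistoryNavigation) {
            // 13.13: show what the user saw, unless they opted out.
            return !(entry.expired && entry.refreshExpiredHistory);
        }
        return !entry.expired;   // normal loads revalidate expired entries
    }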
Yes, please fix this. The long wait to go "back" to the previous page is the
biggest shortcoming of FF and is causing many of us to consider "going back"
to Opera.
Thank you for volunteering to fix this very complicated bug. In the future I'd
suggest you volunteer to fix much easier bugs; this one involves rewriting the
entire cache architecture. Please inform us as to your expected completion
date.
Assignee: nobody → marcel7
timeless: cut the crap, two wrongs don't make a right, etc.

/be
Assignee: marcel7 → brendan
(In reply to comment #64)
> timeless: cut the ****, two wrongs don't make a right, etc.
> 
> /be

Does anyone think that another bug should be created for Firefox, similar to
this, but considered an enhancement? It would say, "Improve the caching of
pages to prevent any unnecessary reloading or rerendering of pages when going
backwards and forwards to be as fast as Opera". That keeps it open and might
attract more attention from the devs, because this bug is 4 years old, sort
of confusing and too specific, and probably not going to be fixed this decade.
It's also a bug that heaps of people want fixed: 191 votes and many threads
over at MozillaZine such as
http://forums.mozillazine.org/viewtopic.php?t=171478&highlight= . For people
who have tried Opera it is one of the things that most impresses them about
it, and I think it would improve retention rates with common users, because
it's something that every person who tries these browsers notices.
robertwiblin@gmail.com: "Core" bug means ff/suite/anything as well.
"Improve the caching of pages to prevent any unnecessary reloading or
rerendering of pages when going backwards and forwards to be as fast as Opera"

I do think that rewording the bug description would enable far more users to
find this page via the search engine. I'm not an advanced techie, and probably
never would have made it here except for a kind soul in the newsgroup. If you
want more votes for this bug, make it more readily findable.

Currently we have "Cache DOM and JS context in memory to provide fast access
when clicking back". I think the key is to include common discriptives beside
cache, such as slow, paint, navigate/navigation, render(ing), history, compose,
etc., which people would tend to use in trying to find a bug listing. I used
"historical navigation" to first describe our problem, but that might not be too
common.

IME, this bug is FF's greatest weakness, by far. I frequently encounter it once
a minute or more. As mentioned, it's lack is what sets Opera apart from all the
rest, now that other features such as tabs are becoming more common.
It's not an issue of whether users want faster back/forward or not. Of course
they do, and so do all Mozilla developers. The issue here is whether the cost
of the additional memory needed to implement this would be bigger than the
speedup gained. The math is hard to do since a lot of these things are hard to
measure, and they vary from platform to platform and from webpage to webpage.
In the end, "fixing" this bug might slow Mozilla down due to the added memory
consumption causing disk-swapping and cache effects.

Getting more user attention to this bug isn't going to help. How many of the
users do you think are going to investigate enough to really have a qualified
opinion? Have any of you commenting in this bug so far? Sure, if you want to
modify the summary/whiteboard/whatever to bring more attention to this bug, go
ahead. Just don't think it'll make the bug "fixed" faster. Complicated bugs
like these have a tendency to just get noisier when too many non-developers
get involved, making it even less likely to get fixed due to the bug in effect
becoming useless.
Summary: Cache DOM and JS context in memory to provide fast access when clicking back → Caching DOM and JS context in memory might provide faster back/forward
Whiteboard: parity-opera → parity-opera, see comment 68
Comment 65 and comment 67 are not adding value here, they are making it less
likely this bug will be fixed.  Helping other newcomers come to this bug, vote
for it, and spam it with "me too" or similar comments does not help fix the bug
-- it actually hurts.  Please stop.

I'll do some investigation and prototyping over the holidays.

/be
After 4+ years, I think it is apparent that the existence of this Bugzilla
entry and its comments is not having any substantive impact one way or another
on the software functionality, so I think it is reasonable to expect that
more votes and "me too"s are also not going to have any such effect; but
if they serve as a rallying cry for people to come to see how many others
are dissatisfied with the behavior of the code and the perennial failure
to improve it, I think that might help the user community by providing an
impetus for change which does not call for the assent of those who have so
persistently chosen not to improve the behavior of the software.
In reply to comment 70, I think you need to realize this *is not* an easy bug to
fix.  It's rather involved.  This is clearly one of the harder bugs in bugzilla
to fix.  If it were easy to fix, it would have been done years ago.

The fact that people want it fixed, doesn't make it fixed.  The only thing that
fixes it is someone stepping up and doing the job.  I think that's the point
timeless was trying to make.  I want a billion dollars, but posting that doesn't
make it a reality.



**** unless you have something relevant to say regarding the patching of this
bug, please refrain from comments ****
> This is clearly one of the harder bugs in bugzilla to fix.  If
> it were easy to fix, it would have been done years ago.

I do not think the reason the misbehavior continues to exist is
so much the high cost of improving things as it is the lack of
perceived benefit to those who would bring about that improvement,
and/or the perceived cost of failing to do so.


> The fact that people want it fixed, doesn't make it fixed.  The
> only thing that fixes it is someone stepping up and doing the job.

I think the point made here and elsewhere is that after 4+ years, there
is no reason to expect anything is going to get done before the level
of awareness and/or perception of the costs/benefits changes.  I would
posit that broader awareness of impact (and so cost/benefit) is often
a key determining factor in someone "stepping up and doing the job".

Comment 70 tears it.  I'm opening a new bug to track the work for this bug. 
Posturing can continue here, but if any of you noisemakers spams the new bug
with non-technical content, I'll personally revoke your bugzilla privileges.

Vent frustration over lack of this feature in the newsgroups, or better yet, do
something on the technical side, or pay someone who can, to help.

/be
Assignee: brendan → nobody
To do this properly, I think you have to separate the JS global scope object
from the "window" object, which is something I think we should do for security
reasons anyway. I don't see why scripting is a big deal; if we're going
back/forward to a page which has already "loaded", there is no reason to fire
onload events.
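
A sketch of that separation (invented types; this is only the shape of the
idea, not how any engine actually implements it):

    struct InnerGlobal {
        // per-document JS global state; parked in the cache with its page
    };

    struct OuterWindow {
        InnerGlobal* current = nullptr;   // what `window` delegates to

        // Navigation (including back/forward) just swaps the inner global;
        // a page that has already "loaded" never refires onload.
        void SwapInner(InnerGlobal* next) { current = next; }
    };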
Larry, you're pretty insulting. Do you honestly think that the people who
developed a piece of software that was downloaded more than 10,000,000 times
in 32 days don't know what features users want? What makes you think that you
need to run some sort of awareness campaign for the Mozilla developers?

You're talking about awareness of the cost/benefit of this bug. Are you aware
of it? If you are, please tell us; that'll save us (brendan in this case) the
trouble of writing a whole lot of code and doing a whole lot of instrumenting.

You're arguing for a bug when you have no idea what it's going to do. What if
it speeds up back/forward by 10% but slows down loading of new pages by 50%
due to disk thrashing? When I filed this bug it was not mostly for the speed
of back/forward; that was a hoped-for side effect.

So let me spell it out again. It is *not known* whether implementing this
suggestion will have a positive effect on the performance of Mozilla. Arguing
that this bug should be implemented means arguing for something when you have
no idea what it will do.
(In reply to comment #65)
> (In reply to comment #64)
> > timeless: cut the ****, two wrongs don't make a right, etc.
> > 
> > /be
> 
> Does anyone think that another bug should be created for Firefox, similar to
> this, but considered an enhancement? It would say, "Improve the caching of
> pages to prevent any unnecessary reloading or rerendering of pages when going
> backwards and forwards to be as fast as Opera". That keeps it open and might
> attract more attention from the devs, because this bug is 4 years old, sort
> of confusing and too specific, and probably not going to be fixed this decade.
...

If that is the case, they should just mark it as "WONTFIX". I didn't know they
were not fixing old bugs any longer.
(In reply to comment #71)
> In reply to comment 70, I think you need to realize this *is not* an easy bug to
> fix.  It's rather involved.  This is clearly one of the harder bugs in bugzilla
> to fix.  If it were easy to fix, it would have been done years ago.
> 
> The fact that people want it fixed, doesn't make it fixed.  The only thing that
> fixes it is someone stepping up and doing the job.  I think that's the point
> timeless was trying to make.  I want a billion dollars, but posting that doesn't
> make it a reality.
> 
> 
> 
> **** unless you have something relevant to say regarding the patching of this
> bug, please refrain from comments ****


I bet the community as a whole would benefit and in the end greatly appreciate
it if the entire Mozilla team took off the next three months from all other
fixes to focus on this one bug and maybe some of the memory bugs. 
(In reply to comment #77)
> I bet the community as a whole would benefit and in the end greatly appreciate
> it if the entire Mozilla team took off the next three months from all other
> fixes to focus on this one bug and maybe some of the memory bugs. 

It's hard to tell whether you are being sarcastic or not... I hope that you are.
(In reply to comment #77)
> I bet the community as a whole would benefit and in the end greatly appreciate
> it if the entire Mozilla team took off the next three months from all other
> fixes to focus on this one bug and maybe some of the memory bugs. 

I'm afraid that you absolutely don't understand what you're talking about. You
probably have no idea how the Mozilla project works or what its targets are. I
strongly advise you to spend time learning that by looking and reading rather
than writing such words.

Overall, we're accepting patches, so if you're that desperate, feel free to
pay somebody to code this or code it yourself.
I'm talking seriously. We have previously had situations where
people/companies paid somebody/a group to fix some bugs. And that's the best
way you all can go, while the worst is to yowl more here.
If spam is allowed here, I'll give my 5 eurocents on this sh...

There is about $150k left from the Times advertising. Could the money go
toward financing bugs like this one?
> There is about $150k left from the Times advertising. Could the money go
> toward financing bugs like this one?

Those funds were donated by individuals to a non-profit for a single purpose,
and it's illegal under the laws governing that non-profit to use them for other
purposes -- and the ad actually cost more than what was raised (MF made up the
difference), in order to be two pages and have big enough font that donors'
names show up.

And yes, your comment is spam here.

/be
My spam-reply needs correction: Chris Beard of MF got a great deal from the NYT
and MF didn't end up having to kick in more funds (but we were willing to, if it
had been the right thing).  My point about non-profit law binding us from using
those funds for arbitrary purposes stands.

/be
*** Bug 271851 has been marked as a duplicate of this bug. ***
Note bug 288462.
*** Bug 291777 has been marked as a duplicate of this bug. ***
No longer blocks: majorbugs
Fixed in bug 274784.  Download Firefox 1.5 Beta 1 and try it out :)

Dataloss issues (e.g. Firefox should know to pin a page in bfcache but doesn't)
should be filed as separate bugs.

*** This bug has been marked as a duplicate of 274784 ***
Status: NEW → RESOLVED
Closed: 24 years ago19 years ago
No longer depends on: blazinglyfastback
Keywords: dataloss
Resolution: --- → DUPLICATE
No longer blocks: 164421