Open Bug 136633 Opened 23 years ago Updated 5 months ago

View Source gets wrong source when same URL is open in two windows simultaneously with different content

(Core :: DOM: Navigation, defect)

(Reporter: nbidwell, Unassigned)

(Blocks 2 open bugs)

(1 obsolete file)

Spun off from bug 40867

Now that we have the ability to always fetch the latest version of a web page's
source from the cache, we need to always store the correct version.

Steps to reproduce:
1. Go to a web page that changes often, such as
2. Open a second browser window to the same page
3. Reload the second window until it shows different content than the first
4. Select View Source from the first window
5. The source will show the version from the newer content shown in the second window

Expected result:
The source for the older version of the content shown in the first window is shown.
Severity: enhancement → normal
Ever confirmed: true
Note that per discussion in bug 40867, there's at least one nasty evil corner
case that should either be included in this bug or spun off separately to
another one. The decision of what to do about it is, as far as I can tell, still
open, unlike the core issue in this bug, which clearly should be fixed.

The problem case is when the cache is either disabled or set to a size that's
not big enough to hold all the documents that are currently open in windows.

The tradeoffs are: Using the disk cache for this case would push the disk cache
over the size that the user has specifically selected. The user might have
chosen this size based on available space in a particular partition, in which
case violating it breaks things nastily. Even aside from the possibility of
breakage, when the user asks us to limit our disk usage, we really ought to.
(There's also the issue of SSL no-cache and no-store pages that shouldn't be put
in the disk cache at all - if we were going to use the disk cache, we could
perhaps encrypt the entries with a random key that we hold only in memory)
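The random-in-memory-key idea can be illustrated with a toy sketch (this is NOT production cryptography and not anything Mozilla's cache actually does; the class and method names are invented for illustration). The point is simply that once the process exits and the key is gone, the on-disk bytes are unrecoverable:

```python
# Toy sketch (NOT real crypto): encrypt disk-cache entries with a random key
# held only in memory, so the on-disk bytes become unreadable once the
# process (and its key) is gone.
import hashlib
import os

class EphemeralCipher:
    def __init__(self):
        self._key = os.urandom(32)  # generated per-session, never written to disk

    def _keystream(self, n, nonce):
        # Derive a keystream from the key, a per-entry nonce, and a counter.
        out = b""
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(
                self._key + nonce + counter.to_bytes(8, "big")
            ).digest()
            counter += 1
        return out[:n]

    def seal(self, plaintext):
        nonce = os.urandom(16)
        ks = self._keystream(len(plaintext), nonce)
        return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

    def open(self, blob):
        nonce, body = blob[:16], blob[16:]
        ks = self._keystream(len(body), nonce)
        return bytes(a ^ b for a, b in zip(body, ks))
```

A cipher created in a later session (a different key) cannot recover the sealed entry, which is the property wanted for no-store pages.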

If on the other hand we use the memory cache, we exacerbate our already
substantial bloat problem. Loading a 500Mb HTML file into the memory cache would
be a problem.

We could provide a pref in the form of a checkbox: "Allow disk cache to grow
beyond this size if necessary to keep source of all open windows around". This
would keep the people with strict cache-size needs happy while still ensuring
correct behavior in 100% of corner cases for everyone else. On the downside,
adding another pref is generally considered a Bad Thing, for Good Reasons.

I'm not sure whether this issue should be addressed as part of this bug, or
separately. Perhaps if a consensus on this isn't reached by the time a patch for
the obvious case is available, we can spin off the problem case into a separate
bug then.
Transferring my vote for bug 40867 to this one.

Do we need to do some dependency linkage to this bug?  Seems like bugs that were
blocked by 40867 would also potentially be blocked by this bug.  Also the
summary and description of this bug both specifically reference View Source, but
Send Page and Save Page and others would be affected in the same way. 

Candidates are :
   bug 17889 
   bug 68412 
   bug 74349 
   bug 84106 
   bug 86261 
   bug 115832 
   bug 120809

Or are we supposed to open new bugs for this issue with each of those features
that are affected?
we should make sure that this feature be enabled via a preference (default
enabled most likely) because some embedders (e.g., those with tight footprint
requirements) might want to disable this feature.
I'm still not certain that cache pinning is the solution here (see discussion
for bug 40867), but even if it is, I would like to suggest that the title of
this bug be changed so as to describe the problem and not [one of] the possible
solutions, e.g., "Multiple browser windows/tabs don't correctly view their own

Comment #2 said:
> If on the other hand we use the memory cache, we exacerbate our already
> substantial bloat problem. Loading a 500Mb HTML file into the memory
> cache would be a problem.

  I'm not convinced of this -- We know that any viewable browser window already
has a copy of the page in memory as a DOM structure, which is, in almost every
case, larger than the original HTML.  So if you have a 500Mb HTML page, you can
be sure that Mozilla is already consuming >500Mb (probably much more) in memory
just to show it to you, regardless of keeping the original html file in memory
or not.

  What this means is that if we were to hang the source html off of the DOM as a
[hidden] element, per-page memory usage would increase by a linear factor of
less than 1.0, which I would consider acceptable for all but maybe the
embedded guys, who should be able to disable this via some default setting in
prefs.js or whatnot.

  My fear is that the cache pinning strategy employed skillfully to fix bug
40867 is really just a stop-gap (a very necessary one) that will lead us down a
road of "never getting this quite right."

  What say ye?
I believe that a naive solution of using cache pinning for this bug would solve
it for 100% of cases, *at the expense* of doing what some people consider to be
the wrong thing when the cache size is too small.

I think that increasing our memory usage by something probably between 50% and
100% is pretty unacceptable in a situation where by definition we're using a
huge amount already. (I'd actually hope that mozilla's internal representation
of an html document uses *less* memory than storing the source code does,
because it doesn't need to store 3000 instances of <, />, <p>, align="center",
etc. That's less true for decently CSS-based html, but the majority that's
currently out there is still using crappy presentational markup).

My personal preference to get 100% of cases would be the checkbox preference, as
I described in comment #2. I think that encrypting no-store pages is a good
solution to the problem of not storing them in accessible cache diskspace.
About memory bloat, what about compressing the source document in memory? Will
zlib compression be too slow for this purpose? HTML source compresses real well,
so this should cut the memory bloat problem.
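For what it's worth, the zlib suggestion is easy to sanity-check; here is a rough Python sketch (the sample markup is made up, and real ratios vary by page) showing how well repetitive presentational HTML compresses:

```python
import zlib

# Made-up sample of the repetitive presentational markup discussed in this
# thread; highly redundant HTML like this compresses extremely well.
line = b'<p align="center"><font face="Arial">Lorem ipsum dolor sit amet</font></p>\n'
html = line * 500
packed = zlib.compress(html, 6)
ratio = len(packed) / len(html)
print(f"{len(html)} -> {len(packed)} bytes (ratio {ratio:.3f})")
```

On markup like this, the compressed copy is a small fraction of the original, so holding zipped source in memory would cost far less than holding it raw.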

BTW- Can anyone run exhaustive test as to how IE, NS4 and Opera tackle this
problem? Shall I define a rigid test case to be performed?

P.S. Should we copy the CC list off bug 40867 here? Also, I think bug 40867
should be marked as a dependency.
> Shall I define a rigid test case to be performed?

Yes, please.  I see the following behavior with IE 5.0/Solaris (I suspect IE
5.0/Windows acts the same way, since it's pretty much the same codebase):

1)  Open two IE windows
2)  Open a page in the first window.  View source.
3)  Change the source of the page.
4)  Open the page in the second window.  Refresh to pick up the new source.
5)  View source in second window.  Shows new source
6)  View source in first window.  Absolutely nothing happens (menu is dismissed,
    but a text editor with the source is not opened).
I have set up a test page at

I suggest the following tests be performed with each browser under the following configurations:

Browser: IE5/IE5.5/IE6/NS4/Opera/Moz0.9.9/MozNightly
OS: Linux/Solaris/Windows
Browser setup: with/without cache
Operation: save/view source

1. Load the page.
2. view-source. Do you see the same page?
3. On the GET form - select pragma: no-cache and cache-control: no-cache and submit.
4. view-source. Do you see the same page? new page? page with wrong data submitted?

Repeat test A with the POST form.

1. Load the page.
2. Load the page in a new window. Do you see the same page?
3. Reload the page in the new window if necessary.
4. view-source in new window. Do you see the same page? old page? new page?
5. view-source in old window. Do you see the same page? new page? page from
other window? error message?

Repeat test C with

Any other tests? Anyone willing to perform these tests (I'm currently running
How about caching the source in memory and/or disk when space permits and then
when the user tries "view source", "save page" etc with the source lost from
the cache, we show a dialog: "Source for this page/frame is no longer cached.
Do you wish to reload it? (Source may have changed in this time)".
We've been arguing about the memory bloat of hanging html off of the DOM for far
too long.  In order to settle the argument, here are some cold, hard facts:

After freshly running Mozilla with "blank" as the homepage, total memory usage
is 20676K.

After loading, total memory usage is 23392K.

This means that rendering requires 2716K. The html source of the page, as of
the time of this comment, was 39K. Total increase in memory needed to hang the
source html off of the DOM: ***** 1.4% *****

Now, after loading the second site, total memory usage jumps to 26112K, which
is on the same order of increase as slashdot at about 2720K. The combined html
source of that site's frame index and two frames = 374+47602+1152 bytes, or
48K. Total increase in memory needed to hang the source html off of the DOM:
***** 1.8% *****

Intuitively, this makes a lot of sense: remember that the DOM is AN OBJECT
MODEL, while html is just a bunch of ascii bytes.  Also, the DOM contains all
the graphics files -- view source does not need those.

We are talking about UNDER 2% INCREASE in memory usage TO DO THIS THE RIGHT WAY
and fix it FOR ALL CASES, FOREVER.  It has been asked here and in this bug's
granddaddy, "why does Netscape 1.x, 2.x, 3.x, 4.x and IE get this right when
Mozilla can't?"  The answer: they keep the original html around.  This is
correct behavior.  Mozilla, above all else, should exhibit correct behavior.

In fact, the increase will likely be FAR LESS THAN 1.8% -- we only need to
reserve this memory FOR CURRENTLY VIEWED PAGES.  All those that are sitting in
the cache need not save the html.  We are probably talking about a < 0.5%
increase overall.

In summary:
  1. the in-memory solution solves ALL currently solved issues as well as all
remaining issues.
  2. the in-memory solution is, at my best estimation, far more straightforward
to implement than trying to kludge all remaining issues into the cache pinning
strategy.

PLEASE CHANGE THE SUMMARY FOR THIS BUG.  If I haven't convinced you that pinning
isn't the best solution, I hope that I've at least convinced you that it isn't
the only viable one.  Even if you vehemently disagree with the solution I
propose, isn't it a truism that a bug reporting system should summarize bugs by
the character of those bugs, and not by the character of a particular solution?
As an addendum to comment #12, I just tested a text-only html page of about 18
lines, 491 bytes total.  Mozilla required 584K to render it, which means that
keeping the source html in memory alongside it would only require an increase in
memory of 0.08%, even for a simple page with no graphic elements.

I'm doing all this memory profiling somewhat unscientifically using 'top' under
Linux -- can someone verify my findings or provide a more accurate method? 
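For a number that is a bit more repeatable than eyeballing top, a process can ask the kernel for its own peak RSS via getrusage (Unix-only; note the unit difference between Linux and macOS). This sketch measures the measuring script itself, not Mozilla, so it only illustrates the method:

```python
import resource

def peak_rss_kb():
    # ru_maxrss is reported in kilobytes on Linux (bytes on macOS)
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = peak_rss_kb()
big = ["x" * 1000 for _ in range(100_000)]  # roughly 100 MB of string data
after = peak_rss_kb()
print(f"peak RSS grew from {before} to {after} (holding {len(big)} strings)")
```

Unlike top, this counts only the one process and is not inflated by threads sharing the same address space.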
> It has been asked here and in this bug's granddaddy, "why does Netscape 1.x,
> 2.x, 3.x, 4.x and IE get this right when Mozilla can't?"

Um...  In the one test I did with the only IE I have, IE totally failed to get
this right.  Please make sure you actually test with those browsers before
claiming that they get it right.

Also note that Netscape 4.x does not show the actual source, since it replaces
"<script>document.write('foo')</script>" with "foo".  So what NS4 is really
showing you is a result of some bizarre source-DOM interaction...
> I'm doing all this memory profiling somewhat unscientifically
> using 'top' under Linux -- can someone verify my findings or 
> provide a more accurate method? 

I would say it doesn't matter.  This is an issue of correctness.
It doesn't matter if it costs 2% or 20%; it needs to be done.  
If the cache can get it right, okay, but it needs to get it
right in order to be a viable solution.

View source is *important*.  It's _especially_ important for
advocacy, because if you're going to be telling a skeptical
webmaster who has never heard of you that his site is feeding
you different content based on an incorrectly sniffed User Agent
string, you had *better* not screw up when you tell him exactly
what content it is that you're getting.  I would say this even
if ALL other browsers got it wrong, because advocacy is more
important to Mozilla than it is to (for example) IE.

However, regarding the cache solution:  if it gets 100% of cases
right when the cache is sufficiently large, I'm okay with that.
Disk space is cheaper than RAM, byte for byte.  

To get the real source in Netscape 4, use view-source:URL.
This refetches the page though.

I am not sure of the results with top, as it may have to load dynamic code when it loads the page the first time. Try to do the measurements with no cache, after loading the page once, then about:blank. Compare before/after reloading the page.
Let's settle the debate and fix this bug. We *need* to be able to view the
original source and *need* to be able to save unmangled source. 
I'm resummarizing this bug as suggested, because nobody else seems to and it's a
good idea. For historical record, the previous summary was "Need to pin source
files to browser windows via cache tokens for view source". If you were
searching for any words in that summary that are not in the new one, please take
note and change your searches (I couldn't think of a viable title that included
all the useful words from the original).

I'm not too happy with the new summary either because it doesn't cover Save Page
As, but it was already getting overly long. If anyone can come up with a better
summary, please do.

With respect to the issue itself, I honestly don't think that using the cache
versus using memory all the time is a terribly big issue - both solutions are
pretty easy to get right for 100% of cases. The only problem (in both cases!) is
that getting it right in 100% of cases involves making tradeoffs with other
constraints we don't want to violate.

I think most people here are agreed that fixing this issue for 100% of cases is
paramount - that is, *one* of the other constraints will have to be violated,
whether it be cache size, memory usage, or something else. The decision that
needs to be made is *which* other constraint is to be violated, and while that
obviously needs to be decided on, I don't think it should be the subject of such
vicious flamewars as we've had so far!

Nobody (at least, so far in this bug) has suggested that "getting view source
wrong in some cases" is ever acceptable behavior (except maybe for special
purposes like embedding where low footprint is paramount); the implementation
details don't really matter that much, and we have a number of promising
approaches to achieve that goal.

Al, I think that using that page for reference is going to be misleading.
We're trying to figure out how the pathological cases behave, not the "normal"
ones. is a pretty big document, you could try
testing that (make sure your memory cache is disabled when testing, or your
memory increase figures will be skewed). is another, with much of the content
hidden by default. Test on the biggest documents you can find. Try huge
plaintext documents like RFCs or mailing list digests for high-traffic lists.
Try huge html documents where the bulk of text is in a <pre>. If you get good
figures for *those* cases, and with the memory cache disabled, then you might be
able to make a case.

Also, remember when using top to only count the figures for *one* mozilla
process. People who use top have a tendency to produce values multiplied by the
number of mozilla threads, when in fact the vast majority of memory is shared
between the various threads.
Summary: Need to pin source files to browser windows via cache tokens for view source → View Source gets wrong source when same URL is open in two windows simultaneously with different content
Both solutions appear to be viable; the debate seems mostly to be a matter of trade-offs:

1) Source attached to DOM :
  Pro : speed, no disk cache required
  Con : requires more RAM than #2a or #2b

2a) RAM cache pinning :
  Pro : speed, no disk cache required
  Con : requires more RAM than #2b, but possibly less than #1

2b) Disk Cache pinning :
  Pro : less RAM usage
  Con : disk access slower than RAM access, requires sufficient disk cache

*true multi-tiered cache implementation could combine #2a and #2b

I realize that this a rather simplified view, but are there any other issues?

Ideally Mozilla would provide both methods; developers, embedders, and users
could then choose which trade-off was best for them.  Some might even choose
"none of the above" if RAM and disk were both very limited and source accuracy
was not important (especially embedders whose products do not need View Source,
Save Page, Send Page, etc).

Most of your average web users are going to have plenty of RAM and disk space
for either solution; probably more likely to have an extra abundance of HD space
than RAM though I suppose.

Personally I have no significant preference as to which solution is implemented,
as long as at least one of them is.   :)
so how about:

    leave current behavior as the default
        mail-page/view-source/save-as mostly works, but sometimes errs 

    create a pref which is turned *off* by default
        pref enables stashing of zip'ed source in DOM
        pref enables all m-p/v-s/s-a to use that source
        pref's UI explains the speed/RAM/correctness tradeoffs

Embedders can decide whether to enable/expose the pref. Most users will never
know it's there, and web developers and evangelizers will enable it to meet
their needs.

A future enhancement could make the behavior enabled for certain conditions but
not others, but initially it'd be all-or-nothing with the DOM-stashed-source.

              Mozilla Mem   Page Size   Adtl Mem
              3388K         182K        5.4%
              5400K         514K        9.5%

  So in the /pathological/ case, we'd be adding perhaps up to 10% extra "bloat"
for currently viewed pages, which may definitely impact my in-memory proposal,
but let me add three caveats: 1) the html for pathological pages is /still/ at
least an order of magnitude smaller than the footprint of the internal DOM
representation, 2) let me stress that the additional footprint is released as
soon as the page goes out of view -- we aren't talking about a 2-10% tax on
EVERY page viewed, just the currently active page views, and 3) pathological
cases are, well, not the norm (but I'm glad Stuart pointed them out).

  For the record, I hope I haven't been a jerk about my views.  The work Rick
did on getting this working by pinning is absolutely fabulous and I can't tell
you how pleased I am that this is getting in for 1.0.  Above all, I just want
this fixed, and the implementation is rightly left up to the implementor, i.e.,
not someone like me who just cheers from the sidelines.  But the reason I've been
cheering so loudly is that it really seems like the in-memory solution is
simpler, more maintainable, less fragile, and the memory cost is almost free
anyway.  I'd hate to see this become a sore point for future expansion and
maintenance.  But, if it is decided that the cache-pinning strategy is the best
way to go, I'm behind that.  I'm just trying to impose the dialectic process on
the decision.
Well, the reason that storing the source in the DOM is considered excessive is
that in the 95-99% of cases that *already work*, the source is already in the
cache. Storing it in the DOM would mean storing it twice. Storing the same thing
twice *is* bloat, regardless of how small a percentage it adds to the overall
size of the browser. So if we were to use the DOM approach, we should probably
ensure that documents visible in windows should *not* be in the cache at all.
Which has its own problems.

This is why I believe that pinning is the solution, because pinning means that
we just tell the cache that it has to hold on to the entries the other 5% of the
time as well, rather than building a whole new mechanism and messing with the
existing cache to avoid duplication. The cache already has logic to decide
whether to hold things in memory or on disk, and presumably to move things
between them. If we do decide that expanding the disk cache beyond its stated
bounds is absolutely verboten, we could just insist that only memory cache is
used for this purpose. Or provide the checkbox pref, as I've previously suggested.
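The pinning policy argued for here can be sketched abstractly (all names are hypothetical; this is not the real necko cache API, just the eviction behavior under discussion, where pinned entries may push the cache past its nominal limit):

```python
from collections import OrderedDict

class PinningCache:
    """Toy LRU cache whose pinned entries are exempt from eviction."""

    def __init__(self, max_entries):
        self.max_entries = max_entries
        self._entries = OrderedDict()   # url -> body, oldest first
        self._pins = {}                 # url -> pin count

    def put(self, url, body):
        self._entries[url] = body
        self._entries.move_to_end(url)
        self._evict()

    def get(self, url):
        return self._entries.get(url)

    def pin(self, url):
        # e.g. called when a window starts viewing this url
        self._pins[url] = self._pins.get(url, 0) + 1

    def unpin(self, url):
        if self._pins.get(url, 0) <= 1:
            self._pins.pop(url, None)
        else:
            self._pins[url] -= 1

    def _evict(self):
        # Evict oldest unpinned entries; pinned entries may keep the cache
        # over its nominal limit -- exactly the trade-off debated here.
        victims = [u for u in self._entries if u not in self._pins]
        while len(self._entries) > self.max_entries and victims:
            del self._entries[victims.pop(0)]
```

Here a window's source survives as long as it holds a pin, at the cost of letting the cache temporarily exceed its configured size.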

I definitely don't think the proposed pref to behave "correctly" should be off
by default. It will only impact pathological situations (where the cache is set
too small to hold all currently open windows) and the only people I can imagine
not wanting it are embedders and people with exacting disk-quota or disk-space
requirements, who will probably already be tuning their cache preferences so
dealing with an extra checkbox on the cache prefs panel won't hurt them much.

Consider the user who discovers this "misbehavior" and can't get the source to a
page they are viewing. For that user, it's *already too late* to fix the problem
by changing the pref - the source is gone for good. On the other hand, if the
user discovers the pref because their disk quota is exceeded, it's not too late
to change the pref and fix the problem.

Storing cache entries zipped would quite possibly be a good idea (IMHO), but
it's separate from this bug. Anyone want to file it?
Oh, and FWIW, I'm just cheering from the sidelines too, and all of Al's last
paragraph (except for the part advocating the in-DOM solution :) ) applies for
me too :)
In response to comment #20 :

However the patch for this bug is implemented, the default preference should be
that it is on.  Correct behavior should be the default with lossy optimization
as an option.  The average user who tries to "Send Page" or "Save Page" for what
he is looking at is not going to think it reasonable that he has to set a pref
to make it work right.  Anyone willing to sacrifice accuracy for a little less
RAM or disk usage can set the pref.

(mid-air collision with Stuart)
> Storing cache entries zipped would quite possibly be a good idea (IMHO), but
> it's separate from this bug. Anyone want to file it?

This needs to be done very carefully, since plugins currently use direct access
to a cache file to run (assuming the cache is big enough, of course; if it's not
there's a fallback mechanism).  You don't want to feed plugins zipped versions
of the data they want.
> This needs to be done very carefully, since plugins currently use direct access
> to a cache file to run (assuming the cache is big enough, of course; if it's not
> there's a fallback mechanism).

Ah, so there already exists a mechanism for dealing with situations when the
cache is not big enough! Can it be used here as well? And if we end up using the pref
allowing the disk cache to grow bigger than the limit when necessary, should
plugins be allowed to use it as well?
> Ah, so there already exists mechanism for dealing with a situations when cache
> is not big enough!

Sort of.  The plugin code asks the cache to stick the data in a file.  The cache
either says it can or it can't.  If it can't, the plugin code splits the data
stream and saves a copy to a temp file that it attempts to create in $TMPDIR (at
this point, if your temp is small we just fail to view the content in the plugin).

I suppose we could do something like that for every single pageload...  it seems
suboptimal, somehow, but I can't put my finger on it.

> should plugins be allowed to use it as well?

No, imo.  The disk cache limit is there for a reason, and the plugin code
already deals with it...
Everybody is talking about the HTML source, but what about the images? If a
user saves a page as 'Web Page, complete', are the images taken from the DOM,
the cache, or are they refetched? If there are cases where they actually are
refetched, these images should be pinned in the cache too for offline saving or

Additionally, I opened two Slashdot windows with IE 5.0 and each one showed its
own source. (And I found no way to disable the disk cache in IE)
In response to comment 20: NO NO NO NO NO NO NO
The current behavior is WRONG.  We want the default behavior to be CORRECT.  If
you want a pref which causes INCORRECT behavior, fine.  But the default behavior
MUST be correct.

As I said over in 40867, the HTML-in-DOM solution would work.  The 'cache
pinning'  (in memory) solution would work.  Both will fail if we run out of memory.

The size studies given earlier in this bug show that the overhead of getting it
right using cache pinning is *insignificant*.  The only case where we'd have
problems is the case where the user is opening an enormous number of windows at
the same time, in which case we hit many other problems first.

The 'cache pinning' solution *cannot* be implemented until SetCacheToken is
finished.  Since the owner of that bug doesn't seem to give a damn, I'm going to
 start looking at that code and see if I can figure out how to implement it myself.

(off topic, GetCacheToken and SetCacheToken are named oddly -- the interface to
the outside should be, say, RequestCacheToken and ReleaseCacheToken, or some such.)
OK, I've been spending hours trying to understand the undocumented code of
Mozilla's cache.

It doesn't look like the cache actually supports pinning!  Note that a
CacheEntry can be overwritten by someone with READ_WRITE access on a descriptor.

If the cache does support pinning, I'd appreciate someone explaining what object
represents a pinned cache entry and how to tell it to stay pinned.
*** Bug 137304 has been marked as a duplicate of this bug. ***
This is vaguely related to bug 85165.
-> Rick 
Assignee: adamlock → rpotts
You know, I'm not sure I agree with bz's comment #27 (that when this is
implemented plugins should not be able to use it). At first glance I did, but
now that I've thought about it a bit...

The whole reason we're dealing with this situation at all is to hold the source
of a file that is *currently being viewed*. The aim is to make sure that content
sticks around, and (one of) the proposed solution(s) is to provide a pref to
allow the disk cache to grow beyond its nominal limits if necessary to store
all currently-viewed documents.

Content being viewed by a plugin could also be considered currently-viewed. So
perhaps the same mechanism *should* be used for plugin data as for data viewed
natively. Of course, this is moot if a memory-cache solution is used since the
plugin can't access the memory cache...
I currently suspect that this bug, and 85165, are due to *overwriting* of an
existing cache entry when a new version of the same URL with the same postdata
is fetched.  It should instead create a new entry and mark the old entry dead. 
(The facilities exist in the cache for both methods.)

I don't know precisely where the overwriting code is, but it would be somewhere
which uses a cache entry descriptor with READ_WRITE access.  If it turns out
that nothing uses READ_WRITE access, then I'm wrong and I'll tell you the
second-most-likely culprit.
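The create-a-new-entry-and-doom-the-old-one behavior described here can be sketched like this (hypothetical names; this is not the real nsICache interface, just the semantics being proposed):

```python
class CacheEntry:
    def __init__(self, url, body):
        self.url = url
        self.body = body
        self.doomed = False  # doomed entries serve existing readers only

class DoomingCache:
    """Toy cache: refetching a URL dooms the old entry instead of
    overwriting it, so a window holding a descriptor keeps its bytes."""

    def __init__(self):
        self._current = {}  # url -> live CacheEntry

    def store(self, url, body):
        old = self._current.get(url)
        if old is not None:
            old.doomed = True        # do NOT mutate old.body in place
        entry = CacheEntry(url, body)
        self._current[url] = entry
        return entry                 # caller holds this as its descriptor

    def lookup(self, url):
        return self._current.get(url)
```

A window doing View Source would read from the descriptor it already holds, not from a fresh lookup, so a later fetch of the same URL cannot change what it sees.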
As I understand it this bug is due to three things; the first is what you
mentioned; the second is the fact that an open window does not hold a cache
token for the source of what it's currently viewing; and the third is that the
setCacheToken method is not implemented and therefore it's impossible to load
data based on a cache token anyway. There also might be a fourth problem which
is that the cache might expunge an entry due to the cache being full even if
said entry is open in a window, but I'm not sure whether that happens if a cache
token is held for the entry.
> I don't know precisely where the overwriting code is

nsHttpChannel::OpenCacheEntry in nsHttpChannel.cpp
This should be 4xp.
I just performed the following test in Netscape 4.x:

1)  Create a file called text.html containing the text "aa"
2)  Open this file in NS4.
3)  Open a new NS4 window
4)  Edit the file to say "bb".
5)  Load the file in the new window
6)  View source on the first window.

It said "bb".  So this is most certainly not 4xp.

Any more misinformation we want to spread?
bz, I'm not sure if this is 4xp or not, but I'm not sure your testcase proves
it: it sounds like you're using a file:/// URL to view the page, and I wouldn't
be surprised if file:/// URLs aren't cached at all.

A more reasonable test would be, say, (watch the message counts on
the topmost story, they go up pretty fast).
I just repeated the test with an http:// url with the same exact results.  And
yes, I do have my memory and disk cache enabled in NS4.  And no, the website
sent no special headers as regards cache.  This is Linux Netscape 4.78, if that matters.
I just tripped over this bug, and swear I've seen the same thing elsewhere
recently. If not, I remember having this behavior in the past few weeks or at
least since June.
Note bug 288462.
Blocks: 288462
Note Bug 115174.

Also note for historical purposes, Bug 40867 (as originally mentioned here in 2002!)
requires bug 85165?
Commenting, in hopes that someone notices that the owner of this issue is gone.
Assignee: rpotts → nobody
QA Contact: adamlock → docshell
Tested this with Firefox on Slashdot and Digg, and could not replicate. Resolving as WFM.
Closed: 18 years ago
Resolution: --- → WORKSFORME
I have no idea how you tested, but this is still a problem.  Not only that, but trivial code inspection indicates that it _has_ to be a problem.
Resolution: WORKSFORME → ---
My test:
1. Open two new windows, one tab to and one tab to on each.
2. Go do other things...
3. Reload both tabs in one of the windows. Noticed new content.
4. View source on the old window, search for strings found only in the new version (headlines).
5. The old window does not have the new headlines, and the new window has the new headlines.

I apologize for acting without doing further tests.
Replicated using Firefox. I set up a simple example of this bug using the following php code.

<?php print "<br>" . md5(uniqid(mt_rand(), true)); ?>

Sample script can be found here

We can use this page (  ) for a test of this bug,
because the "Bug List" contains the server's timestamp.

1. Search bug in Window-1.
2. Wait a moment (It needs more than 1 second).
3. Open new window(Window-2)
4. Execute same search in Window-2
5. View Source at Window-1.
QA Contact: docshell → nobody
Internet Explorer just WORKS as it should. Mozilla hasn't since 2002! Sheesh this REALLY sucks.
I designed a simple testcase that shows this.

This page prints a random number.

Open it in two tabs and then check the source for both. Both show the new number, and I am using
Mozilla/5.0 (Windows; U; Windows NT 5.1; sv-SE; rv: Gecko/20091201 Firefox/3.5.6 (.NET CLR 3.5.30729)

So unless this simple testcase works, this is still a problem.
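Since the original test URLs are long gone, a local stand-in for such a page is easy to write (hypothetical names; it serves a fresh random number with no-cache headers on every request):

```python
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_page() -> bytes:
    # A fresh random number each time, like the testcase described above.
    return f"<html><body>{random.randrange(10**9)}</body></html>".encode()

class RandomPage(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render_page()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Cache-Control", "no-cache")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To reproduce: HTTPServer(("127.0.0.1", 8000), RandomPage).serve_forever(),
# then open http://127.0.0.1:8000/ in two windows and View Source in each.
```

With this running, each window should keep its own number; a View Source that shows some third number has refetched or shared the wrong cache entry.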

And I tested this in other browsers.

FF 5.5: Fails
IE8: Works, shows right source for both windows.
Opera 9.63: Fails, same as Firefox.
Chrome: Fails like Firefox.
Safari 4.0 (530.17): Works as it should.

I believe this needs to be fixed. I have had numerous occasions where I had trouble finding an error because of wrong source, having missed that I had opened the page again in another window.

As it is, I am forced to use a browser other than FF in some tests to be sure that I can see the exact difference between two loadings.
Since this bug is getting no attention I suggest adding the keyword "dataloss" to it.

We are in fact losing data when two pages with the same URL but different content do not store separate sources for "View Source".

See my previous post for an example.
QA Contact: nobody → docshell
Interestingly enough, the testcase in bug 56 actually gives a different result every time you open up View Source on a single tab - it looks like it's constantly being refetched?  Is that something that should be filed separately?
Blocks: 182712
Funny, I just ran into this a couple times recently. It usually happens when I middle-click to open multiple tabs.  Even doing a right-click and "Reload All Tabs" won't correct it.
Severity: normal → S3
Attachment #9384995 - Attachment is obsolete: true

Using a page from, I can verify that this is still open in Firefox 123 in both regular and private mode. Chromium 121 exhibits similar behavior. When right-clicking and selecting "View Page Source" a new fetch is performed (bug 288462).

Based on Comment 36, I have a couple thoughts. Firstly, if a cache page is expunged ("case 4") I would suggest letting that happen and filing a separate bug for that edge case so that this one can be closed. Secondly, perhaps the "View-Source" page could display a meta-header indicating a timestamp of when the page was first requested, and when it was last changed. If the page is refetched because of cache issues, at least the timestamp would make that clear. (The two timestamps could account for the dynamic nature of page updates on the modern web.) This "meta-header" could display other data as appropriate, such as the final URL in case there are redirects, the on-disk status of the cache, any controlling cache-control directives, etc.

And contrary to Comment 59, if I "View Source" on the same page without opening the same site in a new tab, the same source is displayed. In this scenario a second fetch does not occur. Further, if I "View Source" on the second tab without reloading it, the same source is also displayed. That is:

  • Load RNG page in Tab 1 (assume RNG=1)
  • Load RNG page in Tab 2 (assume RNG=2)
  • "View Source" of Tab 1 -> See RNG=3
  • "View Source" of Tab 2 -> See RNG=3
  • Reload Tab 2 (assume RNG=4)
  • "View Source" of Tab 2 -> See RNG=5
  • "View Source" of Tab 1 -> See RNG=5
  • "Inspect (Q)" the RNG in Tab 1 -> See RNG=1

Chrome behaves similarly, except that "view source" of the two tabs always produces new RNGs. It appears that Firefox caches the source page while Chromium does not. Chromium will also show the correct RNG data when using "Inspect" on the first tab.

For reference, this is the exact URL I'm loading:
