Click on link in zombie doc during "Transferring", onload won't fire for loaded document

RESOLVED FIXED in mozilla1.3alpha

Status

Status: RESOLVED FIXED
Priority: P3
Severity: normal
Reported: 16 years ago
Last modified: 14 years ago

People

(Reporter: aaronlev, Assigned: aaronlev)

Tracking

Version: Trunk
Target Milestone: mozilla1.3alpha

Firefox Tracking Flags

(Not tracked)


Attachments

(2 attachments, 1 obsolete attachment)

(Assignee)

Description

16 years ago
If you click on a link to load a document, and then click a different link
before the first document has finished loading (during the "Transferring"
stage), the new document will load but its onload handler won't fire.
(Assignee)

Updated

16 years ago
Status: NEW → ASSIGNED
Priority: -- → P3
Target Milestone: --- → mozilla1.3alpha
(Assignee)

Comment 1

16 years ago
Created attachment 107389 [details] [diff] [review]
Stops content as soon as link clicked

This fixes the problem, but is it the best way?

Who knows enough about these issues to review?
(Assignee)

Comment 2

16 years ago
Created attachment 107390 [details]
Test case

To reproduce, click on "mba" and, during "Transferring", click on "Google".

The caret won't automatically be in Google's edit box, as it would be if
onload had fired.
(Assignee)

Comment 3

16 years ago
An alternate fix might be to test whether the current document is a zombie (by
checking whether a previous viewer is still active). If it is, do the full
Stop(STOP_ALL); otherwise just do the STOP_NETWORK as it currently does.
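
For illustration only, here's a rough sketch of what that check could look
like inside nsDocShell::InternalLoad. This is not the actual patch; the
mContentViewer member and nsIContentViewer::GetPreviousViewer() call are
assumptions based on how the docshell keeps a zombie document's previous
viewer alive.

    // Sketch only: detect a zombie document by asking the current content
    // viewer whether a previous viewer is still attached.
    nsCOMPtr<nsIContentViewer> zombieViewer;
    if (mContentViewer) {
        mContentViewer->GetPreviousViewer(getter_AddRefs(zombieViewer));
    }

    if (zombieViewer) {
        // The current document never finished loading: stop content (parser,
        // pending scripts) as well as the network, so its leftover requests
        // can't end up in the new document's loadgroup.
        rv = Stop(nsIWebNavigation::STOP_ALL);
    } else {
        // Normal case: keep doing what InternalLoad did before and only
        // cancel network activity.
        rv = Stop(nsIWebNavigation::STOP_NETWORK);
    }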
(Assignee)

Updated

16 years ago
Attachment #107389 - Flags: review?(rpotts)
Here's what's going on here...

Using the testcase, you click on the mba link and we start loading mba.com,
which for whatever reason seems like a really slow site. Before mba.com is done
loading you click on the link to google.com, and now we go ahead and cancel all
loads that are currently part of the document's loadgroup, including external
scripts and so on from mba.com. When canceling the load of an external script
(which is the case I saw in the debugger using the testcase) we asynchronously
fire the script loader's OnStreamComplete() code with an error code indicating
that the load was canceled. This ends up telling the sink that the script was
loaded (or failed to load, rather), and the sink tells the parser to continue
parsing. The parser then continues feeding the already received data (the data
that came in while we were waiting for the script to load), and that data ends
up containing yet one more script tag, so we start loading it, and of course we
add that load to the loadgroup that should now only contain the requests for
google.com. All the data from google.com ends up coming in, but the script from
mba.com still hasn't arrived, so we don't fire the onload handler for
google.com, and we won't do that until the script from mba.com actually arrives
and is removed from the loadgroup.

IOW, we *do* fire the onload handler, it's just significantly delayed by a
request that shouldn't have been added to the loadgroup in the first place.
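
To make that concrete, here's a small standalone toy model (not Mozilla code;
the class and request names are made up) of the rule that onload only fires
once the document's loadgroup has no active requests left:

    #include <iostream>
    #include <set>
    #include <string>

    // Toy model: onload fires only when the loadgroup is empty, so a stale
    // request left over from the previous page delays onload for the new page.
    struct LoadGroup {
        std::set<std::string> active;

        void AddRequest(const std::string& url) { active.insert(url); }

        void RemoveRequest(const std::string& url) {
            active.erase(url);
            if (active.empty())
                std::cout << "loadgroup empty -> fire onload\n";
            else
                std::cout << "still waiting on " << active.size() << " request(s)\n";
        }
    };

    int main() {
        LoadGroup group;
        group.AddRequest("mba script");      // leftover request from the first page
        group.AddRequest("google page");     // the newly clicked link
        group.RemoveRequest("google page");  // google.com is done, but onload waits...
        group.RemoveRequest("mba script");   // ...until the stale request finally completes
        return 0;
    }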

I think this fix is "correct", since it ends up telling the parser in the
document whose load was canceled by clicking on the second link to terminate,
so the second script is never loaded from the data that came in over the wire
before the second link was clicked. I do, however, think it would be worth
making this change take effect only when the current document is a zombie
document, as Aaron suggested.

Rick, it turns out that the presshell can't leave dummy requests in the
loadgroup as I thought; they're all guaranteed to be removed already, so
there's no need for that change.



I had a look at what's really happening here. The onload handler *does* fire,
even if you follow the steps laid out for the testcase in this bug; however,
the onload event does *not* fire as soon as all the loads that google.com
initiates are done, because there's more stuff in the loadgroup that's left
over from the target of the initial link that was clicked on. That's the
problem. The one time I caught this in the debugger I saw a JS file from
mba.com be the last thing to be removed from the loadgroup, and 

Um, ignore the last paragraph in my last comment...
(Assignee)

Comment 6

16 years ago
Created attachment 108499 [details] [diff] [review]
Stops content only when in zombie

Same stop content fix in nsDocShell::InternalLoad, but now only does it if
we're in a zombie document (otherwise does what it used to do).
Attachment #107389 - Attachment is obsolete: true
(Assignee)

Updated

16 years ago
Attachment #108499 - Flags: superreview?(jst)
Attachment #108499 - Flags: review?(rpotts)
Comment on attachment 108499 [details] [diff] [review]
Stops content only when in zombie

+    // Stop any current network activity.
+    // Also stop content if this is a zombie doc. otherwise 
+    // the onload won't fire for the new document.

This comment is incorrect. The onload handler *will* fire, but it will be
delayed by other loads initiated in the background by the first document that
didn't fully load before the next load was initiated.

>+    rv = Stop(zombieViewer? (nsIWebNavigation::STOP_ALL) : (nsIWebNavigation::STOP_NETWORK));

For readability's sake I'd rather see this go in as:

+    if (zombieViewer) {
+	 rv = Stop(nsIWebNavigation::STOP_ALL);
+    } else {
+	 rv = Stop(nsIWebNavigation::STOP_NETWORK);
+    }

Other than that, sr=jst
Attachment #108499 - Flags: superreview?(jst) → superreview+
(Assignee)

Updated

16 years ago
Attachment #107389 - Flags: review?(rpotts)

Updated

16 years ago
Attachment #108499 - Flags: review?(rpotts) → review+
(Assignee)

Comment 8

16 years ago
fixed
Status: ASSIGNED → RESOLVED
Last Resolved: 16 years ago
Resolution: --- → FIXED
Product: Browser → Seamonkey