The unknown content type handler is currently cancelling the nsIChannel it needs to download in order to stop the AsyncRead that the doc loader initially started on it. It's cancelling it because it needs to start a new, but synchronous, download on the same channel using OpenInputStream(). Once AsyncOpen is in place this extraneous Cancel() goes away.
Bulk move of all Necko (to be deleted component) bugs to new Networking component.
Is this a beta stopper?
yes. The unknown content type handler needs to become an nsIStreamListener and handle data asynchronously.
Hey Jud, without AsyncOpen are we just going to live with this for beta? As long as the uri loader is using AsyncRead to open the url, I don't see how the unknown content handler can make blocking reads on the same channel....
Jud, without async open does this bug belong back to you? i'm going to re-assign to you for now....lemme know if it should go somewhere else.
Bill, the unk-content-viewer itself needs to impl nsIStreamListener. The URI dispatcher will pump it with OnData() calls. When it gets an OnStart() request it should reset itself as the notification callback listener (so it gets status info), and throw the file picker dialog.
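The plan above can be sketched as follows. This is a simplified stand-in, not the real XPCOM interfaces (the actual nsIStreamListener methods take nsIRequest/nsISupports contexts and return nsresult); the names and shapes here are illustrative only:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical, simplified stand-ins for the real nsIChannel/nsIStreamListener.
struct Channel;  // opaque here

struct StreamListener {
    virtual ~StreamListener() = default;
    virtual void OnStartRequest(Channel* channel) = 0;
    virtual void OnDataAvailable(Channel* channel, const char* data, std::size_t len) = 0;
    virtual void OnStopRequest(Channel* channel) = 0;
};

// The unknown-content handler becomes a listener itself: on OnStartRequest
// it would (in the real code) reset itself as the notification callback
// listener and throw the file picker dialog, then simply accept the
// OnDataAvailable pumps coming from the URI dispatcher.
struct UnknownContentHandler : StreamListener {
    bool pickerShown = false;
    std::vector<char> buffered;

    void OnStartRequest(Channel*) override {
        pickerShown = true;  // stands in for raising the file picker
    }
    void OnDataAvailable(Channel*, const char* data, std::size_t len) override {
        buffered.insert(buffered.end(), data, data + len);
    }
    void OnStopRequest(Channel*) override {}
};
```

The key point is that no second, blocking read of the channel is needed: the handler passively receives the data the original AsyncRead already produces.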
I'm not sure I understand this new plan of attack. The unknown content type handler doesn't download anything. It's just a layer that directs the user to one of 3 options (only two of which work, BTW): Fetch a plug-in, pick a helper app, or save into a file. The actual saving of the data is done by the nsIStreamTransfer code. I think it should stay there since that code is also used to save data in other circumstances not related to "unknown content." I don't think that it much matters which object actually does the work, does it? In either case, we need to take an already open nsIChannel and redirect its content (OnDataAvailable?) to another nsIStreamListener. Aside from the fact that I'm still not sure *how* to redirect the data (I thought that was going to be predicated on something that came along with AsyncOpen), that *could* happen within nsIStreamTransfer, right? I need to understand this better in order to make it happen. Moving out to M14 (at least).
You're right. The Unk-con-handler should remain as a separate layer. AsyncOpen has fallen by the wayside. Although it solves our problem here, we're not pursuing it, so we need another solution. As far as re-directing the data is concerned, the unk handler is a stream listener (or at least has one), so if the user selects the file picker (from within its OnStartRequest() callback), it's just a matter of having the unk handler's OnData() call through to the streamxferop's OnData(). If it's not possible, or desirable, to block the OnData()'s coming into the unk handler, then it could buffer up the data (or write it to disk in a temp file; when the user finally picks a file, the unk handler could rename the temp file to the one selected by the picker (I believe IE does this)). Does this help?
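The "spool to a temp file, rename when the user picks" strategy could look roughly like this. All names here are hypothetical, and it uses plain standard C++ file APIs rather than whatever Mozilla's file abstraction would actually be:

```cpp
#include <cstdio>
#include <fstream>
#include <string>

// Hypothetical sketch: data is written to a temp file while the picker is
// up, then the temp file is renamed to the user's chosen destination.
class TempFileSpooler {
    std::string tempPath;
    std::ofstream out;
public:
    explicit TempFileSpooler(std::string path)
        : tempPath(std::move(path)), out(tempPath, std::ios::binary) {}

    // Called from the handler's OnData(): bytes keep flowing to disk
    // even while the user is still staring at the file picker.
    void Write(const char* data, std::size_t len) {
        out.write(data, static_cast<std::streamsize>(len));
    }

    // Called once the picker returns: close the spool file and rename it
    // to the chosen path. Returns false if the rename fails.
    bool FinishAs(const std::string& chosenPath) {
        out.close();
        return std::rename(tempPath.c_str(), chosenPath.c_str()) == 0;
    }
};
```

Note that std::rename only works reliably within a filesystem; a real implementation would need to fall back to a copy when the user picks a destination on another volume.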
we need to address this puppy. I can't download anything (http or ftp) using today's build. We cancel the channel, then the streamxfer object gets an OnProgress() and crashes because mInputChannel is null. I'm upping this to beta1.
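The crash scenario described above reduces to an unguarded member dereference: OnProgress() fires after the channel was cancelled and mInputChannel has already been nulled out. A minimal sketch of the defensive fix (class and method names hypothetical, not the actual streamxfer code):

```cpp
// Stand-in for the real channel; only the bit OnProgress needs.
struct Channel {
    long long Transferred() const { return 42; }
};

struct StreamXfer {
    Channel* mInputChannel = nullptr;  // nulled out when the load is cancelled

    // Guarded progress callback: returns -1 instead of crashing when the
    // channel has already been torn down by a Cancel().
    long long OnProgress() {
        if (!mInputChannel)   // channel already cancelled; nothing to report
            return -1;
        return mInputChannel->Transferred();
    }
};
```

Of course the guard only papers over the symptom; the real fix in the bug is to stop cancelling the channel in the first place.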
Hey Jud, what changed yesterday that broke ftp and http download? If something like that regressed, I think we should find out what and mark that as a blocker bug for keeping the tree closed today. A lot of us need ftp so we can download the next day's version =).
I'd love to know :-/. But I have no clue what changed, or when.
I'd suggest filing a separate blocker bug for today's builds so we can find the source of the regression. It may be related to the nsIFile carpool that landed yesterday. I actually haven't been able to get my linux build to run at all since that happened....
done: http://bugzilla.mozilla.org/show_bug.cgi?id=24996 . Please help.
Moving this to M15. I can't discern exactly what the problem is at the moment, but I think it's an issue of how to *better* implement this code. It basically works, which means it's not a beta thing.
Target Milestone: M14 → M15
Because the underlying channel gets cancelled, we return a stop to the original loader. In the case of downloading a file, when keywords are turned on, this causes the webshell to load a keyword url (based on the failed/cancelled load), *and* we throw the save as dialog.
http://bugzilla.mozilla.org/show_bug.cgi?id=30635 This cancelling is causing yet another problem.
Bill, doesn't another file download bug cover this problem?
Priority: P3 → P1
Target Milestone: M15 → M16
if the feature has been implemented, can we close this out and file other bugs?
Whiteboard: [NEED INFO]
[nsbeta2-], due to a lack of a clear, user-visible scenario as to what harm is done by this. Please explain if you want PDT to reconsider.
Move to M20 target milestone.
Target Milestone: M17 → M20
Considering its serious effect on File download problem, this should be brought back in to a closer milestone. I am retargetting it for M17.
Target Milestone: M20 → M17
reassigning to Gagan for more info. Mcafee and nav triage team don't understand the user scenario - please explain in detail.
Assignee: law → gagan
Status: ASSIGNED → NEW
this might be gone with mscott's temp file implementation. mscott? As for the user experience, what's happening right now is that when you try and save a file from the server (through an Unknown Content Type Handler, i.e. you click on a link which we don't handle but would like mozilla to save to disk/open with some app), we do this by aborting an existing transfer and starting a fresh one. Besides the performance issues, this could be a problem in cases where someone downloads stuff as the result of a cgi script that could potentially change the behaviour in two successive requests. Hope I did justice to the bug :)
Oh BTW this should really be law.
Assignee: gagan → law
nav triage team: more questions .... Please verify the following: We run into this problem (aka use the Unknown Content Type handler) anytime we try to download a file that is not displayed in the content area. When you run into this problem, everything works; it is just slow performance. Do we have one example of the possible cgi problem, where the script may lead the second download request to download the wrong thing?
This is no longer a problem (in general); specific issues/scenarios might still be lurking but those should have their own bugs. Closing this one out.
Status: NEW → RESOLVED
Resolution: --- → FIXED
marking verified per previous comments
Status: RESOLVED → VERIFIED