Download via ReadableStream: cancel is dispatched late
Categories: Core :: DOM: Service Workers, defect, P3
People: Reporter: zurldan, Assigned: saschanaz
Attachments: 3 files
User Agent: Mozilla/5.0 (X11; Linux x86_64; rv:66.0) Gecko/20100101 Firefox/66.0
Steps to reproduce:
In my index.html I register a service worker and display 2 links to download a txt file and a png file. The txt file is generated via a ReadableStream in the service worker. This stream has a CountQueuingStrategy with a highWaterMark of 3.
I have a pull counter, and each pull logs the counter to the console. For this test, each pull resolves after 2s, which makes the problem easier to see. I open the console and then click on the txt file link. The browser shows a popup asking whether I want to open, save, or cancel the file. I either click cancel right away, or accept and cancel the download later.
(I attached a zip archive containing the index.html and the service worker to reproduce the issue.)
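A minimal sketch of how the attached service worker is presumably structured (a hypothetical reconstruction; file names, chunk contents, and header values are assumptions, the attached zip archive is authoritative):

// sw.js - hypothetical reconstruction of the attached test case
let pullCount = 0;

self.addEventListener('fetch', (event) => {
  if (!event.request.url.endsWith('.txt')) return; // only the txt link is streamed

  const stream = new ReadableStream(
    {
      pull(controller) {
        pullCount++;
        console.log('pull', pullCount);
        // Each pull resolves after 2s so the delayed cancellation is easy to observe.
        return new Promise((resolve) => {
          setTimeout(() => {
            controller.enqueue(new TextEncoder().encode('some text\n'));
            resolve();
          }, 2000);
        });
      },
      cancel(reason) {
        console.log('canceled', reason);
      },
    },
    new CountQueuingStrategy({ highWaterMark: 3 })
  );

  event.respondWith(
    new Response(stream, {
      headers: {
        'Content-Type': 'text/plain',
        'Content-Disposition': 'attachment; filename="file.txt"',
      },
    })
  );
});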
Actual results:
We can see in the console that the pull counter always increments by 2 after canceling the download and only then logs 'canceled'. This means the ReadableStream finishes its current pull(), calls pull() once more, and only after that second pull() finishes does it go into cancel().
Expected results:
The cancel() method of the object representing the underlying source should be called right away when the stream is canceled; the stream should not keep pulling.
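For comparison, when a stream is canceled directly from JS (no service worker or download involved), cancel() fires without waiting for any in-flight pull. A self-contained sketch, with illustrative names:

// Canceling the reader settles pending reads immediately and invokes
// underlyingSource.cancel(); no further pull() should run afterwards.
const stream = new ReadableStream(
  {
    pull(controller) {
      console.log('pull started');
      // Simulate a slow source: each pull takes 2s.
      return new Promise((resolve) => setTimeout(resolve, 2000));
    },
    cancel(reason) {
      console.log('cancel:', reason); // expected to fire as soon as reader.cancel() is called
    },
  },
  new CountQueuingStrategy({ highWaterMark: 3 })
);

const reader = stream.getReader();
reader.read().then((result) => console.log('read settled:', result)); // { value: undefined, done: true }

// Cancel while the first pull is still pending: cancel() runs right away,
// without waiting for the in-flight pull, and no extra pull() is issued.
setTimeout(() => reader.cancel('user aborted'), 500);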
Updated•6 years ago
Comment 1•6 years ago
I've tried to test this report on Windows 10 and Ubuntu 18.04 using the latest Nightly and Firefox release builds. When I load the provided index.html file, open the console, and click on the txt file link, I get "File not found". I don't get any prompt to open/save/cancel the file. If I create empty files, the link displays a blank page. If possible, please provide more explicit steps so I can test it properly.
Hello, this requires a web server in order to work; if opened directly, the service worker registration fails with this message in the console:
Registration failed with SecurityError: The operation is insecure.
If you need a simple web server to test it, you can go into the test case directory, open a terminal there, and run one of these commands:
// python 2.7
python -m SimpleHTTPServer
or
// python 3
python -m http.server
or
// php
php -S localhost:8000
Then you can open your browser and go to http://localhost:8000 and it should work.
Since I submitted multiple bugs / test cases and you responded to all of them: only one sw.js service worker can be active at a time for the domain (localhost:8000). This means that for each of these test cases, to be safe, you need to close the test case's page, then go to about:serviceworkers and unregister the service worker for Origin: http://localhost:8000. Then you can open the next test case and its service worker will be fetched and registered.
Comment 4•6 years ago
Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0 (20190325095153)
Thanks. I was able to reproduce the mentioned behavior using the provided steps. The pull counter increments by 2 after canceling the download and then logs 'canceled'.
Updated•6 years ago
Comment 5•6 years ago
I see the same behavior. In theory, at least, JS::ReadableStreamReaderCancel
should be called as soon as the user clicks Cancel, regardless of any in-flight read requests. It's not; we have two bugs:
1. We aren't calling it immediately.
2. In fact, we even call JS::ReadableStreamDefaultReaderRead (which calls .pull()) an extra time before cancelling, after the user clicks Cancel.
We ultimately cancel the stream from here.
FetchStreamReader basically reads from the JS ReadableStream as fast as possible and pipes the bytes to a C++ nsIAsyncOutputStream. Comments on nsIAsyncOutputStream say that FetchStreamReader::OnOutputStreamReady
is called when the C++ stream is closed... or ready for data. The FetchStreamReader has no way to know which situation we're in!
That explains bug #1 -- FetchStreamReader needs clearer signals from nsIAsyncOutputStream. Right now FetchStreamReader optimistically assumes the nsIAsyncOutputStream is asking for more data, so it does another read and plows ahead until there's an actual error writing to the C++ stream.
I'm not sure about bug #2. I think the FetchStreamReader could still avoid the extra read request, maybe just by tracking whether it already has one in flight? baku, do you want to take a look?
Comment 6•6 years ago
I think this behavior is permissible as far as the spec is concerned; we're talking about a quality-of-implementation issue here. Ideally cancellation would be reported to the Service Worker promptly.
Updated•6 years ago
Comment 7•6 years ago
I agree with comment 5, but I don't know when I'll have time to work on it. Keeping the NI.
Comment 9•5 years ago
:baku is of course capable and has unique expertise in this area, but needn't be the person to implement the fix. I wrote a test for browser download cancellation some time ago that explicitly tests this scenario (https://searchfox.org/mozilla-central/source/dom/serviceworkers/test/browser_download_canceled.js is the test and handleStream
in https://searchfox.org/mozilla-central/source/dom/serviceworkers/test/download_canceled/sw_download_canceled.js is the SW stream logic). This should be something we can augment to make sure the cancel happens in a timely fashion.
Step 1 would likely be to run the test under rr/pernosco and then see how the analysis in comment 5 applies to what's happening.
Comment 10•3 years ago
Given your involvement in streams in general, would you mind verifying this with DOM Streams?
Updated•3 years ago
Comment 11 (Assignee)•3 years ago
The DOM Streams implementation still shows the same behavior. Per comment #5 this is a Fetch problem rather than a Streams problem, so I think it'd be better if a Fetch expert could take a look.
I can still take a look if no one is available.
Comment 12•3 years ago
Eden, I assume you are such an expert.
Comment 13•2 years ago
Kagami, Eden will not be able to look into this for a while. Could you take a look?
Comment 14 (Assignee)•2 years ago
Putting this into my backlog...
Updated•2 years ago
Comment 15 (Assignee)•2 years ago
> FetchStreamReader basically reads from the JS ReadableStream as fast as possible and pipes the bytes to a C++ nsIAsyncOutputStream. Comments on nsIAsyncOutputStream say that FetchStreamReader::OnOutputStreamReady is called when the C++ stream is closed... or ready for data. The FetchStreamReader has no way to know which situation we're in!
> That explains bug #1 -- FetchStreamReader needs clearer signals from nsIAsyncOutputStream. Right now FetchStreamReader optimistically assumes the nsIAsyncOutputStream is asking for more data, so it does another read and plows ahead until there's an actual error writing to the C++ stream.
Revisited this and nsIAsyncOutputStream indeed does not expose why OnOutputStreamReady is called. Internally nsPipeOutputStream::OnOutputException sets mWritable to false but this member is not exposed, so FetchStreamReader doesn't know what to do even when it's pinged for cancellation.
Jens, turns out you are an XPCOM peer. Any idea what to do here?
Comment 16•2 years ago
(In reply to Kagami [:saschanaz] from comment #15)
> Jens, turns out you are an XPCOM peer. Any idea what to do here?
No, not really. I see nika changed that function recently and asuth reviewed it.
Comment 17•2 years ago
(In reply to Kagami [:saschanaz] from comment #15)
> Revisited this and nsIAsyncOutputStream indeed does not expose why OnOutputStreamReady is called. Internally nsPipeOutputStream::OnOutputException sets mWritable to false but this member is not exposed, so FetchStreamReader doesn't know what to do even when it's pinged for cancellation.
This is intentional. You can find out if the stream is able to accept more data by trying to write into it, which is how you tell the difference between closure and non-closure.
We already have a type which handles closure and copying in this way, nsAStreamCopier, which also handles hand-offs between each side as waiting vs. not waiting correctly. It is not possible for us to give you "more accurate signals" from OnOutputStreamReady, because we could be in the situation where, when we dispatched the callback, we thought there was more space available, but then by the time it had arrived, the stream had actually been closed, so it should be treated as a closure instead.
I'll leave the ni? for myself to look into the behaviour of FetchStreamReader, and perhaps will write up a patch at some point to fix the issues so that it is more well-behaved as a stream listener, similar to how nsAStreamCopier behaves. It's probably not calling AsyncWait correctly, as that API is unfortunately very difficult to use correctly.
Comment 18 (Assignee)•2 years ago
That would be a great help, thanks! (Not only to fix this issue, but probably also for bug 1810107.)
Comment 19•2 years ago
Before these changes, there were a number of cases where it would be
possible for the pipe to be closed or otherwise encounter an error, and
for there to be a very large or potentially indefinite delay before that
error is reported to the ReadableStream.
This patch avoids changing too much of the structure of the existing
FetchStreamReader's design, while avoiding the potential pitfalls, and
ensuring that the stream is being consistently watched.
A more polished patch in the future may be able to perform a more
efficient and direct adaptation from a ReadableStream or WritableStream
to the nsIAsyncInputStream and nsIAsyncOutputStream types, but that was
not tackled here.
Updated•2 years ago
Comment 20 (Assignee)•2 years ago
r+ for the patch but it needs a test.
Comment 21 (Assignee)•2 years ago
Hmm. I guess there's still some room for improvement even with the patch.
Currently:
- Requests via fetch() need one round of read requests to get cancelled, and the browser download needs two rounds.
With the patch applied:
- Requests via fetch() need zero rounds of read requests to get cancelled (immediate cancellation 👍), and the browser download needs one round.
We are making it better, though!
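For context, a hypothetical page-side snippet (not from the attachments) for exercising the fetch() path against the streaming service worker sketched above; the number of extra 'pull' logs between the cancel and the 'canceled' log corresponds to the "round of read requests" mentioned here:

// Cancel the response body reader and watch how many additional pull() logs
// the service worker emits before it logs 'canceled'.
async function probeFetchCancellation() {
  const response = await fetch('/file.txt'); // assumed to be served by the SW sketch above
  const reader = response.body.getReader();

  await reader.read();            // let at least one chunk flow
  await reader.cancel('enough');  // cancellation should propagate back to the SW's ReadableStream
  console.log('consumer-side cancel issued');
}

probeFetchCancellation();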
Comment 22 (Assignee)•2 years ago
For other browsers:
Safari: requests via fetch() cancel immediately, while the browser download never gets canceled 🤔
Chrome: requests via fetch() cancel immediately, while the browser download 1) needs one round on the stable channel and 2) cancels immediately on canary.
Comment 23 (Assignee)•2 years ago
Comment 24•2 years ago
Comment 26•2 years ago
bugherder
Comment 28 (Assignee)•2 years ago
Forgot to remove leave-open.