Transform FetchStreamReader into a pipe to WritableStreamToOutput
Categories
(Core :: DOM: Streams, task)
People
(Reporter: saschanaz, Assigned: saschanaz)
References
(Depends on 1 open bug)
Attachments
(3 files)
Description
That should make things cleaner. But we need to make sure Object.prototype.then won't interfere. Since we use a custom read request for piping, I guess it should be fine?
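The Object.prototype.then hazard can be illustrated with a standalone TypeScript sketch (not Gecko code): anything that `await`s a plain object picks up a `then` inherited from Object.prototype, whereas a custom read request never routes chunks through promise resolution, so it sidesteps the hazard.

```typescript
// Sketch: why Object.prototype.then can interfere with promise-based piping.
// `await` treats any object with a callable `then` as a thenable -- even when
// the `then` is inherited from Object.prototype. A custom read request avoids
// `await`ing chunks entirely, so a patched prototype cannot hijack them.
async function demo(): Promise<string> {
  // Accidental or malicious patch: every plain object is now a thenable.
  (Object.prototype as any).then = function (resolve: (v: string) => void) {
    resolve("hijacked!");
  };
  try {
    const chunk: any = { value: "real data" };
    // `await` consults chunk.then (inherited!) instead of passing the
    // object through unchanged.
    const result: string = await chunk;
    return result;
  } finally {
    delete (Object.prototype as any).then;
  }
}
```

Here `demo()` resolves to "hijacked!" rather than the chunk object, which is exactly the interference the custom read request avoids.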
Comment 1•1 year ago
Also the pipe should handle WorkerRef properly.
Comment 2•11 months ago
Depends on D193092
Comment 3•11 months ago
Note for me: The listener of BodyConsumer somehow just resolves the promise when the WritableStream is errored: https://wpt.live/fetch/api/response/response-stream-bad-chunk.any.html
Comment 4•11 months ago
Note for me:
- The reason behind comment #3 is that WritableStream calls ReleaseObjects() on error, which currently is not error-aware, so the WritableStreamToOutput implementation just closes the stream without a proper error code. I'd like to make ReleaseObjects aware of the error condition instead of adding an ErrorPropagate() function as InputToReadableStreamAlgorithms does, but the error from the stream is returned as a JS::Value and we need a native error code here. Not clear what to do.
- One notable behavior change here is that the chunk type check now happens in the WritableStream write algorithm of WritableStreamToOutput instead of in the ReadableStream chunk steps of FetchStreamReader. That's technically a divergence from the spec, as the consume body algorithm of the Fetch spec uses the read all bytes algorithm. It means the error takes extra time to surface, since we have to wait for the WritableStream to process its write queue. Maybe we can pass an extra parameter to PipeToPump to do the check in PipeToReadRequest::ChunkSteps.
Comment 5•11 months ago
Depends on D193555
Comment 6•11 months ago
Depends on D193556
Comment 7•10 months ago
Hmm. I should have expected this; it has a major performance problem, because stream piping by design depends heavily on promises. So cases like the WPT /fetch/api/response/many-empty-chunks-crash.html will have to wait on at least 40,000 microtasks sequentially to finish, while it finishes instantly with the current FetchStreamReader.
Points where we wait on promises:
- PipeToPump::OnReadFulfilled waits on a Promise.resolve() to make it async; we can skip this.
- PipeToPump::Read waits on writer.ready() to make sure the previous write has ended. Can we somehow allow WritableStreamToOutput::OnOutputStreamReady to trigger the next read immediately when done? 🤔
- WritableStream also waits for inFlightWriteRequest to end, which is another promise that resolves after a write promise resolves.
Comment 8•10 months ago
Hmm, given that FetchStreamReader turns out to be quite a bit more performant, at least for some edge cases (in general streams generate chunks asynchronously and will probably never contain 40,000 chunks, but still), maybe the work item here should change to using FetchStreamReader inside the stream pipe when the writable stream is backed by an nsIInputStream, so that we can skip some JS and microtasks. 🤔
Comment 9•10 months ago
Hmm, waiting for a million promises sequentially only takes a few seconds, so taking minutes for only 40k chunks doesn't make sense... This needs more investigation.
Comment 10•10 months ago
One note: Depending on how you benchmarked, you may have run into an optimization that exists within the JS engine, where promises that are created while the job queue is empty don't actually pass through it, and instead can be directly processed.
(not disagreeing that this needs more investigation, just want to caution that measuring the fundamental performance characteristics of promises can be affected by other engine pieces)
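One hedged way to account for that caveat in a microbenchmark (a sketch with arbitrary constants, not a claim about what the engine actually does): occasionally yield to a macrotask so the job queue is non-empty some of the time, rather than only awaiting already-resolved promises in a tight loop.

```typescript
// Sketch of the benchmarking caveat from the comment above: a loop that
// only awaits already-resolved promises may hit an engine fast path where
// jobs bypass the queue. Interleaving a macrotask (setTimeout) every
// `yieldEvery` iterations keeps the queue non-empty part of the time,
// which may give a more representative timing. Constants are illustrative.
async function timeChain(n: number, yieldEvery: number): Promise<number> {
  const start = Date.now();
  for (let i = 0; i < n; i++) {
    if (yieldEvery > 0 && i % yieldEvery === 0) {
      await new Promise<void>((resolve) => setTimeout(resolve, 0));
    }
    await Promise.resolve();
  }
  return Date.now() - start;
}
```

Comparing timeChain(n, 0) against timeChain(n, 1000) gives a rough sense of how much a fast path (if present) skews the plain-loop number.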
Comment 11•10 months ago
Thanks, that's interesting!