Open Bug 1825763 Opened 2 years ago Updated 10 months ago

Transform FetchStreamReader into a pipe to WritableStreamToOutput

Categories

(Core :: DOM: Streams, task)

People

(Reporter: saschanaz, Assigned: saschanaz)

References

(Depends on 1 open bug)

Attachments

(3 files)

That should make things cleaner. But we need to make sure Object.prototype.then won't interfere. Since we use a custom read request for piping, I guess it should be fine? (See the sketch below.)

Also the pipe should handle WorkerRef properly.
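
To spell out the Object.prototype.then concern: this is not Gecko code, just a minimal page-JS sketch (TypeScript) of why it matters. Resolving a promise with a plain-object chunk consults the chunk's `then` property, including one inherited from Object.prototype, so a hostile page can hijack the value. A custom read request that hands the chunk straight to native chunk steps never goes through that thenable unwrapping.

```ts
// Minimal sketch (not Gecko code): a patched Object.prototype.then hijacks
// any promise that is resolved with a plain object, because promise
// resolution looks up `then` along the prototype chain.
(Object.prototype as any).then = function (resolve: (v: unknown) => void) {
  resolve("hijacked");
};

async function chunkViaPromise(): Promise<unknown> {
  // Wrapping the chunk in a promise goes through thenable unwrapping...
  return await Promise.resolve({ data: new Uint8Array(4) });
}

chunkViaPromise().then((v) => {
  console.log(v); // "hijacked" -- the real chunk object is lost
  delete (Object.prototype as any).then;
});
```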

Note to self: the listener of BodyConsumer somehow just resolves the promise when the WritableStream is errored: https://wpt.live/fetch/api/response/response-stream-bad-chunk.any.html
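
For reference, a rough approximation (TypeScript, not the exact WPT body) of what that test exercises: the body stream produces a chunk that is not a Uint8Array, and consuming the Response body is expected to reject rather than resolve.

```ts
// Rough approximation of response-stream-bad-chunk.any.html (the real test
// covers several consumption methods): a body stream yields a non-Uint8Array
// chunk, and consuming the body must reject, not resolve.
const badBody = new ReadableStream({
  start(controller) {
    controller.enqueue("not a Uint8Array"); // bad chunk type
    controller.close();
  },
});

new Response(badBody)
  .arrayBuffer()
  .then(() => console.log("BUG: resolved despite the bad chunk"))
  .catch((e) => console.log("rejected as expected:", e));
```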

Note to self:

  1. The reason behind comment #3 is that WritableStream calls ReleaseObjects() on error, which currently is not error-aware, so the WritableStreamToOutput implementation just closes the stream without a proper error code. I'd like to make ReleaseObjects() aware of the error condition instead of having an ErrorPropagate() function the way InputToReadableStreamAlgorithms does, but the error from the stream comes back as a JS::Value and we need a native error code here. It's not clear what to do.
  2. One notable behavior change here is that the chunk type check now happens in the WritableStream write algorithm of WritableStreamToOutput instead of in the ReadableStream chunk steps of FetchStreamReader. That's technically a divergence from the spec, since the consume-body algorithm of the Fetch spec uses the read-all-bytes algorithm, and it means the error takes longer to surface because we have to wait for the WritableStream to process its write queue. Maybe we can pass an extra parameter to PipeToPump to do the check in PipeToReadRequest::ChunkSteps; see the sketch after this list.
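
PipeToPump and PipeToReadRequest are C++, so this is only a hypothetical JS-level sketch (TypeScript) of the difference item 2 describes: checking the chunk type right after each read surfaces the error immediately, whereas leaving the check to the sink's write algorithm means waiting for the writable stream to dequeue the write first. `pipeWithEarlyCheck` is an illustrative name, not an existing API.

```ts
// Hypothetical sketch: do the Uint8Array check in the "chunk steps" (right
// after read()) instead of leaving it to the sink's write algorithm, so the
// error does not wait behind the writable stream's write queue.
async function pipeWithEarlyCheck(
  readable: ReadableStream<unknown>,
  writable: WritableStream<Uint8Array>,
): Promise<void> {
  const reader = readable.getReader();
  const writer = writable.getWriter();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    if (!(value instanceof Uint8Array)) {
      // Early check, analogous to doing it in PipeToReadRequest::ChunkSteps.
      const err = new TypeError("chunk is not a Uint8Array");
      await writer.abort(err);
      reader.releaseLock();
      throw err;
    }
    await writer.ready;
    await writer.write(value);
  }
  await writer.close();
}
```
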
Attachment #9362652 - Attachment description: WIP: Bug 1825763 → WIP: Bug 1825763 - Replace FetchStreamReader with piping
Assignee: nobody → krosylight

Hmm. I should have expected this; it has a major performance problem because stream piping by design depends heavily on promises. So cases like WPT /fetch/api/response/many-empty-chunks-crash.html will have to wait on at least 40,000 microtasks sequentially to finish, whereas it finishes instantly with the current FetchStreamReader.
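
Rough illustration (TypeScript, not Gecko code) of where those sequential microtasks come from: a promise-based drain pays at least one microtask turn per chunk, so a stream of many tiny chunks serializes the whole transfer onto the microtask queue, whereas the current FetchStreamReader drains on the native side without any script-visible promises. The 40,000 figure mirrors the WPT test; actual timings vary by engine.

```ts
// Each reader.read() resolves a promise, so draining N chunks costs at least
// N sequential microtask turns even when every chunk is already available.
const CHUNKS = 40_000;

function makeStream(): ReadableStream<Uint8Array> {
  let i = 0;
  return new ReadableStream<Uint8Array>({
    pull(controller) {
      if (i++ < CHUNKS) controller.enqueue(new Uint8Array(0));
      else controller.close();
    },
  });
}

async function drainWithPromises(): Promise<void> {
  const reader = makeStream().getReader();
  const start = performance.now();
  for (;;) {
    const { done } = await reader.read(); // one microtask turn per chunk
    if (done) break;
  }
  console.log(`promise-per-chunk drain: ${(performance.now() - start).toFixed(1)} ms`);
}

drainWithPromises();
```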

Points where we wait on promises:

  1. PipeToPump::OnReadFulfilled waits on a Promise.resolve() to make it async; we can skip this.
  2. PipeToPump::Read waits on writer.ready() to make sure the previous write has finished. Can we somehow allow WritableStreamToOutput::OnOutputStreamReady to trigger the next read immediately when it's done? 🤔 (A simplified JS analogue of this wait is sketched after the list.)
  3. WritableStream will also wait for inFlightWriteRequest to settle, which is another promise that only resolves after a write promise resolves.
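
Again only a hypothetical JS analogue (TypeScript) of the waits listed above, since the real code is C++: one read/write round trip parks on writer.ready, reads, and hands the chunk to the sink without awaiting the write promise, so each chunk still pays several chained promise resolutions before the next read can start. `pipeRound` is an illustrative name.

```ts
// Simplified analogue of one PipeToPump round trip: wait for writer.ready
// (point 2), read the next chunk, then hand it to the sink. The write promise
// itself is not awaited; backpressure comes back through writer.ready, which
// in turn waits for inFlightWriteRequest to settle (point 3).
async function pipeRound(
  reader: ReadableStreamDefaultReader<Uint8Array>,
  writer: WritableStreamDefaultWriter<Uint8Array>,
): Promise<boolean> {
  await writer.ready;                          // previous write has been processed
  const { done, value } = await reader.read(); // another promise per chunk
  if (done || value === undefined) return false;
  writer.write(value).catch(() => {});         // errors observed via writer.closed in real code
  return true;
}
```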

Hmm, given that FetchStreamReader turns out to be quite a bit more performant at least for some edge cases (in general, streams generate chunks asynchronously and will probably never contain 40,000 chunks, but still), maybe the work item here should change to using FetchStreamReader inside the stream pipe when the writable stream is backed by nsIInputStream, so that we can skip some JS and microtasks. 🤔

Hmm, waiting for a million promises sequentially only takes a few seconds, so taking minutes for only 40k chunks doesn't make sense... This needs more investigation.
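
A quick sanity check for the "a million sequential promises only take a few seconds" figure (TypeScript; the numbers vary a lot by engine, and see the next comment for a caveat about how such a microbenchmark can hit a promise fast path).

```ts
// Microbenchmark: N sequentially awaited, already-resolved promises.
async function sequentialAwaits(n: number): Promise<void> {
  const start = performance.now();
  for (let i = 0; i < n; i++) {
    await Promise.resolve(i);
  }
  console.log(`${n} sequential awaits: ${(performance.now() - start).toFixed(1)} ms`);
}

sequentialAwaits(1_000_000);
```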

One note: depending on how you benchmarked, you may have run into an optimization in the JS engine where promises that are created while the job queue is empty don't actually pass through it and can instead be processed directly.

(not disagreeing that this needs more investigation, just want to caution that measuring the fundamental performance characteristics of promises can be affected by other engine pieces)
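
If the optimization described above only applies while the job queue is empty (an assumption based on this comment, not something I've verified in the engine), one way to try to avoid measuring that fast path is to keep another microtask chain ticking during the benchmark; whether this actually defeats the fast path in a given engine is not guaranteed.

```ts
// Keep the microtask queue non-empty while measuring, so already-resolved
// promises can't (in theory) take the empty-queue fast path described above.
let ticking = true;
(function tick(): void {
  if (ticking) Promise.resolve().then(tick); // always one pending microtask
})();

async function measureWithBusyQueue(n: number): Promise<void> {
  const start = performance.now();
  for (let i = 0; i < n; i++) await Promise.resolve(i);
  ticking = false;
  console.log(`${n} awaits with a busy queue: ${(performance.now() - start).toFixed(1)} ms`);
}

measureWithBusyQueue(1_000_000);
```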

Thanks, that's interesting!
