Now that ArrayBuffers are the de facto standard for binary data in JS, they should be the default mechanism for getting data out of an nsIScriptableInputStream.
I would expect this to break add-ons. Is there a good reason why we should fix this?
I am talking about adding one more method to nsIScriptableInputStream. How could this break stuff? That said, I just remembered that we also have nsIBinaryInputStream, which happens to be scriptable and can more or less return an ArrayBuffer. So I could be satisfied with just adding documentation suggesting that consumers avoid the binary parts of nsIScriptableInputStream.
(In reply to comment #3)
> I am talking about adding one more method to nsIScriptableInputStream. How
> could this break stuff?

Updating the IID of the interface will break binary add-ons using this interface.
It is not a problem to change the IID of most interfaces (except a few basics like nsISupports/nsIInterfaceRequestor/nsIComponent*) during the nightly cycle and let it ride the trains. Most binary components recompile for each release and the two that don't are aware of these changes and can work around them.
(In reply to :Ehsan Akhgari from comment #2)
> I would expect this to break add-ons. Is there a good reason why we should
> fix this?

We no longer have binary components, and soon we won't have JS-based XPCOM extensions. However, not having this blocks refactoring our own chrome JS to use TextDecoder/TextEncoder instead of the old, crufty, and duplicative nsIScriptableUConv or nsIConverterInputStream/nsIConverterOutputStream.

The new method should probably take an ArrayBuffer to fill as an argument instead of returning a new one. nsIBinaryInputStream already has this: https://searchfox.org/mozilla-central/source/xpcom/io/nsBinaryStream.cpp#828

It's unclear to me, though:
1) Why does that implementation need an intermediate buffer instead of reading directly into the ArrayBuffer? Is the code worried about the underlying stream being JS-based and triggering GC?
2) Why is the intermediate buffer heap-allocated instead of being a fixed-size stack-allocated buffer (potentially resulting in multiple read calls to the underlying stream)?
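To make the "take an ArrayBuffer to fill" call shape concrete, here is a minimal sketch that models it with a plain mock stream so it runs outside Gecko. The names (makeMockStream, readArrayBuffer) are illustrative, not the real XPCOM API; the decoding step at the end shows the TextDecoder pattern chrome JS would use instead of nsIScriptableUConv:

```javascript
// Sketch of the "fill a caller-provided ArrayBuffer" call shape that
// nsIBinaryInputStream.readArrayBuffer() uses, modeled with a plain
// mock stream. All names here are illustrative, not the XPCOM API.
function makeMockStream(bytes) {
  let pos = 0;
  return {
    // Mirrors nsIInputStream::Read(): copies up to `count` bytes into
    // `dest` at `offset`, returns how many were copied (0 at EOF).
    read(dest, offset, count) {
      const n = Math.min(count, bytes.length - pos);
      dest.set(bytes.subarray(pos, pos + n), offset);
      pos += n;
      return n;
    },
  };
}

// Fill `buffer` with up to `count` bytes; return the number read.
function readArrayBuffer(stream, count, buffer) {
  const view = new Uint8Array(buffer, 0, Math.min(count, buffer.byteLength));
  let total = 0;
  while (total < view.length) {
    const n = stream.read(view, total, view.length - total);
    if (n === 0) break; // end of stream
    total += n;
  }
  return total;
}

const utf8 = new TextEncoder().encode("héllo");
const stream = makeMockStream(utf8);
const buf = new ArrayBuffer(64);
const nRead = readArrayBuffer(stream, 64, buf);
// Decode only the filled region of the buffer back into a string.
const text = new TextDecoder().decode(new Uint8Array(buf, 0, nRead));
console.log(nRead, text); // 6 héllo
```

Note how the caller owns the ArrayBuffer and the method only reports how many bytes it wrote, which avoids allocating a fresh buffer per read.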
> Is the code worried about the underlying stream being JS-based and
> triggering GC?

Yes, but the underlying stream doesn't even have to be JS-based to trigger GC. That said, given https://groups.google.com/d/msg/mozilla.dev.platform/PChACMh1W8Y/LbrRbWj7BQAJ, maybe we can simply push the AutoCheckCannotGC out of the loop once Steve's changes to the rooting analysis land and we mark nsIInputStream builtinclass. If that succeeds, then we will know GC is in fact not possible here and can do the read directly into the ArrayBuffer.

> Why is the intermediate buffer heap-allocated instead of being a fixed-size
> stack-allocated buffer?

There doesn't seem to be a really good reason.

> potentially resulting in multiple read calls to the underlying stream

I don't think so. The current heap allocation is of size min(4096, aLength). Just doing a stack allocation of size 4096 and then reading min(4096, aLength) bytes into it would produce the same pattern of Read() calls as we have now.
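The claim about the Read() pattern can be checked directly: whether the scratch buffer is heap-allocated at size min(4096, aLength) or a fixed 4096-byte stack buffer, each iteration reads min(scratchSize, bytesRemaining) bytes, so the call sequence is identical. A small model (names illustrative, not the Gecko implementation):

```javascript
// Returns the sizes of the Read() calls made on the underlying stream
// when copying `aLength` bytes through a scratch buffer of `scratchSize`.
function readCallSizes(aLength, scratchSize) {
  const calls = [];
  let remaining = aLength;
  while (remaining > 0) {
    const chunk = Math.min(scratchSize, remaining);
    calls.push(chunk); // one Read() into the intermediate buffer
    remaining -= chunk;
  }
  return calls;
}

// Heap buffer of size min(4096, aLength) vs. fixed 4096-byte buffer:
const heapStyle = readCallSizes(10000, Math.min(4096, 10000));
const stackStyle = readCallSizes(10000, 4096);
console.log(heapStyle);  // [ 4096, 4096, 1808 ]
console.log(stackStyle); // [ 4096, 4096, 1808 ]
```

For aLength below 4096 the two also agree, since min(scratchSize, remaining) clamps the first read either way.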
I doubt this needs to be tracked for QuantumFlow...