Open Bug 1757066 Opened 3 years ago Updated 2 years ago

cloneInto cannot clone a Promise()

Categories

(Core :: XPConnect, defect)

Firefox 97
defect


People

(Reporter: minfrin, Unassigned)

Details

User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.3 Safari/605.1.15

Steps to reproduce:

It appears to be impossible to return a Promise from a function made visible using exportFunction().

Actual results:

Using the following code from within a function exported by exportFunction:

const response = "";
const promise = new Promise(
  function (resolve, reject) {
      resolve(response);
  }
);
return cloneInto(promise, window, {cloneFunctions: true});

the cloneInto fails with the following exception:

Error: Encountered unsupported value type writing stack-scoped structured clone

Expected results:

The Promise should have been successfully cloned.

Simplifying this down to one line:

  return cloneInto(Promise.resolve(""), document.defaultView, {cloneFunctions: true});

The cloneInto fails with:

Error: Encountered unsupported value type writing stack-scoped structured clone

Stumbled on an offhand comment that pointed out the behaviour of the async keyword.

In my case the function is declared async as follows:

myFunction = async function doFunction(source, options, ...CAs) {

  const manifest = browser.runtime.getManifest();
  const response = cloneInto({ name: manifest.name, version: manifest.version }, window.wrappedJSObject);

  return new window.wrappedJSObject.Promise(
    exportFunction(
      function (resolve, reject) {
        resolve(response);
      },
      window.wrappedJSObject
    )
  );

}

exportFunction(myFunction, window, {defineAs: "myFunction"});

It appears that the implicit Promise created when an async function is exportFunction'ed is created in the wrong scope, which has the effect of wrapping the result in something that cannot be accessed from page scope.

To fix this, when an exportFunction'ed function is declared async, the implicit Promise should be created in the same scope that the function was exported into.
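
For illustration, a minimal sketch of the failure as I understand it (the function name is made up for the example):

// Content script: export an async function.
exportFunction(async function ping() {
  // The implicit Promise wrapping this return value is created in the
  // content-script realm, not in the page realm.
  return "pong";
}, window, { defineAs: "ping" });

// Page script: the returned Promise is opaque to the page.
window.ping().then(v => console.log(v));
// -> Error: Permission denied to access property "then"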

I have been trying to use an exportFunction to allow the page script to call the content script.

In the exported function a ReadableStream is used to generate content and return it to the caller as a Promise. I cannot find any combination of code that returns a Promise usable from page-scope code. The error remains as follows:

Error: Permission denied to access property "then"

Is there a concrete example of a content script returning a Promise to page code, where content script code can respond to content script events?

From this bug:

https://bugzilla.mozilla.org/show_bug.cgi?id=1436276#c5

In theory, specs should be defining how this happens. In practice, it's a mess.

Is Promise + ReadableStream + exportFunction viable, or should promises be avoided for now?

Moving this to Core :: DOM: Core & HTML, so that our dev team can take a look at this - if this is not the right component, please set it to a more suitable one.
Thanks!

Component: Untriaged → DOM: Core & HTML
Product: Firefox → Core

It's unclear to me what a structured clone of a promise would look like, tbh.

Yes.

More specifically: if a web extension content script exportFunction'd a function that returns a ReadableStream, the idea being that the content script would drip-feed data into the ReadableStream for the page script to process, is that currently possible, and what would it look like?

Where the wheels seem to fall off is that the ReadableStream returned to the page script contains code defined in the content script which in turn returns a Promise defined in the content script.

I've been trying for some time to find a combination of code that avoids either a "Permission denied" error when calling the Promise's then function, or an "Encountered unsupported value type writing stack-scoped structured clone" error when trying to cloneInto the Promise into the page scope, and so far it looks like neither scenario is possible.
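
To make that concrete, the shape of what I've been attempting is roughly this (identifiers are illustrative, not my exact code):

// Content script
exportFunction(function getStream() {
  return new ReadableStream({
    start(controller) {
      controller.enqueue("chunk");
      controller.close();
    }
  });
}, window, { defineAs: "getStream" });

// Page script
const reader = window.getStream().getReader();
reader.read().then(({ value }) => console.log(value));
// -> fails with one of the two errors above, depending on whether the
//    stream and its promises are left wrapped or cloneInto'd into page scope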

Status: UNCONFIRMED → NEW
Ever confirmed: true

I will try to spend some time looking at this more closely later today; two comments until then:

  1. I'd be fascinated to hear more about your use case. I've recently replaced the implementation of ReadableStreams in Firefox Nightly, and while I did some local add-on testing, I found that, even aside from the promise issue you've raised here, it required so much coordination between content and page script to get something working that it seemed like we wouldn't see any usage. I honestly thought it was a bad idea (bug 1750290), so I'm really interested to see what you're trying to build.

  2. The replacement of the ReadableStreams implementation (Bug 1752206) will not make this specific problem better or worse; but I will note that there is currently a further regression (see again, Bug 1750290) that will render one possible work-around (exporting the controller's enqueue function, sketched below) no longer functional come Nightly.
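
(For clarity, the work-around I mean is roughly this shape; a sketch only, not something I've verified against current Nightly:)

// Content script: capture the stream's controller and export a wrapper
// around its enqueue so code in the other realm can push chunks.
let controller;
const rs = new ReadableStream({
  start(c) { controller = c; }
});
exportFunction(v => controller.enqueue(v), window, { defineAs: "enqueue" });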

Native extensions have arbitrary limits on message sizes:

https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Native_messaging

The maximum size of a single message from the application is 1 MB. The maximum size of a message sent to the application is 4 GB.

As a result, there is a need for a mechanism to handle streaming, with ReadableStream being the obvious fit.

Ideally what I'm after is an interface very similar to the Fetch API, allowing streams to send data and receive data back, respectively, but that could be used with web extensions. I have an interface that works great on Safari, but I cannot get it to work on Firefox.
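
To sketch the shape I'm after (pageFetch is a hypothetical name for the exported entry point, not an existing API):

// Page script: call an extension-provided, fetch-like function and
// consume the response body as a stream of chunks.
(async () => {
  const response = await window.pageFetch({ url: "some-request" });
  const reader = response.body.getReader();
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    console.log("chunk", value);
  }
})();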

I've not made much progress here. I am discovering other issues with ReadableStreams and add-ons, but I've yet to figure out how to do quite what you're looking for.

One question; given that the page and content scripts are collaborating here, is it possible for you to arrange things a little differently? i.e. could you create the ReadableStream in the page using window.eval, then have the content script invoke the captured controller's enqueue method?

something like this (untested):

window.eval(`window.rs = new ReadableStream({ start(c) { window.controller = c; } }); function queue(v) { window.controller.enqueue(v); }`);
window.wrappedJSObject.queue("chunk");

The idea here being to keep all the promises stuff on the page side and just push values in from the content script?

What I need to come up with is a formal interface between the page and the web extension. The web extension is generic, it's not specific to a single site.

It's possible to do Promises on the page, but that's just another way of saying - from the perspective of the interface between web extension and page - don't use Promises at all, just callbacks.
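
For comparison, the callback-shaped interface this pushes you towards looks roughly like this (names are illustrative):

// Content script: accept page-supplied callbacks instead of returning a Promise.
exportFunction(function myFunction(onData, onDone) {
  onData(cloneInto({ chunk: 1 }, window));
  onDone();
}, window, { defineAs: "myFunction" });

// Page script: nothing Promise-shaped ever crosses the boundary.
window.myFunction(
  data => console.log("chunk", data),
  () => console.log("done")
);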

(In reply to Graham Leggett from comment #9)

Native extensions have arbitrary limits on message sizes:

https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Native_messaging

The maximum size of a single message from the application is 1 MB. The maximum size of a message sent to the application is 4 GB.

As a result, there is a need for a mechanism to handle streaming, with ReadableStream being the obvious fit.

Ideally what I'm after is an interface very similar to the Fetch API, allowing streams to send data and receive data back, respectively, but that could be used with web extensions. I have an interface that works great on Safari, but I cannot get it to work on Firefox.

I have not tested Firefox extensions. What I do to stream real-time audio output on Chromium is utilize "web_accessible_resources" to append an <iframe> to the arbitrary web page, wherein chrome.* APIs are defined. Then I use Transferable Streams to write to the WritableStream side from the Native Messaging host, where the ReadableStream side is transferred to the arbitrary web page, e.g., https://github.com/guest271314/captureSystemAudio/blob/master/native_messaging/capture_system_audio/transferableStream.js

onload = () => {
  const { readable, writable } = new TransformStream({
    transform(value, controller) {
      controller.enqueue(value);
    },
    flush() {
      console.log('Flush.');
    },
  });
  const writer = writable.getWriter();
  const id = 'capture_system_audio';
  const port = chrome.runtime.connectNative(id);
  port.name = id;
  async function handleMessage(value, port) {
    if (!Array.isArray(value)) {
      value = JSON.parse(value);
    }
    try {
      await writer.ready;
      await writer.write(new Uint8Array(value));
    } catch (e) {
      console.error(e.message);
    }
    return true;
  }
  port.onDisconnect.addListener(async (e) => {
    console.log(e.message);
    await chrome.storage.local.clear();
  });
  port.onMessage.addListener(handleMessage);
  onmessage = async (e) => {
    const { type, message } = e.data;
    if (type === 'start') {
      port.postMessage(message);
      parent.postMessage(readable, name, [readable]);
    }
    if (type === 'stop') {
      try {
        port.disconnect(id);
        console.log(writer.desiredSize, message);
        while (writer.desiredSize < 1) {
          await scheduler.postTask(() => {});
        }
        await writer.close();
        await writer.closed;
        console.log(writer.desiredSize);
        parent.postMessage(0, name);
        onmessage = null;
        await chrome.storage.local.clear();
      } catch (err) {
        console.error(err.message);
      }
    }
  };
  parent.postMessage(1, name);
};

On Firefox, which the last time I checked did not support Transferable Streams or WritableStream, you can substitute MessageChannel to achieve the same effective result, e.g., https://github.com/guest271314/AudioWorkletStream/blob/message-port-post-message/worker.js

let port;
async function* stream(urls) {
  while (urls.length) {
    yield (await fetch(urls.shift())).body;
  }
}
async function* process(reader) {
  while (true) {
    const { value, done } = await reader.read();
    if (done) {
      break;
    }
    yield port.postMessage(value, [value.buffer]);
  }
}
onmessage = async e => {
  'use strict';
  if (!port) {
    [port] = e.ports;
    port.onmessage = event => postMessage(event.data);
  }
  const { urls, codec } = e.data;
  for await (const readable of stream(urls)) {
    for await (const _ of process(readable.getReader()));
  }
  console.log('read/write done');
};

The severity field is not set for this bug.
:edgar, could you have a look please?

For more information, please visit auto_nag documentation.

Flags: needinfo?(echen)
Severity: -- → S3
Flags: needinfo?(echen)

Went digging again.

In essence, it seems the problem is that the Streams API doesn't take into account the existence of Firefox's X-rays. With X-rays, you use either window.wrappedJSObject or exportFunction to control the realm in which objects and functions live.

In the case of ReadableStream, the implementation creates a hidden controller, but the caller has no means to control what realm this controller is created in. The controller ends up in a different realm, so its enqueue function is not visible, and the code fails with "TypeError: controller.enqueue is not a function".

The ideal use of ReadableStream would be to hide the details of data transfer between code in different realms; it is a real pity this can't work.
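
A minimal sketch of the kind of code that runs into this (illustrative, not my exact code):

// Content script: construct the stream with the page's constructor so the
// page can consume it, passing the underlying source via cloneInto.
const rs = new window.wrappedJSObject.ReadableStream(
  cloneInto({
    start(controller) {
      // the controller handed to this callback lives in the other realm,
      // and its enqueue is not visible from here:
      controller.enqueue("chunk"); // TypeError: controller.enqueue is not a function
    }
  }, window, { cloneFunctions: true })
);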

Further digging on the original issue, which was the inability of a Promise object to cross the content/page script boundary.

It seems neither the Promise API nor the Streams API is compatible with web extensions; neither object can cross the boundary.

Transferable streams are on the way in bug 1659025. I wonder if it will help here.

FYI: documentation about the inability to clone Promises in content scripts of WebExtensions was added at https://github.com/mdn/content/issues/15059

If Promise becomes cloneable (or similarly: async function can be awaited on), then we'd need to update the documentation.

(In reply to Emilio Cobos Álvarez (:emilio) from comment #5)

It's unclear to me what a structured clone of a promise would look like, tbh.

A promise is resolved asynchronously, so it is not always possible to immediately resolve it to a value. For scenarios like postMessage across different windows, a structured clone of a Promise does not make much sense.

For the specific use case of "cloning" between content scripts and web pages that share the same document, it would be nice if the cloned promise is a Promise in the target scope, that resolves to whatever the input promise holds. It is probably not necessary to apply structured cloning to the resolution. Anyone who wants to structurally clone the resolution should wrap the promise and clone the resolution.

Here is an example of expectations:

// In content script:
let inputPromise = Promise.resolve([1, 2, 3]);
window.myUnreadablePromise = cloneInto(inputPromise, window);
window.myGoodPromise = cloneInto(inputPromise.then(v => cloneInto(v, window)), window);

// In web page:
(async () => {
  let val = await myUnreadablePromise;
  console.log(val.length); // should throw error; cannot read (Array) object from higher-privileged context.
})();
(async () => {
  let val = await myGoodPromise;
  console.log(val.length); // 3
})();