Open Bug 1200875 Opened 9 years ago Updated 2 years ago

Support bulk data across process boundaries

Categories

(DevTools :: Framework, defect)


Tracking

(Not tracked)

People

(Reporter: jryans, Unassigned)

References

Details

Currently, bulk data transfers are not implemented across process boundaries, which makes them hard to use in general, since you lose support for e10s windows and FxOS apps.

However, it looks like bug 1093357 (cross-process pipe) will be implemented soon.  That should fill in some of the infrastructure needed to make this possible.

After that, we can build the rest on the DevTools side to make this work.
Which direction does this data need to flow?  Child-to-parent or parent-to-child?
Flags: needinfo?(jryans)
(In reply to Ben Kelly [:bkelly] from comment #1)
> Which direction does this data need to flow?  Child-to-parent or
> parent-to-child?

Both directions are possible.  Bulk data is used in DevTools today for the remote case, over TCP or Unix domain sockets, but we don't yet have an approach for the cross-process case.

Does only one matter for you?
Flags: needinfo?(jryans)
The use cases I knew about only required child-to-parent, so that was what I was working on.

I'll think about ways to support parent-to-child as well.  It will probably require a slightly different API because the parent cannot generally assume the child process exists when first creating the actor.
(In reply to Ben Kelly [:bkelly] from comment #3)
> The use cases I knew about only required child-to-parent so that was what I
> was working on.
> 
> I'll think about ways to support parent-to-child as well.  It will probably
> require a slightly different API because the parent cannot generally assume
> the child process exists when first creating the actor.

I do think there are *more* DevTools use cases that would be child-to-parent, so that at least matches what you were planning.  For example, pulling profiling data, large image / media files, etc. would all move child-to-parent.  If we had only that direction at first, we could still get a lot out of it.

The other direction (parent-to-child) would come up if we were pushing large changes / edits from DevTools to the child process.  There are fewer use cases in this bucket.  In the TCP socket version of bulk data we use this general direction (desktop Firefox to the FxOS parent process) to install new FxOS apps, but that does not cross process boundaries, so it avoids this issue.
I probably won't implement parent-to-child yet since I'm under time pressure.  The API I have in mind, though, would be something like:

1) child process knows it's going to need to receive pipe data from the parent
2) child process creates a RecvStreamChild and sends it over to the parent
3) parent process gets the RecvStreamParent actor
4) parent process writes to RecvStreamParent
5) child process reads from RecvStreamChild

So it would have to be initiated from the child.

Would this work for your use case?
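
As a very rough sketch (nothing here is landed code; the RecvStream* names, the PRecvStream IPDL protocol, and the method signatures are just placeholders following the numbered flow above), the C++ interfaces might look something like:

  // Hypothetical sketch only -- names and signatures follow the flow above;
  // the real API may differ.
  #include "mozilla/AlreadyAddRefed.h"
  #include "nsIAsyncInputStream.h"
  #include "nsIAsyncOutputStream.h"

  // Child side: the child creates this actor (step 2) and keeps the matching
  // async reader for itself so it can read the data later (step 5).
  class RecvStreamChild final : public PRecvStreamChild
  {
  public:
    // The read end the child consumes once the parent starts writing.
    already_AddRefed<nsIAsyncInputStream> TakeReader();
  };

  // Parent side: constructed by IPC when the child sends the actor over (step 3).
  class RecvStreamParent final : public PRecvStreamParent
  {
  public:
    // The parent writes its bulk data here; it gets forwarded to the child (step 4).
    already_AddRefed<nsIAsyncOutputStream> TakeWriter();
  };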
(In reply to Ben Kelly [:bkelly] from comment #5)
> I probably won't implement parent-to-child yet since I'm under time
> pressure.  The API I have in mind though would be something like:
> 
> 1) child process knows its going to need to receive pipe data from the parent
> 2) child process creates a RecvStreamChild and sends it over to the parent
> 3) parent process gets the RecvStreamParent actor
> 4) parent process writes to RecvStreamParent
> 5) child process reads from RecvStreamChild
> 
> So it would have to be initiated from the child.
> 
> Would this work for your use case?

This general flow sounds reasonable.  The DevTools code involved currently operates on XPCOM streams, either from TCP sockets or nsIPipes, as appropriate.  Will your work have an XPCOM stream interface, or will it use some different API?
Flags: needinfo?(bkelly)
The API will provide an IPC actor type with a C++ interface for consuming/producing the XPCOM stream types.

For example, see how the current Cache API "push stream" actor takes an nsIAsyncInputStream in the constructor on the child side and provides a TakeReader() method on the parent side:

  https://dxr.mozilla.org/mozilla-central/source/dom/cache/CachePushStreamChild.h#34
  https://dxr.mozilla.org/mozilla-central/source/dom/cache/CachePushStreamParent.h#28

Note that, in general, this API will require async, non-blocking streams.  That should not be a problem, though, since any code can create such a stream by creating a pipe, filling it on the STS (socket transport service) thread, and passing the async pipe reader to the stream actor.  This is assuming the original stream isn't already directly serializable.
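
As one possible sketch of that adaptation (assumed glue code, not part of any actual patch; the final hand-off to the stream actor is a placeholder), the pipe-on-STS approach could look roughly like this:

  #include "nsCOMPtr.h"
  #include "nsIAsyncInputStream.h"
  #include "nsIAsyncOutputStream.h"
  #include "nsIEventTarget.h"
  #include "nsIPipe.h"               // NS_NewPipe2
  #include "nsNetCID.h"              // NS_SOCKETTRANSPORTSERVICE_CONTRACTID
  #include "nsServiceManagerUtils.h" // do_GetService
  #include "nsStreamUtils.h"         // NS_AsyncCopy

  static nsresult
  SendStreamToParent(nsIInputStream* aSource /* not directly serializable */)
  {
    nsCOMPtr<nsIAsyncInputStream> reader;
    nsCOMPtr<nsIAsyncOutputStream> writer;
    // Non-blocking on both ends so neither side ever blocks an IPC thread.
    nsresult rv = NS_NewPipe2(getter_AddRefs(reader), getter_AddRefs(writer),
                              true /* non-blocking input */,
                              true /* non-blocking output */);
    NS_ENSURE_SUCCESS(rv, rv);

    // Fill the pipe on the STS thread, as described above.
    nsCOMPtr<nsIEventTarget> sts =
      do_GetService(NS_SOCKETTRANSPORTSERVICE_CONTRACTID, &rv);
    NS_ENSURE_SUCCESS(rv, rv);
    // Drive the copy from the sink side, since the pipe supports WriteSegments.
    rv = NS_AsyncCopy(aSource, writer, sts, NS_ASYNCCOPY_VIA_WRITESEGMENTS);
    NS_ENSURE_SUCCESS(rv, rv);

    // Hypothetical hookup: the Cache API actor linked above takes the async
    // reader in its child-side constructor; something equivalent would go here.
    // SendReaderToStreamActor(reader);  // placeholder, not a real API
    return NS_OK;
  }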
Flags: needinfo?(bkelly)
(In reply to Ben Kelly [:bkelly] from comment #7)
> The API will provide an IPC actor type with a C++ interface for
> consuming/producing the XPCOM stream types.

Okay, DevTools may wrap your work so it can be invoked from JS (all of the current DevTools code involved is JS today).  If that can't be done, we'll write whatever C++ is needed to link things together.

> For example, see how the current Cache API "push stream" actor takes an
> nsIAsyncInputStream in the constructor on the child side and provides a
> TakeReader() method on the parent side:
> 
>   https://dxr.mozilla.org/mozilla-central/source/dom/cache/CachePushStreamChild.h#34
>   https://dxr.mozilla.org/mozilla-central/source/dom/cache/CachePushStreamParent.h#28
> 
> Note, in general this API will require async, non-blocking streams.  That
> should not be a problem, however, since all code can create such a stream by
> creating a pipe, filling it on STS, and passing the async pipe reader to the
> stream actor.  This is assuming the original stream isn't serializable
> directly already.

DevTools already restricts itself to async XPCOM streams, so this sounds fine to me.

Overall, sounds like it should work for us.
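
For what it's worth, consuming the reader on our side would presumably be the usual AsyncWait() pattern for async XPCOM streams (sketch only, assumed names, not existing DevTools code):

  #include "nsCOMPtr.h"
  #include "nsIAsyncInputStream.h"  // also declares nsIInputStreamCallback

  class BulkReaderCallback final : public nsIInputStreamCallback
  {
  public:
    NS_DECL_THREADSAFE_ISUPPORTS

    // Called when data is available, or when the stream hits EOF / an error.
    NS_IMETHOD OnInputStreamReady(nsIAsyncInputStream* aStream) override
    {
      char buf[4096];
      uint32_t read = 0;
      nsresult rv = aStream->Read(buf, sizeof(buf), &read);
      if (rv == NS_BASE_STREAM_WOULD_BLOCK) {
        // No data yet; wait for the next notification.
        return aStream->AsyncWait(this, 0, 0, nullptr);
      }
      if (NS_FAILED(rv) || read == 0) {
        return NS_OK;  // error or EOF; stop reading
      }
      // ... hand `buf` off to the DevTools transport here ...
      return aStream->AsyncWait(this, 0, 0, nullptr);  // keep reading
    }

  private:
    ~BulkReaderCallback() = default;
  };

  NS_IMPL_ISUPPORTS(BulkReaderCallback, nsIInputStreamCallback)

The reader obtained from the parent-side actor would kick this off with reader->AsyncWait(callback, 0, 0, target).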
Product: Firefox → DevTools
Severity: normal → S3