Closed Bug 761935 (nested-processes) Opened 12 years ago Closed 10 years ago

Tracking: Support nested content processes

Category: Core :: General (defect)
Status: RESOLVED DUPLICATE of bug 1020135
Flags: blocking-kilimanjaro +, blocking-basecamp -
People: (Reporter: cjones, Unassigned)
Whiteboard: [tech-p1]

In b2g, the Browser app must load its tabs in content processes.  Ideally, we want to load the Browser *app* in a content process as well, like we load other apps in content processes.  To do so we need to enable content processes to spawn child content processes.

This bug tracks the work necessary to enable that.
Today we had a brief meeting and decided that the easiest way to implement cookie jars is by giving each process a separate profile.  (We'd manually share things like prefs between profiles.)

If we go down that route, then the cookie jar boundaries must line up exactly with the process boundaries.  If we were not to fix this bug and instead run the browser app in the top-level gecko process, then the browser app would share a cookie jar with the main system app, which would be undesirable.

IOW, it seems that we must either fix this bug, or choose a different means of implementing cookie jars.
blocking-basecamp: --- → ?
I think that this work is too risky for v1 to rely on.  The risk here is *every* DOM API has to be "recursively remoteable".  We don't even have all DOM APIs "nonrecursively remoted" yet.

IMHO we should have a solution that allows "apps" to share an OS process.
Too risky for basecamp, but very helpful for security and stability.
blocking-basecamp: ? → -
blocking-kilimanjaro: --- → ?
cjones - This bug is basecamp-. Can you comment on whether this is required for k9o and provide some background/justification for inclusion/exclusion?
I wouldn't say that it's absolutely required for kilimanjaro but we would lose a lot without it.

There are two problems we face from not having this technology:
 (1) We're forced to run "browser apps" in the same OS process as core gecko code.  This means that if the browser app is somehow able to break out of the "Gecko sandbox", with some crash exploit say, the impact is far higher.  So because of that ....
 (2) ... the browser app must be trusted nearly to the same degree as Gecko.  It has to go through an equivalently stringent code review / certification process.  This limits the potential for third-party browser apps, which we absolutely want to encourage.
Thanks cjones. cc clee for his input as well.
I'd add

 (3) Having a process boundary between the system and app makes various parts of the implementation simpler.  Without nested content processes, we either have to accept bugs in the in-process app (for example, at the moment, session history -- window.history.back(), for instance -- doesn't work quite right for an in-process app), or I have to burn cycles fixing behavior for a case that won't matter for 99% of apps.

Since some of the bugs would keep the browser from working properly, we'll likely choose a mix of these two approaches.  But if we have other people writing browser apps, we really should expose a bug-free interface to them.
I really don't think that we should consider third-party browsers at all in terms of the kilimanjaro timeline and priorities. Any browser app is going to have to have pretty stringent security review in some form or another, and we certainly aren't going to standardize all of the details of a browser-type iframe with all the security and other UI notifications that must go with it in time for k9o.

It isn't clear to me that the premise of comment 0 is correct, though. We really have *two* options in the current timeframe: we could either load the browser app as part of the root gecko process, *or* the browser could be all one process (content and browser chrome).
> We really have *two* options in the current timeframe: we could either load the browser 
> app as part of the root gecko process, *or* the browser could be all one process (content 
> and browser chrome).

That's correct.

From the perspective of comment 7, loading the browser app inside the root gecko process is much simpler, because it means that we don't have to fix all the in-process mozbrowser bugs.
k9o+ I spoke with cjones and clee about this bug. This is important enough to mark as blocking at this point.
blocking-kilimanjaro: ? → +
FYI it looks like some people are going to try to use <iframe mozbrowser remote=false> to make apps.  If this is long-term code (as opposed to demoware), I'll still have to fix all the in-process bugs.  I think <iframe mozbrowser remote=false> is the right approach for an app that e.g. wraps Wikipedia.
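A single-site wrapper app along those lines might look roughly like this (an illustrative sketch, not from any shipped app; it assumes the Browser API's `goBack()`/`goForward()` methods on mozbrowser iframes, and the app would need the `browser` permission in its manifest):

```html
<!-- Illustrative only: the frame runs in the app's own process
     because of remote="false". -->
<button id="back">Back</button>
<button id="fwd">Forward</button>
<iframe id="content" mozbrowser remote="false"
        src="https://en.wikipedia.org/"></iframe>
<script>
  var frame = document.getElementById('content');
  // goBack()/goForward() are Browser API methods exposed on mozbrowser
  // iframes; they act on the frame's own session history, which is why
  // these apps want mozbrowser rather than a plain cross-origin iframe.
  document.getElementById('back').onclick = function () { frame.goBack(); };
  document.getElementById('fwd').onclick = function () { frame.goForward(); };
</script>
```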
Which people are these?
Telefonica people.
Apps don't choose whether they're remote=false or remote=true.  That the flag exists now is a temporary hack.

Interested to hear use cases.
(In reply to Chris Jones [:cjones] [:warhammer] from comment #14)
> Apps don't choose whether they're remote=false or remote=true.  That the
> flag exists now is a temporary hack.

We can automatically make all mozbrowsers remote=false if they're loaded from an OOP app.
 
> Interested to hear use cases.

For explicit remote=false?  Well, Gaia needs that so long as we want the browser app to run in process.  Elsewhere, I don't think it's necessary.
It's not clear to me why
 - these apps want to use mozbrowser
 - why they want to use remote=false

We don't /want/ the browser to run in-process.  We /want/ browser tabs to run in their own process(es).  It just so happens that for us to achieve that goal in the v1 timeframe, the browser will need to run in-process.  That's bad.
> It's not clear to me why
> - these apps want to use mozbrowser

They're framing cross-origin content and want back/forward buttons.  We can't do that with an iframe.

> - why they want to use remote=false

We don't /want/ to run this frame in-process, but my argument is that it's OK here.  The main benefit of running browser tabs OOP from the browser app is that the browser UI remains responsive if one tab is doing something crazy.

But in these apps, they only have one tab, and anyway, the shell around the tab is so lightweight that if the tab is hung, we're not losing a whole lot by hanging the whole UI.
(Similarly, if the tab crashes, it's not the end of the world if the whole app crashes, because the app isn't doing anything other than showing that one tab.)
blocking-basecamp: - → +
Chris, did you mean to set blocking-basecamp here?  I thought we were explicitly not blocking on this for basecamp.
I can't unset, so moving back to "?".
blocking-basecamp: + → ?
With our powers combined...
blocking-basecamp: ? → ---
By unset I meant "-".
I've set to basecamp- to ensure that this shows up in the correct query. clee had told me that this does block basecamp. If clee, cjones, and jlebar have not already spoken you all should very soon to ensure that you're all in agreement on this bug wrt basecamp.
blocking-basecamp: --- → -
Depends on: 812403
Alias: nested-processes
Competitive parity on major content use case.
Whiteboard: [tech-p1]
The linchpin of this work, the only possible way we have to pull it off, is bug 845669.  Hopefully many DOM APIs will work "out of the box".

The ones that won't, we'll have to work through on a case-by-case basis.  I think there will be two general approaches to fixing them:
 - make their implementations recursive; proxy "up".  Usually pretty easy.  Not the most performant option.
 - move required resources "down" into the process that needs them; don't proxy.  This is what we did for file-backed blobs, for example.
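The contrast between the two strategies can be sketched as follows (a toy model, not Gecko code; the process objects and `cookieDb` resource name are made up for illustration):

```javascript
// Strategy 1: "proxy up" -- each nested process forwards the request to
// its parent until the root answers.  Simple to implement, but every
// call pays the latency of the whole process chain.
function proxyUp(process, request) {
  if (process.parent === null) {
    return process.resources[request]; // the root owns the real resource
  }
  return proxyUp(process.parent, request); // forward one level up
}

// Strategy 2: "move down" -- copy (or hand a direct reference to) the
// resource into the process that needs it, once, so later accesses are
// local.  This is the shape of what was done for file-backed blobs.
function moveDown(root, child, request) {
  child.resources[request] = root.resources[request];
}

const root = { parent: null, resources: { cookieDb: "jar" } };
const mid  = { parent: root, resources: {} };
const leaf = { parent: mid,  resources: {} };

proxyUp(leaf, "cookieDb");        // walks leaf -> mid -> root each call
moveDown(root, leaf, "cookieDb"); // after this, leaf answers locally
```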
No longer blocks: 845401
Depends on: 845401
One simple thing that we really need to do is to add globally unique identifiers for content processes and for windows within those processes.

Right now the only way we can build these IDs is by walking up and down the process chain over IPC.  It's awful.  So instead, we just use the ContentChild ID in a few places where we really need the global ID.

Simple thing, just wanted to note it.
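One way to build such an ID without walking the chain over IPC is to compose the per-parent ContentChild IDs along the process chain with the window's process-local ID (a sketch under that assumption; the encoding and names here are hypothetical, not an existing Gecko API):

```javascript
// Sketch: a globally unique window ID formed from the chain of
// per-parent content-process IDs (root first) plus the window's
// local ID within its own process.
function globalWindowId(processChain, localWindowId) {
  // e.g. processChain [3, 1] means "child 3 of the root process,
  // then child 1 of that process".
  return processChain.join("/") + ":" + localWindowId;
}

// Two windows that share a local ID but live in different nested
// processes no longer collide:
const a = globalWindowId([3, 1], 7); // "3/1:7"
const b = globalWindowId([3, 2], 7); // "3/2:7"
```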
It is important to talk to the people working on sandboxing before doing this work, because AFAICT nested content processes are not good for sandboxing. In particular, while bug 846047 deals with children attacking each other and their parent, there is also an issue where the parent can attack its children that should be prevented. Instead of having a parent/child process relationship, it seems better to look at having the "parent" process be a peer to the "child" processes that has just a few extra permissions for navigating the child processes' locations.

In other words, I am hoping that we don't have to do this at all, while still meeting the goals we have so that the bugs depending on this bug can make progress.
No longer blocks: 883362
Depends on: 883362
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → DUPLICATE