Closed Bug 841495 Opened 12 years ago Closed 11 years ago

implement background service for page thumbnailing

Categories

(Firefox :: General, defect, P1)

Tracking

RESOLVED FIXED
Firefox 23

People

(Reporter: Gavin, Assigned: adw)

References

(Depends on 1 open bug, Blocks 1 open bug)

Details

Attachments

(2 files, 13 obsolete files)

4.74 KB, patch
smaug: review+
31.57 KB, patch
The goal is to address bug 756881 by allowing us to take snapshots of sites loaded without cookies (i.e. load the page with nsIRequest::LOAD_ANONYMOUS), and potentially also to eventually address bug 755996 by avoiding thumbnailing pages that contain e.g. filled-in form data.

A nice side effect is that we may be able to have this happen entirely off the main thread, so that thumbnailing in general does not cause main-thread responsiveness issues.

Some initial thoughts about implementation details are at https://etherpad.mozilla.org/background-thumbnailing .
Depends on: 720575
Blocks: 756881
No longer blocks: 756881
Blocks: 756881
Attached patch WIP (obsolete) — Splinter Review
This makes a new BackgroundPageThumbs.jsm.  Originally I modified PageThumbs.jsm, but Gavin said he imagined this being a new module.  I still think it would be better to add to PageThumbs, and especially to use a lot of the work that Tim did there.  See below.  Anyhow, I'm not focused on that right now.

When BackgroundPageThumbs is init()ed, it creates a remote xul:browser (as Gavin and I discussed) and sticks it in the hidden window.  This browser is used to capture all thumbnails.

captureURL() takes a URL and loads it in the browser.  When load completes, a content script captures the window to a canvas and sends the image as a data URI to the chrome process.  From there, I envision that PageThumbs would take over and cache the image.  PageThumbs would need to be modified to handle or adapt data URIs.  Right now it treats images as input streams.
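A minimal sketch of what that content-script side might look like (the "didCapture" message name and the canvas sizing are illustrative, not necessarily what the patch does):

// Frame script: capture the loaded page to a canvas and send it to the
// chrome process as a data URI.
addEventListener("load", function (event) {
  if (event.target != content.document)
    return;
  let canvas = content.document.createElementNS(
    "http://www.w3.org/1999/xhtml", "canvas");
  canvas.width = content.innerWidth;
  canvas.height = content.innerHeight;
  let ctx = canvas.getContext("2d");
  // drawWindow is chrome-only; it paints the content window into the canvas.
  ctx.drawWindow(content, 0, 0, canvas.width, canvas.height, "white",
                 ctx.DRAWWINDOW_DO_NOT_FLUSH);
  sendAsyncMessage("didCapture", { imageDataURI: canvas.toDataURL() });
}, true);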

Questions (not exhaustive):

I'm on a Mac, where the hidden window is a XUL window.  I don't know how well it will work to create a xul:browser and add it to the HTML hidden window on Windows.  I'm building in a VM but it's taking forever.  I also pushed to try and am waiting for results (https://tbpl.mozilla.org/?tree=Try&rev=b7c50fc89d6e).

There are a ton of warnings and errors printed to the terminal when the remote browser is added to the hidden window, even though I'm ultimately able to capture an image.

I'm not sure whether it would be better for the browser to exist always after init() or only when a thumbnail is being captured.  It comes down to how expensive it is, in time and memory, to keep the content process around vs. spawning it frequently.  Maybe destroy it after N seconds of no use.
(In reply to Drew Willcoxon :adw from comment #1)

Thanks for the update!

> I'm on a Mac, where the hidden window is a XUL window.  I don't know how
> well it will work to create a xul:browser and add it to the HTML hidden
> window on Windows.  I'm building in a VM but it's taking forever.  I also
> pushed to try and am waiting for results
> (https://tbpl.mozilla.org/?tree=Try&rev=b7c50fc89d6e).

See http://mxr.mozilla.org/mozilla-central/source/addon-sdk/source/lib/sdk/frame/hidden-frame.js#56 for how the jetpack code deals with this. AIUI, it basically either uses the hidden window directly (on Mac), or creates a child XUL doc iframe and uses that.
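For illustration, the gist of that fallback is something like the following (simplified; Services.jsm is assumed to be imported, and HOST_XUL_URL is a placeholder for whatever chrome:// XUL document ends up being used):

const XUL_NS = "http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul";

function getHostDocument(callback) {
  let hiddenWin = Services.appShell.hiddenDOMWindow;
  // On Mac the hidden window is XUL, so it can host a xul:browser directly.
  if (hiddenWin.document.documentElement.namespaceURI == XUL_NS) {
    callback(hiddenWin.document);
    return;
  }
  // Elsewhere the hidden window is HTML, so load a chrome XUL document in an
  // iframe and host the browser in that.
  let iframe = hiddenWin.document.createElement("iframe");
  iframe.addEventListener("load", function onLoad() {
    iframe.removeEventListener("load", onLoad, true);
    callback(iframe.contentDocument);
  }, true);
  iframe.setAttribute("src", HOST_XUL_URL);
  hiddenWin.document.documentElement.appendChild(iframe);
}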
Tim, what do you think about my idea in comment 1 of reusing PageThumbs.jsm as much as possible, as opposed to basically duplicating in a content script all the work that PageThumbs does?
Flags: needinfo?(ttaubert)
(In reply to Drew Willcoxon :adw from comment #1)
> There are a ton of warnings and errors printed to the terminal when the
> remote browser is added to the hidden window, even though I'm ultimately
> able to capture an image.

It looks the same with a debug build of Firefox OS where we use remote browsers extensively. Nothing we should really worry about, but there may come a time (for the Platform team) to fix that stuff :)

> I'm not sure whether it would be better for the browser to exist always
> after init() or only when a thumbnail is being captured.  Revolves around
> how expensive it is in terms of time and memory to keep around the content
> process vs. spawning it frequently.  Maybe destroy it after N seconds of no
> use.

That's a very good question. I suppose it's not too expensive if we switch to 'about:blank' after we've captured a screenshot. Destroying after N seconds also sounds good to me.

(In reply to Drew Willcoxon :adw from comment #3)
> Tim, what do you think about my idea in comment 1 of reusing PageThumbs.jsm
> as much as possible, as opposed to basically duplicating in a content script
> all the work that PageThumbs does?

Yes, I'm all for DRY. Regarding IPC, we should really try to keep it to a minimum as it is really expensive. Shouldn't it be possible to load PageThumbs.jsm in the remote browser's content script and re-use those APIs? That way we wouldn't have to send messages around to re-use the crop area algorithm etc., but could just re-use the same JSM. We can then call PageThumbs.capture(aWindow, aCallback) in the content script and send a message when we're done and the thumbnail was saved.
Flags: needinfo?(ttaubert)
Thanks Tim.

(In reply to Tim Taubert [:ttaubert] from comment #4)
> minimum as it is really expensive. Shouldn't it be possible to load
> PageThumbs.jsm in the remote browser's content script and re-use those APIs?
> That we don't have send messages around to re-use the crop area algorithm
> etc. but could just re-use the same JSM. We can then call
> PageThumbs.capture(aWindow, aCallback) in the content script and send a
> message when we're done and the thumbnail was saved.

Well, tell me if I'm wrong here, but the problem with doing I/O in the remote browser's content script is the potential for contention over thumbnail files and the thumbnail directory between the remote browser process and PageThumbsStorage (which actually already does a mix of different kinds of I/O: NetUtil.asyncCopy and whatever thread that uses, nsIFile methods on the main thread, and delegation to PageThumbsWorker on a ChromeWorker thread).  Sending the image over to the chrome process would avoid that contention by routing all I/O through PageThumbsStorage.
Status: NEW → ASSIGNED
(In reply to Drew Willcoxon :adw from comment #5)
> PageThumbsStorage (which actually already does a mix of
> different kinds of I/O: NetUtil.asyncCopy and whatever thread that uses,
> nsIFile methods on the main thread, and delegation to PageThumbsWorker on a
> ChromeWorker thread)

Bug 753768 should fix that, which might address this concern?
(In reply to :Gavin Sharp (use gavin@gavinsharp.com for email) from comment #6)
> (In reply to Drew Willcoxon :adw from comment #5)
> > PageThumbsStorage (which actually already does a mix of
> > different kinds of I/O: NetUtil.asyncCopy and whatever thread that uses,
> > nsIFile methods on the main thread, and delegation to PageThumbsWorker on a
> > ChromeWorker thread)
> 
> Bug 753768 should fix that, which might address this concern?

Yeah, we should really get this finally landed. I do wonder though if this just leads to two background threads on two processes trying to write thumbnails. I think it shouldn't be too hard to implement some kind of OS-level locking before writing thumbnails in the background, no?
Attached patch WIP 2 (obsolete) — Splinter Review
I was having some trouble getting the remote browser to work when the hidden window isn't XUL, but it seems to work now.  The problem was that I was setting mozframetype to "content" on the host iframe, which prevented the hosted xul:browser from creating the content process.

Calls to captureURL() are now queued so that the call it's currently servicing isn't clobbered by the next call.  Right now the queue lives in the chrome script.  I'm not sure whether it would be better for it to live in the content script, but maybe one choice will become more obviously right than the other.

captureURL() now takes options that are set on the remote browser's docshell, like allowPlugins, etc.  The chrome script simply sends a "capture" message with the options, and the content script loads the page from there.  (In the previous patch, the chrome script set the src attribute on the browser to load a page.)  Still need to investigate loading without cookies.  Gavin mentioned nsIRequest::LOAD_ANONYMOUS, but I don't know yet which API that's involved with.
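A rough sketch of the content-script side of that "capture" message (the message shape and option list are illustrative; the option names mirror nsIDocShell attributes):

const Ci = Components.interfaces;

addMessageListener("capture", function (msg) {
  let options = msg.data.options || {};
  // Apply the docshell options sent by the chrome script, e.g. allowPlugins.
  for (let name of ["allowPlugins", "allowJavascript", "allowImages"]) {
    if (name in options)
      docShell[name] = options[name];
  }
  // Load the page; a "load" listener does the capture once the document
  // finishes loading.
  docShell.QueryInterface(Ci.nsIWebNavigation)
          .loadURI(msg.data.url, Ci.nsIWebNavigation.LOAD_FLAGS_NONE,
                   null, null, null);
});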

Adds a queue test and uses Task.jsm in the tests.
Attachment #717446 - Attachment is obsolete: true
Attached patch WIP 3 (obsolete) — Splinter Review
This hooks up BackgroundPageThumbs to PageThumbsStorage.  Data URIs are converted to input streams which are then passed to PageThumbsStorage.write.

It also makes page loads private, which seems to be as easy as

> docShell.QueryInterface(Ci.nsILoadContext).usePrivateBrowsing = true;

Needs cleanup, more tests.

(In reply to Drew Willcoxon :adw from comment #8)
> Calls to captureURL() are now queued so that the call it's currently
> servicing isn't clobbered by the next call.  Right now the queue lives in
> the chrome script.  I'm not sure whether it would be better for it to live
> in the content script, but maybe one choice will become more obviously right
> than the other.

Chrome is the better choice, since initialization of the service is async when the hidden window is not chrome, and calls to captureURL should be queued while initialization is pending.
Attachment #722048 - Attachment is obsolete: true
>It also makes page loads private, which seems to be as easy as
>
>> docShell.QueryInterface(Ci.nsILoadContext).usePrivateBrowsing = true;

While this _may_ work in most cases, it's totally unsupported. I'm having a really hard time reading the WIP patch, what with all of the /* */ blocks; otherwise I would try to offer more constructive suggestions.
Thanks Josh.  What about it is unsupported?  Is the line I quoted the wrong way to turn on private browsing on a browser?

Gavin describes the goal in comment 0.  I just need a way to load pages in a browser anonymously.  That's the purpose of private browsing, and as I said, setting the load context seems to work.  Do you know a better way to load pages anonymously?
Changing the value of usePrivateBrowsing after the initial creation of a tab/window is unsupported, in that some ill-defined set of content may not report the updated value when its privacy status is checked. The goal is to eventually make the property read-only, too. The supported way of enabling private browsing is to request it at creation time, via the "private" feature in window.open.
(In reply to Drew Willcoxon :adw from comment #8)
> Still need to investigate loading without
> cookies.  Gavin mentioned nsIRequest::LOAD_ANONYMOUS, but I don't know yet
> which API that's involved with.

LOAD_ANONYMOUS is a load flag that can be set on the request's http channel, but AFAIK there's no way to load a URL in a docShell with this special flag yet. We may need to add this flag to nsIWebNavigation so that we can use it for nsIDocShell.loadURI().
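For reference, the flag itself looks like this when you do have a channel in hand (e.g. from an "http-on-modify-request" observer; Services.jsm and the Ci shorthand assumed). The missing piece is a supported way to have the docShell apply it to the loads it starts:

Services.obs.addObserver({
  observe: function (subject, topic, data) {
    let channel = subject.QueryInterface(Ci.nsIHttpChannel);
    // A real implementation would first check that the request belongs to
    // the thumbnailing docShell.
    channel.loadFlags |= Ci.nsIRequest.LOAD_ANONYMOUS;
  }
}, "http-on-modify-request", false);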
Comment on attachment 723044 [details] [diff] [review]
WIP 3

Review of attachment 723044 [details] [diff] [review]:
-----------------------------------------------------------------

::: toolkit/components/thumbnails/BackgroundPageThumbs.jsm
@@ +137,5 @@
> +
> +    ctx.scale(scale, scale);
> +
> +    try {
> +      ctx.drawWindow(content, 0, 0, sw, sh, "white", ctx.DRAWWINDOW_DO_NOT_FLUSH);

How about loading PageThumbs.jsm in the content script and calling PageThumbs.captureToCanvas()? This avoids a lot of code duplication.

@@ +145,5 @@
> +      return;
> +    }
> +    /*ctx.createImageData(sw, sh);*/
> +    /*log(canvas.toDataURL());*/
> +    sendAsyncMessage("didCapture", canvas.toDataURL());

I'm not sure we should really send base64-encoded thumbnails as messages. This seems very expensive and we're in general quite bad about having too many copies of strings and GC'ing strings a little too late. It's definitely good for a start, but I still think we should consider requesting an exclusive file lock and just writing from the other process.
(In reply to Drew Willcoxon :adw from comment #9)
> It also makes page loads private, which seems to be as easy as
> 
> > docShell.QueryInterface(Ci.nsILoadContext).usePrivateBrowsing = true;

(In reply to Josh Matthews [:jdm] from comment #10)
> While this _may_ work in most cases, it's totally unsupported.

I see commented out references to hiddenPrivateDOMWindow in the patch - does using that not work for some reason?
Blocks: 849819
(In reply to Tim Taubert [:ttaubert] from comment #14)

> How about loading PageThumbs.jsm in the content script and calling
> PageThumbs.captureToCanvas()? This avoids a lot of code duplication.

Yeah, this is just a messy work in progress.

> I'm not sure we should really send base64 encoded thumbnails as messages.
> This seems very expensive and we're in general quite bad about having too
> many copies of strings and GC'ing strings a little too late.

Is there any evidence that it's expensive, and by what metric?  The content process needs to message the chrome process when the capture is done anyway, so that the consumer is notified, and I'm not sure that sending the encoded image along with the message is much more expensive than sending the message without it.

(In reply to :Gavin Sharp (use gavin@gavinsharp.com for email) from comment #15)

> I see commented out references to hiddenPrivateDOMWindow in the patch - does
> using that not work for some reason?

It doesn't work, but I'm not sure why.  It looks like the documentReady state of the private DOM window document is uninitialized when I add the iframe to it, even some time after app startup, and the host iframe's load listener is never called, so the service's initialization never completes.
(In reply to Drew Willcoxon :adw from comment #16)
> (In reply to :Gavin Sharp (use gavin@gavinsharp.com for email) from comment
> #15)
> 
> > I see commented out references to hiddenPrivateDOMWindow in the patch - does
> > using that not work for some reason?
> 
> It doesn't work, but I'm not sure why.  It looks like the documentReady
> state of the private DOM window document is uninitialized when I add the
> iframe to it, even some time after app startup, and the host iframe's load
> listener is never called, so the service's initialization never completes.

That could be bug 849723. The on-demand initialization of the hidden private window can be problematic.
No longer blocks: 849819
Blocks: 850909
Quick status update.  Hope to have a for-review patch tomorrow.

After asking Josh more about anonymous loading, I got something set up.  xul:browser now has a "privatebrowsing" attribute that, if "true", sets usePrivateBrowsing on the docShell right after it's created (for remote browsers, anyway).
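Concretely, creating the capture browser ends up looking roughly like this (hostDoc and the attributes other than "privatebrowsing" are illustrative, not the actual patch):

const XUL_NS = "http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul";
let browser = hostDoc.createElementNS(XUL_NS, "browser");
browser.setAttribute("type", "content");
browser.setAttribute("remote", "true");          // load pages in a content process
browser.setAttribute("privatebrowsing", "true"); // the attribute described above
hostDoc.documentElement.appendChild(browser);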

I cargo-culted from the Add-on SDK the code that determines whether the host window is suitable for hosting the browser directly.  I didn't really understand it, which I realized when I tried to comment it, so I looked into it and ended up finding better criteria.

I had wanted to implement a timeout mechanism so that the whole service isn't hosed until app restart if for some reason the current capture isn't serviced in a timely manner.  So I'm working on that now, and it requires more sophisticated message handling and queueing.
Attached patch patch (obsolete) — Splinter Review
I'll ask Josh to review the back-end changes later.

There are some loose ends:

Bug 759964 is outstanding, so pages in the off-screen thumbnail browser can play media.

Pages should be unloaded right after capture since they're no longer needed, to free up memory and prevent unforeseen problems with off-screen pages from festering (or foreseen ones, like the aforementioned media problem).  I played around with loading about:blank and various bfcache and session history purging stuff, but I couldn't get pages to disappear from about:memory.

There should be a test that makes sure pages are loaded anonymously, and a test that makes sure the thumbnail contents are right, like a reftest.

I made the "privatebrowsing" xul:browser attribute only work for remote browsers.
Attachment #723044 - Attachment is obsolete: true
Attachment #725287 - Flags: review?(ttaubert)
Depends on: 753768
No longer depends on: 720575
Comment on attachment 725287 [details] [diff] [review]
patch

Review of attachment 725287 [details] [diff] [review]:
-----------------------------------------------------------------

Unfortunately the patch doesn't apply anymore, but I reviewed it anyway without trying it out. The platform code looks good to me, but we'll need to find another reviewer for this. (Can we please split this out into another patch?)

I think a better model would be to just provide BackgroundPageThumbs.capture(). The capture happens asynchronously anyway so we should then lazily initialize the browser in the hidden window and screenshot the page. We could release all objects held alive when the browser shuts down. We don't need explicit init/uninit here.

BTW, I talked to Yoric and he agrees that writing the thumbnail directly from the non-main process should be the way to go. With bug 753768 landed, we'll use OS.File.writeAtomic(), which would basically avoid race conditions. The only race condition would be if the main process writes a thumbnail for the same URL that we just captured (and overwrites ours shortly after) - which is very unlikely because this bug is about pages we have no thumbnails for :)
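For the record, the write would look roughly like this with OS.File (fileName and imageBuffer are illustrative, the usual Cu shorthand is assumed, and the directory is whatever PageThumbsStorage uses):

Cu.import("resource://gre/modules/Task.jsm");
Cu.import("resource://gre/modules/osfile.jsm");

Task.spawn(function () {
  let path = OS.Path.join(OS.Constants.Path.localProfileDir,
                          "thumbnails", fileName);
  // The data is written to tmpPath first and then moved into place, so
  // readers never observe a partially written thumbnail.
  yield OS.File.writeAtomic(path, new Uint8Array(imageBuffer),
                            { tmpPath: path + ".tmp" });
});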

::: toolkit/components/thumbnails/BackgroundPageThumbs.jsm
@@ +5,5 @@
> +const EXPORTED_SYMBOLS = [
> +  "BackgroundPageThumbs",
> +];
> +
> +const DefaultCaptureTimeout = 30000; // ms

A common convention is to name it like |DEFAULT_CAPTURE_TIMEOUT_MS|.

@@ +6,5 @@
> +  "BackgroundPageThumbs",
> +];
> +
> +const DefaultCaptureTimeout = 30000; // ms
> +const FrameScriptURL = "chrome://global/content/backgroundPageThumbsContent.js";

The consts should be in upper case, i.e. |FRAME_SCRIPT_URL|.

@@ +23,5 @@
> +   *
> +   * @param parentWin  The window to host the xul:browser that the service uses
> +   *                   to load pages.  If not given, the hidden window is used.
> +   */
> +  init: function init(parentWin) {

When do we need the |parentWin| variable? Isn't that always the |hiddenDOMWindow|?

@@ +27,5 @@
> +  init: function init(parentWin) {
> +    // * !("_isInitialized" in this) => neither initialized nor initializing
> +    // * !this._isInitialized => initializing
> +    // * this._isInitialized => initialized
> +    if ("_isInitialized" in this)

This kind of tri-state variable here is a little confusing. I think it would be much clearer if we'd use a second variable.

@@ +143,5 @@
> +  _onCapture: function _onCapture(capture) {
> +    let idx = 0;
> +    for (; idx < this._captureQueue.length; idx++)
> +      if (this._captureQueue[idx] == capture)
> +        break;

Maybe |let idx = this._captureQueue.indexOf(capture)|?
Also, shouldn't this always be the first entry?

@@ +181,5 @@
> +
> +  /**
> +   * Sends a message to the content script to start the capture.
> +   */
> +  start: function start() {

Instead of passing a callback when creating, couldn't |start()| return a Promise?

::: toolkit/components/thumbnails/PageThumbs.jsm
@@ +146,5 @@
> +    telemetry.getHistogramById("FX_THUMBNAILS_CAPTURE_TIME_MS")
> +      .add(new Date() - telemetryCaptureTime);
> +  },
> +
> +  _captureToCanvas: function PageThumbs__captureToCanvas(aWindow, aCanvas) {

Why do we need a second method here? Just to skip the telemetry measurement? If so we could maybe add an |aSkipTelemetry = false| argument or |aOptions = {}|?

::: toolkit/components/thumbnails/content/backgroundPageThumbsContent.js
@@ +32,5 @@
> +
> +  _onLoad: function _onLoad(event) {
> +    if (event.target != content.document)
> +      return;
> +    this._sizeViewport();

Looks like we can just do this once when initializing? We can avoid the extra cost if we assume that users switching primary screens with different screen sizes is not a very common occurrence.

@@ +33,5 @@
> +  _onLoad: function _onLoad(event) {
> +    if (event.target != content.document)
> +      return;
> +    this._sizeViewport();
> +    let canvas = PageThumbs._createCanvas(content);

We should probably make |_createCanvas| "public", i.e. rename it to |createCanvas|.

@@ +36,5 @@
> +    this._sizeViewport();
> +    let canvas = PageThumbs._createCanvas(content);
> +    PageThumbs._captureToCanvas(content, canvas);
> +    sendAsyncMessage("didCapture", canvas.toDataURL());
> +  },

I think we should load 'about:blank' when we're done to release memory. Alternatively, you could also use |docShell.createAboutBlankContentViewer()| like in https://hg.mozilla.org/releases/mozilla-aurora/rev/6eccd9aabcfc

::: toolkit/components/thumbnails/test/browser_thumbnails_background.js
@@ +60,5 @@
> +    yield deferred.promise;
> +  },
> +
> +  function timeout() {
> +    let url = "http://mochi.test:8888/browser/toolkit/components/thumbnails/test/thumbnails_background_slow.sjs";

This base URL should be a const at the top of the file.

::: toolkit/components/thumbnails/test/thumbnails_background_slow.sjs
@@ +15,5 @@
> +  let ms = undefined;
> +  if (req.queryString)
> +    ms = Number(req.queryString);
> +  if (isNaN(ms))
> +    ms = 5000;

Can't we just do |let ms = parseInt(req.queryString || 5000, 10)|?
Attachment #725287 - Flags: review?(ttaubert)
(In reply to Tim Taubert [:ttaubert] (at the Performance work week, may not respond until Mar 25th) from comment #20)
> The platform code looks good to me but we'll need to find
> another reviewer for this. (Can we please split this out into
> another patch?)

Like I said, I'll ask Josh to review that part.

> I think a better model would be to just provide
> BackgroundPageThumbs.capture(). The capture happens asynchronously anyway so
> we should then lazily initialize the browser in the hidden window and
> screenshot the page. We could release all objects held alive when the
> browser shuts down. We don't need explicit init/uninit here.

OK, init and uninit can be private, no big deal.  I modeled those two methods after PageThumbs.

> BTW, I talked to Yoric and he agrees that writing the thumbnail directly
> from the non-main process should be the way to go. With bug 753768 landed,
> we'll use OS.File.writeAtomic() which basically would avoid race conditions.

Does it avoid races when another thread/process is trying to remove the file, as PageThumbsStorage does?  Does it avoid races when another is trying to remove the whole directory, as PageThumbsStorage also does?

No, right?

> @@ +23,5 @@
> > +   *
> > +   * @param parentWin  The window to host the xul:browser that the service uses
> > +   *                   to load pages.  If not given, the hidden window is used.
> > +   */
> > +  init: function init(parentWin) {
> 
> When do we need the |parentWin| variable? Isn't that always the
> |hiddenDOMWindow|?

I was using it to test, creating a window that mimics the HTML hidden window on Windows and Linux and passing it to init.  I don't think it's hurting anyone.

> @@ +27,5 @@
> > +  init: function init(parentWin) {
> > +    // * !("_isInitialized" in this) => neither initialized nor initializing
> > +    // * !this._isInitialized => initializing
> > +    // * this._isInitialized => initialized
> > +    if ("_isInitialized" in this)
> 
> This kind of tri-state variable here is a little confusing. I think it would
> be much clearer if we'd use a second variable.

But there are only three states.  Two variables would mean four states, and one would be unused, which is confusing.  If you like I could make this enum-like, with three different constants, instead of treating !("_isInitialized" in this) as a state.

> @@ +143,5 @@
> > +  _onCapture: function _onCapture(capture) {
> > +    let idx = 0;
> > +    for (; idx < this._captureQueue.length; idx++)
> > +      if (this._captureQueue[idx] == capture)
> > +        break;
> 
> Also, shouldn't this always be the first entry?

If the capture times out, _onCapture is called, and the capture could be anywhere in the queue.

> @@ +181,5 @@
> > +
> > +  /**
> > +   * Sends a message to the content script to start the capture.
> > +   */
> > +  start: function start() {
> 
> Instead of passing a callback when creating, couldn't |start()| return a
> Promise?

It could, but I guess that's a matter of what's conventional in new APIs these days.  I'd prefer a callback so that consumers that don't want to deal with task.js don't have to.  Other consumers could easily write a wrapper function.

> ::: toolkit/components/thumbnails/PageThumbs.jsm
> @@ +146,5 @@
> > +    telemetry.getHistogramById("FX_THUMBNAILS_CAPTURE_TIME_MS")
> > +      .add(new Date() - telemetryCaptureTime);
> > +  },
> > +
> > +  _captureToCanvas: function PageThumbs__captureToCanvas(aWindow, aCanvas) {
> 
> Why do we need a second method here? Just to skip the telemetry measurement?
> If so we could maybe add an |aSkipTelemetry = false| argument or |aOptions =
> {}|?

To skip telemetry, yes, since I call it off the main process and I wasn't sure how Telemetry would react (and admittedly I didn't test), plus it seems to me that telemeterizing that very small part of the background thumbnail capture process would give a false impression of how long the process actually takes.

I'd prefer not to use a skipTelemetry argument, since normal consumers shouldn't have the option of skipping it.

> ::: toolkit/components/thumbnails/content/backgroundPageThumbsContent.js
> @@ +32,5 @@
> > +
> > +  _onLoad: function _onLoad(event) {
> > +    if (event.target != content.document)
> > +      return;
> > +    this._sizeViewport();
> 
> Looks like we can just do this once when initializing? We can avoid the
> extra cost if we assume that users switching primary screens with different
> screen sizes is not a very common occurrence.

I tried that first but it didn't work.  The window/viewport size always reset on page load.

> @@ +33,5 @@
> > +  _onLoad: function _onLoad(event) {
> > +    if (event.target != content.document)
> > +      return;
> > +    this._sizeViewport();
> > +    let canvas = PageThumbs._createCanvas(content);
> 
> We should probably make |_createCanvas| "public", i.e. rename it to
> |createCanvas|.

OK, but do normal consumers need it?

> @@ +36,5 @@
> > +    this._sizeViewport();
> > +    let canvas = PageThumbs._createCanvas(content);
> > +    PageThumbs._captureToCanvas(content, canvas);
> > +    sendAsyncMessage("didCapture", canvas.toDataURL());
> > +  },
> 
> I think we should load 'about:blank' when we're done to release memory.

I commented on this earlier.

> Alternatively, you could also use |docShell.createAboutBlankContentViewer()|
> like in https://hg.mozilla.org/releases/mozilla-aurora/rev/6eccd9aabcfc

I'll try that, thanks.
(In reply to Drew Willcoxon :adw from comment #21)
> (In reply to Tim Taubert [:ttaubert] (at the Performance work week, may not
> > Alternatively, you could also use |docShell.createAboutBlankContentViewer()|
> > like in https://hg.mozilla.org/releases/mozilla-aurora/rev/6eccd9aabcfc
> 
> I'll try that, thanks.

about:memory still lists the page that was previously loaded, same as all the other techniques I tried, so it doesn't look like that works.
(In reply to Drew Willcoxon :adw from comment #22)
> about:memory still lists the page that was previously loaded, same as all
> the other techniques I tried, so it doesn't look like that works.

Hmm. That's strange. We can probably solve this in a follow-up.
(In reply to Drew Willcoxon :adw from comment #21)
> OK, init and uninit can be private, no big deal.  I modeled those two
> methods after PageThumbs.

Yeah, PageThumbs just needs to be active to receive various observer notifications. We luckily don't have this here.

> Does it avoid races when another thread/process is trying to remove the
> file, as PageThumbsStorage does?  Does it avoid races when another is trying
> to remove the whole directory, as PageThumbsStorage also does?
> 
> No, right?

writeAtomic() writes a tempfile and moves it into place after it finishes writing. If the file gets removed while we're writing, it should just appear again when the bg thread finishes writing. If the whole directory is removed, writeAtomic() just fails when trying to move.

> > When do we need the |parentWin| variable? Isn't that always the
> > |hiddenDOMWindow|?
> 
> I was using it to test, creating a window that mimics the HTML hidden window
> on Windows and Linux and passing it to init.  I don't think it's hurting
> anyone.

No it indeed doesn't hurt, I was just wondering.

> > This kind of tri-state variable here is a little confusing. I think it would
> > be much clearer if we'd use a second variable.
> 
> But there are only three states.  Two variables would mean four states, and
> one would be unused, which is confusing.  If you like I could make this
> enum-like, with three different constants, instead of treating
> !("_isInitialized" in this) as a state.

Good point, enum-like would do the trick for me.

> > Also, shouldn't this always be the first entry?
> 
> If the capture times out, _onCapture is called, and the capture could be
> anywhere in the queue.

Doesn't this work like a stack and isn't there only ever one capture active? If that succeeds or times out we shift() and proceed with the first index. New captures coming in are always appended to the queue. I don't see a case where the capture could be anywhere. (I may be missing something.)

> > Instead of passing a callback when creating, couldn't |start()| return a
> > Promise?
> 
> It could, but I guess that's a matter of what's conventional in new APIs
> these days.  I'd prefer a callback so that consumers that don't want to deal
> with task.js don't have to.  Other consumers could easily write a wrapper
> function.

That's a valid point for APIs in general - if we expect promises we force them onto everyone. As |Capture| is internal to the content script only and we're moving forward with having more Promise-based APIs in the browser (like PageThumbs soon will be) I don't think it's a big deal though.

> > Why do we need a second method here? Just to skip the telemetry measurement?
> > If so we could maybe add an |aSkipTelemetry = false| argument or |aOptions =
> > {}|?
> 
> To skip telemetry, yes, since I call it off the main process and I wasn't
> sure how Telemetry would react (and admittedly I didn't test), plus it seems
> to me that telemeterizing that very small part of the background thumbnail
> capture process would give a false impression of how long the process
> actually takes.
> 
> I'd prefer not to use a skipTelemetry argument, since normal consumers
> shouldn't have the option of skipping it.

Fair enough.

> > Looks like we can just do this once when initializing? We can avoid the
> > extra cost if we assume that users switching primary screens with different
> > screen sizes is not a very common occurrence.
> 
> I tried that first but it didn't work.  The window/viewport size always
> reset on page load.

Ok, I don't exactly know much about the internals here or how expensive it actually is. We don't capture too often though so I think we can leave it as is and maybe file a follow-up to investigate and ask some people?

> > We should probably make |_createCanvas| "public", i.e. rename it to
> > |createCanvas|.
> 
> OK, but do normal consumers need it?

Hmm probably not if one doesn't count the background thumbnailing service as a "normal consumer". If we don't then we can also leave it "private".
(In reply to Tim Taubert [:ttaubert] from comment #24)
> writeAtomic() writes a tempfile and moves it after finishing writing. If the
> file gets removed while writing it should just appear again when the bg
> thread finishes writing. If the whole directory is removed writeAtomic()
> just fails when trying to move.

Suppose a different process or thread, say the main thread, has the file open for reading and is in the middle of reading it, and then content process's writeAtomic move concurrently starts.  What happens?  Will the move fail, thereby losing the new thumbnail for no good reason?  Will the read continue?  If so, will it continue reading the old file, or will it result in a mix of the old and new?  If it fails but the reader is reading in chunks, will the reader know to discard previous chunks?  Whatever the behavior is, will it be consistent across operating systems, file systems, and OS.File.writeAtomic/move implementations?

> Doesn't this work like a stack and isn't there only ever one capture active?
> If that succeeds or times out we shift() and proceed with the first index.
> New captures coming in are always appended to the queue. I don't see a case
> where the capture could be anywhere. (I may be missing something.)

This part is kind of subtle, and there's a comment in the code about it: captures can time out, the timeout is configurable per capture, and the timeout timer starts when the caller initiates the capture by calling captureURL.  So it can be the case that the capture request at the front of the queue, the one that's being serviced, is taking a long time, and some more recent capture request later in the queue times out first.

I'll add a comment here about it too.
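Schematically (a simplified sketch assuming the usual Cc/Ci shorthands, not the actual patch code):

function Capture(url, options, doneCallback) {
  this.url = url;
  this.doneCallback = doneCallback;
  // The timer starts here, at captureURL() time, not when the content
  // script actually begins loading the page...
  this.timer = Cc["@mozilla.org/timer;1"].createInstance(Ci.nsITimer);
  this.timer.initWithCallback(() => doneCallback(this),
                              options.timeout || 30000,
                              Ci.nsITimer.TYPE_ONE_SHOT);
}
// ...so while the capture at the head of the queue is being serviced, a
// later entry's timer can fire first, which is why the timed-out capture
// can be anywhere in the queue when _onCapture runs.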

> That's a valid point for APIs in general - if we expect promises we force
> them onto everyone. As |Capture| is internal to the content script only and
> we're moving forward with having more Promise-based APIs in the browser
> (like PageThumbs soon will be) I don't think it's a big deal though.

Yeah, sorry, I didn't pay close enough attention to the fact that you were talking about Capture.start and not BackgroundPageThumbs.captureURL.

It seems like the main benefit of using task.js and promises is avoiding callbacks by using generators and yield.  I wouldn't be able to use yield here since I shouldn't block on the capture finishing, so I'd have to write something like capture.start().then(function () ...), using a callback anyway.  Hmm, but maybe that is better since the Capture wouldn't have to hold on to the callback.

> > I'd prefer not to use a skipTelemetry argument, since normal consumers
> > shouldn't have the option of skipping it.
> 
> Fair enough.

Gavin mentioned telemeterizing this to see how it performs in the wild and to compare it, as much as possible, to PageThumbs, which may imply adding more or different probes to PageThumbs, so maybe we can tackle all of that in a follow-up bug.

> Ok, I don't exactly know much about the internals here or how expensive it
> actually is. We don't capture too often though so I think we can leave it as
> is and maybe file a follow-up to investigate and ask some people?

Sounds good.

> Hmm probably not if one doesn't count the background thumbnailing service as
> a "normal consumer". If we don't then we can also leave it "private".

Yeah, by "normal" I meant not this weird BackgroundPageThumbs thing that shares some of PageThumbs's implementation.
(In reply to Drew Willcoxon :adw from comment #25)
> Suppose a different process or thread, say the main thread, has the file
> open for reading and is in the middle of reading it, and then content
> process's writeAtomic move concurrently starts.  What happens?  Will the
> move fail, thereby losing the new thumbnail for no good reason?  Will the
> read continue?  If so, will it continue reading the old file, or will it
> result in a mix of the old and new?  If it fails but the reader is reading
> in chunks, will the reader know to discard previous chunks?  Whatever the
> behavior is, will it be consistent across operating systems, file systems,
> and OS.File.writeAtomic/move implementations?

David/Yoric, can you please comment on this? I don't know enough about OS.File to answer. Thanks!

> This part is kind of subtle, and there's a comment in the code about it, but
> captures can time out, and the timeout is configurable per capture, and the
> timeout timer starts when the caller initiates the capture by calling
> captureURL.  So it can be the case that the capture request at the front of
> the queue, the one that's being serviced, is taking a long time, and some
> more recent capture request later in the queue times out.

Ah, I get it now. The timeout is started when the Capture is created, not when it's actually started. That way we cover not only page-load timeouts but also timeouts caused by huge queue sizes. Makes sense to me, thanks for explaining.

> Gavin mentioned telemeterizing this to see how it performs in the wild and
> to compare it, as much as possible, to PageThumbs, which may imply adding
> more or different probes to PageThumbs, so maybe we can tackle all of that
> in a follow-up bug.

Sounds good to me. followupCount++

> > Hmm probably not if one doesn't count the background thumbnailing service as
> > a "normal consumer". If we don't then we can also leave it "private".
> 
> Yeah, by "normal" I meant not this weird BackgroundPageThumbs thing that
> shares some of PageThumbs's implementation.

Heh, ok. Let's leave it "private".
(In reply to Tim Taubert [:ttaubert] from comment #26)
> (In reply to Drew Willcoxon :adw from comment #25)
> > Suppose a different process or thread, say the main thread, has the file
> > open for reading and is in the middle of reading it, and then content
> > process's writeAtomic move concurrently starts.  What happens?  Will the
> > move fail, thereby losing the new thumbnail for no good reason?  Will the
> > read continue?  If so, will it continue reading the old file, or will it
> > result in a mix of the old and new?  If it fails but the reader is reading
> > in chunks, will the reader know to discard previous chunks?  Whatever the
> > behavior is, will it be consistent across operating systems, file systems,
> > and OS.File.writeAtomic/move implementations?
> 
> David/Yoric, can you please comment on this? I don't know enough about
> OS.File to answer. Thanks!

Well, I know enough about OS.File but not about the thumbnails protocol. I suspect that this protocol uses nsIFileInputStream somewhere, and I tend to believe that nsIFileInputStream will produce random garbage in case of a race condition. My expectation, though, is that:
- such race conditions will be very uncommon;
- the worst that can happen is a thumbnail image that cannot display.
I haven't had time to work on this the past couple of days, but I'm planning on posting an updated patch early next week that keeps the I/O routed through PageThumbsStorage in the main process but addresses Tim's other comments.
I was hoping to quickly put up a new patch, but PageThumbs changed how it writes thumbnails.  It used to read from an input stream, and creating an input stream from a data URI off the main thread is simple.  Now it takes an ArrayBuffer, and I don't know how to easily create an ArrayBuffer from a string representation of a thumbnail entirely off the main thread in JS.  (Doing it on the main thread would be easier but would require O(num bytes) work, which is no good on the main thread.)

It seems simple to create an object that implements nsIBinaryOutputStream and wraps an ArrayBuffer, and use NetUtil.asyncCopy with it and the input stream, but when the async copier tries to write to the output stream, I get not-same-thread errors, I think in XPConnect.

So I think I'll just write directly from the remote process after all.  I still think that without proper synchronization it's basically bad engineering, but oh well.  PageThumbs and PageThumbsProtocol don't synchronize reads and writes anyway.
_calculateMD5Hash can't be used from the content process because PSM/NSS can't be initialized in a non-chrome process.  It's not enough to get the hash in the chrome process when the capture starts and then send it to the content process because the URL may redirect to another URL, and in that case we copy the thumbnail to another file, and the file name depends on _calculateMD5Hash.  Since the redirection happens in the content process after capture starts, there's no way to get the hash of the new URL in the chrome process -- without a roundtrip message.  Maybe the content process could tell the chrome process, "copy the file using this new URL" when it sends its "capture complete" message.  What a mess.  I give up for today.
(In reply to Drew Willcoxon :adw from comment #30)
> Since the redirection happens in the content process
> after capture starts, there's no way to get the hash of the new URL in the
> chrome process -- without a roundtrip message.  Maybe the content process
> could tell the chrome process, "copy the file using this new URL" when it
> sends its "capture complete" message.

A separate round-trip message just to obtain the filename to save to would be fine, I think - what's the downside, beyond just a bit of additional complexity?
(In reply to :Gavin Sharp (use gavin@gavinsharp.com for email) from comment #31)
> (In reply to Drew Willcoxon :adw from comment #30)
> > Since the redirection happens in the content process
> > after capture starts, there's no way to get the hash of the new URL in the
> > chrome process -- without a roundtrip message.  Maybe the content process
> > could tell the chrome process, "copy the file using this new URL" when it
> > sends its "capture complete" message.
> 
> A separate round-trip message just to obtain the filename to save to would
> be fine, I think - what's the downside, beyond just a bit of additional
> complexity?

Another option would be to send the hash along with the 'capture' message in BackgroundPageThumbs.jsm. That way we wouldn't even need the round-trip. It's not really nice but way easier than making NSS work in a non-chrome process.
(In reply to Tim Taubert [:ttaubert] from comment #32)
> Another option would be to send the hash along with the 'capture' message in
> BackgroundPageThumbs.jsm. That way we wouldn't even need the round-trip.

As Drew mentions, the URL can change after the fact due to redirects, and the hash is based on the URL, so that wouldn't work.
Attached patch patch 2 (obsolete) — Splinter Review
I found that XMLHttpRequest can asynchronously convert data URIs to ArrayBuffers, so this patch uses that.
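The conversion is roughly this (chrome-side; dataURI is the string received from the content process, finalURL and the usual Cc/Ci shorthands are assumed, and writeData is the PageThumbsStorage method mentioned earlier):

let xhr = Cc["@mozilla.org/xmlextras/xmlhttprequest;1"].
          createInstance(Ci.nsIXMLHttpRequest);
xhr.open("GET", dataURI, true);
xhr.responseType = "arraybuffer";
xhr.onload = () => {
  // xhr.response is the decoded image as an ArrayBuffer.
  PageThumbsStorage.writeData(finalURL, xhr.response);
};
xhr.send();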
Attachment #725287 - Attachment is obsolete: true
Attachment #734851 - Flags: review?(ttaubert)
This makes ContentParent::CreateBrowserOrApp look for a "privatebrowsing" attribute on the frame element, and if it's present, it passes CHROME_PRIVATE_WINDOW to the PBrowser ctor.
Attachment #734852 - Flags: review?(josh)
Comment on attachment 734851 [details] [diff] [review]
patch 2

Oops, this doesn't handle URL redirection.  New patch coming.
Attachment #734851 - Attachment is obsolete: true
Attachment #734851 - Flags: review?(ttaubert)
Attached patch patch 3 (obsolete) — Splinter Review
Attachment #735549 - Flags: review?(ttaubert)
Comment on attachment 735549 [details] [diff] [review]
patch 3

Review of attachment 735549 [details] [diff] [review]:
-----------------------------------------------------------------

Thanks Drew, this looks really good so far, but it didn't run on my machine, so I needed a little more time to investigate why. I also tested the DOMFileReader approach I describe below on my machine and could verify that the thumbnail was properly written to disk. I didn't take a look at *everything* for now, but I thought the review is big enough as is and we can certainly do this iteratively.

::: toolkit/components/thumbnails/BackgroundPageThumbs.jsm
@@ +38,5 @@
> +   *               the capture timed out.  url is the captured URL.  If the
> +   *               cache option is true, then onDone is called when the
> +   *               thumbnail has been cached.
> +   * @opt cache    A boolean that indicates whether the thumbnail should be
> +   *               cached to disk.  Defaults to false.

I think the default operation should be to always write the thumbnail to disk because we'll use the thumbnail protocol to retrieve it anyway. I don't think there should be an option to not cache a file; let's keep the API simple for now and focused on the actual use case.

@@ +43,5 @@
> +   * @opt timeout  The capture will time out after this many milliseconds have
> +   *               elapsed after calling this method.  Defaults to 30000 (30
> +   *               seconds).
> +   */
> +  capture: function capture(url, options) {

The function name should be |BackgroundPageThumbs_capture| or |BPT_capture| or the like.

@@ +46,5 @@
> +   */
> +  capture: function capture(url, options) {
> +    this._init();
> +    let cap = new Capture(url, options, this._thumbBrowser.messageManager,
> +                          this._onCaptureOrTimeout.bind(this));

This initialization workflow doesn't work on non-Mac machines. |this._thumbBrowser| is null here if |canHostBrowser()| returns false in |_init()|. Capture needs to be created after the iframe has loaded.

@@ +68,5 @@
> +    this._captureQueue = [];
> +
> +    parentWin = parentWin || Cc["@mozilla.org/appshell/appShellService;1"].
> +                             getService(Ci.nsIAppShellService).
> +                             hiddenDOMWindow;

parentWin = parentWin || Services.appShell.hiddenDOMWindow

@@ +95,5 @@
> +   */
> +  _uninit: function _uninit() {
> +    if (this._initialization == UNINITIALIZED)
> +      return;
> +    this._captureQueue.forEach(function (cap) cap.destroy());

this._captureQueue.forEach(cap => cap.destroy());

@@ +106,5 @@
> +      Cu.reportError(err);
> +    }
> +    delete this._thumbBrowser;
> +    delete this._captureQueue;
> +    delete this._initialization;

Needs to be |this._initialization = UNINITIALIZED| otherwise I couldn't initialize this module again.

@@ +118,5 @@
> +    win.document.documentElement.appendChild(this._thumbBrowser);
> +
> +    let msgMan = this._thumbBrowser.messageManager;
> +    msgMan.loadFrameScript(FRAME_SCRIPT_URL, false);
> +    msgMan.addMessageListener("ready", function onReady(msg) {

We should unregister the message handler if we're only listening for the message once.

@@ +126,5 @@
> +
> +    win.addEventListener("unload", function (event) {
> +      if (event.target != win.document)
> +        return;
> +      this._uninit();

if (event.target == win.document)
  this._uninit();

@@ +165,5 @@
> + * @param captureCallback  A function you want called when the capture
> + *                         completes.
> + */
> +function Capture(url, options, msgMan, captureCallback) {
> +  options = options || {};

function Capture(url, msgMan, captureCallback, options = {}) {

If |options| is optional it should probably be last.

@@ +168,5 @@
> +function Capture(url, options, msgMan, captureCallback) {
> +  options = options || {};
> +
> +  this.url = url;
> +  this.options = options;

This is only used internally so maybe |this._options = options|?

@@ +187,5 @@
> +
> +  /**
> +   * Sends a message to the content script to start the capture.
> +   */
> +  start: function start() {

The function name should be |function Capture_start() {| that makes it a lot easier to follow in stack traces.

@@ +273,5 @@
> +  // check not strictly necessary, but it's here for robustness.)
> +  let principal = win.document.nodePrincipal;
> +  if (!Cc["@mozilla.org/scriptsecuritymanager;1"].
> +       getService(Ci.nsIScriptSecurityManager).
> +       isSystemPrincipal(principal))

if (!Services.scriptSecurityManager.isSystemPrincipal(principal))

@@ +277,5 @@
> +       isSystemPrincipal(principal))
> +    return false;
> +  let permResult = Cc["@mozilla.org/permissionmanager;1"].
> +                   getService(Ci.nsIPermissionManager).
> +                   testPermissionFromPrincipal(principal, "allowXULXBL");

let permResult = Services.perms.testPermissionFromPrincipal(principal, "allowXULXBL");

::: toolkit/components/thumbnails/content/backgroundPageThumbsContent.js
@@ +39,5 @@
> +    PageThumbs._captureToCanvas(content, canvas);
> +    sendAsyncMessage("didCapture", {
> +      imageDataURI: canvas.toDataURL(),
> +      finalURL: this._webNav.currentURI.spec,
> +    });

I heard from many people that we're not particularly good at managing JS strings right now - we keep multiple copies of them and don't discard them early enough. That's why I don't particularly like spending time on converting the thumbnail to base64 and sending that string, which is now even bigger than the original, to the main process using the message manager. Instead, we can just send the ArrayBuffer directly like this:

let fr = Cc["@mozilla.org/files/filereader;1"].
           createInstance(Ci.nsIDOMFileReader);

fr.onloadend = () => {
  sendAsyncMessage("didCapture", {
    imageData: fr.result,
    finalURL: this._webNav.currentURI.spec,
  });
};

canvas.toBlob(b => fr.readAsArrayBuffer(b));

::: toolkit/components/thumbnails/test/browser_thumbnails_background.js
@@ +34,5 @@
> +  function basic() {
> +    let url = "http://www.example.com/";
> +    let [data, capturedURL] = yield capture(url);
> +    is(capturedURL, url, "Captured URL should be URL passed to capture");
> +    isDataURI(data);

I'd really like it if those tests also ensured and waited until the thumbnail file has been written to disk.

@@ +89,5 @@
> +    // The expected callback order is the reverse of the above, and the reverse
> +    // of the order in which the captures are made.
> +    let expectedOrder = urls.slice();
> +    expectedOrder.reverse();
> +    expectedOrder = expectedOrder.map(function (u) u.url);

expectedOrder = expectedOrder.map(u => u.url);
Attachment #735549 - Flags: review?(ttaubert)
Attached patch patch 4 (obsolete) — Splinter Review
(In reply to Tim Taubert [:ttaubert] from comment #38)
> I think the default operation should be to always write the thumbnail to
> disk because we'll use the thumbnail protocol to retrieve it anyway.

Yeah, I removed the cache option altogether and now always cache.

> @@ +46,5 @@
> > +   */
> > +  capture: function capture(url, options) {
> > +    this._init();
> > +    let cap = new Capture(url, options, this._thumbBrowser.messageManager,
> > +                          this._onCaptureOrTimeout.bind(this));
> 
> This initialization workflow doesn't work on non-Mac machines.
> |this._thumbBrowser| is null here if |canHostBrowser()| returns false in
> |_init()|. Capture needs to be created after the iframe has loaded.

Fixed by passing the browser's message manager to Capture.start() instead of at Capture construction.

> @@ +168,5 @@
> > +function Capture(url, options, msgMan, captureCallback) {
> > +  options = options || {};
> > +
> > +  this.url = url;
> > +  this.options = options;
> 
> This is only used internally so maybe |this._options = options|?

I ended up removing the underscores from all Capture instance variables since Capture is only used internally, and there's no harm in BackgroundPageThumbs accessing some of them if it needs to.

> The function name should be |function Capture_start() {| that makes it a lot
> easier to follow in stack traces.

I used to agree, but if a stack has function names it probably also has file names and line numbers, so prefixing everything with OMG_ only adds clutter in the end.  But I added the prefixes since you asked.

> ::: toolkit/components/thumbnails/content/backgroundPageThumbsContent.js
> Instead, we can just send the ArrayBuffer directly like that:

Cool.

The new asynchronicity here made me realize that it's possible for a timed-out capture to finish, send the didCapture message, and confuse a pending capture listening for that message in the chrome process.  That's possible even without the asynchronicity.  So I added an ID to each capture.  When the chrome process receives a didCapture message, it discards it if its ID doesn't identify the current capture.
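Schematically, the ID check amounts to something like this (captureQueue and done() are illustrative names, not the patch's):

let nextCaptureID = 1;

function Capture(url) {
  this.url = url;
  this.id = nextCaptureID++;
}

// Chrome-side "didCapture" handler:
function onDidCapture(msg) {
  let current = captureQueue[0];
  if (!current || msg.data.id != current.id) {
    // A capture that already timed out finished late; ignore its message.
    return;
  }
  current.done(msg.data);
}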

> ::: toolkit/components/thumbnails/test/browser_thumbnails_background.js
> @@ +34,5 @@
> > +  function basic() {
> > +    let url = "http://www.example.com/";
> > +    let [data, capturedURL] = yield capture(url);
> > +    is(capturedURL, url, "Captured URL should be URL passed to capture");
> > +    isDataURI(data);
> 
> I'd really like if those tests would also ensure and wait until the
> thumbnail file has been written to disk.

There were two cache tests that tested that, but since I removed the cache option, all the tests now test caching.
Attachment #735549 - Attachment is obsolete: true
Attachment #736498 - Flags: review?(ttaubert)
Severity: normal → critical
Priority: -- → P1
Comment on attachment 734852 [details] [diff] [review]
ContentParent/TabChild privatebrowsing patch

Review of attachment 734852 [details] [diff] [review]:
-----------------------------------------------------------------

The code here looks pretty good, but I'm going to hold off giving it a full review until I think carefully about how it impacts the lifetime of a private browsing session. Does the remote process hang around indefinitely? If it prolongs the private session and therefore the private data retention period, that's not going to fly and we'll have to exclude it from lifetime calculations.
Yes, it hangs around until the app quits basically.  Tim and I talked about possibly destroying the browser after N seconds of inactivity -- meaning, (re)start an N-second timer after every capture, and when it fires, destroy the browser, recreating it for the next capture.  Would that work?
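Roughly, the idea is this (constant and helper names are illustrative, Cc/Ci shorthands assumed):

const DESTROY_BROWSER_TIMEOUT_MS = 60 * 1000;
let destroyTimer = null;

function resetDestroyTimer() {
  if (destroyTimer)
    destroyTimer.cancel();
  destroyTimer = Cc["@mozilla.org/timer;1"].createInstance(Ci.nsITimer);
  // When the timer fires, tear down the hidden browser (and its content
  // process); the next capture recreates it on demand.
  destroyTimer.initWithCallback(() => destroyBrowser(),
                                DESTROY_BROWSER_TIMEOUT_MS,
                                Ci.nsITimer.TYPE_ONE_SHOT);
}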
Attachment #736498 - Attachment is obsolete: true
Attachment #736498 - Flags: review?(ttaubert)
(In reply to Drew Willcoxon :adw from comment #41)
> Yes, it hangs around until the app quits basically.  Tim and I talked about
> possibly destroying the browser after N seconds of inactivity -- meaning,
> (re)start an N-second timer after every capture, and when it fires, destroy
> the browser, recreating it for the next capture.  Would that work?

Any updates on this?  If any UX can help this move along, particularly on private browsing cases, I'm all ears
Gavin and I talked to Josh on IRC.  He's concerned that the thumbnail browser may possibly prolong the lifetime of private-browsing state.  Gavin and I don't think it does, since the only PB sessions that affect PB state lifetime are those in "navigator:browser" XUL windows, and the hidden window is not such a window.  (In my testing on a Mac, "last-pb-context-exited" is indeed still broadcasted when the final PB window is closed.)

Gavin thought that Josh would be more likely to r+ if we destroy the browser after inactivity, as we previously discussed doing anyway, so I'm working on a patch to do that.  If it turns out that the browser does affect PB lifetime, then we can file a follow-up about what else to do about it.  Josh has mentioned possibly special-casing this browser in particular.

Now, my patch only recognizes a "privatebrowsing" attribute on frame elements.  There's nothing stopping someone from using a privatebrowsing frame in a navigator:browser window, which would presumably affect PB lifetime.  Josh might r- based on that, I don't know.
Flags: needinfo?(josh)
Attached patch patch 5 (obsolete) — Splinter Review
This destroys the browser 60 seconds after the last capture was serviced.
Attachment #742684 - Flags: review?(ttaubert)
Attached patch patch 6 (obsolete) — Splinter Review
Fixes an error creating the browser when a host iframe has to be used.
Attachment #742684 - Attachment is obsolete: true
Attachment #742684 - Flags: review?(ttaubert)
Attachment #742847 - Flags: review?(ttaubert)
The cross-process private browsing accounting code that I'm worried about will only kick in if the private content is hosted inside of a tree that includes a XUL window with windowtype="navigator:browser" (in the content process, that is). It does not look like that's the case, but if you could run gdb and ensure that ContentParent::RecvPrivateDocShellsExist is never hit in the chrome process, that would make me happy.
Flags: needinfo?(josh)
Josh, ContentParent::RecvPrivateDocShellsExist is in fact hit when the browser is created (although that's not apparent from the stack).  Now what?
Flags: needinfo?(josh)
Comment on attachment 742847 [details] [diff] [review]
patch 6

Review of attachment 742847 [details] [diff] [review]:
-----------------------------------------------------------------

Looking good! r=me with all issues addressed and/or disputed.

::: toolkit/components/thumbnails/BackgroundPageThumbs.jsm
@@ +70,5 @@
> +    }
> +
> +    // Otherwise, create an html:iframe, stick in the parent document, and use
> +    // it to host the browser.  about:blank will not have the system principal,
> +    // so it can't host, but a document with a chrome URI will.

There's a race condition here as I could call capture() multiple times before the iframe has loaded and would create another iframe every time I do that. We should probably just bail out early of _init() if this._hostIframe is defined - which means we're waiting for an iframe to load.
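A rough sketch of that guard (hypothetical shape; the actual fix just needs the early return):

_init: function () {
  if (this._hostIframe) {
    // An iframe was already created and may still be loading; don't create
    // another one for every capture() that arrives before the load finishes.
    return;
  }
  // ...create the html:iframe and continue initialization as before...
},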

@@ +95,5 @@
> +    this._captureQueue.forEach(cap => cap.destroy());
> +    this._destroyBrowser();
> +    if (this._hostIframe) {
> +      try {
> +        this._hostIframe.parentNode.removeChild(this._hostIframe);

this._hostIframe.remove();

Also, can this throw? When does it? Do we need the try-catch statement here?

@@ +120,5 @@
> +    let msgMan = browser.messageManager;
> +    msgMan.loadFrameScript(FRAME_SCRIPT_URL, false);
> +
> +    let onReady = function onReadyFn(msg) {
> +      msgMan.removeMessageListener("ready", onReady);

We don't need to wait for a 'ready' message. The 'capture' message is queued until the frame script has loaded and therefore isn't lost.
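In other words, the 'capture' message can be sent right after loadFrameScript() with no handshake (sketch reusing the names from the patch; 'capture' here stands for the queued request object):

let msgMan = browser.messageManager;
msgMan.loadFrameScript(FRAME_SCRIPT_URL, false);
// Messages sent now are queued and delivered once the frame script has
// loaded and registered its 'capture' listener, so no 'ready' round-trip
// is needed.
msgMan.sendAsyncMessage("capture", { id: capture.id, url: capture.url });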

@@ +131,5 @@
> +  _destroyBrowser: function BPT__destroyBrowser() {
> +    if (!this._thumbBrowser)
> +      return;
> +    try {
> +      this._thumbBrowser.parentNode.removeChild(this._thumbBrowser);

this._thumbBrowser.remove();

Also, can this throw? When does it? Do we need the try-catch statement here?

@@ +218,5 @@
> +   */
> +  start: function Capture_start(messageManager) {
> +    this._msgMan = messageManager;
> +    this._msgMan.sendAsyncMessage("capture", { id: this.id, url: this.url });
> +    this._msgMan.addMessageListener("didCapture", this);

It just came to mind that we should probably rename those messages to be a little more explicit, like 'BackgroundPageThumbs:capture' etc. There's quite a lot of e10s work going on currently and we'll soon have a bazillion different messages flying around, so it's beneficial to be able to quickly tell where each one came from.

@@ +257,5 @@
> +    // Note that _done will be called only once, by either receiveMessage or
> +    // notify, since it calls destroy, which cancels the timeout timer and
> +    // removes the didCapture message listener.
> +    this.destroy();
> +    this.captureCallback(this);

We should also unset this.captureCallback to not hold onto it longer than needed. I think destroy() should take care of that.
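A sketch of a destroy() that does that, based on what the patch already says destroy() handles (canceling the timeout timer and removing the didCapture listener); the member names are assumptions:

destroy: function () {
  if (this._timeoutTimer) {
    this._timeoutTimer.cancel();
    this._timeoutTimer = null;
  }
  if (this._msgMan) {
    this._msgMan.removeMessageListener("didCapture", this);
    this._msgMan = null;
  }
  // Suggested addition: drop the callback so the finished capture doesn't
  // keep it (and whatever it closes over) alive longer than needed.
  this.captureCallback = null;
},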

@@ +297,5 @@
> +  let principal = win.document.nodePrincipal;
> +  if (!Services.scriptSecurityManager.isSystemPrincipal(principal))
> +    return false;
> +  let permResult = Cc["@mozilla.org/permissionmanager;1"].
> +                   getService(Ci.nsIPermissionManager).

You can use Services.perms.

::: toolkit/components/thumbnails/Makefile.in
@@ +16,5 @@
>  
>  EXTRA_JS_MODULES = \
>  	PageThumbsWorker.js \
>  	PageThumbs.jsm \
> +	BackgroundPageThumbs.jsm \

Nit: Let's keep them in alphabetical order.

::: toolkit/components/thumbnails/PageThumbs.jsm
@@ +305,5 @@
> +   */
> +  _store: function PageThumbs__store(aOriginalURL, aFinalURL, aData) {
> +    let deferred = Promise.defer();
> +    let telemetryStoreTime = new Date();
> +    PageThumbsStorage.writeData(aFinalURL, aData).then(function onWrite() {

Why not use TaskUtils as before?

return TaskUtils.spawn(function () {
  yield PageThumbsStorage.writeData(...);

  // ...
});

@@ +327,5 @@
> +          deferred.resolve();
> +        });
> +        return;
> +      }
> +      deferred.resolve();

Nit: I don't really like the 'return' statement here when we could just use an else clause:

if (aFinalURL != aOriginalURL) {
  // ...
} else {
  deferred.resolve()
}

@@ +485,5 @@
>     * reused after the copy.
>     *
>     * @return {Promise}
>     */
>    writeData: function Storage_write(aURL, aData) {

Can you please correct that to 'function Storage_writeData' while you're at it?

::: toolkit/components/thumbnails/content/backgroundPageThumbsContent.js
@@ +24,5 @@
> +  },
> +
> +  _onCapture: function BPTC__onCapture(msg) {
> +    if (this._onLoad)
> +      removeEventListener("load", this._onLoad, true);

Should we also stop the current load like in init() here?

::: toolkit/components/thumbnails/test/browser_thumbnails_background.js
@@ +1,3 @@
> +/* This Source Code Form is subject to the terms of the Mozilla Public
> + * License, v. 2.0. If a copy of the MPL was not distributed with this
> + * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

Our test files usually use the public domain header:

/* Any copyright is dedicated to the Public Domain.
 * http://creativecommons.org/publicdomain/zero/1.0/ */

@@ +93,5 @@
> +    let deferred = imports.Promise.defer();
> +    let urls = [
> +      { url: testPageURL({ wait: 2000 }), timeout: 4000 },
> +      { url: testPageURL({ wait: 2001 }), timeout: 1000 },
> +      { url: testPageURL({ wait: 2002 }), timeout: 0 },

I'm not really worried about the timeouts that are too small, they should time out as expected. I'm not quite sure though that on a slow win build machine with a full event queue we'll be able to capture a thumbnail before the 4000ms timeout. This might introduce another intermittent orange.

@@ +164,5 @@
> +    let file2 = fileForURL(url2);
> +    ok(!file2.exists(), "Second file should not exist yet.");
> +
> +    let defaultTimeout = imports.BackgroundPageThumbs._destroyBrowserTimeout;
> +    imports.BackgroundPageThumbs._destroyBrowserTimeout = 1000;

I don't really like that we have a property only to hold the value of DESTROY_BROWSER_TIMEOUT. I don't see a better way to do this but fiddling around with internals isn't great. Introducing a pref for only this purpose doesn't feel right as well...

::: toolkit/components/thumbnails/test/thumbnails_background.sjs
@@ +3,5 @@
> + * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
> +
> +// The timer never fires if it's not declared and set to this variable outside
> +// handleRequest, as if it's getting GC'ed when handleRequest's scope goes away.
> +// Shouldn't the timer thread hold a strong reference to it?

Yeah, it definitely should. It's a known foot gun of the nsITimer API :(
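That is, the handler has to keep the timer alive itself, e.g. in a top-level variable (a sketch along the lines of the test file's comment; the response body is arbitrary):

// Held outside handleRequest so the timer isn't GC'ed before it fires;
// the timer thread won't keep the JS wrapper alive for you.
let timer;

function handleRequest(req, resp) {
  resp.processAsync();
  resp.setHeader("Content-Type", "text/html;charset=utf-8", false);
  timer = Components.classes["@mozilla.org/timer;1"]
                    .createInstance(Components.interfaces.nsITimer);
  timer.initWithCallback(function () {
    resp.write("<html><body>hi</body></html>");
    resp.finish();
  }, 2000, Components.interfaces.nsITimer.TYPE_ONE_SHOT);
}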

@@ +9,5 @@
> +
> +function handleRequest(req, resp) {
> +  resp.processAsync();
> +  resp.setHeader("Cache-Control", "no-cache", false);
> +  resp.setHeader("Content-Type", "text/html", false);

Using 'text/html;charset=utf-8' avoids some errors in the console.
Attachment #742847 - Flags: review?(ttaubert) → review+
(In reply to Tim Taubert [:ttaubert] from comment #48)
> >    writeData: function Storage_write(aURL, aData) {
> 
> Can you please correct that to 'function Storage_writeData' while you're at
> it?

The new fad these days is to not name functions at all, given bug 433529 (see also http://javascript-reverse.tumblr.com/post/34328529526/javascript-function-name-inference-aka-stop-function).
(In reply to Josh Matthews [:jdm] (on vacation until 5/7) from comment #46)
> The cross-process private browsing account code that I'm worried about will
> only kick in if the private content is hosted inside of a tree that includes
> a XUL window with windowtype="navigator:browser" (in the content process,
> that is).

Can you point to the code that enforces this windowtype restriction? I couldn't find anything like that when I last looked.

Given that the auto-timing-out functionality implemented here should mitigate most of the private-browsing-lifetime concern, I think we need to move forward here and address any of these issues in a followup as needed.
(In reply to :Gavin Sharp (use gavin@gavinsharp.com for email) from comment #49)
> (In reply to Tim Taubert [:ttaubert] from comment #48)
> > >    writeData: function Storage_write(aURL, aData) {
> > 
> > Can you please correct that to 'function Storage_writeData' while you're at
> > it?
> 
> The new fad these days is to not name functions at all, given bug 433529
> (see also
> http://javascript-reverse.tumblr.com/post/34328529526/javascript-function-
> name-inference-aka-stop-function).

Right, I just wanted to keep a little consistency in this module without telling Drew to modify all function names again :)
Comment on attachment 734852 [details] [diff] [review]
ContentParent/TabChild privatebrowsing patch

Do we really want eCaseMatters for the AttrValueIs check? It doesn't seem like it to me, but I don't know what the common practice is for this kind of attribute.

Perhaps smaug can help with review in Josh's absence?
Attachment #734852 - Flags: review?(bugs)
(In reply to :Gavin Sharp (use gavin@gavinsharp.com for email) from comment #50)
> (In reply to Josh Matthews [:jdm] (on vacation until 5/7) from comment #46)
> > The cross-process private browsing account code that I'm worried about will
> > only kick in if the private content is hosted inside of a tree that includes
> > a XUL window with windowtype="navigator:browser" (in the content process,
> > that is).
> 
> Can you point to the code that enforces this windowtype restriction? I
> couldn't find anything like that when I last looked.

You'll want to search for SetAffectsPrivate in mxr. Sorry, on a phone.

> 
> Given that the auto-timing-out functionality implemented here should
> mitigate most of the private-browsing-lifetime concern, I think we need to
> move forward here and address any of these issues in a followup as needed.

Having read the code, my concern is now that we'll send spurious last-pb-context-exited notifications that will blow away existing private session data whenever the xul browser is recreated.
Flags: needinfo?(josh)
Attached patch patch 7 (obsolete) — Splinter Review
This has some small architectural changes (hopefully improvements) around how the service is initialized and the browser is created, so I'm asking for another review:

* _init() is now _ensureParentWindowInitBegun() and focuses only on setting up the parent window.

* capture() creates the capture queue if necessary.

* _processCaptureQueue() does the remainder of what _init() used to do: when there is a capture to start, it first ensures that the parent window and browser are set up.

* The destroy-browser timer is started when a capture completes or times out, not when _processCaptureQueue() is called with an empty queue.

(In reply to Tim Taubert [:ttaubert] from comment #48)
> There's a race condition here as I could call capture() multiple times
> before the iframe has loaded and would create another iframe every time I do
> that.

Good catch.

> Also, can this throw? When does it? Do we need the try-catch statement here?

It was just defensive -- in case the window was already gone and the removal raises an exception or who knows what -- but I've removed both of these try-catches.

> Why not use TaskUtils as before?

Changed.

> Can you please correct that to 'function Storage_writeData' while you're at
> it?

I removed all the function names given the recent conversation here.

> ::: toolkit/components/thumbnails/content/backgroundPageThumbsContent.js
> @@ +24,5 @@
> > +  },
> > +
> > +  _onCapture: function BPTC__onCapture(msg) {
> > +    if (this._onLoad)
> > +      removeEventListener("load", this._onLoad, true);
> 
> Should we also stop the current load like in init() here?

The new URL is immediately loaded, which should stop any pending load, right?

> Our test files usually use the public domain header:

I never saw the point of that.  I think someone misread the license advice on the MPL pages and it became group-think.

> @@ +93,5 @@
> > +    let deferred = imports.Promise.defer();
> > +    let urls = [
> > +      { url: testPageURL({ wait: 2000 }), timeout: 4000 },
> > +      { url: testPageURL({ wait: 2001 }), timeout: 1000 },
> > +      { url: testPageURL({ wait: 2002 }), timeout: 0 },
> 
> I'm not really worried about the timeouts that are too small, they should
> time out as expected. I'm not quite sure though that on a slow win build
> machine with a full event queue we'll be able to capture a thumbnail before
> the 4000ms timeout. This might introduce another intermittent orange.

Good point, upped to 30s.

> @@ +164,5 @@
> > +    let file2 = fileForURL(url2);
> > +    ok(!file2.exists(), "Second file should not exist yet.");
> > +
> > +    let defaultTimeout = imports.BackgroundPageThumbs._destroyBrowserTimeout;
> > +    imports.BackgroundPageThumbs._destroyBrowserTimeout = 1000;
> 
> I don't really like that we have a property only to hold the value of
> DESTROY_BROWSER_TIMEOUT. I don't see a better way to do this but fiddling
> around with internals isn't great. Introducing a pref for only this purpose
> doesn't feel right as well...

Yes, but these kinds of backdoor allowances to accommodate tests are common, no?
Attachment #742847 - Attachment is obsolete: true
Attachment #743986 - Flags: review?(ttaubert)
(In reply to Josh Matthews [:jdm] (on vacation until 5/7) from comment #53)
> Having read the code, my concern is now that we'll send spurious
> last-pb-context-exited notifications that will blow away existing private
> session data whenever the xul browser is recreated.

Surely if the thumbnail browser is not the final PB context, then last-pb-context-exited will not be broadcast when it is destroyed?

I still don't understand why ContentParent::RecvPrivateDocShellsExist is hit, since as far as I can tell the hidden window on OS X is not of type navigator:browser, if indeed ContentParent::RecvPrivateDocShellsExist should not be hit for browsers that don't live in navigator:browser windows.
The cross-process private browsing code was never intended to operate in tandem with in-process private browsing. That means that ContentParent watches for when all subprocesses report no private docshells and then triggers the last-pb-context-exited notification, regardless of the number of private docshells in the chrome process.
It would be worth seeing if RecvPrivateDocShellsExist is triggered again shortly after the first time (with a parameter of false instead of true); it could be a blip that exists during the creation of the browser.
(In reply to Drew Willcoxon :adw from comment #54)
> The new URL is immediately loaded, which should stop any pending load, right?

Yes, but not necessarily synchronously (I don't know whether that is a problem here).

> > Our test files usually use the public domain header:
> 
> I never saw the point of that.  I think someone misread the license advice
> on the MPL pages and it became group-think.

The point of what? Generally, code files need license headers. The license policy (http://www.mozilla.org/MPL/license-policy.html) states:

"Trivial bits of Mozilla Code, such as testcases or snippets of code used in documentation, should be put in the public domain in order to facilitate re-use."
(In reply to :Gavin Sharp (use gavin@gavinsharp.com for email) from comment #58)
> The point of what? Generally, code files need license headers.

You don't say.

> The license policy (http://www.mozilla.org/MPL/license-policy.html) states:

I'd never seen that page before.  Thanks for pointing it out to me.
(In reply to Josh Matthews [:jdm] (on vacation until 5/7) from comment #57)
> It would be worth seeing if RecvPrivateDocShellsExist is triggered again
> shortly after the first time (with a parameter of false instead of true); it
> could be a blip that exists during the creation of the browser.

It's hit each time the browser is created (aExist=true) and destroyed (aExist=false).
(In reply to :Gavin Sharp (use gavin@gavinsharp.com for email) from comment #58)
> The point of what? Generally, code files need license headers. The license
> policy (http://www.mozilla.org/MPL/license-policy.html) states:

This is the license page I had seen, and it's much more ambivalent: http://www.mozilla.org/MPL/headers/

> Public Domain
> 
> This license can be used for test scripts and other short code snippets,
> at the discretion of the author.
Part of the reasoning for introducing the PD header for tests originally was that it was shorter, and so for small test files it didn't end up being some large percentage of the file. That particular concern is not an issue anymore with MPL2, but the "promote wider use" reason remains. It doesn't matter much in practice and is still left to the author's discretion, of course.
(In reply to Drew Willcoxon :adw from comment #54)
> > Can you please correct that to 'function Storage_writeData' while you're at
> > it?
> I removed all the function names given the recent conversation here.

Cool, thanks for doing that.

> > Should we also stop the current load like in init() here?
> The new URL is immediately loaded, which should stop any pending load, right?

I wasn't exactly sure if there might be a load event or something if a load is canceled because a new one starts. Also, Gavin's point about async loading might be important.

> > I don't really like that we have a property only to hold the value of
> > DESTROY_BROWSER_TIMEOUT. I don't see a better way to do this but fiddling
> > around with internals isn't great. Introducing a pref for only this purpose
> > doesn't feel right as well...
> 
> Yes, but these kinds of backdoor allowances to accommodate tests are common,
> no?

Yes, they're unfortunately quite common. I'd personally prefer not to introduce test-only code, but I don't have a better suggestion right now, so let's keep the code.
Comment on attachment 743986 [details] [diff] [review]
patch 7

Review of attachment 743986 [details] [diff] [review]:
-----------------------------------------------------------------

I like the architectural changes; it's a little more readable now.

::: toolkit/components/thumbnails/BackgroundPageThumbs.jsm
@@ +51,5 @@
> +   *                   to load pages.  If not given, the hidden window is used.
> +   * @return  True if the parent window is completely initialized and can be
> +   *          used, and false if initialization is async.
> +   */
> +  _ensureParentWindowInitBegun: function (parentWin) {

Shouldn't this rather be _ensureParentWindowIsReady() according to how it's being used?

::: toolkit/components/thumbnails/test/Makefile.in
@@ +24,5 @@
>  	background_red.html \
>  	background_red_scroll.html \
>  	background_red_redirect.sjs \
>  	privacy_cache_control.sjs \
> +	browser_thumbnails_background.js \

Nit: Please put the test file at the beginning of _BROWSER_FILES.
Attachment #743986 - Flags: review?(ttaubert) → review+
Comment on attachment 734852 [details] [diff] [review]
ContentParent/TabChild privatebrowsing patch

This looks wrong. We're adding a feature to xul:browser, and
shouldn't expose that in <html:iframe mozbrowser>.
So,
...
if (frameElement && frameElement->NodeInfo()->Equals(nsGkAtoms::browser, kNameSpaceID_XUL ) &&
    ...)

Even better would be to add IsXUL(nsIAtom* aTag) to nsIContent, similar to inline bool IsHTML(nsIAtom* aTag).
Then it would be
if (frameElement && frameElement->IsXUL(nsGkAtoms::browser) && ...)


I know, the b2g stuff "IsBrowserElement()" makes this code quite odd.


In XUL, case does matter, so using eCaseMatters looks ok.
Attachment #734852 - Flags: review?(bugs) → review-
(In reply to Tim Taubert [:ttaubert] from comment #64)
> Shouldn't this rather be _ensureParentWindowIsReady() according to how it's
> being used?

That is a better name, thanks.  I was trying to get across the idea that the window may not be completely ready by the time the method returns -- the only thing ensured is that the initialization process has begun.  But maybe that's splitting hairs.
Thanks, Olli.  How's this?
Attachment #734852 - Attachment is obsolete: true
Attachment #734852 - Flags: review?(josh)
Attachment #744713 - Flags: review?(bugs)
Comment on attachment 744713 [details] [diff] [review]
ContentParent/TabChild privatebrowsing patch 2

Someone more familiar with private browsing could verify that just calling
SetPrivateBrowsing on the top level is enough. I'd expect it is.
Attachment #744713 - Flags: review?(bugs) → review+
(In reply to Olli Pettay [:smaug] from comment #68)
> Someone more familiar with private browsing could verify that just calling
> SetPrivateBrowsing on the top level is enough. I'd expect it is.

Josh said the patch looked pretty good in comment 40, although he didn't review it.

By "top level" do you mean the docshell?
Yeah, it seems to call docshell stuff.
Attached patch last-pb-context-exited patch (obsolete) — Splinter Review
This makes ContentParent::RecvPrivateDocShellsExist delegate to nsDocShell::{Increase,Decrease}PrivateDocShellCount, instead of broadcasting last-pb-context-exited itself.  So the total number of live, private docshells, be they in the chrome process or content processes, is kept in one place.

I bet there's a reason it doesn't already work this way, but in my narrow testing it works for this use case.  last-pb-context-exited is broadcast for the thumb browser only if it truly is the final PB context.
Attachment #744798 - Flags: review?(josh)
Oh, I'm guessing the navigator:browser check that presumably runs on the hidden window here [1] does not affect the thumb browser's docshell because the thumb browser is remote, and no "affects private session lifetime?" information is passed to the content process along with the CHROME_PRIVATE_WINDOW flag.  So maybe that should happen.

[1] http://mxr.mozilla.org/mozilla-central/source/content/xul/content/src/nsXULElement.cpp#207
This can definitely now cause private browsing sessions to be leaked until the remote browser is deleted. I am not ok with this solution, since we make strict guarantees about when private data is purged from memory.
Josh, are you referring to the patch I just posted for your review?

I'm confused.  If the thumb browser affects PB lifetime, then PB lifetime lasts at least as long as the thumb browser, which you say is no good.  But if it does not affect PB lifetime, then when it outlives PB lifetime, won't consumers who manage parts of its state (caches, cookies, etc.?) get confused because they've already observed last-pb-context-exited and presumably have destroyed their temporary PB state?

(In reply to Drew Willcoxon :adw from comment #72)
> Oh, I'm guessing the navigator:browser check that presumably runs on the
> hidden window here [1]

Hmm, nsXULElement::MaybeUpdatePrivateLifetime is not called on the hidden window, even though I'm on a Mac where the hidden window doc is a <xul:window>.  But anyway, I wouldn't expect it to be called on other platforms since their hidden window doc is HTML.  Which means the docshell containing the thumb browser will have affectPrivateSessionLifetime set to the default of true, so the fix here is not merely propagating the parent's affectPrivateSessionLifetime to the remote docshell.

Maybe mAffectPrivateSessionLifetime should default to false, and then navigator:browser windows get it turned on, instead of the inverse like now.

For this bug, SetAffectPrivateSessionLifetime(false) could just be hardcoded for private docshells in the content process.
(In reply to Josh Matthews [:jdm] (on vacation until 5/7) from comment #73)
> This can definitely now cause private browsing sessions to be leaked until
> the remote browser is deleted. I am not ok with this solution, since we make
> strict guarantees about when private data is purged from memory.

Can you elaborate on what these guarantees are, and why they're important?
(In reply to :Gavin Sharp (use gavin@gavinsharp.com for email) from comment #75)
> (In reply to Josh Matthews [:jdm] (on vacation until 5/7) from comment #73)
> > This can definitely now cause private browsing sessions to be leaked until
> > the remote browser is deleted. I am not ok with this solution, since we make
> > strict guarantees about when private data is purged from memory.
> 
> Can you elaborate on what these guarantees are, and why they're important?

Any data that exposes the user's activity in a private session should be purged as soon as the last private window is closed. This defeats any attack such as someone opening up about:cache immediately afterwards, and similar investigative practices. Comment 74 is going in the direction I'd like; we want to ensure that the private remote browser does not influence the length of the private session in any way. Drew, could you try using the hiddenPrivateWindow instead of the hiddenWindow (unfortunately via PrivateBrowsingUtils.whenHiddenPrivateWindowReady introduced in https://bugzilla.mozilla.org/attachment.cgi?id=725098&action=diff)? That hidden window has the proper bits flipped so it's independent of the private session lifetime, regardless of its XULness.
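For reference, consuming that helper would look roughly like this (assuming whenHiddenPrivateWindowReady takes a callback that receives the hidden private window, per the linked patch):

const { utils: Cu } = Components;
Cu.import("resource://gre/modules/PrivateBrowsingUtils.jsm");

// Sketch: parent the thumbnail browser (or its host iframe) in the hidden
// private window, whose docshell is already flagged to not affect the
// private session lifetime.
PrivateBrowsingUtils.whenHiddenPrivateWindowReady(function (privateWin) {
  // ...create the host iframe / remote <browser> in privateWin...
});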
OK, this works: use the private hidden window in conjunction with supporting private remote docshells with propagated AffectPrivateSessionLifetime.

Josh, are you able to review this, or should I ask Olli again?  Your comments are welcome regardless.

This patch obsoletes the one Olli reviewed.  It propagates AffectPrivateSessionLifetime from the parent docshell to the remote docshell.  But it also propagates the load context's UsePrivateBrowsing instead of checking for a privatebrowsing attribute on xul:browser.  I think this is good.  Both AffectPrivateSessionLifetime and UsePrivateBrowsing should be propagated in the remote case like they are in the non-remote case, right?

I'm not sure if propagating UsePrivateBrowsing ruins b2g's stuff.  Olli said earlier that the privatebrowsing attribute should apply only to xul:browser, but propagation seems like the right behavior in all cases.

This kind of feels like an abuse of the CHROME_* flags, since they apply to top-level windows, but here they're used for docshells.

I'm not sure whether I should be doing more null ptr checks in CreateBrowserOrApp.
Attachment #744963 - Flags: review?(josh)
Attachment #744798 - Flags: review?(josh)
Comment on attachment 744963 [details] [diff] [review]
propagate private browsing patch


>     if (aContext.IsBrowserElement() || !aContext.HasOwnApp()) {
>         if (nsRefPtr<ContentParent> cp = GetNewOrUsed(aContext.IsBrowserElement())) {
>             nsRefPtr<TabParent> tp(new TabParent(aContext));
>             tp->SetOwnerElement(aFrameElement);
>+            uint32_t chromeFlags = 0;
>+
>+            // Propagate the private-browsing status of the element's parent
>+            // docshell to the remote docshell, via the chrome flags.
>+            nsCOMPtr<Element> frameElement = do_QueryInterface(aFrameElement);
>+            MOZ_ASSERT(frameElement);
>+            nsIDocShell* docShell =
>+                frameElement->OwnerDoc()->GetWindow()->GetDocShell();
>+            MOZ_ASSERT(docShell);
>+            nsCOMPtr<nsILoadContext> loadContext = do_QueryInterface(docShell);
>+            if (loadContext && loadContext->UsePrivateBrowsing()) {
>+                chromeFlags |= nsIWebBrowserChrome::CHROME_PRIVATE_WINDOW;
>+            }
>+            bool affectLifetime;
>+            docShell->GetAffectPrivateSessionLifetime(&affectLifetime);
>+            if (affectLifetime) {
>+                chromeFlags |= nsIWebBrowserChrome::CHROME_PRIVATE_LIFETIME;
>+            }
>+
This is ok, if it is ok for b2g. This makes us propagate privateness also to child docshells of <iframe mozbrowser>.
Does b2g have private browsing?
b2g does not have private browsing.
Drew, I just realized that if you use hiddenPrivateWindow, I believe all of attachment 744963 [details] [diff] [review] should be unnecessary, since we would not need to forward any information to the child process. Could you give that a try?
(In reply to Josh Matthews [:jdm] (on vacation until 5/7) from comment #80)
> Drew, I just realized that if you use hiddenPrivateWindow, I believe all of
> attachment 744963 [details] [diff] [review] should be unnecessary, since we
> would not need to forward any information to the child process. Could you
> give that a try?

I tried that first, and it's not sufficient.  That shouldn't be surprising, right?  There's nothing in the content process that sets up the docshell to be private, or that sets AffectPrivateSessionLifetime.

You need those two things somehow.  My earlier patch used a privatebrowsing attribute on the xul:browser and ignored AffectPrivateSessionLifetime altogether.  This new patch is better I think because it makes PB status inherited -- but in that case the thumb browser has to live in the hidden private window, since as you say it has the AffectPrivateSessionLifetime bit flipped.
Also, this is obvious, but with this new patch, PB support in desktop e10s just falls out.
Oh right, the hidden window is used in the chrome process, not the content one, right? My mistake, I had it backwards in my head. In that case, the most recent patch is probably the right way to go about this.
Comment on attachment 744963 [details] [diff] [review]
propagate private browsing patch

Right.  OK, I'll ask Olli for review then.  Please take it if you want it, though.  Thanks, Josh!

Recapping a couple of questions I had about the patch:

(In reply to Drew Willcoxon :adw from comment #77)
> This kind of feels like an abuse of the CHROME_* flags, since they apply to
> top-level windows, but here they're used for docshells.
> 
> I'm not sure whether I should be doing more null ptr checks in
> CreateBrowserOrApp.
Attachment #744963 - Flags: review?(josh) → review?(bugs)
Comment on attachment 744963 [details] [diff] [review]
propagate private browsing patch

Please document in nsIWebBrowserChrome.idl what CHROME_PRIVATE_LIFETIME is about.
Attachment #744963 - Flags: review?(bugs) → review+
Attached patch patch 8Splinter Review
https://tbpl.mozilla.org/?tree=Try&rev=37bbdc67b650

This modifies _ensureParentWindowReady to use the private hidden window and PrivateBrowsingUtils.whenHiddenPrivateWindowReady.
Attachment #743986 - Attachment is obsolete: true
Attachment #744713 - Attachment is obsolete: true
Attachment #744798 - Attachment is obsolete: true
https://hg.mozilla.org/integration/mozilla-inbound/rev/9189df8e7d74
https://hg.mozilla.org/integration/mozilla-inbound/rev/47a4fc52a1e5

I'll wait for these to stick their landings before filing the follow-ups discussed here.
Flags: in-testsuite+
https://hg.mozilla.org/mozilla-central/rev/47a4fc52a1e5
https://hg.mozilla.org/mozilla-central/rev/9189df8e7d74
Status: ASSIGNED → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 23
Blocks: 870100
Depends on: 870101
Depends on: 870103
Depends on: 870104
Depends on: 870105
No longer depends on: 759964
Depends on: 870114
Depends on: 870179
Severity: critical → major
Flags: sec-review?
Summary: implement background tab thumbnailing service → implement background service for page thumbnailing
Jesse:[14:46:41] curtisk|afk: i tagged https://bugzilla.mozilla.org/show_bug.cgi?id=841495 for sec-review because (1) loading an extra copy of the page in the background scares me in all kinds of ways [scripts, dialogs, hangs, leaks], and (2) it is being done in part to bypass no-cache rules, which might not be appropriate in all cases
Flags: sec-review? → sec-review?(mgoodwin)
Depends on: 873046
Depends on: 875157
Depends on: 875986
No longer blocks: 756881
Depends on: 879687
Blocks: 878207
No longer blocks: 878207
No longer depends on: 903299
Depends on: 908277
Depends on: 914920
I'm not sure this is the place, but...

I'm using FF 25 beta, and according to the release notes http://www.mozilla.org/en-US/firefox/25.0beta/releasenotes/ , thumbnails should be fixed.

However, thumbnails are not showing up for Facebook or other sites with this issue.
I've tried resetting FF on 2 computers and also read on the net that people are having the same issue.

In case this isn't supposed to work yet, maybe the release notes should not mention it, to avoid confusion?
Horea, thanks for bringing that to our attention.  Thumbnails should not have been listed in the release notes.  I filed bug 920676 to fix them.  More details there.
Blocks: 927688
Flags: sec-review?(mgoodwin)