Closed Bug 1167080 Opened 9 years ago Closed 9 years ago

HAR export automation

Categories

(DevTools :: Netmonitor, defect)


Tracking

(firefox41 fixed)

RESOLVED FIXED
Firefox 41

People

(Reporter: Honza, Assigned: Honza)

References

(Blocks 1 open bug)

Details

Attachments

(2 files, 3 obsolete files)

This is a follow-up to Bug 859058 - Implement "Copy as HAR" and "Save all as HAR".

It should be possible to export a HAR file from within the content using the following API:

HAR.triggerExport(options);

The API should be exposed to the content only if the following preference is set.

devtools.netmonitor.har.secretToken

(or devtools.netmonitor.har.contentAPIToken?)

Honza
Assignee: nobody → odvarko
Attached patch bug1167080-1.patch (obsolete) — Splinter Review
A first-try patch that allows integration with automated tools (such as Selenium) by automatically exporting HAR data from the Network panel.

Some comments:

1) The support for integration is done through the following content API:

HAR.triggerExport() : (part of the patch) allows triggering a HAR export at any time during the page's lifetime.

HAR.clear() : (part of the patch) allows clearing the content of the Network panel (to avoid duplicated data exports).

"HARPageLoaded" : (not part of the patch yet) an event sent when the page finishes loading. This is different from the existing 'DOMContentLoaded' and 'load' events. This new event is sent when there are no pending HTTP requests and no new HTTP requests have started for a given period of time (e.g. 1000 ms).
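For illustration, the idle heuristic behind "HARPageLoaded" could be sketched like this (a rough sketch only; the class and method names are made up, and the 1000 ms default just mirrors the example above):

```javascript
// Sketch of a network-idle heuristic: the page counts as "loaded"
// once no requests are pending and none has started for `idleMs`.
class NetworkIdleTracker {
  constructor(idleMs = 1000) {
    this.idleMs = idleMs;
    this.pending = 0;
    this.lastActivity = Date.now();
  }
  requestStarted() {
    this.pending++;
    this.lastActivity = Date.now();
  }
  requestFinished() {
    this.pending = Math.max(0, this.pending - 1);
    this.lastActivity = Date.now();
  }
  // True when there are no in-flight requests and the quiet
  // period has fully elapsed.
  isIdle(now = Date.now()) {
    return this.pending === 0 && now - this.lastActivity >= this.idleMs;
  }
}
```

A driver would poll (or schedule a timer for) `isIdle()` and fire "HARPageLoaded" the first time it returns true.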

There is a new preference: devtools.netmonitor.har.contentAPIToken that needs to be set to a token in order to have the above API available in the content. So, the API is *not* exposed most of the time - only if the pref is set. I am especially interested in feedback related to this approach.
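For example, an automation profile might enable the API with something like this in its user.js (the token value here is arbitrary and purely illustrative):

```
// user.js in the automation profile. The token is an arbitrary secret;
// content scripts must pass the same value when calling the HAR API.
user_pref("devtools.netmonitor.har.contentAPIToken", "my-secret-token");
```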

An example:

// myWebApplication.html
// Helper function that exports data collected by the Network panel
// and then clears its content.
function onExportHar(e) {
  var options = {
    fileName: "myFile.har",  // Save into this file
    getData: true,  // get also HAR string back
  };

  HAR.triggerExport(options).then(function(result) {
    // Log HAR string into the console
    console.log(result.data);

    // Data exported, clean up the Network panel
    HAR.clear();
  });
}

// Handle "HARPageLoaded" event to export data related to
// page load phase
addEventListener("HARPageLoaded", onExportHar, true);


---


Integrated tools can choose whether they want to export HAR into local files and/or just get the HAR string and process it further themselves. This should give them enough flexibility without making this native feature too complex.


2) There is a new actor registered dynamically (only if the contentAPIToken is set). This actor exposes the HAR API to the content, handles its execution, and sends RDP events back to the client.

The actor is registered when the toolbox is ready, and this happens in a module called "toolbox-overlay". It basically follows the "toolbox-ready" event.

3) Loading of the toolbox-overlay module happens in browser/devtools/main.js.
Not sure if this is the best place, but this is the only code that is needed outside of the netmonitor/har directory atm (I'd like to fix this and have the entire feature living inside the har dir).

There is also a thread on the Firebug newsgroup trying to get feedback on this feature.
https://groups.google.com/forum/#!topic/firebug/u3nbmQfiMHk

Thoughts?

Honza
Attachment #8611041 - Flags: review?(jsantell)
Attachment #8611041 - Flags: review?(jlong)
Comment on attachment 8611041 [details] [diff] [review]
bug1167080-1.patch

Review of attachment 8611041 [details] [diff] [review]:
-----------------------------------------------------------------

I don't really feel qualified to review this. This is very actor-heavy and I'm not familiar enough with how actors work (or have opinions on how they should be structured) to give any good feedback.

This looks good, and the token is an interesting idea. I don't see why it shouldn't work. My only gripe is adding this specific functionality to `main.js`, I'd like to separate concerns better, but I don't know what alternative to suggest. It would be good if someone more familiar with our init/actor code could look at this (jsantell perhaps, but I'll also ni? jryans in case he looks first)

::: browser/devtools/main.js
@@ +40,5 @@
>    }
>  };
>  Services.obs.addObserver(unloadObserver, "sdk:loader:destroy", false);
>  
> +require("devtools/netmonitor/har/toolbox-overlay.js");

It does feel out-of-place to load this here when it is netmonitor-specific. Can you not load it when the network monitor is loaded?

::: browser/devtools/netmonitor/har/har-driver.js
@@ +48,5 @@
> + * export has finished. This notification allows to resolve a promise
> + * the content can wait for, and so automated tools know when to move on
> + * and e.g. load & export another page.
> + */
> +var HarDriverActor = ActorClass(

nit: const instead of var

@@ +267,5 @@
> +  });
> +
> +  let props = Object.getOwnPropertyNames(obj);
> +
> +  for (var i=0; i<props.length; i++) {

nit: let instead of var

@@ +281,5 @@
> +
> +/**
> + * @front Client object for the HAR driver actor.
> + */
> +var HarDriverFront = FrontClass(HarDriverActor,

nit: const
Attachment #8611041 - Flags: review?(jlong) → review+
jryans, I'm sure you're busy so don't worry about this if you don't have time. Just to have more eyes on this, do you mind glancing at this patch? It does some complicated things with actors and I feel like it could use more eyes.
Flags: needinfo?(jryans)
Comment on attachment 8611041 [details] [diff] [review]
bug1167080-1.patch

Flagging myself for future review, hopefully soon.
Flags: needinfo?(jryans)
Attachment #8611041 - Flags: review?(jryans)
Comment on attachment 8611041 [details] [diff] [review]
bug1167080-1.patch

Clearing Jordan's review, I think I can review this work.
Attachment #8611041 - Flags: review?(jsantell)
(In reply to J. Ryan Stinnett [:jryans] (use ni?) from comment #5)
> Comment on attachment 8611041 [details] [diff] [review]
> bug1167080-1.patch
> 
> Clearing Jordan's review, I think I can review this work.
Great, thanks Ryan!

Honza
Sorry, been a bit busy with next week's release, thanks jryans!

With regard to actually exposing controls via content, I have some opinions, and there are other places where we've been planning to expose similar capabilities in perf tools, so maybe we can talk about those later
Comment on attachment 8611041 [details] [diff] [review]
bug1167080-1.patch

Review of attachment 8611041 [details] [diff] [review]:
-----------------------------------------------------------------

After looking at this and also the ongoing work in bug 859058, I am a bit puzzled by the current design, so I want to ask some questions before proceeding.

Here's the current design as I understand it:

1. On some tab, you open the toolbox and network monitor
2. If a token is set, a special new actor (har-driver) is registered by toolbox-overlay
3. har-driver also exposes the content API
4. Some content JS calls HAR.triggerExport() which the actor exposed
5. The actor sends a "trigger-export" event over the RDP to the client in toolbox-overlay
6. toolbox-overlay receives RDP event, finds the toolbox, and tells net monitor to run the export from bug 859058
7. Export results are (optionally) passed back down the RDP and returned via content API

Why do we want to depend on the net monitor frontend for the HAR feature at all?  The server knows about the requests as they happen.  Could we offer this in a more server-focused way, so we aren't dependent on the design of the net monitor UI for HAR export at all, but more specifically for a content API?  While the current approach may function correctly, it seems to entangle a lot of concepts that I'd rather keep separate if possible.  If it's not clear what I mean here, I can try to describe in more depth.

This patch injects an actor from desktop Firefox into the server to add the content API.  While this may be what Firebug must do since it's an add-on, I am reluctant to take this approach in the core tools.  We discussed this somewhat in the recent compatibility thread, and I documented pros and cons[1] as "Plan C: Client Installs Actors into Server".  In this context, the points I am most worried about are:

(a) Makes supporting older servers more difficult, in case they are missing platform features the injected actor requires
(b) This won't work by default on Firefox OS, since you need "full DevTools" mode to inject actors

Instead of injecting an actor, it could live on the server side, like all other core actors do today.  Is there a good reason to prefer injecting the actor here?

[1]: https://docs.google.com/document/d/1qg7EkMGNmvCho7YrenbR1LQrw6Zf6WsmBWh-SLUmy6M/edit

::: browser/devtools/netmonitor/har/har-driver.js
@@ +204,5 @@
> + * against the "contentAPIToken" that needs to be set in
> + * Firefox preferences. If the token doesn't match the API
> + * is not executed.
> + */
> +function ExportDriverApi(actor) {

I think it would be good to get an opinion from someone on the DOM team (Ehsan, maybe?) about us injecting a content API when the toolbox is open.  

I am curious what they think, since this is an unusual thing for us to do thus far.  Maybe they don't care?  Maybe it's a bad idea?  I don't know enough to be sure.

@@ +211,5 @@
> +  let exportsInProgress = actor.exportsInProgress;
> +
> +  function securityCheck(method) {
> +    return function(options) {
> +      if (options.token != actor.token) {

Tell me more about this token.  I assume it's meant to prevent exposing the API to content, but I am not sure it really provides a strong safety guarantee.

What is the token trying to enforce?  What is an example of an attack / malicious scenario that it prevents?
Attachment #8611041 - Flags: review?(jryans) → review-
(In reply to J. Ryan Stinnett [:jryans] (use ni?) from comment #8)
> Tell me more about this token.  I assume it's meant to prevent exposing the
> API to content, but I am not sure it really provides a strong safety
> guarantee.
> 
> What is the token trying to enforce?  What is an example of an attack /
> malicious scenario that it prevents?

Doesn't this API return data that you couldn't otherwise get due to the same origin policy?  Agree we should get some feedback from the DOM team about how this API is being exposed, and we should also get a security review to make sure the token pref is a safe way to handle this.
(In reply to Brian Grinstead [:bgrins] from comment #9)
> (In reply to J. Ryan Stinnett [:jryans] (use ni?) from comment #8)
> > Tell me more about this token.  I assume it's meant to prevent exposing the
> > API to content, but I am not sure it really provides a strong safety
> > guarantee.
> > 
> > What is the token trying to enforce?  What is an example of an attack /
> > malicious scenario that it prevents?
> 
> Doesn't this API return data that you couldn't otherwise get due to the same
> origin policy?  Agree we should get some feedback from the DOM team about
> how this API is being exposed, and we should also get a security review to
> make sure the token pref is a safe way to handle this.

I guess I am mainly interested in whether a token has any real value at all.  Why a token?  Could it just be a boolean pref?  Also, what are we worried about?  Malicious actors tricking you into giving up request data under the guise of some reward users want, similar to the self-XSS issues we've seen with Facebook? 

So, I want to understand what we think the risks are, in order to better evaluate our defense against them.  But yes, more review from all corners (security, DOM) seems best.
(In reply to J. Ryan Stinnett [:jryans] (use ni?) from comment #8)
> Comment on attachment 8611041 [details] [diff] [review]
> bug1167080-1.patch
> 
> Review of attachment 8611041 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> After looking at this and also the ongoing work in bug 859058, I am a bit
> puzzled by the current design, so I want to ask some questions before
> proceeding.
> 
> Here's the current design as I understand it:
> 
> 1. On some tab, you open the toolbox and network monitor
> 2. If a token is set, a special new actor (har-driver) is registered by
> toolbox-overlay
> 3. har-driver also exposes the content API
> 4. Some content JS calls HAR.triggerExport() which the actor exposed
> 5. The actor sends a "trigger-export" event over the RDP to the client in
> toolbox-overlay
> 6. toolbox-overlay receives RDP event, finds the toolbox, and tells net
> monitor to run the export from bug 859058
> 7. Export results are (optionally) passed back down the RDP and returned via
> content API
Correct


> Why do we want to depend on the net monitor frontend for the HAR feature
> at all?  The server knows about the requests as they happen.  Could we offer
> this in a more server focused way, so we aren't dependent on the design of
> the net monitor UI for HAR export at all, but more specifically for a
> content API?  While the current approach may function correctly, it seems to
> entangle a lot of concepts that I'd rather keep separate if possible.  If
> it's not clear what I mean here, I can try to describe in more depth.

- The manual HAR export (Bug 859058) uses the Net panel UI to simply
export what's visible (the current content of the net panel).
In this case, all the data are already fetched from the backend (done by NetworkEventsHandler in netmonitor-controller.js, all stored within RequestsMenuView) and mostly only some long strings need to be resolved.

- The automated export is currently wired to the manual export. It's simple and the patch rather focuses on the automation itself. So, currently both (manual and automated) exports use one code path.

I absolutely agree that the export shouldn't depend on the UI. In fact I originally implemented it that way and I am planning to port it too, see: HarCollector https://github.com/firebug/firebug.next/blob/master/lib/net/export/har-collector.js

The collector registers its own "networkEvent" and "networkEventUpdate" listeners, taking care of collecting and fetching all the data from the backend, so the Network monitor doesn't even have to be selected/active (which is great - no net-panel-UI-update slowdowns). I don't know if the collector should be used for the manual export though. Manual export is about "export what I see" - re-fetching or reloading the page shouldn't be required.
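For what it's worth, the listener-based approach could look roughly like this (a sketch only - `client.addListener` and the packet shapes are simplified stand-ins for the real RDP client API):

```javascript
// Rough sketch of a UI-independent collector, in the spirit of
// firebug.next's HarCollector. `client` stands for an RDP tab client;
// the listener signature and packet fields are simplified here.
function createCollector(client) {
  const items = new Map();
  client.addListener("networkEvent", (type, packet) => {
    // A new request appeared; remember it by its actor id.
    items.set(packet.eventActor.actor, {
      event: packet.eventActor,
      updates: [],
    });
  });
  client.addListener("networkEventUpdate", (type, packet) => {
    // Incremental data (headers, timings, response content, ...).
    const item = items.get(packet.from);
    if (item) {
      item.updates.push(packet.updateType);
    }
  });
  return {
    // Snapshot of everything collected so far.
    getItems: () => [...items.values()],
  };
}
```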


> This patch injects an actor from desktop Firefox into the server to add the
> content API.  While this may be what Firebug must do since it's an add-on, I
> am reluctant to take this approach in the core tools.  We discussed this
> somewhat in the recent compatibility thread, and I documented pros and
> cons[1] as "Plan C: Client Installs Actors into Server".  In this context,
> the points I am most worried about are:
> 
> (a) Makes supporting older servers more difficult, in case they are missing
> platform features the injected actor requires
> (b) This won't work by default on Firefox OS, since you need "full DevTools"
> mode to inject actors
> 
> Instead of injecting an actor, it could live on the server side, like all
> other core actors do today.  Is there a good reason to prefer injecting the
> actor here?
> 
> [1]:
> https://docs.google.com/document/d/1qg7EkMGNmvCho7YrenbR1LQrw6Zf6WsmBWh-
> SLUmy6M/edit
Yes, I've seen the document. Sure, we can do it the way other actors are implemented atm.

Perhaps I should put the following questions in the doc (just let me know), but:
1. It's great if an entire individual feature lives in one directory - not spread across the code base. It improves code maintenance a lot.
2. What does "full DevTools" mode mean? Why is it needed for dynamic actor registration?
3. Why doesn't it work for Valence? Do the other solutions work?
4. Can we rather fetch the actor implementation for the same changeset from a CDN and inject it into the backend? That sounds simpler than fetching the entire frontend (but I might be missing something here).

Having at least 1. would be great.


> ::: browser/devtools/netmonitor/har/har-driver.js
> @@ +204,5 @@
> > + * against the "contentAPIToken" that needs to be set in
> > + * Firefox preferences. If the token doesn't match the API
> > + * is not executed.
> > + */
> > +function ExportDriverApi(actor) {
> 
> I think it would be good to get an opinion from someone on the DOM team
> (Ehsan, maybe?) about us injecting a content API when the toolbox is open.  
> 
> I am curious what they think, since this is an unusual thing for us to do
> thus far.  Maybe they don't care?  Maybe it's a bad idea?  I don't know
> enough to be sure.
Yes, absolutely agreed, we need more eyes on the API exposure.

> 
> @@ +211,5 @@
> > +  let exportsInProgress = actor.exportsInProgress;
> > +
> > +  function securityCheck(method) {
> > +    return function(options) {
> > +      if (options.token != actor.token) {
> 
> Tell me more about this token.  I assume it's meant to prevent exposing the
> API to content, but I am not sure it really provides a strong safety
> guarantee.
A boolean would be fine I guess (it was originally used that way, too).
My feeling was just that matching a secret value stored in prefs against the value provided by the content is more secure. But perhaps it isn't necessary. Currently the pref is there just to enable the API and check API calls, that's it.
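The check itself is tiny; a minimal sketch of the wrapper (a synchronous variant of the patch's securityCheck, with made-up error text):

```javascript
// Minimal token gate in the spirit of the patch's securityCheck
// wrapper: the wrapped method only runs when the caller supplies the
// same token that is stored in the pref (held on the actor here).
function securityCheck(actor, method) {
  return function (options = {}) {
    if (options.token !== actor.token) {
      throw new Error("har: invalid or missing token");
    }
    return method(options);
  };
}
```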


Honza
Flags: needinfo?(jryans)
Ehsan, this feature (automated export of data collected by the Network panel) uses a content API.
Having such an API allows very nice/simple integration with 3rd party automation tools (such as Selenium) - we have already got positive feedback from David Burns (see: https://groups.google.com/forum/#!topic/mozilla.dev.developer-tools/v8B1N4T07o0)

Besides exporting HTTP-related data (for page load performance analyses), this approach could also be great for exporting general performance data (collected by the new Performance tools: page re-layouts, JS execution, etc.)! I already know about folks that would be excited to have these abilities.

Note that the API is exposed only if a pref in the current browser profile is explicitly set by the user.

This is the first time devtools are trying to expose an API to the content, so we need to know what the DOM team thinks...

Mark: we are also interested in a security review. What do you think about the approach?

Honza
Flags: needinfo?(mgoodwin)
Flags: needinfo?(ehsan)
Hmm, I'm not really familiar with HAR, but reading over the first post of the thread you linked to, I don't understand why this needs to be exposed to normal Web pages.  Is this not about exporting the data you see in the devtools network panel?  Why does the page need to run some code as part of that?  I would appreciate if you can explain the problem that you're trying to solve, and why this can't be fully implemented in the devtools code without the involvement of the Web page.  Thanks!
Flags: needinfo?(ehsan) → needinfo?(odvarko)
(In reply to :Ehsan Akhgari (not reading bugmail, needinfo? me!) from comment #13)
> Hmm, I'm not really familiar with HAR, but reading over the first post of
> the thread you linked to, I don't understand why this needs to be exposed to
> normal Web pages.  Is this not about exporting the data you see in the
> devtools network panel?  Why does the page need to run some code as part of
> that?  I would appreciate if you can explain the problem that you're trying
> to solve, and why this can't be fully implemented in the devtools code
> without the involvement of the Web page.  Thanks!

Note that HAR (which stands for HTTP Archive format) specifies how data collected by
the Net panel should be archived (it's JSON). The net data include various
timing details, which makes it great especially for page load
performance analyses. It's supported by Chrome, IE, and many other tools [1].

There are two main use cases related to HAR export:


1. Manual Export
The user loads a page and wants to save the content of the Network panel
that has collected HTTP data. There are two context menu actions that
allow copying the data to the clipboard or saving it into a file.
This feature is fully implemented in the devtools code and does *not*
involve the page at all.


2. Automated Export
The user wants to have a HAR file created automatically for every loaded page.
This is usually done together with 3rd party automation tools (such as Selenium).
For example, Selenium is used to run a set of tests and the tester also wants
to get a HAR file for every loaded test page. This allows additional
page load performance analyses and shows how slow/fast the server has been
during the testing.

There are various ways companies want to export HAR. Some prefer saving it
into a local file, some want to send bulk HAR data to a remote server through
XHR, etc.

We want to keep the HAR export feature simple and avoid the various complexities
related to how the resulting HAR is processed - and exposing an API to the page content
is perfect from this perspective. There are two new methods:

HAR.triggerExport()  // export HAR from the current net panel content
HAR.clear()          // clear net panel content

So, 3rd party automation tools can create a simple script that is part
of the testing procedure and executed within the page scope. Such a script
can trigger a HAR export at any time (by calling the HAR API) and e.g.
get the HAR data into a string variable. Consequently, tools can decide to
send it to a remote server, do immediate page load analyses, or
anything else. It's handy and flexible.
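To make that concrete, a test-page snippet driven by such a tool might look like this (entirely hypothetical: the token value and the /har-upload endpoint are made up, and the promise is returned only so a harness can wait on it):

```javascript
// Hypothetical snippet an automation harness could inject into a test
// page: export the HAR, post it to a collecting server, then clear the
// Network panel so the next page's export contains no stale entries.
function exportAndUpload() {
  if (typeof HAR === "undefined") {
    return Promise.resolve(); // content API not enabled in this profile
  }
  return HAR.triggerExport({ token: "my-secret-token", getData: true })
    .then(result => {
      // Send the HAR string to a (hypothetical) collecting endpoint.
      const upload = fetch("/har-upload", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: result.data,
      });
      // Data handed off; clear the panel to avoid duplicate exports.
      HAR.clear();
      return upload;
    });
}

// Export once the page has settled (guarded so the snippet also loads
// in environments without a DOM event target).
if (typeof addEventListener === "function") {
  addEventListener("HARPageLoaded", exportAndUpload, true);
}
```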


[1] http://www.softwareishard.com/blog/har-adopters/

Honza
Flags: needinfo?(odvarko)
Flags: needinfo?(ehsan)
(In reply to Jan Honza Odvarko [:Honza] from comment #14)
> So, 3rd party automation tools can create simple script that is part
> of the testing procedure and executed within the page scope. Such script
> can trigger HAR export at any time (by calling the HAR API) and e.g.
> get HAR data into a string variable. 

Am I right in thinking that a "HAR from the current net panel content" may include a bunch of data belonging to other origins, for example?
Flags: needinfo?(mgoodwin) → needinfo?(odvarko)
(In reply to Mark Goodwin [:mgoodwin] from comment #15)
> (In reply to Jan Honza Odvarko [:Honza] from comment #14)
> > So, 3rd party automation tools can create simple script that is part
> > of the testing procedure and executed within the page scope. Such script
> > can trigger HAR export at any time (by calling the HAR API) and e.g.
> > get HAR data into a string variable. 
> 
> Am I right in thinking that a "HAR from the current net panel content" may
> include a bunch of data belonging to other origins, for example?
Yes, it includes info about all HTTP requests executed by the page and all nested iframes.

Honza
Flags: needinfo?(odvarko) → needinfo?(m0good04)
Flags: needinfo?(m0good04) → needinfo?(mgoodwin)
(In reply to Jan Honza Odvarko [:Honza] from comment #11)
> (In reply to J. Ryan Stinnett [:jryans] (use ni?) from comment #8)
> > Why do we want to depend on the net monitor frontend for the HAR feature
> > at all?  The server knows about the requests as they happen.  Could we offer
> > this in a more server focused way, so we aren't dependent on the design of
> > the net monitor UI for HAR export at all, but more specifically for a
> > content API?  While the current approach may function correctly, it seems to
> > entangle a lot of concepts that I'd rather keep separate if possible.  If
> > it's not clear what I mean here, I can try to describe in more depth.
> 
> - The manual HAR export (Bug 859058) is using the Net panel UI to simply
> export what's visible (the current content of the net panel).
> In this case, all the data are already fetched from the backend (done by
> NetworkEventsHandler in netmonitor-controller.js, all stored within
> RequestsMenuView) and mostly only some long strings needs to be resolved.
> 
> - The automated export is currently wired to the manual export. It's simple
> and the patch rather focuses on the automation itself. So, currently both
> (manual and automation) are using one code path for the export.
> 
> I absolutely agree that the export shouldn't depend on the UI. In fact I
> originally implemented it that way and I am planning to port it too, see:
> HarCollector
> https://github.com/firebug/firebug.next/blob/master/lib/net/export/har-
> collector.js
> 
> The collector registers its own "networkEvent" and "networkEventUpdate"
> listener, taking care of collecting and fetching all the data from the
> backend, so the Network monitor doesn't have to be even selected/active
> (which is great, no net-panel-ui-update slow downs). I don't know if the
> collector should be used for the manual export though. Manual export is
> about "export what I see" - re-fetching or reloading the page shouldn't be
> required.

Ah, the har-collector.js approach looks much better to me!  I think that's what we want to use, at least for the automated export path we are discussing in this bug.

I see what you mean for the manual HAR export in bug 859058.  Since you want exactly what's already in the net monitor view, it seems logical to base the export on the items that were already collected there.  So, it seems safe enough for bug 859058 to continue as is.

> > This patch injects an actor from desktop Firefox into the server to add the
> > content API.  While this may be what Firebug must do since it's an add-on, I
> > am reluctant to take this approach in the core tools.  We discussed this
> > somewhat in the recent compatibility thread, and I documented pros and
> > cons[1] as "Plan C: Client Installs Actors into Server".  In this context,
> > the points I am most worried about are:
> > 
> > (a) Makes supporting older servers more difficult, in case they are missing
> > platform features the injected actor requires
> > (b) This won't work by default on Firefox OS, since you need "full DevTools"
> > mode to inject actors
> > 
> > Instead of injecting an actor, it could live on the server side, like all
> > other core actors do today.  Is there a good reason to prefer injecting the
> > actor here?
> > 
> > [1]:
> > https://docs.google.com/document/d/1qg7EkMGNmvCho7YrenbR1LQrw6Zf6WsmBWh-
> > SLUmy6M/edit
> Yes, I've seen the document. Sure, we can do it the way other actors are
> implemented atm.
> 
> Perhaps I should put the following questions in the doc (just let me know),
> but:
> 1. It's great if entire individual feature lives in one directory - not
> spread across the code base. Improves code maintenance a lot.

I understand and agree we should strive to keep the files for a feature together.

As part of a Q3 architecture project[1], we hope to move /browser/devtools and /toolkit/devtools into one /devtools directory.  However, there will still need to be some mechanism to express "only desktop Firefox gets the UI files", the equivalent of placing files in /browser/devtools today.

So, we'll get to this point soon, but for now we should follow the existing approaches and conventions used throughout DevTools core today.

> 2. What it means "full DevTools" mode? Why it's needed for dynamic actor
> registration?

Currently FxOS devices support only a "limited" DevTools mode out of the box.  In the limited mode, the actor registry actor is not available[2] for security reasons.  To get "full" access, you either need a rooted device to be able to flip a pref, or in 2.2+ you can reset to enable "full" access[3] (this process wipes the device).  Some day, I hope we won't have these limitations... but for now, the core tools should continue shipping actors with the device, so they can be enabled in "limited" mode.

> 3. Why it doesn't work for Valence? The other solutions do work?

I updated the document to be more specific on this point, which I am copying here:  I guess it's more accurate to say it makes Valence's work more complex.  The front end would now assume the server has the latest actors since it installs them, but Valence has to translate from Gecko actors to requests for other runtimes. This would mean any actor-side change could require a Valence update too, instead of today where the RDP isolates Valence from this.

Plan A (what we do today) works for Valence, since it's isolated by the RDP, and signals what things it supports via the feature flags we check in various places.  Plan B (load matching frontend from CDN) would make Valence harder if feature checking is removed, but potentially Valence could advertise a specific frontend commit that it's known to work with under the Plan B model.

So, plan C seems more complex for Valence than the other two.

> 4. Can we rather fetch actor implementation from the same changset from a
> CDN and inject into the backend? That sounds simpler than fetching the
> entire frontend (but I might be missing something here)

This seems riskier than plan B (load matching frontend from CDN) and has similar cons to plan C (client installs actors into server), since the main difference from plan C is just that the server downloads the actors instead of the client installing them.  This variation would likely avoid the security concerns of plan C on Firefox OS, since we can control where we download from.

It's harder for the server to deal with different versions because the actors are tied to platform features which may or may not exist.  I've documented this idea as plan D in the doc.  Feel free to discuss it further there with comments on the doc.  It's hard to balance all the trade-offs involved.

[1]: https://docs.google.com/a/mozilla.com/document/d/1Dpy_w2ZEVzjV4dl1cQnfUXjKsmO5_NLXcVsjCBEs2aA/edit
[2]: https://dxr.mozilla.org/mozilla-central/source/toolkit/devtools/server/main.js#379-391
[3]: https://developer.mozilla.org/en-US/Firefox_OS/Debugging/Developer_settings#Reset_and_enable_full_DevTools
Flags: needinfo?(jryans)
(In reply to J. Ryan Stinnett [:jryans] (use ni?) from comment #10)
> I guess I am mainly interested in whether a token has any real value at all.
> Why a token?  Could it just be a boolean pref?  Also, what are we worried
> about?  Malicious actors tricking you into giving up request data under the
> guise of some reward users want, similar to the self-XSS issues we've seen
> with Facebook? 

There are reasons we wouldn't want to expose this API to all web content. For example, if we're allowing callers of the API to get the HAR, this could be used to read data from other origins.

(In reply to Jan Honza Odvarko [:Honza] from comment #11)
> Boolean would be fine I guess (it's been also originally used that way)
> My feeling was just that matching the secret current value in prefs with the
> value provided by the content is more secure. But perhaps it isn't
> necessary. Currently the pref is there just to enable the API and check API
> calls, that's it.

Given that it's possible for content to see events related to this feature, I'd say a boolean pref isn't enough since malicious content can detect when this is enabled. A secret of some sort (as you have here) means that the secret must be known before the API is used. This is better than a boolean pref (but perhaps something that uses a secret per-origin would be better).
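A per-origin variant could, for instance, store an origin-to-token map in the pref (purely a sketch of the idea; no such pref format exists):

```javascript
// Sketch: the pref value holds a JSON map of origin -> secret token,
// so one leaked token only enables the API for a single origin.
function checkOriginToken(prefValue, origin, token) {
  let map;
  try {
    map = JSON.parse(prefValue); // e.g. {"https://example.com": "abc"}
  } catch (e) {
    return false; // malformed pref: API stays disabled
  }
  return !!map && typeof map === "object" &&
    typeof token === "string" && map[origin] === token;
}
```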

I can't really comment on the best mechanism for content interactions; I expect ehsan will have thoughts on this.
Flags: needinfo?(mgoodwin)
(In reply to Jan Honza Odvarko [:Honza] from comment #14)
> We want to keep the HAR export feature simple and avoid various complexities
> related to how the result HAR is processed - and exposing API to the page
> content
> is perfect from this perspective. There are two new methods:

I don't understand why this needs to be exposed *to the Web page*.  It seems like what you're asking for is an API that Selenium can call, _not_ the Web page.  I have no idea how Selenium works, but presumably it has some way of running chrome privileged JS.  According to <https://seleniumhq.github.io/selenium/docs/api/java/org/openqa/selenium/firefox/FirefoxDriver.html>, it uses a Firefox extension.  The extension can call a chrome JS API to get the HAR exported data.  The Web page doesn't need to be involved here.

(Note that I'm not familiar with Selenium at all, so if this doesn't make sense, please let me know why.)

> HAR.triggerExport()  // export HAR from the current net panel content
> HAR.clear()          // clear net panel content
> 
> So, 3rd party automation tools can create simple script that is part
> of the testing procedure and executed within the page scope. Such script
> can trigger HAR export at any time (by calling the HAR API) and e.g.
> get HAR data into a string variable. Consequently tools can decide to
> send it to a remote server, or do immediate page load analyses, or
> anything else. It's handy and flexible.

As Mark mentioned, this breaks the same origin policy, so we cannot export this to Web pages at all (not even behind a hidden pref.)  If we end up deciding that this needs to be exposed to Web page for some reason, the data needs to be sanitized to remove some of the information about cross origin content, which probably reduces the usefulness of the HAR export.
Flags: needinfo?(ehsan)
(In reply to :Ehsan Akhgari (not reading bugmail, needinfo? me!) from comment #19)
> (In reply to Jan Honza Odvarko [:Honza] from comment #14)
> > We want to keep the HAR export feature simple and avoid various complexities
> > related to how the result HAR is processed - and exposing API to the page
> > content
> > is perfect from this perspective. There are two new methods:
> 
> I don't understand why this needs to be exposed *to the Web page*.  It seems
> like what you're asking for is an API that Selenium can call, _not_ the Web
> page.  I have no idea how Selenium works, but presumably it has some way of
> running chrome privileged JS.  According to
> <https://seleniumhq.github.io/selenium/docs/api/java/org/openqa/selenium/
> firefox/FirefoxDriver.html>, it uses a Firefox extension.  The extension can
> call a chrome JS API to get the HAR exported data.  The Web page doesn't
> need to be involved here.
> 
> (Note that I'm not familiar with Selenium at all, so if this doesn't make
> sense, please let me know why.)

David, can we somehow use this to automate the HAR export?

Honza
Flags: needinfo?(dburns)
Attached patch bug1167080-2.patch (obsolete) — Splinter Review
Ryan, patch updated based on your comments:

- The actor is registered like the other built-in actors
- Introduced HarCollector, responsible for collecting HTTP data (the Net panel is no longer necessary).

We still need to finish the API exposure discussion, but we can use the patch even if we decide that no API will be exposed anywhere (in that case, just the actor would be removed).

Honza
Attachment #8611041 - Attachment is obsolete: true
Attachment #8614724 - Flags: review?(jryans)
The reason why I suggested it from the page is that it can then be easily accessed from webdriver without having to create a new API endpoint and we can reuse the executeScript call.

We can create a Mozilla extension package that monkey patches the webdriver code and create an extension call endpoint on the http server[1]. 

The firefoxdriver linked in comment 19 is soon to be dead and we will be moving the webdriver community over to Marionette. I am happy having a call there but it would have to be a Mozilla specific thing that users will have to download to take advantage of. This might mean we don't get the traction we want. I totally understand the security fears of this, I have them myself but simplicity of localend (client) languages accessing the feature will mean people would try it via execute script.

Let's keep it accessible via chrome, and then we can add a special Mozilla HTTP endpoint to our HTTPD that speaks to WebDriver (Marionette).

[1] http://w3c.github.io/webdriver/webdriver-spec.html#extending-the-protocol
Flags: needinfo?(dburns)
(In reply to David Burns :automatedtester from comment #22)
> The reason why I suggested it from the page is that it can then be easily
> accessed from webdriver without having to create a new API endpoint and we
> can reuse the executeScript call.
Precisely, that was my expectation too.

> 
> We can create a Mozilla extension package that monkey patches the webdriver
> code and create an extension call endpoint on the http server[1]. 
> 
> The firefoxdriver linked in comment 19 is soon to be dead and we will be
> moving the webdriver community over to Marionette. I am happy having a call
> there but it would have to be a Mozilla specific thing that users will have
> to download to take advantage of. This might mean we don't get the traction
> we want. I totally understand the security fears of this, I have them myself
> but simplicity of localend (client) languages accessing the feature will
> mean people would try it via execute script.
What if the HAR API (exposed to the content) only allowed triggering the export
and clearing the net panel content? It would *not* be possible to get the
collected HAR data in the content.

This wouldn't be as flexible as before, but it would solve the security concerns
while still being quite useful for automated tools, no?
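For illustration, a trigger-only content API along these lines might be driven from an in-page test script roughly like the sketch below. The |HAR| global, the |token| option, and the |fileName| option are hypothetical names taken from this discussion, not a shipped API:

```javascript
// Hypothetical in-page automation helper for a trigger-only HAR API.
// The HAR global and its options are illustrative (from this bug's
// discussion), not an actual shipped interface.
function exportCurrentTraffic(token) {
  if (typeof HAR === "undefined") {
    throw new Error("HAR content API is not exposed in this profile");
  }
  // Trigger-only: the browser writes the HAR file to disk; the page
  // never receives the collected HAR data itself.
  HAR.triggerExport({ token, fileName: "page-load" });
  // Clear collected requests so the next export contains no duplicates.
  HAR.clear({ token });
}
```

An automation tool would call such a helper via executeScript after each page load, matching the workflow David describes.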

> 
> Let's keep it accessible via chrome and then we can add a special Mozilla
> http endpoint to our HTTPD that speaks to WebDriver(Marionette)
> 
> [1] http://w3c.github.io/webdriver/webdriver-spec.html#extending-the-protocol
Is there an existing example/extension that extends the protocol?

Honza
Flags: needinfo?(mgoodwin)
Flags: needinfo?(ehsan)
Flags: needinfo?(dburns)
(In reply to Jan Honza Odvarko [:Honza] from comment #23)
> (In reply to David Burns :automatedtester from comment #22)
> > The reason why I suggested it from the page is that it can then be easily
> > accessed from webdriver without having to create a new API endpoint and we
> > can reuse the executeScript call.
> Precisely, that was my expectation soon.
> 
> > 
> > We can create a Mozilla extension package that monkey patches the webdriver
> > code and create an extension call endpoint on the http server[1]. 
> > 
> > The firefoxdriver linked in comment 19 is soon to be dead and we will be
> > moving the webdriver community over to Marionette. I am happy having a call
> > there but it would have to be a Mozilla specific thing that users will have
> > to download to take advantage of. This might mean we don't get the traction
> > we want. I totally understand the security fears of this, I have them myself
> > but simplicity of localend (client) languages accessing the feature will
> > mean people would try it via execute script.
> What if the HAR API (exposed into the content) only allows to trigger the
> export
> and clear the net panel content? It would *not* be possible to get the
> collected
> HAR data in the content.
> 
> This wouldn't be so flexible as before, but it would solve the security
> concerns
> while still being quite useful for automated tools, no?

Probably, but it would still block accessing the real data. I would rather this feature was all chrome access, or we had proper hooks in content. Since the latter is raising concerns, let's move to having our own library (multiple language versions)

E.g.

https://github.com/AutomatedTester/AutomatedPagePerf/blob/master/pageperfdriver.py#L22
https://github.com/AutomatedTester/AutomatedTester.PagePerf/blob/master/AutomatedTester.PagePerf/Extensions.cs#L27

Both used an older way to access the HARs, so it's no biggie...

> 
> > 
> > Let's keep it accessible via chrome and then we can add a special Mozilla
> > http endpoint to our HTTPD that speaks to WebDriver(Marionette)
> > 
> > [1] http://w3c.github.io/webdriver/webdriver-spec.html#extending-the-protocol
> Is there an existing example/extension that extends the protocol?

Not that I could find easily. We would just be adding an HTTP endpoint to https://github.com/jgraham/webdriver-rust/blob/master/src/httpapi.rs#L9 and then having an endpoint in Marionette for this. I suggest raising a bug for us to be able to access it.
Flags: needinfo?(dburns)
(In reply to Jan Honza Odvarko [:Honza] from comment #23)
> > We can create a Mozilla extension package that monkey patches the webdriver
> > code and create an extension call endpoint on the http server[1]. 
> > 
> > The firefoxdriver linked in comment 19 is soon to be dead and we will be
> > moving the webdriver community over to Marionette. I am happy having a call
> > there but it would have to be a Mozilla specific thing that users will have
> > to download to take advantage of. This might mean we don't get the traction
> > we want. I totally understand the security fears of this, I have them myself
> > but simplicity of localend (client) languages accessing the feature will
> > mean people would try it via execute script.
> What if the HAR API (exposed into the content) only allows to trigger the
> export
> and clear the net panel content? It would *not* be possible to get the
> collected
> HAR data in the content.
> 
> This wouldn't be so flexible as before, but it would solve the security
> concerns
> while still being quite useful for automated tools, no?

Yes, but the bigger point remains: why would we expose this to Web pages in the first place?

I don't really understand much of the jargon here, but it seems like the reason why you wanted to expose this to Web pages was to make some implementation detail simpler in Selenium (please excuse my poor understanding of how these things talk to each other).  That is usually not how we decide to expose something to web pages.  APIs that are exposed to the Web typically do something that is useful to the page, and I still don't understand why a page would want to call this function (without this Selenium stuff being in the picture...)

The bigger point is that automated tools do not need to be restricted to running code in a web page environment, so since they can have more privileges, we don't have to shoehorn little helpers for them into the API namespace that a page can see.
Flags: needinfo?(ehsan)
(In reply to David Burns :automatedtester from comment #24)
> Probably but it would still block accessing the real data. I would rather
> this feature was all chrome access or we had proper hooks in content. Since
> the latter is raising concerns let's move to having our own library
> (multiple language versions)
Yes, it makes sense to me.

(In reply to :Ehsan Akhgari (not reading bugmail, needinfo? me!) from comment #25)
> I don't really understand much of the jargon here, but it seems like the
> reason why you wanted to expose this to Web pages was to make some
> implementation detail simpler in Selenium (please excuse my poor
> understanding of how these things talk to each other).  That is usually not
> how we decide to expose something to web pages.  APIs that are exposed to
> the Web typically do something that is useful to the page, and I still don't
> understand why a page would want to call this function (without this
> Selenium stuff being in the picture...
> 
> The bigger point is that automated tools do not need to be restricted to
> running code in a web page environment, so since they can have more
> privileges, we don't have to shoehorn little helpers for them into the API
> namespace that a page can see.
Yeah, I understand the perspective. Let's do what David is suggesting and
use alternative way (no content exposure) and see what we can come up with.

Thanks for your feedback!

Honza
Flags: needinfo?(mgoodwin)
Comment on attachment 8614724 [details] [diff] [review]
bug1167080-2.patch

Review of attachment 8614724 [details] [diff] [review]:
-----------------------------------------------------------------

Overall, this is looking better structurally, though there are still a few things to clean up.

From discussion in the bug, it sounds like we've agreed to remove the content exposure, so I'll definitely need to see another round.

Update the commit message to replace jsantell -> jryans.

::: browser/devtools/netmonitor/har/har-automation.js
@@ +1,1 @@
> +/* See license.txt for terms of usage */

Needs MPL license header.

@@ +10,5 @@
> +const { XPCOMUtils } = Cu.import("resource://gre/modules/XPCOMUtils.jsm", {});
> +
> +Cu.import("resource://gre/modules/Task.jsm");
> +
> +XPCOMUtils.defineLazyGetter(this, "HarCollector", function() {

To do a lazy require, you can also use the form:

loader.lazyRequireGetter(this, "HarCollector", "devtools/netmonitor/har/har-collector", true);

|loader| is accessible inside anything loaded by the DevTools loader.
The ".js" suffix is optional with (our) require.
The |true| argument means "destructuring enabled".

Not a big deal, but could allow you to remove |XPCOMUtils|.

@@ +22,5 @@
> +XPCOMUtils.defineLazyGetter(this, "HarUtils", function() {
> +  return devtools.require("devtools/netmonitor/har/har-utils.js").HarUtils;
> +});
> +
> +const DRIVER_ACTOR_URL = "resource:///modules/devtools/netmonitor/har/har-driver.js";

This is not used.

@@ +72,5 @@
> +    }
> +
> +    this.debuggerClient = client;
> +
> +    client.attachTab(tabGrip.actor, (response, tabClient) => {

The toolbox |target| already does the |attachTab| step, and offers the tabClient via the |activeTab| property, so I don't think we need to repeat the work here.

@@ +154,5 @@
> +
> +  // HAR Backend Driver
> +
> +  attachDriverActor: function(toolbox) {
> +    const { HarDriverFront } = devtools.require("devtools/server/actors/har");

Make this a lazy require getter and move it to the head of the file.

@@ +161,5 @@
> +    if (!client) {
> +      return resolve();
> +    }
> +
> +    return client.getTab().then(response => {

This step is not needed: the |form| is on |target| already, so you can just use that.

@@ +202,5 @@
> +   * Handle RDP event from the backend. HAR.clear() has been
> +   * executed in the page and the Network panel content
> +   * needs to be cleared.
> +   */
> +  clear: function() {

Is it necessary to clear the collector too?  I would think the net monitor is less important now, since we don't depend on it for the items.  But if you want to clear it anyway, it seems okay.

::: browser/devtools/netmonitor/har/har-collector.js
@@ +1,1 @@
> +/* See license.txt for terms of usage */

Needs MPL license header.

@@ +9,5 @@
> +const { makeInfallible } = devtools["require"]("devtools/toolkit/DevToolsUtils.js");
> +
> +const { Services } = Cu.import("resource://gre/modules/Services.jsm", {});
> +
> +// Helper tracer. Should be generic sharable by other modules (Q3 goal)

Don't need the "Q3 goal" part in here.

@@ +209,5 @@
> +    // reset the timeout and wait for all responses as usual.
> +
> +    // xxxHonza: There is no promise to wait for now. It's created
> +    // in onNetworkEventUpdate.
> +    //this.resetPageLoadTimeout();

Remove this block if not needed.

::: browser/devtools/netmonitor/har/toolbox-overlay.js
@@ +35,5 @@
> +   * Executed when a new toolbox is ready.
> +   */
> +  onInit: function(eventId, toolbox) {
> +    let token = Services.prefs.getCharPref(
> +      "devtools.netmonitor.har.contentAPIToken");

How do you intend for someone to discover the value this pref should be set to in order to enable the API?

Currently it's blank by default, so it will always match.  Doesn't the server-side need to create a token value somewhere?

@@ +66,5 @@
> +  // Automation
> +
> +  initAutomation: function(toolbox) {
> +    let context = this.getContext(toolbox);
> +    context.automation = new HarAutomation(toolbox);

I am not sure I see the value of this overlay module anymore.  It had more work to perform in the last version, where it installed an actor, but now there's not really much to do.

How about if the toolbox always calls |HarAutomation| on load, but then it only does something if the token matches?  Or is there a reason you still prefer this design?

::: toolkit/devtools/server/actors/har.js
@@ +17,5 @@
> +
> +const actorTypeName = "harExportDriver";
> +
> +// TODO: should be shared API (protocol.js, protocol-utils.js?)
> +function expectState(expectedState, method) {

Probably DevToolsUtils is fine.  It seems unrelated to protocol.js specifically.

...but actually a very similar method already exists[1].

[1]: https://dxr.mozilla.org/mozilla-central/source/toolkit/devtools/server/actors/common.js#508
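For illustration, the |expectState| guard pattern under discussion can be sketched like this. This is a loose model of the shared helper, not the exact tree code; the toy actor and its state names are invented:

```javascript
// Loose sketch of an expectState-style guard: wrap a method so it only
// runs when the object is in the expected state. Modeled on the shared
// helper the review references; signatures here are illustrative.
function expectState(expectedState, method) {
  return function (...args) {
    if (this.state !== expectedState) {
      throw new Error(
        `Can't call method while in the '${this.state}' state; ` +
        `expected '${expectedState}'`);
    }
    return method.apply(this, args);
  };
}

// A toy actor whose collect() method only works once attached.
const actor = {
  state: "detached",
  attach() { this.state = "attached"; },
  collect: expectState("attached", function () {
    return "collecting";
  }),
};
```

Calling |actor.collect()| before |actor.attach()| throws; afterwards it runs normally, which is exactly the protection the actor methods want.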

@@ +294,5 @@
> +
> +/**
> + * @front Client object for the HAR driver actor.
> + */
> +var HarDriverFront = FrontClass(HarDriverActor,

Nit: we tend to use let everywhere, unless var is required in some particular case (which is quite rare)

@@ +301,5 @@
> +  // Initialization
> +
> +  initialize: function(client, form) {
> +    Front.prototype.initialize.apply(this, arguments);
> +

Nit: remove this line
Attachment #8614724 - Flags: review?(jryans) → review-
Attached patch bug1167080-3.patch (obsolete) — Splinter Review
(In reply to J. Ryan Stinnett [:jryans] (use ni?) from comment #27)
> I am not sure I see the value of this overlay module anymore.  It had more
> work to perform in the last version, where it installed an actor, but now
> there's not really much to do.
> 
> How about if the toolbox always calls |HarAutomation| on load, but then it
> only does something if the token matches?  Or is there are reason you still
> prefer this design?

The only reason is that this way the entire HAR feature lives within the netmonitor/har directory and isn't spread across the code base.

Otherwise, all comments are addressed, the API is not exposed to the content (the backend actor has been removed), and the patch is updated.


I still need to test with Selenium (and discuss the API with David Burns)

Honza
Attachment #8614724 - Attachment is obsolete: true
Attachment #8621530 - Flags: review?(jryans)
Comment on attachment 8621530 [details] [diff] [review]
bug1167080-3.patch

Review of attachment 8621530 [details] [diff] [review]:
-----------------------------------------------------------------

::: browser/devtools/main.js
@@ +40,5 @@
>    }
>  };
>  Services.obs.addObserver(unloadObserver, "sdk:loader:destroy", false);
>  
> +require("devtools/netmonitor/har/toolbox-overlay.js");

Can we move this to toolbox.js, since that is the module it affects?

Also, it is pretty rare to require a module, but not actually use anything it returns.  I realize it works since you bind listeners inside, but it reads oddly to me, and makes me wonder why this is here.

All other uses of |gDevTools.on| are inside some kind of function rather than at the module level (except a few in gDevTools.jsm itself where it doesn't really matter anyway).  Could you wrap the |gDevTools.on| step in a function such as "register" or "listen" that you export, and then call it in toolbox.js, so that it's clear why we are pulling in this module?
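For illustration, an explicit registration export along the suggested lines could look like the sketch below. The overlay map and its contents are stand-ins for the real module state, not the final patch:

```javascript
// Sketch of an explicit "register" export: the module does nothing at
// load time, and toolbox.js opts in by calling register() per toolbox.
// The map contents are placeholders for the real overlay state.
const overlays = new Map();

function register(toolbox) {
  if (overlays.has(toolbox)) {
    throw new Error("There is an existing overlay for the toolbox");
  }
  // The real module would construct its per-toolbox overlay here
  // (e.g. a HarAutomation instance) instead of a plain object.
  overlays.set(toolbox, { toolbox });
}

// toolbox.js could then make the dependency explicit:
//   const { register } = require("devtools/netmonitor/har/toolbox-overlay");
//   register(toolbox);
```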

::: browser/devtools/netmonitor/har/har-automation.js
@@ +19,5 @@
> +
> +/**
> + * This object is responsible for automated HAR export.
> + *
> + * The main responsibility is registering an actor that exposes HAR API

Need to rewrite these comments, there is no actor anymore.

@@ +64,5 @@
> +    this.debuggerClient = client;
> +    this.tabClient = this.toolbox.target.activeTab;
> +
> +    let listeners = [ "NetworkActivity" ];
> +    client.attachConsole(tabGrip.consoleActor, listeners,

Actually, this is another step the target has already performed[1] for you.

The console client is available as |target.activeConsole|.

[1]: https://dxr.allizom.org/mozilla-central/source/browser/devtools/framework/target.js#436

@@ +108,5 @@
> +   * so export all after all has been properly received from the backend.
> +   *
> +   * This collector still works and collects any consequent HTTP
> +   * traffic (e.g. XHRs) happening after the page is loaded and
> +   * The additional traffic can be exported through 'NetExport.triggerExport'

Need to update this, there is no content API.

@@ +165,5 @@
> +   * Network panel (asynchronously) and saves it into a file.
> +   *
> +   * When the process is done a notification is sent to the backend.
> +   * If 'getData' flag has been set by the user, HAR string
> +   * is also sent to the backend and passed to the content

No content caller exists now.

@@ +203,5 @@
> +      return jsonString;
> +    });
> +  },
> +
> +  // TODO: copy from the NetworkEventsHandler

Did you mean to do this before review?

@@ +339,5 @@
> +
> +  /**
> +   * Called when the monitored tab is closed.
> +   */
> +  onTabDetached: function() {

I guess this can be removed?

::: browser/devtools/netmonitor/har/har-collector.js
@@ +6,5 @@
> +const { Cu, Ci, Cc } = require("chrome");
> +const { defer, all } = require("sdk/core/promise");
> +const { setTimeout, clearTimeout } = require("sdk/timers");
> +const { devtools } = Cu.import("resource://gre/modules/devtools/Loader.jsm", {});
> +const { makeInfallible } = devtools["require"]("devtools/toolkit/DevToolsUtils.js");

Use regular require and remove the devtools import above.

@@ +54,5 @@
> +    this.debuggerClient.removeListener("networkEventUpdate", this.onNetworkEventUpdate);
> +  },
> +
> +  clear: function() {
> +    // xxxHonza: any pending requests events will be ignored (they

Nit: We tend to not have developer-specific comments, so I would say just remove "xxxHonza".  If this seems like something to fix in the future, you could also file a follow up and note the bug number here.

@@ +64,5 @@
> +    this.requests = [];
> +  },
> +
> +  waitForHarLoad: function() {
> +    // There should be yet another timeout 'netexport.pageLoadTimeout'

Update pref name

@@ +66,5 @@
> +
> +  waitForHarLoad: function() {
> +    // There should be yet another timeout 'netexport.pageLoadTimeout'
> +    // that should force export even if page isn't fully loaded.
> +    let deffered = defer();

Nit: deffered -> deferred

@@ +105,5 @@
> +  // Page Loaded Timeout
> +
> +  /**
> +   * The page is loaded when there are no new requests within given period
> +   * of time. The time is set in preferences: 'netexport.pageLoadedTimeout'

Update pref name

::: browser/devtools/netmonitor/har/toolbox-overlay.js
@@ +7,5 @@
> +const { gDevTools } = Cu.import("resource:///modules/devtools/gDevTools.jsm", {});
> +const { devtools } = Cu.import("resource://gre/modules/devtools/Loader.jsm", {});
> +const { XPCOMUtils } = Cu.import("resource://gre/modules/XPCOMUtils.jsm", {});
> +
> +XPCOMUtils.defineLazyGetter(this, "HarAutomation", function() {

Nit: Another place you could use lazyRequireGetter and skip XPCOMUtils

@@ +26,5 @@
> + * that automates the browser. Primarily, it is for automating web apps
> + * and getting HAR file for every loaded page.
> + *
> + * The HAR API are exposed *only* in these automated scenarios where
> + * "devtools.netmonitor.har.contentAPIToken" preference is set in

Need to rewrite this comment, there's no token or content API.
Attachment #8621530 - Flags: review?(jryans)
Summary: Trigger HAR export from within the content → HAR export automation
(In reply to J. Ryan Stinnett [:jryans] (use ni?) from comment #29)
> Comment on attachment 8621530 [details] [diff] [review]
> bug1167080-3.patch
> 
> Review of attachment 8621530 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> ::: browser/devtools/main.js
> @@ +40,5 @@
> >    }
> >  };
> >  Services.obs.addObserver(unloadObserver, "sdk:loader:destroy", false);
> >  
> > +require("devtools/netmonitor/har/toolbox-overlay.js");
> 
> Can we move this to toolbox.js, since that is the module it effects?
> 
> Also, it is pretty rare to require a module, but not actually use anything
> it returns.  I realize it works since you bind listeners inside, but it
> reads oddly to me, and makes me wonder why this is here.
> 
> All other uses of |gDevTools.on| are inside some kind of function rather
> than at the module level (except a few in gDevTools.jsm itself where it
> doesn't really matter anyway).  Could you wrap the |gDevTools.on| step in a
> function such as "register" or "listen" that you export, and then call in
> toolbox.js, so that it's clear why we are pulling in this module?
Fixed: the import is in toolbox.js, there is a register method, and there are no listeners on the gDevTools global. Agreed, this fits better into the code base.


> ::: browser/devtools/netmonitor/har/har-automation.js
> @@ +19,5 @@
> > +
> > +/**
> > + * This object is responsible for automated HAR export.
> > + *
> > + * The main responsibility is registering an actor that exposes HAR API
> 
> Need to rewrite these comments, there is no actor anymore.
Fixed

> 
> @@ +64,5 @@
> > +    this.debuggerClient = client;
> > +    this.tabClient = this.toolbox.target.activeTab;
> > +
> > +    let listeners = [ "NetworkActivity" ];
> > +    client.attachConsole(tabGrip.consoleActor, listeners,
> 
> Actually, this is another step the target has already performed[1] for you.
> 
> The console client is available as |target.activeConsole|.
> 
> [1]:
> https://dxr.allizom.org/mozilla-central/source/browser/devtools/framework/
> target.js#436
Fixed

> 
> @@ +108,5 @@
> > +   * so export all after all has been properly received from the backend.
> > +   *
> > +   * This collector still works and collects any consequent HTTP
> > +   * traffic (e.g. XHRs) happening after the page is loaded and
> > +   * The additional traffic can be exported through 'NetExport.triggerExport'
> 
> Need to update this, there is no content API.
Done

> 
> @@ +165,5 @@
> > +   * Network panel (asynchronously) and saves it into a file.
> > +   *
> > +   * When the process is done a notification is sent to the backend.
> > +   * If 'getData' flag has been set by the user, HAR string
> > +   * is also sent to the backend and passed to the content
> 
> No content caller exists now.
Done

> 
> @@ +203,5 @@
> > +      return jsonString;
> > +    });
> > +  },
> > +
> > +  // TODO: copy from the NetworkEventsHandler
> 
> Did you mean to do this before review?
No, I'll fix this part as soon as Bug 1171408 is fixed
(using the shared WebConsoleClient.getString method).
I've also updated the comment.

> 
> @@ +339,5 @@
> > +
> > +  /**
> > +   * Called when the monitored tab is closed.
> > +   */
> > +  onTabDetached: function() {
> 
> I guess this can be removed?
Yes, removed.


> 
> ::: browser/devtools/netmonitor/har/har-collector.js
> @@ +6,5 @@
> > +const { Cu, Ci, Cc } = require("chrome");
> > +const { defer, all } = require("sdk/core/promise");
> > +const { setTimeout, clearTimeout } = require("sdk/timers");
> > +const { devtools } = Cu.import("resource://gre/modules/devtools/Loader.jsm", {});
> > +const { makeInfallible } = devtools["require"]("devtools/toolkit/DevToolsUtils.js");
> 
> Use regular require and remove the devtools import above.
Done

> 
> @@ +54,5 @@
> > +    this.debuggerClient.removeListener("networkEventUpdate", this.onNetworkEventUpdate);
> > +  },
> > +
> > +  clear: function() {
> > +    // xxxHonza: any pending requests events will be ignored (they
> 
> Nit: We tend to not have developer-specific comments, so I would say just
> remove "xxxHonza".  If this seems like something to fix in the future, you
> could also file a follow up and note the bug number here.
Fixed


> 
> @@ +64,5 @@
> > +    this.requests = [];
> > +  },
> > +
> > +  waitForHarLoad: function() {
> > +    // There should be yet another timeout 'netexport.pageLoadTimeout'
> 
> Update pref name
Done

> 
> @@ +66,5 @@
> > +
> > +  waitForHarLoad: function() {
> > +    // There should be yet another timeout 'netexport.pageLoadTimeout'
> > +    // that should force export even if page isn't fully loaded.
> > +    let deffered = defer();
> 
> Nit: deffered -> deferred
Fixed


> 
> @@ +105,5 @@
> > +  // Page Loaded Timeout
> > +
> > +  /**
> > +   * The page is loaded when there are no new requests within given period
> > +   * of time. The time is set in preferences: 'netexport.pageLoadedTimeout'
> 
> Update pref name
Done


> 
> ::: browser/devtools/netmonitor/har/toolbox-overlay.js
> @@ +7,5 @@
> > +const { gDevTools } = Cu.import("resource:///modules/devtools/gDevTools.jsm", {});
> > +const { devtools } = Cu.import("resource://gre/modules/devtools/Loader.jsm", {});
> > +const { XPCOMUtils } = Cu.import("resource://gre/modules/XPCOMUtils.jsm", {});
> > +
> > +XPCOMUtils.defineLazyGetter(this, "HarAutomation", function() {
> 
> Nit: Another place you could use lazyRequireGetter and skip XPCOMUtils
Done


> 
> @@ +26,5 @@
> > + * that automates the browser. Primarily, it is for automating web apps
> > + * and getting HAR file for every loaded page.
> > + *
> > + * The HAR API are exposed *only* in these automated scenarios where
> > + * "devtools.netmonitor.har.contentAPIToken" preference is set in
> 
> Need to rewrite this comment, there's no token or content API.
Done

Thanks Ryan!

Honza
Attachment #8621530 - Attachment is obsolete: true
Attachment #8623631 - Flags: review?(jryans)
Comment on attachment 8623631 [details] [diff] [review]
bug1167080-4.patch

Review of attachment 8623631 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good to me, thanks!

::: browser/devtools/netmonitor/har/toolbox-overlay.js
@@ +68,5 @@
> +
> +// Registration
> +function register(toolbox) {
> +  if (overlays.has(toolbox)) {
> +    throw Error("Theere is an existing overlay for the toolbox");

Nit: Theere -> There
Attachment #8623631 - Flags: review?(jryans) → review+
(In reply to J. Ryan Stinnett [:jryans] (use ni?) from comment #32)
> > +// Registration
> Nit: Theere -> There
Fixed

Since bug 1171408 landed yesterday night, I could further simplify the code by using the getString() method from WebConsoleClient (and remove the duplicate implementation).

I put both changes into a new patch, so it's simpler to review.

Try:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=c7da3f54c18d

Honza
Attachment #8624082 - Flags: review?(jryans)
Thanks Ryan!

Honza
Keywords: checkin-needed
https://hg.mozilla.org/mozilla-central/rev/c934be66965d
https://hg.mozilla.org/mozilla-central/rev/543e22ba9046
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 41
I don't see secretToken or contentAPIToken in the source code (MXR).

How do you make HAR.triggerExport available in a web page?
(In reply to Mike Kaply [:mkaply] from comment #37)
> I don't see secretToken or contentAPIToken in the source code (MXR).
> 
> How do you make HAR.triggerExport available in a a web page?

Exposing the HAR API to the page is done by an extension.
You need to install HARExportTrigger:
https://github.com/firebug/har-export-trigger/releases

In-page script examples:
https://github.com/firebug/har-export-trigger/wiki/Examples

Honza
Product: Firefox → DevTools